In order to protect estuarine resources, managers must be able to discern the effects of natural conditions and non-point sources and separate them from the effects of multiple anthropogenic point sources. Our approach was to evaluate benthic community assemblages, riverine nitro...
Occurrence of Surface Water Contaminations: An Overview
NASA Astrophysics Data System (ADS)
Shahabudin, M. M.; Musa, S.
2018-04-01
Water is a part of our lives and is needed by all organisms. Over time, growing human demand has degraded water quality. Surface water is contaminated in various ways, through point sources and non-point sources. Point sources are discharges that can be traced to a distinct origin, such as a drain or a factory outfall, whereas non-point source pollution arrives as a mixture of pollutants without a single identifiable origin. This paper reviews the occurrence of surface water contamination and the effects that occur around us. Pollutants of natural or anthropogenic origin, such as nutrients, pathogens, and chemical elements, contribute to contamination. Most of the effects of contaminated surface water fall on public health and the environment.
Hong, Peilong; Li, Liming; Liu, Jianji; Zhang, Guoquan
2016-03-29
Young's double-slit or two-beam interference is of fundamental importance for understanding various interference effects, in which the stationary phase difference between two beams plays the key role in first-order coherence. Different from the case of first-order coherence, in high-order optical coherence the statistical behavior of the optical phase plays the key role. In this article, by employing a fundamental interfering configuration with two classical point sources, we showed that the high-order optical coherence between two classical point sources can be actively designed by controlling the statistical behavior of the relative phase difference between the two point sources. Synchronous-position Nth-order subwavelength interference with an effective wavelength of λ/M was demonstrated, in which λ is the wavelength of the point sources and M is an integer not larger than N. Interestingly, we found that the synchronous-position Nth-order interference fringe fingerprints the statistical trace of the random phase fluctuation of the two classical point sources; therefore, it provides an effective way to characterize the statistical properties of phase fluctuation for incoherent light sources.
NASA Astrophysics Data System (ADS)
Nagasaka, Yosuke; Nozu, Atsushi
2017-02-01
The pseudo point-source model approximates the rupture process on faults with multiple point sources for simulating strong ground motions. A simulation with this point-source model is conducted by combining a simple source spectrum following the omega-square model with a path spectrum, an empirical site amplification factor, and phase characteristics. Realistic waveforms can be synthesized using the empirical site amplification factor and phase models even though the source model is simple. The Kumamoto earthquake occurred on April 16, 2016, with M_JMA 7.3. Many strong motions were recorded at stations around the source region. Some records were considered to be affected by the rupture directivity effect. This earthquake was suitable for investigating the applicability of the pseudo point-source model, the current version of which does not consider the rupture directivity effect. Three subevents (point sources) were located on the fault plane, and the parameters of the simulation were determined. The simulated results were compared with the observed records at K-NET and KiK-net stations. It was found that the synthetic Fourier spectra and velocity waveforms generally explained the characteristics of the observed records, except for underestimation in the low frequency range. Troughs in the observed Fourier spectra were also well reproduced by placing multiple subevents near the hypocenter. The underestimation is presumably due to the following two reasons. The first is that the pseudo point-source model targets subevents that generate strong ground motions and does not consider the shallow large slip. The second reason is that the current version of the pseudo point-source model does not consider the rupture directivity effect. Consequently, strong pulses were not fully reproduced at stations northeast of Subevent 3 such as KMM004, where the effect of rupture directivity was significant, while the amplitude was well reproduced at most of the other stations. This result indicates the necessity of improving the pseudo point-source model, for example by introducing an azimuth-dependent corner frequency, so that it can incorporate the effect of rupture directivity.
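As a rough illustration of the source term the pseudo point-source model combines with path, site-amplification, and phase factors, the sketch below evaluates a standard omega-square acceleration source spectrum for a single subevent. The moment, corner frequency, and scaling constant are placeholder values, not parameters from the Kumamoto simulation.

```python
import numpy as np

def omega_square_acceleration_spectrum(f, m0, fc, c=1.0):
    """Acceleration source spectrum |A(f)| of the omega-square model.

    Displacement spectrum: Omega(f) = C * M0 / (1 + (f/fc)**2)
    Acceleration spectrum: |A(f)| = (2*pi*f)**2 * Omega(f)
    C lumps radiation pattern, density, shear-wave speed, etc. (placeholder here).
    """
    displacement = c * m0 / (1.0 + (f / fc) ** 2)
    return (2.0 * np.pi * f) ** 2 * displacement

# Hypothetical subevent: seismic moment in N*m, corner frequency in Hz
f = np.logspace(-1, 1.5, 200)          # 0.1-30 Hz
spec = omega_square_acceleration_spectrum(f, m0=1.0e18, fc=0.5)
# At f >> fc the acceleration spectrum flattens, as expected for an omega-square source
print(spec[:3], spec[-3:])
```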
Rounds, Stewart A.
2007-01-01
Water temperature is an important factor influencing the migration, rearing, and spawning of several important fish species in rivers of the Pacific Northwest. To protect these fish populations and to fulfill its responsibilities under the Federal Clean Water Act, the Oregon Department of Environmental Quality set a water temperature Total Maximum Daily Load (TMDL) in 2006 for the Willamette River and the lower reaches of its largest tributaries in northwestern Oregon. As a result, the thermal discharges of the largest point sources of heat to the Willamette River now are limited at certain times of the year, riparian vegetation has been targeted for restoration, and upstream dams are recognized as important influences on downstream temperatures. Many of the prescribed point-source heat-load allocations are sufficiently restrictive that management agencies may need to expend considerable resources to meet those allocations. Trading heat allocations among point-source dischargers may be a more economical and efficient means of meeting the cumulative point-source temperature limits set by the TMDL. The cumulative nature of these limits, however, precludes simple one-to-one trades of heat from one point source to another; a more detailed spatial analysis is needed. In this investigation, the flow and temperature models that formed the basis of the Willamette temperature TMDL were used to determine a spatially indexed 'heating signature' for each of the modeled point sources, and those signatures then were combined into a user-friendly, spreadsheet-based screening tool. The Willamette River Point-Source Heat-Trading Tool allows the user to increase or decrease the heating signature of each source and thereby evaluate the effects of a wide range of potential point-source heat trades. The predictions of the Trading Tool were verified by running the Willamette flow and temperature models under four different trading scenarios, and the predictions typically were accurate to within about 0.005 degrees Celsius (°C). In addition to assessing the effects of point-source heat trades, the models were used to evaluate the temperature effects of several shade-restoration scenarios. Restoration of riparian shade along the entire Long Tom River, from its mouth to Fern Ridge Dam, was calculated to have a small but significant effect on daily maximum temperatures in the main-stem Willamette River, on the order of 0.03°C where the Long Tom River enters the Willamette River, and diminishing downstream. Model scenarios also were run to assess the effects of restoring selected 5-mile reaches of riparian vegetation along the main-stem Willamette River from river mile (RM) 176.80, just upstream of the point where the McKenzie River joins the Willamette River, to RM 116.87 near Albany, which is one location where cumulative point-source heating effects are at a maximum. Restoration of riparian vegetation along the main-stem Willamette River was shown by model runs to have a significant local effect on daily maximum river temperatures (0.046 to 0.194°C) at the site of restoration. The magnitude of the cooling depends on many factors including river width, flow, time of year, and the difference in vegetation characteristics between current and restored conditions.
Downstream of the restored reach, the cooling effects are complex and have a nodal nature: at one-half day of travel time downstream, shade restoration has little effect on daily maximum temperature because water passes the restoration site at night; at 1 full day of travel time downstream, cooling effects increase to a second, diminished maximum. Such spatial complexities may complicate the trading of heat allocations between point and nonpoint sources. Upstream dams have an important effect on water temperature in the Willamette River system as a result of augmented flows as well as modified temperature releases over the course of the summer and autumn. The TMDL was formulated prior t
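The Trading Tool is described above as combining spatially indexed heating signatures that scale with each source's heat load. A minimal sketch of that kind of linear superposition is given below; the river-mile grid, signature arrays, and trade factors are hypothetical and are not values from the Willamette models.

```python
import numpy as np

# Hypothetical heating signatures: temperature increase (degrees C) caused by each
# point source at a set of river-mile locations, at the source's baseline heat load.
river_miles = np.array([185.0, 150.0, 116.9, 80.0, 40.0])
signatures = {
    "source_A": np.array([0.00, 0.020, 0.015, 0.010, 0.006]),
    "source_B": np.array([0.00, 0.000, 0.030, 0.020, 0.012]),
}

def traded_temperature_effect(signatures, load_factors):
    """Scale each source's signature by its traded heat load (1.0 = baseline)
    and sum to estimate the cumulative temperature effect along the river."""
    total = np.zeros_like(next(iter(signatures.values())))
    for name, sig in signatures.items():
        total += load_factors.get(name, 1.0) * sig
    return total

# Example trade: source_A reduces its heat load by 40%, source_B increases by 20%
effect = traded_temperature_effect(signatures, {"source_A": 0.6, "source_B": 1.2})
for rm, dT in zip(river_miles, effect):
    print(f"RM {rm:6.1f}: {dT:+.3f} degC")
```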
Ouwehand, Kim; van Gog, Tamara; Paas, Fred
2016-10-01
Research showed that source memory functioning declines with ageing. Evidence suggests that encoding visual stimuli with manual pointing in addition to visual observation can have a positive effect on spatial memory compared with visual observation only. The present study investigated whether pointing at picture locations during encoding would lead to better spatial source memory than naming (Experiment 1) and visual observation only (Experiment 2) in young and older adults. Experiment 3 investigated whether response modality during the test phase would influence spatial source memory performance. Experiments 1 and 2 supported the hypothesis that pointing during encoding led to better source memory for picture locations than naming or observation only. Young adults outperformed older adults on the source memory but not the item memory task in both Experiments 1 and 2. In Experiments 1 and 2, participants manually responded in the test phase. Experiment 3 showed that if participants had to verbally respond in the test phase, the positive effect of pointing compared with naming during encoding disappeared. The results suggest that pointing at picture locations during encoding can enhance spatial source memory in both young and older adults, but only if the response modality is congruent in the test phase.
Differentiating Impacts of Watershed Development from Superfund Sites on Stream Macroinvertebrates
Urbanization effect models were developed and verified at whole watershed scales to predict and differentiate between effects on aquatic life from diffuse, non-point source (NPS) urbanization in the watershed and effects of known local, site-specific origin point sources, contami...
[A landscape ecological approach for urban non-point source pollution control].
Guo, Qinghai; Ma, Keming; Zhao, Jingzhu; Yang, Liu; Yin, Chengqing
2005-05-01
Urban non-point source pollution is a new problem that has appeared with the speeding up of urbanization. The particularity of urban land use and the increase in impervious surface area make urban non-point source pollution differ from agricultural non-point source pollution and more difficult to control. Best Management Practices (BMPs) are the effective practices commonly applied in controlling urban non-point source pollution, mainly adopting local remediation practices to control the pollutants in surface runoff. Because of the close relationship between urban land use patterns and non-point source pollution, it is rational to combine landscape ecological planning with local BMPs to control urban non-point source pollution. This requires, first, analyzing and evaluating the influence of landscape structure on water bodies, pollution sources and pollutant removal processes, in order to define the relationships between landscape spatial pattern and non-point source pollution and to identify the key polluted areas; and second, adjusting the existing landscape structure and/or adding new landscape elements to form a new landscape pattern, and combining landscape planning and management by applying BMPs within the planning to improve urban landscape heterogeneity and to control urban non-point source pollution.
In order to effectively control inputs of contamination to coastal recreational waters, an improved understanding of the impact of both point and non-point sources of urban runoff is needed. In this study, we focused on the effect of non-point source urban runoff on the enterococ...
Using EMAP data from the NE Wadeable Stream Survey and state datasets (CT, ME), assessment tools were developed to predict diffuse NPS effects from watershed development and distinguish these from local impacts (point sources, contaminated sediments). Classification schemes were...
Assessment tools are being developed to predict diffuse NPS effects from watershed development and distinguish these from local impacts (point sources, contaminated sediments). Using EMAP data from the New England Wadeable Stream Survey and two state datasets (CT, ME), we are de...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kobulnicky, Henry A.; Alexander, Michael J.; Babler, Brian L.
We characterize the completeness of point source lists from Spitzer Space Telescope surveys in the four Infrared Array Camera (IRAC) bandpasses, emphasizing the Galactic Legacy Infrared Mid-Plane Survey Extraordinaire (GLIMPSE) programs (GLIMPSE I, II, 3D, 360; Deep GLIMPSE) and their resulting point source Catalogs and Archives. The analysis separately addresses effects of incompleteness resulting from high diffuse background emission and incompleteness resulting from point source confusion (i.e., crowding). An artificial star addition and extraction analysis demonstrates that completeness is strongly dependent on local background brightness and structure, with high-surface-brightness regions suffering up to five magnitudes of reduced sensitivity to point sources. This effect is most pronounced at the IRAC 5.8 and 8.0 μm bands where UV-excited polycyclic aromatic hydrocarbon emission produces bright, complex structures (photodissociation regions). With regard to diffuse background effects, we provide the completeness as a function of stellar magnitude and diffuse background level in graphical and tabular formats. These data are suitable for estimating completeness in the low-source-density limit in any of the four IRAC bands in GLIMPSE Catalogs and Archives and some other Spitzer IRAC programs that employ similar observational strategies and are processed by the GLIMPSE pipeline. By performing the same analysis on smoothed images we show that the point source incompleteness is primarily a consequence of structure in the diffuse background emission rather than photon noise. With regard to source confusion in the high-source-density regions of the Galactic Plane, we provide figures illustrating the 90% completeness levels as a function of point source density at each band. We caution that completeness of the GLIMPSE 360/Deep GLIMPSE Catalogs is suppressed relative to the corresponding Archives as a consequence of rejecting stars that lie in the point-spread function wings of saturated sources. This effect is minor in regions of low saturated star density, such as toward the Outer Galaxy; it is significant along sightlines having a high density of saturated sources, especially for Deep GLIMPSE and other programs observing closer to the Galactic center using 12 s or longer exposure times.
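A schematic version of an artificial star addition and extraction test is sketched below: synthetic sources of known magnitude are injected onto a diffuse background, a toy detection criterion is applied, and the recovered fraction per magnitude bin gives the completeness. The noise model, background levels, zeropoint, and threshold are invented stand-ins, not the GLIMPSE pipeline's photometry.

```python
import numpy as np

rng = np.random.default_rng(0)

def inject_and_recover(background, mags, zeropoint=20.0, noise_sigma=5.0, snr_cut=5.0):
    """Toy artificial-star test: for each magnitude, inject a source onto the
    diffuse background, add noise, and count it as recovered if its peak exceeds
    snr_cut times the effective noise (which grows with background level)."""
    recovered = []
    for m in mags:
        flux = 10 ** (-0.4 * (m - zeropoint))            # counts for this magnitude
        effective_noise = np.sqrt(noise_sigma**2 + 0.1 * background)
        detected = (flux + rng.normal(0.0, effective_noise)) > snr_cut * effective_noise
        recovered.append(detected)
    return np.array(recovered)

mags = np.repeat(np.arange(10.0, 17.0, 0.5), 200)        # many trials per magnitude bin
low_bg = inject_and_recover(background=50.0, mags=mags)
high_bg = inject_and_recover(background=5000.0, mags=mags)

for bin_mag in np.unique(mags):
    sel = mags == bin_mag
    print(f"m={bin_mag:4.1f}  completeness: low bg {low_bg[sel].mean():.2f}  "
          f"high bg {high_bg[sel].mean():.2f}")
```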
Peng, Nie; Bang-Fa, Ni; Wei-Zhi, Tian
2013-02-01
Application of the effective interaction depth (EID) principle for parametric normalization of full energy peak efficiencies at different counting positions, originally developed for quasi-point sources, has been extended to bulky sources (within Ø30 mm × 40 mm) with arbitrary matrices. It is also proved that the EID function for a quasi-point source can be directly used for cylindrical bulky sources (within Ø30 mm × 40 mm) with the geometric center as the effective point source for low atomic number (Z) and low density (D) media and high energy γ-rays. It is also found that in general the EID for bulky sources depends on the Z and D of the medium and the energy of the γ-rays in question. In addition, the EID principle was theoretically verified by MCNP calculations.
Sadeghi, Mohammad Hosein; Sina, Sedigheh; Mehdizadeh, Amir; Faghihi, Reza; Moharramzadeh, Vahed; Meigooni, Ali Soleimani
2018-02-01
The dosimetry procedure by simple superposition accounts only for the self-shielding of the source and does not take into account the attenuation of photons by the applicators. The purpose of this investigation is to estimate the effects of the tandem and ovoid applicator on the dose distribution inside the phantom by MCNP5 Monte Carlo simulations. In this study, the superposition method is used to obtain the dose distribution in the phantom without the applicator for a typical gynecological brachytherapy treatment (superposition-1). Then, the sources are simulated inside the tandem and ovoid applicator to identify the effect of applicator attenuation (superposition-2), and the doses at points A and B and at the bladder and rectum were compared with the results of superposition-1. The exact dwell positions and dwell times of the source, and the positions of the dosimetry points, were determined from the images and treatment data of an adult woman patient from a cancer center. The MCNP5 Monte Carlo (MC) code was used for simulation of the phantoms, applicators, and sources. The results of this study showed no significant differences between the superposition method and the MC simulations at the different dosimetry points. The difference at all important dosimetry points was found to be less than 5%. According to the results, applicator attenuation has no significant effect on the calculated point doses; the superposition method, which adds the dose of each source obtained by MC simulation, can estimate the dose to points A, B, the bladder, and the rectum with good accuracy.
NASA Astrophysics Data System (ADS)
Li, Jia; Shen, Hua; Zhu, Rihong; Gao, Jinming; Sun, Yue; Wang, Jinsong; Li, Bo
2018-06-01
The precision of measurements of aspheric and freeform surfaces remains the primary factor restricting their manufacture and application. One effective means of measuring such surfaces involves using reference or probe beams with angle modulation, such as the tilted-wave interferometer (TWI). It is necessary to improve measurement efficiency by obtaining the optimum point source array for each test piece before TWI measurement. To form a point source array based on the gradients of different surfaces under test, we established a mathematical model describing the relationship between the point source array and the test surface. However, the optimal point sources are irregularly distributed. In order to achieve a flexible point source array matched to the gradient of the test surface, a novel interference setup using a fiber array is proposed, in which every point source can be independently switched on and off. Simulations and actual measurement examples of two different surfaces are given in this paper to verify the mathematical model. Finally, we performed an experiment testing an off-axis ellipsoidal surface that proved the validity of the proposed interference system.
Bueno, I; Williams-Nguyen, J; Hwang, H; Sargeant, J M; Nault, A J; Singer, R S
2018-02-01
Point sources such as wastewater treatment plants and agricultural facilities may have a role in the dissemination of antibiotic-resistant bacteria (ARB) and antibiotic resistance genes (ARG). To analyse the evidence for increases in ARB in the natural environment associated with these point sources of ARB and ARG, we conducted a systematic review. We evaluated 5,247 records retrieved through database searches, including studies that ascertained either ARG or ARB outcomes. All studies were subjected to a screening process to assess relevance to the review question and the methodology used to address it. A risk of bias assessment was conducted on the final pool of studies included in the review. This article summarizes the evidence only for those studies with ARB outcomes (n = 47). Thirty-five studies were at high (n = 11) or unclear (n = 24) risk of bias in the estimation of source effects due to lack of information and/or failure to control for confounders. Statistical analysis was used in ten studies, of which one assessed the effect of multiple sources using modelling approaches; none reported effect measures. Most studies reported higher ARB prevalence or concentration downstream of or near the source. However, this evidence was primarily descriptive, and it could not be concluded that there is a clear impact of point sources on increases in ARB in the environment. To quantify increases in ARB in the environment due to specific point sources, there is a need for studies that stress study design, control of biases, and analytical tools that provide effect measure estimates.
NASA Astrophysics Data System (ADS)
Zhang, Tianhe C.; Grill, Warren M.
2010-12-01
Deep brain stimulation (DBS) has emerged as an effective treatment for movement disorders; however, the fundamental mechanisms by which DBS works are not well understood. Computational models of DBS can provide insights into these fundamental mechanisms and typically require two steps: calculation of the electrical potentials generated by DBS and, subsequently, determination of the effects of the extracellular potentials on neurons. The objective of this study was to assess the validity of using a point source electrode to approximate the DBS electrode when calculating the thresholds and spatial distribution of activation of a surrounding population of model neurons in response to monopolar DBS. Extracellular potentials in a homogenous isotropic volume conductor were calculated using either a point current source or a geometrically accurate finite element model of the Medtronic DBS 3389 lead. These extracellular potentials were coupled to populations of model axons, and thresholds and spatial distributions were determined for different electrode geometries and axon orientations. Median threshold differences between DBS and point source electrodes for individual axons varied between -20.5% and 9.5% across all orientations, monopolar polarities and electrode geometries utilizing the DBS 3389 electrode. Differences in the percentage of axons activated at a given amplitude by the point source electrode and the DBS electrode were between -9.0% and 12.6% across all monopolar configurations tested. The differences in activation between the DBS and point source electrodes occurred primarily in regions close to conductor-insulator interfaces and around the insulating tip of the DBS electrode. The robustness of the point source approximation in modeling several special cases—tissue anisotropy, a long active electrode and bipolar stimulation—was also examined. Under the conditions considered, the point source was shown to be a valid approximation for predicting excitation of populations of neurons in response to DBS.
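For context, the point-source approximation referred to above gives the extracellular potential in a homogeneous, isotropic volume conductor as V = I/(4πσr). The sketch below evaluates that expression at nodes along a hypothetical axon; the current, conductivity, and geometry are illustrative choices, not the study's parameters.

```python
import numpy as np

def point_source_potential(i_amp, sigma, electrode_xyz, points_xyz):
    """Extracellular potential (volts) of a monopolar point current source in a
    homogeneous, isotropic volume conductor: V = I / (4*pi*sigma*r)."""
    r = np.linalg.norm(points_xyz - electrode_xyz, axis=1)
    return i_amp / (4.0 * np.pi * sigma * r)

# Hypothetical setup: -1 mA cathodic source, sigma = 0.2 S/m (gray-matter-like),
# potentials sampled along a straight axon offset 1 mm laterally from the electrode.
electrode = np.array([0.0, 0.0, 0.0])
axon_nodes = np.column_stack([
    np.full(21, 1.0e-3),            # x = 1 mm offset
    np.zeros(21),
    np.linspace(-5e-3, 5e-3, 21),   # z along the axon, 0.5 mm node spacing
])
v_ext = point_source_potential(-1.0e-3, 0.2, electrode, axon_nodes)
print(v_ext * 1e3)  # millivolts at each node; these drive the cable-model activation
```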
NASA Astrophysics Data System (ADS)
Song, Seok Goo; Kwak, Sangmin; Lee, Kyungbook; Park, Donghee
2017-04-01
Predicting the intensity and variability of strong ground motions is a critical element of seismic hazard assessment. The characteristics and variability of the earthquake rupture process may be a dominant factor in determining the intensity and variability of near-source strong ground motions. Song et al. (2014) demonstrated that the variability of earthquake rupture scenarios could be effectively quantified in the framework of 1-point and 2-point statistics of earthquake source parameters, constrained by rupture dynamics and past events. The developed pseudo-dynamic source modeling schemes were also validated against the recorded ground motion data of past events and empirical ground motion prediction equations (GMPEs) on the broadband platform (BBP) developed by the Southern California Earthquake Center (SCEC). Recently, we improved the computational efficiency of the developed pseudo-dynamic source modeling scheme by adopting a nonparametric co-regionalization algorithm originally introduced and applied in geostatistics. We also investigated the effect of the earthquake rupture process on near-source ground motion characteristics in the framework of 1-point and 2-point statistics, focusing particularly on the forward directivity region. Finally, we discuss whether pseudo-dynamic source modeling can reproduce the variability (standard deviation) of empirical GMPEs, and the efficiency of 1-point and 2-point statistics in addressing the variability of ground motions.
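One common way to realize target 2-point statistics for a source parameter such as slip is to filter white noise in the wavenumber domain so that the field acquires the desired spatial correlation. The sketch below does this for an isotropic, exponential-type correlation on a rectangular fault grid; the grid size, correlation length, and slip statistics are placeholders, and the actual pseudo-dynamic scheme also models cross-correlations among source parameters, which is not shown here.

```python
import numpy as np

def correlated_slip_field(nx, nz, dx, corr_len, mean_slip, std_slip, seed=0):
    """Generate a random slip field with an approximately isotropic, exponential-type
    spatial correlation by filtering white noise in the wavenumber domain."""
    rng = np.random.default_rng(seed)
    kx = np.fft.fftfreq(nx, d=dx) * 2.0 * np.pi
    kz = np.fft.fftfreq(nz, d=dx) * 2.0 * np.pi
    kxx, kzz = np.meshgrid(kx, kz, indexing="ij")
    k = np.sqrt(kxx**2 + kzz**2)
    # Amplitude filter ~ sqrt(power spectrum) of an exponential-type correlation
    amp = corr_len / (1.0 + (k * corr_len) ** 2) ** 0.75
    noise = rng.normal(size=(nx, nz))
    field = np.real(np.fft.ifft2(np.fft.fft2(noise) * amp))
    field = (field - field.mean()) / field.std()          # fix the 1-point statistics
    return np.clip(mean_slip + std_slip * field, 0.0, None)

# Hypothetical fault: 32 km x 16 km grid at 0.5 km spacing, 5 km correlation length
slip = correlated_slip_field(nx=64, nz=32, dx=0.5, corr_len=5.0,
                             mean_slip=1.0, std_slip=0.5)
print(slip.shape, slip.mean(), slip.std())
```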
Illusion induced overlapped optics.
Zang, XiaoFei; Shi, Cheng; Li, Zhou; Chen, Lin; Cai, Bin; Zhu, YiMing; Zhu, HaiBin
2014-01-13
The traditional transformation-based cloak seems as though it can only hide objects by bending the incident electromagnetic waves around the hidden region. In this paper, we prove that invisibility cloaks can be applied to realize overlapped optics. No matter how many in-phase point sources are located in the hidden region, all of them can overlap each other (this can be considered an illusion effect), leading to a perfect optical interference effect. In addition, a singular-parameter-independent cloak is also designed to obtain quasi-overlapped optics. Even more striking, if N identical, separated, in-phase point sources are covered with the illusion media, the total power outside the transformation region is N²I₀ rather than NI₀ (where I₀ is the power of a single point source and N is the number of point sources), which seems to violate the law of conservation of energy. A theoretical model based on the interference effect is proposed to interpret the total power of these two kinds of overlapped optics effects. Our investigation may have wide applications in high power coherent laser beams, multiple laser diodes, and so on.
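The N²I₀ scaling quoted above follows from coherent superposition: when identical in-phase sources are effectively overlapped, their field amplitudes add before squaring, so the intensity everywhere, and hence the total power, scales as N² times that of one source, whereas incoherent sources would simply add intensities. A numerical check of this bookkeeping, in arbitrary units, is sketched below.

```python
# Coherent superposition of N identical, co-located, in-phase point sources:
# the complex field amplitudes add, so intensity |sum(E)|^2 = N^2 * |E0|^2
# everywhere, giving total power N^2 * I0 rather than N * I0 (incoherent case).
E0 = 1.0 + 0.0j                      # arbitrary single-source field amplitude
I0 = abs(E0) ** 2
for n in (1, 2, 4, 8):
    coherent = abs(n * E0) ** 2      # overlapped in-phase sources: fields add
    incoherent = n * I0              # independent sources: intensities add
    print(f"N={n}: coherent {coherent:.0f} I0, incoherent {incoherent:.0f} I0")
```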
On the assessment of spatial resolution of PET systems with iterative image reconstruction
NASA Astrophysics Data System (ADS)
Gong, Kuang; Cherry, Simon R.; Qi, Jinyi
2016-03-01
Spatial resolution is an important metric for performance characterization in PET systems. Measuring spatial resolution is straightforward with a linear reconstruction algorithm, such as filtered backprojection, and can be performed by reconstructing a point source scan and calculating the full-width-at-half-maximum (FWHM) along the principal directions. With the widespread adoption of iterative reconstruction methods, it is desirable to quantify the spatial resolution using an iterative reconstruction algorithm. However, the task can be difficult because the reconstruction algorithms are nonlinear and the non-negativity constraint can artificially enhance the apparent spatial resolution if a point source image is reconstructed without any background. Thus, it was recommended that a background should be added to the point source data before reconstruction for resolution measurement. However, there has been no detailed study on the effect of the point source contrast on the measured spatial resolution. Here we use point source scans from a preclinical PET scanner to investigate the relationship between measured spatial resolution and the point source contrast. We also evaluate whether the reconstruction of an isolated point source is predictive of the ability of the system to resolve two adjacent point sources. Our results indicate that when the point source contrast is below a certain threshold, the measured FWHM remains stable. Once the contrast is above the threshold, the measured FWHM monotonically decreases with increasing point source contrast. In addition, the measured FWHM also monotonically decreases with iteration number for maximum likelihood estimate. Therefore, when measuring system resolution with an iterative reconstruction algorithm, we recommend using a low-contrast point source and a fixed number of iterations.
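A minimal version of the recommended measurement, profiling a reconstructed point source that sits on a non-zero background and reading off the FWHM by linear interpolation at half the background-subtracted peak, is sketched below using a synthetic 1-D profile; the Gaussian shape, pixel size, background level, and contrast are illustrative only.

```python
import numpy as np

def fwhm_with_background(profile, pixel_size_mm):
    """FWHM of a 1-D point-source profile sitting on a uniform background,
    estimated by linear interpolation at half the background-subtracted peak."""
    background = np.median(np.concatenate([profile[:5], profile[-5:]]))
    net = profile - background
    half = net.max() / 2.0
    above = np.where(net >= half)[0]
    left, right = above[0], above[-1]
    # interpolate the half-maximum crossings on each side of the peak
    xl = left - (net[left] - half) / (net[left] - net[left - 1])
    xr = right + (net[right] - half) / (net[right] - net[right + 1])
    return (xr - xl) * pixel_size_mm

# Synthetic profile: Gaussian point source (sigma = 0.6 mm) on a flat background,
# at a modest point-source-to-background contrast.
x = np.arange(-10, 10.5, 0.5)                    # 0.5 mm pixels
profile = 100.0 + 400.0 * np.exp(-x**2 / (2 * 0.6**2))
print(f"measured FWHM ~ {fwhm_with_background(profile, 0.5):.2f} mm "
      f"(true Gaussian FWHM = {2.355 * 0.6:.2f} mm)")
```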
MANAGING MICROBIAL CONTAMINATION IN URBAN WATERSHEDS
This paper presents different approaches for controlling pathogen contamination in urban watersheds for contamination resulting from point and diffuse sources. Point sources of pathogens can be treated by a disinfection technology of known effectiveness, and a desired reduction ...
Xu, Hua-Shan; Xu, Zong-Xue; Liu, Pin
2013-03-01
One of the key techniques in establishing and implementing a TMDL (total maximum daily load) is to use a hydrological model to quantify non-point source pollutant loads, establish BMP scenarios, and reduce non-point source pollutant loads. Non-point source pollutant loads in different hydrologic years (wet, normal, and dry) were estimated using the SWAT model in the Zhangweinan River basin, and the spatial distribution characteristics of the loads were analyzed on the basis of the simulation results. During wet years, total nitrogen (TN) and total phosphorus (TP) accounted for 0.07% and 27.24% of the total non-point source pollutant loads, respectively. Spatially, agricultural and residential land with steep slopes are the regions that contribute the most non-point source pollutant loads in the basin. Relative to the non-point source pollutant loads of the baseline period, 47 BMP scenarios were set up to simulate the reduction efficiency of different BMP scenarios for five kinds of pollutants (organic nitrogen, organic phosphorus, nitrate nitrogen, dissolved phosphorus, and mineral phosphorus) in 8 priority-control subbasins. Constructing vegetation-type ditches was identified as the best measure to reduce TN and TP by comparing the cost-effectiveness of the different BMP scenarios; the costs of unit pollutant reduction are 16.11-151.28 yuan·kg⁻¹ for TN and 100-862.77 yuan·kg⁻¹ for TP, making it the most cost-effective measure among the 47 BMP scenarios. The results could provide a scientific basis and technical support for environmental protection and sustainable utilization of water resources in the Zhangweinan River basin.
Woodchip bioreactors effectively treat aquaculture effluent
USDA-ARS?s Scientific Manuscript database
Nutrients, in particular nitrogen and phosphorus, can create eutrophication problems in any watershed. Preventing water quality impairment requires controlling nutrients from both point-source and non-point source discharges. Woodchip bioreactors are one relatively new approach that can be utilized ...
NASA Astrophysics Data System (ADS)
Petr, Rodney; Bykanov, Alexander; Freshman, Jay; Reilly, Dennis; Mangano, Joseph; Roche, Maureen; Dickenson, Jason; Burte, Mitchell; Heaton, John
2004-08-01
A high average power dense plasma focus (DPF) x-ray point source has been used to produce ~70 nm line features in AlGaAs-based monolithic millimeter-wave integrated circuits (MMICs). The DPF source has produced up to 12 J per pulse of x-ray energy into 4π steradians at ~1 keV effective wavelength in ~2 Torr neon at pulse repetition rates up to 60 Hz, with an effective x-ray yield efficiency of ~0.8%. Plasma temperature and electron concentration are estimated from the x-ray spectrum to be ~170 eV and ~5×10¹⁹ cm⁻³, respectively. The x-ray point source utilizes solid-state pulse power technology to extend the operating lifetime of electrodes and insulators in the DPF discharge. By eliminating current reversals in the DPF head, an anode electrode has demonstrated a lifetime of more than 5 million shots. The x-ray point source has also been operated continuously for 8 h run times at 27 Hz average pulse recurrence frequency. Measurements of shock waves produced by the plasma discharge indicate that overpressure pulses must be attenuated before a collimator can be integrated with the DPF point source.
Multiband super-resolution imaging of graded-index photonic crystal flat lens
NASA Astrophysics Data System (ADS)
Xie, Jianlan; Wang, Junzhong; Ge, Rui; Yan, Bei; Liu, Exian; Tan, Wei; Liu, Jianjun
2018-05-01
Multiband super-resolution imaging of a point source is achieved by a graded-index photonic crystal flat lens. Calculations of six bands in a common photonic crystal (CPC) constructed with scatterers of different refractive indices show that super-resolution imaging of a point source can be realized by different physical mechanisms in three different bands. In the first band, the imaging of the point source is based on the far-field condition of spherical waves, while in the second band it is based on a negative effective refractive index and exhibits higher imaging quality than that of the CPC. In the fifth band, however, the imaging of the point source is mainly based on negative refraction of anisotropic equi-frequency surfaces. This method of employing different physical mechanisms to achieve multiband super-resolution imaging of a point source is highly meaningful for the field of imaging.
Vieno, M; Dore, A J; Bealey, W J; Stevenson, D S; Sutton, M A
2010-01-15
An atmospheric transport-chemistry model is applied to investigate the effects of source configuration in simulating regional sulphur deposition footprints from elevated point sources. Dry and wet deposition of sulphur is calculated for each of the 69 largest point sources in the UK. Deposition contributions for each point source are calculated for 2003, as well as for a 2010 emissions scenario. The 2010 emissions scenario has been chosen to simulate the Gothenburg Protocol emission scenario. Point source location is found to be a major driver of the dry/wet deposition ratio for each deposition footprint, with increased precipitation scavenging of SOₓ in hill areas resulting in a larger fraction of the emitted sulphur being deposited within the UK for sources located near these areas. This reduces exported transboundary pollution but, associated with the occurrence of sensitive soils in hill areas, increases the domestic threat of soil acidification. The simulation of plume rise using individual stack parameters for each point source demonstrates a high sensitivity of SO₂ surface concentration to effective source height. This emphasises the importance of using site-specific information for each major stack, which is rarely included in regional atmospheric pollution models due to the difficulty in obtaining the required input data. The simulations quantify how the fraction of emitted SOₓ exported from the UK increases with source magnitude, effective source height and easterly location. The modelled reduction in SOₓ emissions between 2003 and 2010 resulted in a smaller fraction being exported, with the result that the reductions in SOₓ deposition to the UK are less than proportionate to the emission reduction. This non-linearity is associated with a relatively larger fraction of the SO₂ being converted to sulphate aerosol in the 2010 scenario, in the presence of ammonia. The effect results in less-than-proportional UK benefits from reducing SO₂ emissions, together with greater-than-proportional benefits in reducing the export of UK SO₂ emissions.
Javens, Gregory; Jashnsaz, Hossein; Pressé, Steve
2018-04-30
Sharp chemoattractant (CA) gradient variations near food sources may give rise to dramatic behavioral changes of bacteria neighboring these sources. For instance, marine bacteria exhibiting run-reverse motility are known to form distinct bands around patches (large sources) of chemoattractant such as nutrient-soaked beads while run-and-tumble bacteria have been predicted to exhibit a 'volcano effect' (spherical shell-shaped density) around a small (point) source of food. Here we provide the first minimal model of banding for run-reverse bacteria and show that, while banding and the volcano effect may appear superficially similar, they are different physical effects manifested under different source emission rate (and thus effective source size). More specifically, while the volcano effect is known to arise around point sources from a bacterium's temporal differentiation of signal (and corresponding finite integration time), this effect alone is insufficient to account for banding around larger patches as bacteria would otherwise cluster around the patch without forming bands at some fixed radial distance. In particular, our model demonstrates that banding emerges from the interplay of run-reverse motility and saturation of the bacterium's chemoreceptors to CA molecules and our model furthermore predicts that run-reverse bacteria susceptible to banding behavior should also exhibit a volcano effect around sources with smaller emission rates.
NASA Technical Reports Server (NTRS)
Krasowski, Michael J. (Inventor); Prokop, Norman F. (Inventor)
2017-01-01
A current source logic gate with depletion mode field effect transistors ("FETs") and resistors may include a current source, a current steering switch input stage, and a resistor divider level shifting output stage. The current source may include a transistor and a current source resistor. The current steering switch input stage may include a transistor to steer current to set an output stage bias point depending on an input logic signal state. The resistor divider level shifting output stage may include a first resistor and a second resistor to set the output stage bias point and produce valid output logic signal states. The transistor of the current steering switch input stage may function as a switch to provide at least two operating points.
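To make the biasing arithmetic concrete, the sketch below works through a hypothetical operating point: a fixed tail current is either steered into the resistor-divider output stage or away from it, and the divider sets the two output logic levels. The supply, current, and resistor values are invented for illustration and are not taken from the invention's disclosure.

```python
# Hypothetical current-source logic operating point (not values from the invention):
# a depletion-mode FET tail source sinks I_TAIL; the input stage steers it either
# into the output resistor divider (input "high") or away from it (input "low").
V_DD = 5.0       # supply voltage, volts
I_TAIL = 1.0e-3  # tail current set by the current-source FET and resistor, amps
R1 = 2.0e3       # upper divider resistor, ohms
R2 = 3.0e3       # lower divider resistor, ohms

def output_voltage(input_high: bool) -> float:
    """Output node of the resistor-divider level-shifting stage.

    When the steered current is drawn from the divider node it drops extra voltage
    across the divider's Thevenin resistance, pulling the output low; otherwise the
    node rests near its unloaded level. Simple Ohm's-law model, ignoring FET effects."""
    unloaded = V_DD * R2 / (R1 + R2)                      # divider level, no steered current
    if input_high:
        return unloaded - I_TAIL * (R1 * R2 / (R1 + R2))  # current pulled from the node
    return unloaded

print("logic-low output :", output_voltage(True), "V")
print("logic-high output:", output_voltage(False), "V")
```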
- Many of the nation's rivers, lakes, and estuaries are impaired with fecal indicator bacteria. - Fecal contamination from point and non-point sources is responsible for the presence of fecal pathogens in source and recreational waters - Effective compliance with TMDL regulatio...
Mainhagu, Jon; Morrison, C.; Truex, Michael J.; ...
2014-08-05
A method termed vapor-phase tomography has recently been proposed to characterize the distribution of volatile organic contaminant mass in vadose-zone source areas, and to measure associated three-dimensional distributions of local contaminant mass discharge. The method is based on measuring the spatial variability of vapor flux, and thus inherent to its effectiveness is the premise that the magnitudes and temporal variability of vapor concentrations measured at different monitoring points within the interrogated area will be a function of the geospatial positions of the points relative to the source location. A series of flow-cell experiments was conducted to evaluate this premise. A well-defined source zone was created by injection and extraction of a non-reactive gas (SF₆). Spatial and temporal concentration distributions obtained from the tests were compared to simulations produced with a mathematical model describing advective and diffusive transport. Tests were conducted to characterize both areal and vertical components of the application. Decreases in concentration over time were observed for monitoring points located on the opposite side of the source zone from the local extraction point, whereas increases were observed for monitoring points located between the local extraction point and the source zone. The results illustrate that comparison of temporal concentration profiles obtained at various monitoring points gives a general indication of the source location with respect to the extraction and monitoring points.
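The premise tested above, that the temporal concentration trend at a monitoring point indicates where it lies relative to the source zone and the extraction point, can be captured in a few lines: fit a slope to each point's concentration time series and group the points by the sign of that slope. The time series below are fabricated for illustration and are not the flow-cell data.

```python
import numpy as np

def classify_monitoring_points(times, series):
    """Fit a linear trend to each monitoring point's concentration time series.
    A rising trend suggests the point lies between the extraction point and the
    source; a falling trend suggests it lies beyond the source (opposite side)."""
    result = {}
    for name, conc in series.items():
        slope = np.polyfit(times, conc, 1)[0]
        result[name] = "between extraction point and source" if slope > 0 else \
                       "opposite side of source from extraction point"
    return result

times = np.linspace(0.0, 10.0, 11)                 # hours, hypothetical sampling times
series = {
    "MP-1": 1.0 + 0.15 * times + np.random.default_rng(1).normal(0, 0.02, 11),
    "MP-2": 2.0 - 0.10 * times + np.random.default_rng(2).normal(0, 0.02, 11),
}
for point, verdict in classify_monitoring_points(times, series).items():
    print(point, "->", verdict)
```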
Point and Compact Hα Sources in the Interior of M33
NASA Astrophysics Data System (ADS)
Moody, J. Ward; Hintz, Eric G.; Joner, Michael D.; Roming, Peter W. A.; Hintz, Maureen L.
2017-12-01
A variety of interesting objects such as Wolf-Rayet stars, tight OB associations, planetary nebulae, X-ray binaries, etc., can be discovered as point or compact sources in Hα surveys. How these objects are distributed through a galaxy sheds light on the galaxy's star formation rate and history, mass distribution, and dynamics. The nearby galaxy M33 is an excellent place to study the distribution of Hα-bright point sources in a flocculent spiral galaxy. We have reprocessed an archived WIYN continuum-subtracted Hα image of the inner 6.5′ × 6.5′ of M33 and, employing both eye and machine searches, have tabulated sources with a flux greater than approximately 10⁻¹⁵ erg cm⁻² s⁻¹. We have effectively recovered previously mapped H II regions and have identified 152 unresolved point sources and 122 marginally resolved compact sources, of which 39 have not been previously identified in any archive. An additional 99 Hα sources were found to have sufficient archival flux values to generate a spectral energy distribution (SED). Using the SEDs, flux values, Hα flux values, and compactness, we classified 67 of these sources.
Stamer, J.K.; Cherry, R.N.; Faye, R.E.; Kleckner, R.L.
1978-01-01
On an average annual basis and during the storm period of March 12-15, 1976, nonpoint-source loads for most constituents were larger than point-source loads at the Whitesburg station, located on the Chattahoochee River about 40 miles downstream from Atlanta, GA. Most of the nonpoint-source constituent loads in the Atlanta to Whitesburg reach were from urban areas. Average annual point-source discharges accounted for about 50 percent of the dissolved nitrogen, total nitrogen, and total phosphorus loads and about 70 percent of the dissolved phosphorus loads at Whitesburg. During a low-flow period, June 1-2, 1977, five municipal point sources contributed 63 percent of the ultimate biochemical oxygen demand and 97 percent of the ammonium nitrogen loads at the Franklin station, at the upstream end of West Point Lake. Dissolved-oxygen concentrations of 4.1 to 5.0 milligrams per liter occurred in a 22-mile reach of the river downstream from Atlanta due about equally to nitrogenous and carbonaceous oxygen demands. The heat load from two thermoelectric powerplants caused a decrease in dissolved-oxygen concentration of about 0.2 milligrams per liter. Phytoplankton concentrations in West Point Lake, about 70 miles downstream from Atlanta, could exceed three million cells per milliliter during extended low-flow periods in the summer with present point-source phosphorus loads. (Woodard-USGS)
A spatial and seasonal assessment of river water chemistry across North West England.
Rothwell, J J; Dise, N B; Taylor, K G; Allott, T E H; Scholefield, P; Davies, H; Neal, C
2010-01-15
This paper presents information on the spatial and seasonal patterns of river water chemistry at approximately 800 sites in North West England based on data from the Environment Agency regional monitoring programme. Within a GIS framework, the linkages between average water chemistry (pH, sulphate, base cations, nutrients and metals), catchment characteristics (topography, land cover, soil hydrology, base flow index and geology), rainfall, deposition chemistry and geo-spatial information on discharge consents (point sources) are examined. Water quality maps reveal that there is a clear distinction between the uplands and lowlands. Upland waters are acidic and have low concentrations of base cations, explained by background geological sources and land cover. Localised high concentrations of metals occur in areas of the Cumbrian Fells which are subjected to mining effluent inputs. Nutrient concentrations are low in the uplands, with the exception of sites receiving effluent inputs from rural point sources. In the lowlands, both past and present human activities have a major impact on river water chemistry, especially in the urban and industrial heartlands of Greater Manchester, south Lancashire and Merseyside. Over 40% of the sites have average orthophosphate concentrations >0.1 mg-P l⁻¹. Results suggest that the dominant control on orthophosphate concentrations is point source contributions from sewage effluent inputs. Diffuse agricultural sources are also important, although this influence is masked by the impact of point sources. Average nitrate concentrations are linked to the coverage of arable land, although sewage effluent inputs have a significant effect on nitrate concentrations. Metal concentrations in the lowlands are linked to diffuse and point sources. The study demonstrates that point sources, as well as diffuse sources, need to be considered when targeting measures for the effective reduction of river nutrient concentrations. This issue is clearly important with regard to the European Union Water Framework Directive, eutrophication and river water quality.
NASA Technical Reports Server (NTRS)
Smith, Wayne Farrior
1973-01-01
The effect of finite source size on the power statistics in a reverberant room for pure tone excitation was investigated. Theoretical results indicate that the standard deviation of low frequency, pure tone finite sources is always less than that predicted by point source theory and considerably less when the source dimension approaches one-half an acoustic wavelength or greater. A supporting experimental study was conducted utilizing an eight inch loudspeaker and a 30 inch loudspeaker at eleven source positions. The resulting standard deviation of sound power output of the smaller speaker is in excellent agreement with both the derived finite source theory and existing point source theory, if the theoretical data is adjusted to account for experimental incomplete spatial averaging. However, the standard deviation of sound power output of the larger speaker is measurably lower than point source theory indicates, but is in good agreement with the finite source theory.
Chandra ACIS Sub-pixel Resolution
NASA Astrophysics Data System (ADS)
Kim, Dong-Woo; Anderson, C. S.; Mossman, A. E.; Allen, G. E.; Fabbiano, G.; Glotfelty, K. J.; Karovska, M.; Kashyap, V. L.; McDowell, J. C.
2011-05-01
We investigate how to achieve the best possible ACIS spatial resolution by binning in ACIS sub-pixels and applying an event repositioning algorithm after removing pixel randomization from the pipeline data. We quantitatively assess the improvement in spatial resolution by (1) measuring point source sizes and (2) detecting faint point sources. The size of a bright (but not piled-up), on-axis point source can be reduced by about 20-30%. With the improved resolution, we detect 20% more faint sources embedded in the extended, diffuse emission of a crowded field. We further discuss the false source rate of about 10% among the newly detected sources, using a few ultra-deep observations. We also find that the new algorithm does not introduce a grid structure by an aliasing effect for dithered observations and does not worsen the positional accuracy.
NASA Astrophysics Data System (ADS)
Zhang, S.; Tang, L.
2007-05-01
Panjiakou Reservoir is an important drinking water resource in the Haihe River Basin, Hebei Province, People's Republic of China. The upstream watershed area is about 35,000 square kilometers. Recently, water pollution in the reservoir has become more serious owing to non-point source as well as point source pollution in the upstream watershed. To effectively manage the reservoir and watershed and develop a plan to reduce pollutant loads, the loads of non-point and point source pollution and their distribution across the upstream watershed must be fully understood. The SWAT model is used to simulate the production and transport of non-point source pollutants in the upstream watershed of the Panjiakou Reservoir. The loads of non-point source pollutants are calculated for different hydrologic years and the spatial and temporal characteristics of non-point source pollution are studied. The stream network and the topographic characteristics of the stream network and sub-basins are derived from the DEM with ArcGIS software. The soil and land use data are reclassified and a soil physical properties database file is created for the model. The SWAT model was calibrated with observed data from several hydrologic monitoring stations in the study area. The calibration results show that the model performs fairly well. The calibrated model was then used to calculate the loads of non-point source pollutants for a wet year, a normal year and a dry year, respectively. The temporal and spatial distributions of flow, sediment and non-point source pollution were analyzed from the simulated results. The differences in the calculated results among the hydrologic years are dramatic. The non-point source pollution load is relatively large in the wet year and small in the dry year, since non-point source pollutants are mainly transported by runoff. The pollution load within a year is mainly produced in the flood season. Because SWAT is a distributed model, it is possible to view model output as it varies across the basin, so the critical areas and reaches can be identified in the study area. According to the simulation results, it is found that different land uses yield different results and that fertilization in the rainy season has an important impact on non-point source pollution. The limitations of the SWAT model are also discussed, and measures for the control and prevention of non-point source pollution for Panjiakou Reservoir are presented according to the analysis of the model results.
Generic effective source for scalar self-force calculations
NASA Astrophysics Data System (ADS)
Wardell, Barry; Vega, Ian; Thornburg, Jonathan; Diener, Peter
2012-05-01
A leading approach to the modeling of extreme mass ratio inspirals involves the treatment of the smaller mass as a point particle and the computation of a regularized self-force acting on that particle. In turn, this computation requires knowledge of the regularized retarded field generated by the particle. A direct calculation of this regularized field may be achieved by replacing the point particle with an effective source and solving directly a wave equation for the regularized field. This has the advantage that all quantities are finite and require no further regularization. In this work, we present a method for computing an effective source which is finite and continuous everywhere, and which is valid for a scalar point particle in arbitrary geodesic motion in an arbitrary background spacetime. We explain in detail various technical and practical considerations that underlie its use in several numerical self-force calculations. We consider as examples the cases of a particle in a circular orbit about Schwarzschild and Kerr black holes, and also the case of a particle following a generic timelike geodesic about a highly spinning Kerr black hole. We provide numerical C code for computing an effective source for various orbital configurations about Schwarzschild and Kerr black holes.
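In schematic form, the effective-source construction replaces the distributional source of the retarded field with a finite right-hand side for a regularized variable; the sketch below shows the structure for a scalar charge q, with a puncture field φ^P approximating the singular field and a window function W equal to 1 near the particle. Signs and normalizations are convention-dependent, so take this as a sketch of the idea rather than the paper's exact expressions.

```latex
% Sketch of the effective-source construction for a scalar charge q
% (signs and normalizations are convention-dependent).
\begin{align}
  \Box \phi &= -4\pi q \int \delta_4\bigl(x, z(\tau)\bigr)\, d\tau
      && \text{retarded field sourced by the point particle} \\
  \tilde{\phi}^{R} &\equiv \phi - W\,\phi^{P}
      && \text{$\phi^{P}$: puncture $\approx$ singular field; $W = 1$ near the particle} \\
  \Box \tilde{\phi}^{R} &= -4\pi q \int \delta_4\bigl(x, z(\tau)\bigr)\, d\tau
      \;-\; \Box\!\left(W\,\phi^{P}\right) \;\equiv\; S_{\mathrm{eff}}
      && \text{finite for a sufficiently accurate puncture} \\
  F_{\alpha} &\simeq q\, \nabla_{\alpha} \tilde{\phi}^{R}\big|_{x = z(\tau)}
      && \text{self-force from the regular remainder}
\end{align}
```

Because the puncture reproduces the singular field to sufficiently high order, the delta function cancels the most singular part of the box operator acting on Wφ^P, which is what leaves S_eff finite (and continuous, for a high-order puncture) everywhere.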
Zhou, Liang; Xu, Jian-Gang; Sun, Dong-Qi; Ni, Tian-Hua
2013-02-01
Agricultural non-point source pollution is an important cause of river deterioration. Thus, identifying and intensively controlling the key source areas is the most effective approach to non-point source pollution control. This study adopts an inventory method to analyze four kinds of pollution sources and their emission intensities of chemical oxygen demand (COD), total nitrogen (TN), and total phosphorus (TP) in 173 counties (cities, districts) in the Huaihe River Basin. The four pollution sources are livestock breeding, rural life, farmland cultivation, and aquaculture. The paper mainly addresses the identification of sensitive areas of non-point source pollution, key pollution sources, and their spatial distribution characteristics through clustering, sensitivity evaluation, and spatial analysis. A geographic information system (GIS) and SPSS were used to carry out the study. The results show that the COD, TN and TP emissions of agricultural non-point sources in the Huaihe River Basin in 2009 were 206.74 × 10⁴ t, 66.49 × 10⁴ t, and 8.74 × 10⁴ t, respectively; the emission intensities were 7.69, 2.47, and 0.32 t·hm⁻²; and the proportions of COD, TN, and TP emissions were 73%, 24%, and 3%. The major pollution sources of COD, TN and TP were livestock breeding and rural life; the sensitive areas and priority pollution control areas for non-point source pollution within the basin are sub-basins of the upper branches of the Huaihe River, such as the Shahe River, Yinghe River, Beiru River, Jialu River and Qingyi River; and livestock breeding is the key pollution source in the priority pollution control areas. Finally, the paper concludes that the rural life pollution type has the highest pollution contribution rate, while comprehensive pollution is a type that is hard to control.
A spatial model to aggregate point-source and nonpoint-source water-quality data for large areas
White, D.A.; Smith, R.A.; Price, C.V.; Alexander, R.B.; Robinson, K.W.
1992-01-01
More objective and consistent methods are needed to assess water quality for large areas. A spatial model that capitalizes on the topologic relationships among spatial entities is described for aggregating pollution sources from upstream drainage areas; it can be implemented on land surfaces having heterogeneous water-pollution effects. An infrastructure of stream networks and drainage basins, derived from 1:250,000-scale digital elevation models, defines the hydrologic system in this spatial model. The spatial relationships between point- and nonpoint-pollution sources and measurement locations are referenced to the hydrologic infrastructure with the aid of a geographic information system. A maximum-branching algorithm has been developed to simulate the effects of distance from a pollutant source to an arbitrary downstream location, a function traditionally employed in deterministic water-quality models.
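A toy version of this kind of topologic aggregation is sketched below: each reach's local point- and nonpoint-source load is attenuated by a distance-decay delivery factor and accumulated down a simple stream network. The network, loads, and decay constant are hypothetical, and the model's actual maximum-branching algorithm is only loosely imitated here.

```python
import math

# Hypothetical stream network: reach -> downstream reach (None = basin outlet)
downstream = {"A": "C", "B": "C", "C": "D", "D": None}
reach_length_km = {"A": 12.0, "B": 8.0, "C": 15.0, "D": 10.0}
local_load = {       # point + nonpoint load generated within each reach (kg/day)
    "A": 40.0, "B": 25.0, "C": 60.0, "D": 10.0,
}
K = 0.02             # first-order in-stream loss rate per km (illustrative)

def delivered_load(target, downstream, lengths, loads, k):
    """Sum the loads of all reaches draining to `target`, attenuating each by
    exp(-k * travel distance) along the flow path (distance-decay delivery)."""
    def distance_to_target(reach):
        dist, node = 0.0, reach
        while node != target:
            dist += lengths[node]
            node = downstream[node]
            if node is None:
                return None            # this reach does not drain to the target
        return dist

    total = 0.0
    for reach, load in loads.items():
        d = 0.0 if reach == target else distance_to_target(reach)
        if d is not None:
            total += load * math.exp(-k * d)
    return total

print(f"load delivered to outlet reach D: "
      f"{delivered_load('D', downstream, reach_length_km, local_load, K):.1f} kg/day")
```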
Yi, Qitao; Chen, Qiuwen; Hu, Liuming; Shi, Wenqing
2017-05-16
This research developed an innovative approach to reveal nitrogen sources, transformation, and transport in large and complex river networks in the Taihu Lake basin using measurement of dual stable isotopes of nitrate. The spatial patterns of δ15N corresponded to the urbanization level, and the nitrogen cycle was associated with the hydrological regime at the basin level. During the high flow season of summer, nonpoint sources from fertilizer/soils and atmospheric deposition constituted the highest proportion of the total nitrogen load. The point sources from sewage/manure, with high ammonium concentrations and high δ15N and δ18O contents in the form of nitrate, accounted for the largest inputs among all sources during the low flow season of winter. Hot spot areas with heavy point source pollution were identified, and the pollutant transport routes were revealed. Nitrification occurred widely during the warm seasons, with decreased δ18O values; whereas great potential for denitrification existed during the low flow seasons of autumn and spring. The study showed that point source reduction could have effects over the short term; however, long-term efforts to substantially control agricultural nonpoint sources are essential to eutrophication alleviation for the receiving lake, which clarifies the relationship between point and nonpoint source control.
The Unicellular State as a Point Source in a Quantum Biological System
Torday, John S.; Miller, William B.
2016-01-01
A point source is the central and most important point or place for any group of cohering phenomena. Evolutionary development presumes that biological processes are sequentially linked, but neither directed from, nor centralized within, any specific biologic structure or stage. However, such an epigenomic entity exists and its transforming effects can be understood through the obligatory recapitulation of all eukaryotic lifeforms through a zygotic unicellular phase. This requisite biological conjunction can now be properly assessed as the focal point of reconciliation between biology and quantum phenomena, illustrated by deconvoluting complex physiologic traits back to their unicellular origins. PMID:27240413
NASA Astrophysics Data System (ADS)
Kucherov, A. N.; Makashev, N. K.; Ustinov, E. V.
1994-02-01
A procedure is proposed for numerical modeling of instantaneous and averaged (over various time intervals) distant-point-source images perturbed by a turbulent atmosphere that moves relative to the radiation receiver. Examples of image calculations under conditions of the significant effect of atmospheric turbulence in an approximation of geometrical optics are presented and analyzed.
Chen, Li-ding; Peng, Hong-jia; Fu, Bo-Jie; Qiu, Jun; Zhang, Shu-rong
2005-01-01
Surface waters can be contaminated by human activities in two ways: (1) by point sources, such as sewage treatment discharge and storm-water runoff; and (2) by non-point sources, such as runoff from urban and agricultural areas. With point-source pollution effectively controlled, non-point source pollution has become the most important environmental concern in the world. The formation of non-point source pollution is related to both the sources such as soil nutrient, the amount of fertilizer and pesticide applied, the amount of refuse, and the spatial complex combination of land uses within a heterogeneous landscape. Land-use change, dominated by human activities, has a significant impact on water resources and quality. In this study, fifteen surface water monitoring points in the Yuqiao Reservoir Basin, Zunhua, Hebei Province, northern China, were chosen to study the seasonal variation of nitrogen concentration in the surface water. Water samples were collected in low-flow period (June), high-flow period (July) and mean-flow period (October) from 1999 to 2000. The results indicated that the seasonal variation of nitrogen concentration in the surface water among the fifteen monitoring points in the rainfall-rich year is more complex than that in the rainfall-deficit year. It was found that the land use, the characteristics of the surface river system, rainfall, and human activities play an important role in the seasonal variation of N-concentration in surface water.
NASA Astrophysics Data System (ADS)
Fu, Shihang; Zhang, Li; Hu, Yao; Ding, Xiang
2018-01-01
Confocal Raman Microscopy (CRM) has matured to become one of the most powerful instruments in analytical science because of its molecular sensitivity and high spatial resolution. Compared with conventional Raman microscopy, CRM can perform three-dimensional mapping of tiny samples and has the advantage of high spatial resolution thanks to its unique pinhole. With the wide application of the instrument, there is a growing requirement for the evaluation of the imaging performance of the system. The point-spread function (PSF) is an important approach to evaluating the imaging capability of an optical instrument. Among the various methods for measuring the PSF, the point source method has been widely used because it is easy to operate and the measurement results approximate the true PSF. In the point source method, the point source size has a significant impact on the final measurement accuracy. In this paper, the influence of point source size on the measurement accuracy of the PSF is analyzed and verified experimentally. A theoretical model of the lateral PSF for CRM is established, and the effect of point source size on the full width at half maximum of the lateral PSF is simulated. For long-term preservation and measurement convenience, a PSF measurement phantom made of polydimethylsiloxane resin doped with polystyrene microspheres of different sizes is designed. The PSFs of the CRM are measured with the different microsphere sizes, and the results are compared with the simulation results. The results provide a guide for measuring the PSF of the CRM.
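A simple way to see why the microsphere size matters: the measured profile is approximately the true PSF convolved with the source's intensity profile, so a sphere that is not much smaller than the PSF inflates the apparent full width at half maximum. A rough numerical sketch with an assumed Gaussian lateral PSF and uniform-disk sources (all dimensions invented):

```python
import numpy as np

def fwhm(profile, dx):
    """Full width at half maximum of a 1-D profile sampled with spacing dx."""
    half = profile.max() / 2.0
    above = np.where(profile >= half)[0]
    return (above[-1] - above[0]) * dx

dx = 0.005                       # grid spacing, microns (assumed)
x = np.arange(-2, 2, dx)
X, Y = np.meshgrid(x, x)

sigma = 0.25 / 2.355             # assumed true lateral PSF: Gaussian, 0.25 um FWHM
psf = np.exp(-(X**2 + Y**2) / (2 * sigma**2))

for diameter in (0.05, 0.1, 0.2, 0.5):          # microsphere diameters, um (assumed)
    source = ((X**2 + Y**2) <= (diameter / 2) ** 2).astype(float)
    # Measured image of the sphere ~ true PSF convolved with the source profile.
    measured = np.fft.ifft2(np.fft.fft2(psf) * np.fft.fft2(source)).real
    measured = np.fft.fftshift(measured)        # recenter the circular convolution
    profile = measured[measured.shape[0] // 2, :]
    print(f"sphere {diameter:.2f} um -> apparent FWHM {fwhm(profile, dx):.3f} um")
```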
Monitor-based evaluation of pollutant load from urban stormwater runoff in Beijing.
Liu, Y; Che, W; Li, J
2005-01-01
As a major pollutant source to urban receiving waters, non-point source pollution from urban runoff needs to be well studied and effectively controlled. Based on monitoring data from urban runoff pollutant sources, this article describes a systematic estimation of total pollutant loads from the urban areas of Beijing. A numerical model was developed to quantify the main pollutant loads of urban runoff in Beijing. The method includes a sub-procedure that accounts for the flush process, which influences both the quantity and quality of stormwater runoff. A statistics-based method was applied to compute the annual pollutant load as an output of the runoff. The proportions of pollutants from point and non-point sources were compared. This provides a scientific basis for proper assessment of urban stormwater pollution inputs to receiving waters, improvement of infrastructure performance, implementation of urban stormwater management, and utilization of stormwater.
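The paper's model is not reproduced here, but the general flavor of a statistics-based annual load estimate can be sketched with the common volume-times-concentration approach: the annual load is a sum over rain events of runoff volume multiplied by an event mean concentration. All catchment parameters and concentrations below are invented.

```python
# Hypothetical inputs for an urban catchment.
area_ha = 1200.0            # drainage area, hectares
runoff_coeff = 0.6          # fraction of rainfall that becomes runoff
rain_events_mm = [5, 12, 30, 8, 22, 45, 17, 9]            # event rainfall depths, mm
emc_mg_per_L = {"COD": 150.0, "TSS": 220.0, "TN": 6.5}    # event mean concentrations

loads_kg = {p: 0.0 for p in emc_mg_per_L}
for depth_mm in rain_events_mm:
    runoff_m3 = depth_mm / 1000.0 * runoff_coeff * area_ha * 10_000.0  # mm over ha -> m3
    for pollutant, emc in emc_mg_per_L.items():
        loads_kg[pollutant] += runoff_m3 * emc / 1000.0   # (m3)*(mg/L) = g, then /1000 -> kg

for pollutant, load in loads_kg.items():
    print(f"{pollutant}: {load / 1000.0:.1f} t/yr")
```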
NONPOINT SOURCES AND WATER QUALITY TRADING
Management of nonpoint sources (NPS) of nutrients may reduce discharge levels more cost effectively than can additional controls on point sources (PS); water quality trading (WQT), where a PS buys nutrient or sediment reductions from an NPS, may be an alternative means for the PS...
The effects of correlated noise in phased-array observations of radio sources
NASA Technical Reports Server (NTRS)
Dewey, Rachel J.
1994-01-01
Arrays of radio telescopes are now routinely used to provide increased signal-to-noise when observing faint point sources. However, calculation of the achievable sensitivity is complicated if there are sources in the field of view other than the target source. These additional sources not only increase the system temperatures of the individual antennas, but may also contribute significant 'correlated noise' to the effective system temperature of the array. This problem has been of particular interest in the context of tracking spacecraft in the vicinity of radio-bright planets (e.g., Galileo at Jupiter), but it has broader astronomical relevance as well. This paper presents a general formulation of the problem, for the case of a point-like target source in the presence of an additional radio source of arbitrary brightness distribution. We re-derive the well-known result that, in the absence of any background sources, a phased array of N identical antennas is a factor of N more sensitive than a single antenna. We also show that an unphased array of N identical antennas is, on average, no more sensitive than a single antenna if the signals from the individual antennas are combined prior to detection. In the case where a background source is present we show that the effects of correlated noise are highly geometry dependent, and for some astronomical observations may cause significant fluctuations in the array's effective system temperature.
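The factor-of-N result, and the way correlated background noise erodes it, can be checked with a small Monte Carlo. The single correlation parameter below is a crude stand-in for the geometry-dependent terms derived in the paper, so this is only an illustration of the bookkeeping, not of their formulation.

```python
import numpy as np

rng = np.random.default_rng(0)
N, n_samp = 8, 200_000
sig_var = 0.05
s = np.sqrt(sig_var) * rng.standard_normal(n_samp)      # common (phased) target signal

def snr_gain(rho):
    """Power-SNR gain of the phased sum over a single antenna, for noise correlation rho."""
    common = rng.standard_normal(n_samp)                 # background shared by all antennas
    indep = rng.standard_normal((N, n_samp))             # receiver noise, independent per antenna
    noise = np.sqrt(rho) * common + np.sqrt(1.0 - rho) * indep   # unit variance per antenna
    single_snr = sig_var / noise[0].var()
    array_snr = (N**2 * sig_var) / noise.sum(axis=0).var()       # signal adds coherently in the sum
    return array_snr / single_snr

for rho in (0.0, 0.2, 1.0):
    print(f"noise correlation {rho:.1f}: SNR gain ~ {snr_gain(rho):4.1f}  (ideal gain = N = {N})")
```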
Investigating the generation of Love waves in secondary microseisms using 3D numerical simulations
NASA Astrophysics Data System (ADS)
Wenk, Stefan; Hadziioannou, Celine; Pelties, Christian; Igel, Heiner
2014-05-01
Longuet-Higgins (1950) proposed that secondary microseismic noise can be attributed to oceanic disturbances by surface gravity wave interference causing non-linear, second-order pressure perturbations at the ocean bottom. As a first approximation, this source mechanism can be considered as a force acting normal to the ocean bottom. In an isotropic, layered, elastic Earth model with plain interfaces, vertical forces generate P-SV motions in the vertical plane of source and receiver. In turn, only Rayleigh waves are excited at the free surface. However, several authors report on significant Love wave contributions in the secondary microseismic frequency band of real data measurements. The reason is still insufficiently analysed and several hypothesis are under debate: - The source mechanism has strongest influence on the excitation of shear motions, whereas the source direction dominates the effect of Love wave generation in case of point force sources. Darbyshire and Okeke (1969) proposed the topographic coupling effect of pressure loads acting on a sloping sea-floor to generate the shear tractions required for Love wave excitation. - Rayleigh waves can be converted into Love waves by scattering. Therefore, geometric scattering at topographic features or internal scattering by heterogeneous material distributions can cause Love wave generation. - Oceanic disturbances act on large regions of the ocean bottom, and extended sources have to be considered. In combination with topographic coupling and internal scattering, the extent of the source region and the timing of an extended source should effect Love wave excitation. We try to elaborate the contribution of different source mechanisms and scattering effects on Love to Rayleigh wave energy ratios by 3D numerical simulations. In particular, we estimate the amount of Love wave energy generated by point and extended sources acting on the free surface. Simulated point forces are modified in their incident angle, whereas extended sources are adapted in their spatial extent, magnitude and timing. Further, the effect of variations in the correlation length and perturbation magnitude of a random free surface topography as well as an internal random material distribution are studied.
Floating-point scaling technique for sources separation automatic gain control
NASA Astrophysics Data System (ADS)
Fermas, A.; Belouchrani, A.; Ait-Mohamed, O.
2012-07-01
Based on the floating-point representation and taking advantage of scaling factor indetermination in blind source separation (BSS) processing, we propose a scaling technique applied to the separation matrix, to avoid the saturation or the weakness in the recovered source signals. This technique performs an automatic gain control in an on-line BSS environment. We demonstrate the effectiveness of this technique by using the implementation of a division-free BSS algorithm with two inputs, two outputs. The proposed technique is computationally cheaper and efficient for a hardware implementation compared to the Euclidean normalisation.
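The scale indeterminacy that the technique exploits can be shown in a few lines: because BSS recovers sources only up to arbitrary scale factors, each row of the demixing matrix may be rescaled by a pure power of two (an exponent-only operation in floating point) so that the outputs neither saturate nor vanish. This sketch is a software illustration of that idea, not the authors' hardware design; the target level is an arbitrary choice.

```python
import math
import numpy as np

def agc_power_of_two(W, x_block, target_rms=0.25):
    """Rescale each demixing-matrix row by a power of two so its output sits near target_rms."""
    y = W @ x_block                          # separated signals for the current block
    W_scaled = W.copy()
    for i, row_out in enumerate(y):
        rms = math.sqrt(float(np.mean(row_out**2))) + 1e-12
        k = int(round(math.log2(target_rms / rms)))   # nearest power-of-two correction
        W_scaled[i] *= 2.0 ** k                       # exponent-only change; separation unaffected
    return W_scaled

# Toy usage: a 2x2 demixing matrix whose two outputs are badly scaled.
rng = np.random.default_rng(1)
x = rng.standard_normal((2, 1000))
W = np.array([[40.0, 0.2], [0.001, 0.003]])
W2 = agc_power_of_two(W, x)
print(np.sqrt(np.mean((W2 @ x) ** 2, axis=1)))        # both RMS values now near 0.25
```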
BEAMLINE-CONTROLLED STEERING OF SOURCE-POINT ANGLE AT THE ADVANCED PHOTON SOURCE
DOE Office of Scientific and Technical Information (OSTI.GOV)
Emery, L.; Fystro, G.; Shang, H.
An EPICS-based steering software system has been implemented for beamline personnel to directly steer the angle of the synchrotron radiation sources at the Advanced Photon Source. A script running on a workstation monitors "start steering" beamline EPICS records, and effects a steering given by the value of the "angle request" EPICS record. The new system makes the steering process much faster than before, although the older steering protocols can still be used. The robustness features of the original steering remain. Feedback messages are provided to the beamlines and the accelerator operators. Underpinning this new steering protocol is the recent refinement of the global orbit feedback process whereby feedforward of dipole corrector set points and orbit set points are used to create a local steering bump in a rapid and seamless way.
Nakahara, Hisashi; Haney, Matt
2015-01-01
Recently, various methods have been proposed and applied for earthquake source imaging, and theoretical relationships among the methods have been studied. In this study, we make a follow-up theoretical study to better understand the meanings of earthquake source imaging. For imaging problems, the point spread function (PSF) is used to describe the degree of blurring and degradation in an obtained image of a target object as a response of an imaging system. In this study, we formulate PSFs for earthquake source imaging. By calculating the PSFs, we find that waveform source inversion methods remove the effect of the PSF and are free from artifacts. However, the other source imaging methods are affected by the PSF and suffer from the effect of blurring and degradation due to the restricted distribution of receivers. Consequently, careful treatment of the effect is necessary when using the source imaging methods other than waveform inversions. Moreover, the PSF for source imaging is found to have a link with seismic interferometry with the help of the source-receiver reciprocity of Green’s functions. In particular, the PSF can be related to Green’s function for cases in which receivers are distributed so as to completely surround the sources. Furthermore, the PSF acts as a low-pass filter. Given these considerations, the PSF is quite useful for understanding the physical meaning of earthquake source imaging.
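In generic imaging terms (a textbook blurring relation, not the paper's specific derivation), the imaged source is the true source convolved with the point spread function set by the receiver geometry, and ideal coverage would make that kernel a delta function:

```latex
\hat{m}(\mathbf{x}) = \int K(\mathbf{x},\mathbf{x}')\, m(\mathbf{x}')\, \mathrm{d}\mathbf{x}',
\qquad
K(\mathbf{x},\mathbf{x}') \to \delta(\mathbf{x}-\mathbf{x}') \ \text{for ideal receiver coverage}.
```

Here m-hat is the imaged source, m the true source, and K the PSF; in the terms of the abstract, waveform inversion effectively removes K, while the other imaging methods retain its blurring.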
S. Scesa; F. M. Sauer
1954-01-01
The transfer theory is applied to the problem of atmospheric diffusion of momentum and heat induced by line and point sources of heat on the surface of the earth. In order that the validity of the approximations of the boundary layer theory be realized, the thickness of the layer in which the temperatures and velocities differ appreciably from the values at...
Study on road surface source pollution controlled by permeable pavement
NASA Astrophysics Data System (ADS)
Zheng, Chaocheng
2018-06-01
The increase of impermeable pavement in urban construction not only increases road-surface runoff but also generates substantial non-point source pollution. When permeable pavement is used to control road-surface runoff, a large amount of particulate matter is retained as rainwater infiltrates, controlling the pollution at its source. In this laboratory experiment, we determined the effectiveness of permeable pavement in removing heavy pollutants and discussed the factors that affect non-point pollution on permeable pavement, so as to provide a theoretical basis for the application of permeable pavement.
A Peltier-based variable temperature source
NASA Astrophysics Data System (ADS)
Molki, Arman; Roof Baba, Abdul
2014-11-01
In this paper we propose a simple and cost-effective variable temperature source based on the Peltier effect using a commercially purchased thermoelectric cooler. The proposed setup can be used to quickly establish relatively accurate dry temperature reference points, which are necessary for many temperature applications such as thermocouple calibration.
Stamer, J.K.; Cherry, Rodney N.; Faye, R.E.; Kleckner, R.L.
1979-01-01
During the period April 1975 to June 1978, the U.S. Geological Survey conducted a river-quality assessment of the Upper Chattahoochee River basin in Georgia. One objective of the study was to assess the magnitudes, nature, and effects of point and non-point discharges in the Chattahoochee River basin from Atlanta to the West Point Dam. On an average annual basis and during the storm period of March 12-15, 1976, non-point-source loads for most constituents analyzed were larger than point-source loads at the Whitesburg station, located on the Chattahoochee River about 40 river miles downstream of Atlanta. Most of the non-point-source constituent loads in the Atlanta-to-Whitesburg reach were from urban areas. Average annual point-source discharges accounted for about 50 percent of the dissolved nitrogen, total nitrogen, and total phosphorus loads, and about 70 percent of the dissolved phosphorus loads at Whitesburg. During weekends, power generation at the upstream Buford Dam hydroelectric facility is minimal. Streamflow at the Atlanta station during dry-weather weekends is estimated to be about 1,200 ft³/s (cubic feet per second). Average daily dissolved-oxygen concentrations of less than 5.0 mg/L (milligrams per liter) occurred often in the river, about 20 river miles downstream from Atlanta during these periods from May to November. During a low-flow period, June 1-2, 1977, five municipal point sources contributed 63 percent of the ultimate biochemical oxygen demand, 97 percent of the ammonium nitrogen, 78 percent of the total nitrogen, and 90 percent of the total phosphorus loads at the Franklin station, at the upstream end of West Point Lake. Average daily concentrations of 13 mg/L of ultimate biochemical oxygen demand and 1.8 mg/L of ammonium nitrogen were observed about 2 river miles downstream from two of the municipal point sources. Carbonaceous and nitrogenous oxygen demands caused dissolved-oxygen concentrations between 4.1 and 5.0 mg/L to occur in a 22-mile reach of the river downstream from Atlanta. Nitrogenous oxygen demands were greater than carbonaceous oxygen demands in the reach from river mile 303 to 271, and carbonaceous demands were greater from river mile 271 to 235. The heat load from the Atkinson-McDonough thermoelectric power-plants caused a decrease in the dissolved-oxygen concentrations of about 0.2 mg/L. During a critical low-flow period, a streamflow at Atlanta of about 1,800 ft³/s, with present (1977) point-source flows of 185 ft³/s containing concentrations of 45 mg/L of ultimate biochemical oxygen demand and 15 mg/L of ammonium nitrogen, results in a computed minimum dissolved-oxygen concentration of 4.7 mg/L in the river downstream from Atlanta. In the year 2000, a streamflow at Atlanta of about 1,800 ft³/s with point-source flows of 373 ft³/s containing concentrations of 45 mg/L of ultimate biochemical oxygen demand and 5.0 mg/L of ammonium nitrogen, will result in a computed minimum dissolved-oxygen concentration of 5.0 mg/L. A streamflow of about 1,050 ft³/s at Atlanta in the year 2000 will result in a dissolved-oxygen concentration of 5.0 mg/L if point-source flows contain concentrations of 15 mg/L of ultimate biochemical oxygen demand and 5.0 mg/L of ammonium nitrogen. Phytoplankton concentrations in West Point Lake, about 70 river miles downstream from Atlanta, could exceed 3 million cells per milliliter during extended low-flow periods in the summer with present point- and non-point-source nitrogen and phosphorus loads.
In the year 2000, phytoplankton concentrations in West Point Lake are not likely to exceed 700,000 cells per milliliter during extended low-flow periods in the summer, if phosphorus concentrations do not exceed 1.0 mg/L in point-source discharges.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nagarajan, Adarsh; Coddington, Michael H.; Brown, David
Voltage regulators perform as desired when regulating from the source to the load and when regulating from a strong source (utility) to a weak source (distributed generation). (See the glossary for definitions of a strong source and weak source.) Even when the control is provisioned for reverse operation, it has been observed that tap-changing voltage regulators do not perform as desired in reverse when attempting regulation from the weak source to the strong source. The region of performance that is not as well understood is the regulation between sources that are approaching equal strength. As part of this study, we explored all three scenarios: regulator control from a strong source to a weak source (classic case), control from a weak source to a strong source (during reverse power flow), and control between equivalent sources.
Outdoor air pollution in close proximity to a continuous point source
NASA Astrophysics Data System (ADS)
Klepeis, Neil E.; Gabel, Etienne B.; Ott, Wayne R.; Switzer, Paul
Data are lacking on human exposure to air pollutants occurring in ground-level outdoor environments within a few meters of point sources. To better understand outdoor exposure to tobacco smoke from cigarettes or cigars, and exposure to other types of outdoor point sources, we performed more than 100 controlled outdoor monitoring experiments on a backyard residential patio in which we released pure carbon monoxide (CO) as a tracer gas for continuous time periods lasting 0.5-2 h. The CO was emitted from a single outlet at a fixed per-experiment rate of 120-400 cc min⁻¹ (~140-450 mg min⁻¹). We measured CO concentrations every 15 s at up to 36 points around the source along orthogonal axes. The CO sensors were positioned at standing or sitting breathing heights of 2-5 ft (up to 1.5 ft above and below the source) and at horizontal distances of 0.25-2 m. We simultaneously measured real-time air speed, wind direction, relative humidity, and temperature at single points on the patio. The ground-level air speeds on the patio were similar to those we measured during a survey of 26 outdoor patio locations in 5 nearby towns. The CO data exhibited a well-defined proximity effect similar to the indoor proximity effect reported in the literature. Average concentrations were approximately inversely proportional to distance. Average CO levels were approximately proportional to source strength, supporting generalization of our results to different source strengths. For example, we predict a cigarette smoker would cause average fine particle levels of approximately 70-110 μg m⁻³ at horizontal distances of 0.25-0.5 m. We also found that average CO concentrations rose significantly as average air speed decreased. We fit a multiplicative regression model to the empirical data that predicts outdoor concentrations as a function of source emission rate, source-receptor distance, air speed and wind direction. The model described the data reasonably well, accounting for ~50% of the log-CO variability in 5-min CO concentrations.
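The multiplicative model can be fit as ordinary least squares in log space. The sketch below generates synthetic data from an assumed power-law form C proportional to E · d^a · u^b (wind direction omitted) and recovers the exponents; the functional form and coefficients are assumptions for illustration, not the paper's fitted values.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 400
E = rng.uniform(140, 450, n)          # source strength, mg/min
d = rng.uniform(0.25, 2.0, n)         # source-receptor distance, m
u = rng.uniform(0.1, 1.5, n)          # air speed, m/s

# Synthetic "observations" following an assumed multiplicative law with lognormal noise.
C = 3.0 * E * d**-1.0 * u**-0.8 * np.exp(0.3 * rng.standard_normal(n))

# Fit log C = b0 + b1 log E + b2 log d + b3 log u by least squares.
X = np.column_stack([np.ones(n), np.log(E), np.log(d), np.log(u)])
coef, *_ = np.linalg.lstsq(X, np.log(C), rcond=None)
print("fitted exponents (E, d, u):", np.round(coef[1:], 2))   # ~ (1.0, -1.0, -0.8)
```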
40 CFR 125.64 - Effect of the discharge on other point and nonpoint sources.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 40 Protection of Environment 23 2012-07-01 2012-07-01 false Effect of the discharge on other point... (CONTINUED) WATER PROGRAMS CRITERIA AND STANDARDS FOR THE NATIONAL POLLUTANT DISCHARGE ELIMINATION SYSTEM Criteria for Modifying the Secondary Treatment Requirements Under Section 301(h) of the Clean Water Act...
40 CFR 125.64 - Effect of the discharge on other point and nonpoint sources.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 21 2010-07-01 2010-07-01 false Effect of the discharge on other point... (CONTINUED) WATER PROGRAMS CRITERIA AND STANDARDS FOR THE NATIONAL POLLUTANT DISCHARGE ELIMINATION SYSTEM Criteria for Modifying the Secondary Treatment Requirements Under Section 301(h) of the Clean Water Act...
40 CFR 125.64 - Effect of the discharge on other point and nonpoint sources.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 40 Protection of Environment 22 2014-07-01 2013-07-01 true Effect of the discharge on other point... (CONTINUED) WATER PROGRAMS CRITERIA AND STANDARDS FOR THE NATIONAL POLLUTANT DISCHARGE ELIMINATION SYSTEM Criteria for Modifying the Secondary Treatment Requirements Under Section 301(h) of the Clean Water Act...
40 CFR 125.64 - Effect of the discharge on other point and nonpoint sources.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 40 Protection of Environment 22 2011-07-01 2011-07-01 false Effect of the discharge on other point... (CONTINUED) WATER PROGRAMS CRITERIA AND STANDARDS FOR THE NATIONAL POLLUTANT DISCHARGE ELIMINATION SYSTEM Criteria for Modifying the Secondary Treatment Requirements Under Section 301(h) of the Clean Water Act...
40 CFR 125.64 - Effect of the discharge on other point and nonpoint sources.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 40 Protection of Environment 23 2013-07-01 2013-07-01 false Effect of the discharge on other point... (CONTINUED) WATER PROGRAMS CRITERIA AND STANDARDS FOR THE NATIONAL POLLUTANT DISCHARGE ELIMINATION SYSTEM Criteria for Modifying the Secondary Treatment Requirements Under Section 301(h) of the Clean Water Act...
Xu, Peng; Tian, Yin; Lei, Xu; Hu, Xiao; Yao, Dezhong
2008-12-01
How to localize neural electrical activity within the brain effectively and precisely from scalp electroencephalogram (EEG) recordings is a critical issue in clinical neurology and cognitive neuroscience. In this paper, based on the charge source model and the iterative re-weighted strategy, a new maximum-neighbor-weight-based iterative sparse source imaging method is proposed, termed CMOSS (Charge source model based Maximum neighbOr weight Sparse Solution). Different from the weight used in the focal underdetermined system solver (FOCUSS), where the weight for each point in the discrete solution space is updated independently across iterations, the new weight for each point in each iteration is determined by the source solution of the last iteration at both the point and its neighbors. Using such a weight, the next iteration has a better chance of rectifying the local source-location bias present in the previous iteration's solution. Simulation studies comparing CMOSS with FOCUSS and LORETA for various source configurations were conducted on a realistic 3-shell head model, and the results confirmed the validity of CMOSS for sparse EEG source localization. Finally, CMOSS was applied to localize sources elicited in a visual stimulation experiment, and the result was consistent with the source areas involved in visual processing reported in previous studies.
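A stripped-down version of the neighbor-weighted reweighting idea is sketched below; the forward matrix, neighborhood definition, regularization, and stopping rule are placeholders, not the CMOSS implementation.

```python
import numpy as np

def neighbor_weighted_sparse(A, b, neighbors, n_iter=20, lam=1e-2):
    """Iteratively reweighted minimum-norm solver; each point's weight is the maximum
    absolute value of the previous solution over the point and its neighbors."""
    n = A.shape[1]
    s = np.ones(n)
    for _ in range(n_iter):
        w = np.array([max(abs(s[j]) for j in [i] + neighbors[i]) for i in range(n)])
        W = np.diag(w**2)
        # Weighted minimum-norm solution: s = W A^T (A W A^T + lam I)^{-1} b
        G = A @ W @ A.T + lam * np.eye(A.shape[0])
        s = W @ A.T @ np.linalg.solve(G, b)
    return s

# Toy 1-D "source space": 60 points on a line, 10 random sensors, 2 true sources.
rng = np.random.default_rng(3)
A = rng.standard_normal((10, 60))
truth = np.zeros(60); truth[[14, 40]] = [1.0, -0.8]
b = A @ truth
nbrs = [[j for j in (i - 1, i + 1) if 0 <= j < 60] for i in range(60)]
est = neighbor_weighted_sparse(A, b, nbrs)
print("largest |est| at indices:", np.argsort(-np.abs(est))[:4])
```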
Pollution loads in urban runoff and sanitary wastewater.
Taebi, Amir; Droste, Ronald L
2004-07-05
While more attention has been paid in recent years to urban point source pollution control through the establishment of wastewater treatment plants in many developing countries, no considerable planning nor any serious measures have been taken to control urban non-point source pollution (urban stormwater runoff). The present study is a screening analysis to investigate the pollution loads in urban runoff compared to point source loads as a first prerequisite for planning and management of receiving water quality. To compare pollutant loads from point and non-point urban sources, the pollutant load is expressed as the weight of pollutant per hectare area per year (kg/ha.year). Unit loads were estimated in stormwater runoff, raw sanitary wastewater and secondary treatment effluents in Isfahan, Iran. Results indicate that the annual pollution load in urban runoff is lower than the annual pollution load in sanitary wastewater in areas with low precipitation but it is higher in areas with high precipitation. Two options, namely, advanced treatment (in lieu of secondary treatment) of sanitary wastewater and urban runoff quality control systems (such as detention ponds) were investigated as controlling systems for pollution discharges into receiving waters. The results revealed that for Isfahan, as a low precipitation urban area, advanced treatment is a more suitable option, but for high precipitation urban areas, urban surface runoff quality control installations were more effective for suspended solids and oxygen-demanding matter controls, and that advanced treatment is the more effective option for nutrient control.
Selective Listening Point Audio Based on Blind Signal Separation and Stereophonic Technology
NASA Astrophysics Data System (ADS)
Niwa, Kenta; Nishino, Takanori; Takeda, Kazuya
A sound field reproduction method is proposed that uses blind source separation and a head-related transfer function. In the proposed system, multichannel acoustic signals captured at distant microphones are decomposed into a set of location/signal pairs of virtual sound sources based on frequency-domain independent component analysis. After the locations and signals of the virtual sources are estimated, the spatial sound is constructed at the selected point by convolving the controlled acoustic transfer functions with each signal. In experiments, a sound field made by six sound sources is captured using 48 distant microphones and decomposed into sets of virtual sound sources. Since subjective evaluation shows no significant difference between natural and reconstructed sound when six virtual sources are used, the effectiveness of the decomposition algorithm as well as the virtual source representation is confirmed.
Tracy, J.C.; Bernknopf, R.; Forney, W.; Hill, K.
2004-01-01
The Federal Clean Water Act (Section 303(d)) mandates that states develop Total Maximum Daily Load (TMDL) plans for water bodies that are on the Section 303(d) list. To be placed on the 303(d) list, a water body must be found to have water quality conditions that limit its ability to meet its designated beneficial uses. The TMDL for a water body is defined in 40 CFR 130 as the sum of waste load allocations from identified point sources and non-point sources within the water body's watershed. The TMDL plan for a listed water body should identify the current waste loads to the water body, the waste load capacity of the water body and then allocate the waste load capacity to the known point and non-point sources of pollution within the water body's watershed. Copyright 2004 ASCE.
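The allocation bookkeeping behind a TMDL plan is simple arithmetic; the loads, loading capacity, and explicit margin-of-safety term below are invented for illustration (the margin of safety is a common TMDL component, included here as an assumption rather than taken from the abstract).

```python
# Hypothetical daily loads (kg/day) for a 303(d)-listed stream segment.
point_source_wla = {"WWTP_A": 45.0, "Industrial_B": 12.0}     # waste load allocations
nonpoint_la = {"cropland": 60.0, "urban_runoff": 25.0}        # load allocations
margin_of_safety = 10.0        # explicit MOS (assumed; some TMDLs use an implicit one)

loading_capacity = 160.0       # assimilative capacity of the water body, kg/day (assumed)

tmdl = sum(point_source_wla.values()) + sum(nonpoint_la.values()) + margin_of_safety
print(f"Allocated total: {tmdl:.0f} kg/day vs capacity {loading_capacity:.0f} kg/day")
print("Allocation feasible" if tmdl <= loading_capacity else "Reductions required")
```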
2011 Radioactive Materials Usage Survey for Unmonitored Point Sources
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sturgeon, Richard W.
This report provides the results of the 2011 Radioactive Materials Usage Survey for Unmonitored Point Sources (RMUS), which was updated by the Environmental Protection (ENV) Division's Environmental Stewardship (ES) at Los Alamos National Laboratory (LANL). ES classifies LANL emission sources into one of four Tiers, based on the potential effective dose equivalent (PEDE) calculated for each point source. Detailed descriptions of these tiers are provided in Section 3. The usage survey is conducted annually; in odd-numbered years the survey addresses all monitored and unmonitored point sources and in even-numbered years it addresses all Tier III and various selected other sources. This graded approach was designed to ensure that the appropriate emphasis is placed on point sources that have higher potential emissions to the environment. For calendar year (CY) 2011, ES has divided the usage survey into two distinct reports, one covering the monitored point sources (to be completed later this year) and this report covering all unmonitored point sources. This usage survey includes the following release points: (1) all unmonitored sources identified in the 2010 usage survey, (2) any new release points identified through the new project review (NPR) process, and (3) other release points as designated by the Rad-NESHAP Team Leader. Data for all unmonitored point sources at LANL is stored in the survey files at ES. LANL uses this survey data to help demonstrate compliance with Clean Air Act radioactive air emissions regulations (40 CFR 61, Subpart H). The remainder of this introduction provides a brief description of the information contained in each section. Section 2 of this report describes the methods that were employed for gathering usage survey data and for calculating usage, emissions, and dose for these point sources. It also references the appropriate ES procedures for further information. Section 3 describes the RMUS and explains how the survey results are organized. The RMUS Interview Form with the attached RMUS Process Form(s) provides the radioactive materials survey data by technical area (TA) and building number. The survey data for each release point includes information such as: exhaust stack identification number, room number, radioactive material source type (i.e., potential source or future potential source of air emissions), radionuclide, usage (in curies) and usage basis, physical state (gas, liquid, particulate, solid, or custom), release fraction (from Appendix D to 40 CFR 61, Subpart H), and process descriptions. In addition, the interview form also calculates emissions (in curies), lists mrem/Ci factors, calculates PEDEs, and states the location of the critical receptor for that release point. [The critical receptor is the maximum exposed off-site member of the public, specific to each individual facility.] Each of these data fields is described in this section. The Tier classification of release points, which was first introduced with the 1999 usage survey, is also described in detail in this section. Section 4 includes a brief discussion of the dose estimate methodology, and includes a discussion of several release points of particular interest in the CY 2011 usage survey report. It also includes a table of the calculated PEDEs for each release point at its critical receptor. Section 5 describes ES's approach to Quality Assurance (QA) for the usage survey.
Satisfactory completion of the survey requires that team members responsible for Rad-NESHAP (National Emissions Standard for Hazardous Air Pollutants) compliance accurately collect and process several types of information, including radioactive materials usage data, process information, and supporting information. They must also perform and document the QA reviews outlined in Section 5.2.6 (Process Verification and Peer Review) of ES-RN, 'Quality Assurance Project Plan for the Rad-NESHAP Compliance Project' to verify that all information is complete and correct.
Test method for telescopes using a point source at a finite distance
NASA Technical Reports Server (NTRS)
Griner, D. B.; Zissa, D. E.; Korsch, D.
1985-01-01
A test method for telescopes that makes use of a focused ring formed by an annular aperture when using a point source at a finite distance is evaluated theoretically and experimentally. The results show that the concept can be applied to near-normal, as well as grazing incidence. It is particularly suited for X-ray telescopes because of their intrinsically narrow annular apertures, and because of the largely reduced diffraction effects.
Majdak, Piotr; Goupell, Matthew J; Laback, Bernhard
2010-02-01
The ability to localize sound sources in three-dimensional space was tested in humans. In Experiment 1, naive subjects listened to noises filtered with subject-specific head-related transfer functions. The tested conditions included the pointing method (head or manual pointing) and the visual environment (VE; darkness or virtual VE). The localization performance was not significantly different between the pointing methods. The virtual VE significantly improved the horizontal precision and reduced the number of front-back confusions. These results show the benefit of using a virtual VE in sound localization tasks. In Experiment 2, subjects were provided with sound localization training. Over the course of training, the performance improved for all subjects, with the largest improvements occurring during the first 400 trials. The improvements beyond the first 400 trials were smaller. After the training, there was still no significant effect of pointing method, showing that the choice of either head- or manual-pointing method plays a minor role in sound localization performance. The results of Experiment 2 reinforce the importance of perceptual training for at least 400 trials in sound localization studies.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
This report presents the economic analysis of final effluent limitation guidelines, New Source Performance Standards, and pretreatment standards being promulgated for the steam-electric power plant point source category. It describes the costs of the final regulations, assesses the effects of these costs on the electric utility industry, and examines the cost-effectiveness of the regulations.
Search for point sources of high energy neutrinos with Amanda
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ahrens, J.
2002-08-01
Report of search for likely point sources for neutrinos observed by the Amanda detector. Places intensity limits on observable point sources. This paper describes the search for astronomical sources of high-energy neutrinos using the AMANDA-B10 detector, an array of 302 photomultiplier tubes, used for the detection of Cherenkov light from upward traveling neutrino-induced muons, buried deep in ice at the South Pole. The absolute pointing accuracy and angular resolution were studied by using coincident events between the AMANDA detector and two independent telescopes on the surface, the GASP air Cherenkov telescope and the SPASE extensive air shower array. Using data collected from April to October of 1997 (130.1 days of livetime), a general survey of the northern hemisphere revealed no statistically significant excess of events from any direction. The sensitivity for a flux of muon neutrinos is based on the effective detection area for through-going muons. Averaged over the Northern sky, the effective detection area exceeds 10,000 m² for E_μ ≈ 10 TeV. Neutrinos generated in the atmosphere by cosmic ray interactions were used to verify the predicted performance of the detector. For a source with a differential energy spectrum proportional to E_ν⁻² and declination larger than +40°, we obtain E² (dN_ν/dE) ≤ 10⁻⁶ GeV cm⁻² s⁻¹ for an energy threshold of 10 GeV.
Time course of effects of emotion on item memory and source memory for Chinese words.
Wang, Bo; Fu, Xiaolan
2011-05-01
Although many studies have investigated the effect of emotion on memory, it is unclear whether the effect of emotion extends to all aspects of an event. In addition, it is poorly understood how the effects of emotion on item memory and source memory change over time. This study examined the time course of the effects of emotion on item memory and source memory. Participants intentionally learned a list of neutral, positive, and negative Chinese words, which were presented twice, and then took a free recall test, followed by recognition and source memory tests, at one of eight delayed time points. The main findings are (within the time frame of 2 weeks): (1) Negative emotion enhances free recall, whereas there is only a trend that positive emotion enhances free recall. In addition, negative and positive emotions differ in the time points at which their effects on free recall reach the greatest magnitude. (2) Negative emotion reduces recognition, whereas positive emotion has no effect on recognition. (3) Neither positive nor negative emotion has any effect on source memory. The above findings indicate that the effect of emotion does not necessarily extend to all aspects of an event and that valence is a critical modulating factor in the effect of emotion on item memory. Furthermore, emotion does not affect the time course of item memory and source memory, at least within a time frame of 2 weeks. This study has implications for establishing a theoretical model of the effect of emotion on memory. Copyright © 2011 Elsevier Inc. All rights reserved.
Interferometric superlocalization of two incoherent optical point sources.
Nair, Ranjith; Tsang, Mankei
2016-02-22
A novel interferometric method - SLIVER (Super Localization by Image inVERsion interferometry) - is proposed for estimating the separation of two incoherent point sources with a mean squared error that does not deteriorate as the sources are brought closer. The essential component of the interferometer is an image inversion device that inverts the field in the transverse plane about the optical axis, assumed to pass through the centroid of the sources. The performance of the device is analyzed using the Cramér-Rao bound applied to the statistics of spatially-unresolved photon counting using photon number-resolving and on-off detectors. The analysis is supported by Monte-Carlo simulations of the maximum likelihood estimator for the source separation, demonstrating the superlocalization effect for separations well below that set by the Rayleigh criterion. Simulations indicating the robustness of SLIVER to mismatch between the optical axis and the centroid are also presented. The results are valid for any imaging system with a circularly symmetric point-spread function.
A Novel Effect of Scattered-Light Interference in Misted Mirrors
ERIC Educational Resources Information Center
Bridge, N. James
2005-01-01
Interference rings can be observed in mirrors clouded by condensation, even in diffuse lighting. The effect depends on individual droplets acting as point sources by refracting light into the mirror, so producing coherent wave-trains which are reflected and then scattered again by diffraction round the same source droplet. The secondary wave-train…
Double point source W-phase inversion: Real-time implementation and automated model selection
Nealy, Jennifer; Hayes, Gavin
2015-01-01
Rapid and accurate characterization of an earthquake source is an extremely important and ever evolving field of research. Within this field, source inversion of the W-phase has recently been shown to be an effective technique, which can be efficiently implemented in real-time. An extension to the W-phase source inversion is presented in which two point sources are derived to better characterize complex earthquakes. A single source inversion followed by a double point source inversion with centroid locations fixed at the single source solution location can be efficiently run as part of earthquake monitoring network operational procedures. In order to determine the most appropriate solution, i.e., whether an earthquake is most appropriately described by a single source or a double source, an Akaike information criterion (AIC) test is performed. Analyses of all earthquakes of magnitude 7.5 and greater occurring since January 2000 were performed with extended analyses of the September 29, 2009 magnitude 8.1 Samoa earthquake and the April 19, 2014 magnitude 7.5 Papua New Guinea earthquake. The AIC test is shown to be able to accurately select the most appropriate model and the selected W-phase inversion is shown to yield reliable solutions that match published analyses of the same events.
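The model-selection step can be written compactly: under a Gaussian misfit assumption, AIC = 2k + n·ln(RSS/n), and the double point-source solution is retained only if its AIC is lower despite its extra parameters. The parameter counts and data in this sketch are illustrative, not those of the W-phase inversion.

```python
import numpy as np

def aic(residuals, n_params):
    """Akaike information criterion under a Gaussian misfit assumption."""
    n = residuals.size
    rss = float(np.sum(residuals**2))
    return 2 * n_params + n * np.log(rss / n)

def select_model(data, pred_single, pred_double, k_single=6, k_double=12):
    """Prefer the double point-source solution only when it lowers the AIC."""
    aic1 = aic(data - pred_single, k_single)
    aic2 = aic(data - pred_double, k_double)
    return ("double", aic1, aic2) if aic2 < aic1 else ("single", aic1, aic2)

# Toy example: the double-source prediction fits noticeably better.
rng = np.random.default_rng(4)
data = rng.standard_normal(5000)
choice = select_model(data, pred_single=0.6 * data, pred_double=0.95 * data)
print(choice[0], "source model preferred (AIC %.0f vs %.0f)" % (choice[1], choice[2]))
```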
Tong, Yindong; Bu, Xiaoge; Chen, Junyue; Zhou, Feng; Chen, Long; Liu, Maodian; Tan, Xin; Yu, Tao; Zhang, Wei; Mi, Zhaorong; Ma, Lekuan; Wang, Xuejun; Ni, Jing
2017-01-05
Based on a time-series dataset and the mass balance method, the contributions of various sources to the nutrient discharges from the Yangtze River to the East China Sea are identified. The results indicate that the nutrient concentrations vary considerably among different sections of the Yangtze River. Non-point sources are an important source of nutrients to the Yangtze River, contributing about 36% and 63% of the nitrogen and phosphorus discharged into the East China Sea, respectively. Nutrient inputs from non-point sources vary among the sections of the Yangtze River, and the contributions of non-point sources increase from upstream to downstream. Considering the rice growing patterns in the Yangtze River Basin, the synchrony of rice tillering and the wet seasons might be an important cause of the high nutrient discharge from the non-point sources. Based on our calculations, a reduction of 0.99 Tg per year in total nitrogen discharges from the Yangtze River would be needed to limit the occurrences of harmful algal blooms in the East China Sea to 15 times per year. The extensive construction of sewage treatment plants in urban areas may have only a limited effect on reducing the occurrences of harmful algal blooms in the future. Copyright © 2016 Elsevier B.V. All rights reserved.
Voronoi Diagram Based Optimization of Dynamic Reactive Power Sources
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huang, Weihong; Sun, Kai; Qi, Junjian
2015-01-01
Dynamic var sources can effectively mitigate fault-induced delayed voltage recovery (FIDVR) issues or even voltage collapse. This paper proposes a new approach to optimization of the sizes of dynamic var sources at candidate locations by a Voronoi diagram based algorithm. It first disperses sample points of potential solutions in a searching space, evaluates a cost function at each point by barycentric interpolation for the subspaces around the point, and then constructs a Voronoi diagram about cost function values over the entire space. Accordingly, the final optimal solution can be obtained. Case studies on the WSCC 9-bus system and NPCC 140-bus system have validated that the new approach can quickly identify the boundary of feasible solutions in searching space and converge to the global optimal solution.
NASA Astrophysics Data System (ADS)
Kawaguchi, Hiroshi; Hayashi, Toshiyuki; Kato, Toshinori; Okada, Eiji
2004-06-01
Near-infrared (NIR) topography can obtain a topographical distribution of the activated region in the brain cortex. Near-infrared light is strongly scattered in the head, and the volume of tissue sampled by a source-detector pair on the head surface is broadly distributed in the brain. This scattering effect results in poor resolution and contrast in the topographic image of the brain activity. In this study, a one-dimensional distribution of absorption change in a head model is calculated by mapping and reconstruction methods to evaluate the effect of the image reconstruction algorithm and the interval of measurement points for topographic imaging on the accuracy of the topographic image. The light propagation in the head model is predicted by Monte Carlo simulation to obtain the spatial sensitivity profile for a source-detector pair. The measurement points are one-dimensionally arranged on the surface of the model, and the distance between adjacent measurement points is varied from 4 mm to 28 mm. Small intervals of the measurement points improve the topographic image calculated by both the mapping and reconstruction methods. In the conventional mapping method, the limit of the spatial resolution depends upon the interval of the measurement points and the spatial sensitivity profile for source-detector pairs. The reconstruction method has advantages over the mapping method, improving the results of the one-dimensional analysis when the interval of measurement points is less than 12 mm. The effect of overlapping spatial sensitivity profiles indicates that the reconstruction method may be effective in improving the spatial resolution of a two-dimensional reconstruction of the topographic image obtained with a larger interval of measurement points. Near-infrared topography with the reconstruction method potentially obtains an accurate distribution of absorption change in the brain even if the size of the absorption change is less than 10 mm.
Extended source effect on microlensing light curves by an Ellis wormhole
NASA Astrophysics Data System (ADS)
Tsukamoto, Naoki; Gong, Yungui
2018-04-01
We can survey an Ellis wormhole, the simplest Morris-Thorne wormhole, in our Galaxy with microlensing. The light curve of a point source microlensed by the Ellis wormhole shows approximately 4% demagnification while the total magnification of images lensed by a Schwarzschild lens is always larger than unity. We investigate an extended source effect on the light curves microlensed by the Ellis wormhole. We show that the depth of the gutter of the light curves of an extended source is smaller than the one of a point source since the magnified part of the extended source cancels the demagnified part out. We can, however, distinguish between the light curves of the extended source microlensed by the Ellis wormhole and the ones by the Schwarzschild lens in their shapes even if the size of the source is a few times larger than the size of an Einstein ring on a source plane. If the relative velocity of a star with the radius of 10⁶ km at 8 kpc in the bulge of our Galaxy against an observer-lens system is smaller than 10 km/s on a source plane, we can detect microlensing of the star lensed by the Ellis wormhole with the throat radius of 1 km at 4 kpc.
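The extended-source effect is, in essence, an average of the point-source magnification over the stellar disk in the source plane. The sketch below illustrates that smoothing using the standard point-mass (Schwarzschild) magnification, since the Ellis-wormhole curve itself is not reproduced here; u and the source radius rho are in units of the Einstein radius, and the trajectory is invented.

```python
import numpy as np

def mag_point_mass(u):
    """Point-source magnification of a standard point-mass (Schwarzschild) lens."""
    return (u**2 + 2.0) / (u * np.sqrt(u**2 + 4.0))

def mag_extended(u_center, rho, n_grid=201):
    """Average the point-source magnification over a uniform disk of radius rho."""
    x = np.linspace(-rho, rho, n_grid)
    X, Y = np.meshgrid(x, x)
    inside = X**2 + Y**2 <= rho**2
    u = np.hypot(u_center + X[inside], Y[inside])
    u = np.clip(u, 1e-3, None)          # regularize the formal divergence at u = 0
    return mag_point_mass(u).mean()

u0 = 0.1                                # impact parameter (assumed)
t = np.linspace(-2.0, 2.0, 9)           # time in Einstein-crossing units
u_traj = np.hypot(u0, t)
for rho in (0.01, 0.5, 2.0):            # source radius / Einstein radius
    peak = max(mag_extended(u, rho) for u in u_traj)
    print(f"rho = {rho:4.2f}: peak magnification ~ {peak:.2f}")
```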
NASA Astrophysics Data System (ADS)
Zhang, Hao; Yuan, Yan; Su, Lijuan; Huang, Fengzhen; Bai, Qing
2016-09-01
The Risley-prism-based light beam steering apparatus delivers superior pointing accuracy and is used in imaging LIDAR and imaging microscopes. A general model for pointing error analysis of Risley prisms is proposed in this paper, based on ray direction deviation in light refraction. This model captures incident beam deviation, assembly deflections, and prism rotational error. We first derive the transmission matrices of the model. Then, the independent and cumulative effects of different errors are analyzed through this model. An accuracy study of the model shows that the prediction deviation of the pointing error for each error source is less than 4.1×10⁻⁵° when the error amplitude is 0.1°. Detailed analyses indicate that different error sources affect the pointing accuracy to varying degrees, and the major error source is the incident beam deviation. Prism tilt has a relatively large effect on the pointing accuracy when the prism tilts in the principal section. The cumulative-effect analyses of multiple errors show that the pointing error can be reduced by tuning the bearing tilt in the same direction. The cumulative effect of rotational error is relatively large when the difference between the two prism rotation angles equals 0 or π, and relatively small when the difference equals π/2. These results suggest that our analysis can help to uncover the error distribution and aid in the measurement calibration of Risley-prism systems.
THE CHANDRA COSMOS SURVEY. I. OVERVIEW AND POINT SOURCE CATALOG
DOE Office of Scientific and Technical Information (OSTI.GOV)
Elvis, Martin; Civano, Francesca; Aldcroft, T. L.
2009-09-01
The Chandra COSMOS Survey (C-COSMOS) is a large, 1.8 Ms, Chandra program that has imaged the central 0.5 deg² of the COSMOS field (centered at 10 h, +02 deg.) with an effective exposure of ~160 ks, and an outer 0.4 deg² area with an effective exposure of ~80 ks. The limiting source detection depths are 1.9 × 10⁻¹⁶ erg cm⁻² s⁻¹ in the soft (0.5-2 keV) band, 7.3 × 10⁻¹⁶ erg cm⁻² s⁻¹ in the hard (2-10 keV) band, and 5.7 × 10⁻¹⁶ erg cm⁻² s⁻¹ in the full (0.5-10 keV) band. Here we describe the strategy, design, and execution of the C-COSMOS survey, and present the catalog of 1761 point sources detected at a probability of being spurious of <2 × 10⁻⁵ (1655 in the full, 1340 in the soft, and 1017 in the hard bands). By using a grid of 36 heavily (~50%) overlapping pointing positions with the ACIS-I imager, a remarkably uniform (±12%) exposure across the inner 0.5 deg² field was obtained, leading to a sharply defined lower flux limit. The widely different point-spread functions obtained in each exposure at each point in the field required a novel source detection method, because of the overlapping tiling strategy, which is described in a companion paper. This method produced reliable sources down to 7-12 counts, as verified by the resulting logN-logS curve, with subarcsecond positions, enabling optical and infrared identifications of virtually all sources, as reported in a second companion paper. The full catalog is described here in detail and is available online.
NASA Astrophysics Data System (ADS)
Huang, Lei; Ban, Jie; Han, Yu Ting; Yang, Jie; Bi, Jun
2013-04-01
This study aims to identify key environmental risk sources contributing to water eutrophication and to suggest risk management strategies for rural areas. The multi-angle indicators included in the risk source assessment system were non-point source pollution, deficient waste treatment, and public awareness of environmental risk; the assessment combined psychometric paradigm methods, the contingent valuation method, and personal interviews to describe the environmental sensitivity of local residents. Total risk values of different villages near Taihu Lake were calculated in the case study, resulting in a geographic risk map showing which village was the critical risk source of Taihu eutrophication. The increased application of phosphorus (P) and nitrogen (N), the vulnerability of these pollutants to loss, and a lack of environmental risk awareness led to more serious non-point pollution, especially in rural China. The quotient between the scores of objective and subjective risk sources revealed what should be improved in each study village. More environmental investment, control of agricultural activities, and promotion of environmental education are critical considerations for rural environmental management. These findings are helpful for developing targeted and effective risk management strategies in rural areas.
Theory of two-point correlations of jet noise
NASA Technical Reports Server (NTRS)
Ribner, H. S.
1976-01-01
A large body of careful experimental measurements of two-point correlations of far field jet noise was carried out. The model of jet-noise generation is an approximate version of an earlier work of Ribner, based on the foundations of Lighthill. The model incorporates isotropic turbulence superimposed on a specified mean shear flow, with assumed space-time velocity correlations, but with source convection neglected. The particular vehicle is the Proudman format, and the previous work (mean-square pressure) is extended to display the two-point space-time correlations of pressure. The shape of polar plots of correlation is found to derive from two main factors: (1) the noncompactness of the source region, which allows differences in travel times to the two microphones - the dominant effect; (2) the directivities of the constituent quadrupoles - a weak effect. The noncompactness effect causes the directional lobes in a polar plot to have pointed tips (cusps) and to be especially narrow in the plane of the jet axis. In these respects, and in the quantitative shapes of the normalized correlation curves, results of the theory show generally good agreement with Maestrello's experimental measurements.
The effect of baryons in the cosmological lensing PDFs
NASA Astrophysics Data System (ADS)
Castro, Tiago; Quartin, Miguel; Giocoli, Carlo; Borgani, Stefano; Dolag, Klaus
2018-07-01
Observational cosmology is passing through a unique moment of grandeur with the amount of quality data growing fast. However, in order to better take advantage of this moment, data analysis tools have to keep up the pace. Understanding the effect of baryonic matter on the large-scale structure is one of the challenges to be faced in cosmology. In this work, we have thoroughly studied the effect of baryonic physics on different lensing statistics. Making use of the Magneticum Pathfinder suite of simulations, we show that the influence of luminous matter on the 1-point lensing statistics of point sources is significant, enhancing the probability of magnified objects with μ > 3 by a factor of 2 and the occurrence of multiple images by a factor of 5-500, depending on the source redshift and size. We also discuss the dependence of the lensing statistics on the angular resolution of sources. Our results and methodology were carefully tested to guarantee that our uncertainties are much smaller than the effects here presented.
Goede, Simon L; Leow, Melvin Khee-Shing
2013-01-01
This treatise investigates error sources in the measurements used in hypothalamus-pituitary-thyroid (HPT) system analysis for homeostatic set point computation. The hypothalamus-pituitary transfer characteristic (HP curve) describes the relationship between plasma free thyroxine [FT4] and thyrotropin [TSH]. We define the origin, types, causes, and effects of errors that are commonly encountered in thyroid function test (TFT) measurements and examine how these can be interpreted to construct a reliable HP function for set point establishment. The error sources in the clinical measurement procedures are identified and analyzed in relation to the constructed HP model. The main sources of measurement and interpretation uncertainty are (1) diurnal variations in [TSH], (2) TFT measurement variations influenced by the timing of thyroid medications, (3) error sensitivity in ranges of [TSH] and [FT4] (laboratory assay dependent), (4) rounding/truncation of decimals in [FT4], which in turn amplifies curve-fitting errors in the [TSH] domain in the lower [FT4] range, and (5) memory effects (rate-independent hysteresis). When the main uncertainties in TFTs are identified and analyzed, we can find the most acceptable model space with which to construct the best HP function and the related set point area.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Beer, M.; Cohen, M.O.
1975-02-01
The adjoint Monte Carlo method previously developed by MAGI has been applied to the calculation of initial radiation dose due to air secondary gamma rays and fission product gamma rays at detector points within buildings for a wide variety of problems. These provide an in-depth survey of structure shielding effects as well as many new benchmark problems for matching by simplified models. Specifically, elevated ring source results were obtained in the following areas: doses at on- and off-centerline detectors in four concrete blockhouse structures; doses at detector positions along the centerline of a high-rise structure without walls; dose mapping at basement detector positions in the high-rise structure; doses at detector points within a complex concrete structure containing exterior windows and walls and interior partitions; modeling of the complex structure by replacing interior partitions by additional material at exterior walls; effects of elevation angle changes; effects on the dose of changes in fission product ambient spectra; and modeling of mutual shielding due to external structures. In addition, point source results yielding dose extremes about the ring source average were obtained. (auth)
[Urban non-point source pollution control by runoff retention and filtration pilot system].
Bai, Yao; Zuo, Jian-E; Gan, Li-Li; Low, Thong Soon; Miao, Heng-Feng; Ruan, Wen-Quan; Huang, Xia
2011-09-01
A runoff retention and filtration pilot system was designed, and the long-term purification performance of the runoff treatment was monitored. Runoff pollution characteristics in two typical events and the treatment performance of the pilot system were analyzed. The results showed that the runoff was severely polluted. Event mean concentrations (EMCs) of SS, COD, TN and TP in the runoff were 361, 135, 7.88 and 0.62 mg/L, respectively. Runoff formed by prolonged rain presented an obvious first flush effect: the first 25% of the flow contributed more than 50% of the total pollutant loading of SS, TP, DTP and PO4(3-). The pilot system could remove 100% of the non-point source pollution if the runoff volume was less than the capacity of the retention tank. Otherwise, the overflow was purified by the filtration unit, and the removal rates of SS, COD, TN, TP, DTP and PO4(3-) reached 97.4%, 61.8%, 22.6%, 85.1%, 72.1%, and 85.2%, respectively. The system was stable, and the overall removal rates of SS, COD, TN, and TP were 98.6%, 65.4%, 55.1% and 92.6%. The whole system could effectively remove the non-point source pollution caused by runoff.
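The event mean concentration and first-flush figures quoted above come from standard runoff calculations; a minimal sketch with synthetic hydrograph data (flow rates, concentrations and sample times are assumed, not the monitored values) is:

    import numpy as np

    # Minimal sketch (synthetic data): event mean concentration (EMC) and the share of
    # the pollutant load carried by the first 25% of runoff volume (first-flush check).
    t_min = np.array([0, 10, 20, 30, 40, 50, 60])         # sample times (min)
    q = np.array([2.0, 8.0, 12.0, 7.0, 4.0, 2.0, 1.0])    # runoff rate (L/s), assumed
    c = np.array([500, 420, 300, 180, 90, 60, 40])        # SS concentration (mg/L), assumed

    dt = np.gradient(t_min) * 60.0                        # seconds per sample
    volume = q * dt                                       # L per interval
    load = c * volume / 1000.0                            # g per interval

    emc = load.sum() * 1000.0 / volume.sum()              # mg/L
    cum_v = np.cumsum(volume) / volume.sum()
    cum_l = np.cumsum(load) / load.sum()
    first_flush = np.interp(0.25, cum_v, cum_l)           # load fraction at 25% of volume

    print(f"EMC = {emc:.0f} mg/L; first 25% of flow carried {first_flush:.0%} of the load")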
Zhou, Xiuru; Ye, Weili; Zhang, Bing
2016-03-01
Transaction costs and uncertainty are considered to be significant obstacles in the emissions trading market, especially for including nonpoint sources in water quality trading. This study develops a nonlinear programming model to simulate how uncertainty and transaction costs affect the performance of point/nonpoint source (PS/NPS) water quality trading in the Lake Tai watershed, China. The results demonstrate that PS/NPS water quality trading is a highly cost-effective instrument for emissions abatement in the Lake Tai watershed, saving 89.33% of pollution abatement costs compared to trading only between nonpoint sources. However, uncertainty can significantly reduce cost-effectiveness by reducing trading volume. In addition, transaction costs from bargaining and decision making raise total pollution abatement costs directly and cause the offset system to deviate from the optimal state, whereas proper investment in monitoring and measuring nonpoint emissions can decrease uncertainty and reduce total abatement costs. Finally, we show that the dispersed ownership of China's farmland will bring high uncertainty and transaction costs into the PS/NPS offset system, even if the pollution abatement cost is lower than for point sources. Copyright © 2015 Elsevier Ltd. All rights reserved.
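A minimal sketch of the kind of least-cost allocation such an offset model solves, assuming hypothetical quadratic abatement-cost curves, a trading ratio standing in for uncertainty, and a per-ton transaction cost (none of the numbers reflect the Lake Tai model):

    from scipy.optimize import minimize

    # Minimal sketch (hypothetical cost curves): least-cost allocation of abatement
    # between a point source (PS) and a nonpoint source (NPS) when NPS reductions are
    # discounted by a trading ratio that reflects uncertainty. Numbers are illustrative.
    target = 100.0          # required effective load reduction (t/yr)
    trading_ratio = 2.0     # NPS tons needed to offset 1 effective ton (uncertainty)
    transaction_cost = 5.0  # cost per NPS ton traded (bargaining/decision making)

    def total_cost(x):
        a_ps, a_nps = x
        cost_ps = 0.8 * a_ps ** 2           # assumed quadratic PS abatement cost
        cost_nps = 0.2 * a_nps ** 2         # NPS abatement is cheaper per ton
        return cost_ps + cost_nps + transaction_cost * a_nps

    constraints = [{"type": "ineq",
                    "fun": lambda x: x[0] + x[1] / trading_ratio - target}]
    res = minimize(total_cost, x0=[50.0, 100.0], bounds=[(0, None), (0, None)],
                   constraints=constraints)
    print("PS abatement:", round(res.x[0], 1), "NPS abatement:", round(res.x[1], 1),
          "total cost:", round(res.fun, 1))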
Compliance Groundwater Monitoring of Nonpoint Sources - Emerging Approaches
NASA Astrophysics Data System (ADS)
Harter, T.
2008-12-01
Groundwater monitoring networks are typically designed for regulatory compliance of discharges from industrial sites. There, the quality of first encountered (shallow-most) groundwater is of key importance. Network design criteria have been developed for purposes of determining whether an actual or potential, permitted or incidental waste discharge has had or will have a degrading effect on groundwater quality. The fundamental underlying paradigm is that such discharge (if it occurs) will form a distinct contamination plume. Networks that guide (post-contamination) mitigation efforts are designed to capture the shape and dynamics of existing, finite-scale plumes. In general, these networks extend over areas less than one to ten hectares. In recent years, regulatory programs such as the EU Nitrate Directive and the U.S. Clean Water Act have forced regulatory agencies to also control groundwater contamination from non-incidental, recharging, non-point sources, particularly agricultural sources (fertilizer, pesticides, animal waste application, biosolids application). Sources and contamination from these sources can stretch over several tens, hundreds, or even thousands of square kilometers with no distinct plumes. A key question in implementing monitoring programs at the local, regional, and national level is whether groundwater monitoring can be effectively used as a landowner compliance tool, as is currently done at point-source sites. We compare the efficiency of such traditional site-specific compliance networks in nonpoint source regulation with various designs of regional nonpoint source monitoring networks that could be used for compliance monitoring. We discuss advantages and disadvantages of the site vs. regional monitoring approaches with respect to effectively protecting groundwater resources impacted by nonpoint sources: Site-networks provide a tool to enforce compliance by an individual landowner. But the nonpoint source character of the contamination and its typically large spatial extent requires extensive networks at an individual site to accurately and fairly monitor individual compliance. In contrast, regional networks seemingly fail to hold individual landowners accountable. But regional networks can effectively monitor large-scale impacts and water quality trends, and thus inform regulatory programs that enforce management practices tied to nonpoint source pollution. Regional monitoring networks for compliance purposes can face significant challenges in the implementation due to a regulatory and legal landscape that is exclusively structured to address point sources and individual liability, and due to the non-intensive nature of a regional monitoring program (lack of control of hot spots; lack of accountability of individual landowners).
Point source moving above a finite impedance reflecting plane - Experiment and theory
NASA Technical Reports Server (NTRS)
Norum, T. D.; Liu, C. H.
1978-01-01
A widely used experimental version of the acoustic monopole consists of an acoustic driver of restricted opening forced by a discrete frequency oscillator. To investigate the effects of forward motion on this source, it was mounted above an automobile and driven over an asphalt surface at constant speed past a microphone array. The shapes of the received signal were compared to results computed from an analysis of a fluctuating-mass-type point source moving above a finite impedance reflecting plane. Good agreement was found between experiment and theory when a complex normal impedance representative of a fairly hard acoustic surface was used in the analysis.
Stochastic point-source modeling of ground motions in the Cascadia region
Atkinson, G.M.; Boore, D.M.
1997-01-01
A stochastic model is used to develop preliminary ground motion relations for the Cascadia region for rock sites. The model parameters are derived from empirical analyses of seismographic data from the Cascadia region. The model is based on a Brune point-source characterized by a stress parameter of 50 bars. The model predictions are compared to ground-motion data from the Cascadia region and to data from large earthquakes in other subduction zones. The point-source simulations match the observations from moderate events (M 100 km). The discrepancy at large magnitudes suggests further work on modeling finite-fault effects and regional attenuation is warranted. In the meantime, the preliminary equations are satisfactory for predicting motions from events of M < 7 and provide conservative estimates of motions from larger events at distances less than 100 km.
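For reference, a minimal sketch of the Brune omega-square point-source spectrum that underlies this kind of stochastic simulation; the constants follow the common convention (shear-wave velocity in km/s, stress parameter in bars, moment in dyne-cm) and the input magnitude is illustrative:

    import numpy as np

    # Minimal sketch of the Brune omega-square point-source spectrum used in stochastic
    # ground-motion simulation. Values here are illustrative, not the Cascadia calibration.
    def brune_spectrum(mw, stress_bars=50.0, beta_km_s=3.7, freqs=None):
        if freqs is None:
            freqs = np.logspace(-1, 1.5, 100)                       # 0.1-31.6 Hz
        m0 = 10 ** (1.5 * mw + 16.05)                               # seismic moment, dyne-cm
        f0 = 4.9e6 * beta_km_s * (stress_bars / m0) ** (1.0 / 3.0)  # corner frequency, Hz
        shape = 1.0 / (1.0 + (freqs / f0) ** 2)                     # omega-square displacement shape
        accel = (2 * np.pi * freqs) ** 2 * m0 * shape               # un-normalized acceleration spectrum
        return freqs, f0, accel

    freqs, f0, accel = brune_spectrum(mw=6.5)
    print(f"Corner frequency for M6.5 at 50 bars: {f0:.2f} Hz")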
An extension of the Lighthill theory of jet noise to encompass refraction and shielding
NASA Technical Reports Server (NTRS)
Ribner, Herbert S.
1995-01-01
A formalism for jet noise prediction is derived that includes the refractive 'cone of silence' and other effects; outside the cone it approximates the simple Lighthill format. A key step is deferral of the simplifying assumption of uniform density in the dominant 'source' term. The result is conversion to a convected wave equation retaining the basic Lighthill source term. The main effect is to amend the Lighthill solution to allow for refraction by mean flow gradients, achieved via a frequency-dependent directional factor. A general formula for power spectral density emitted from unit volume is developed as the Lighthill-based value multiplied by a squared 'normalized' Green's function (the directional factor), referred to a stationary point source. The convective motion of the sources, with its powerful amplifying effect, also directional, is already accounted for in the Lighthill format: wave convection and source convection are decoupled. The normalized Green's function appears to be near unity outside the refraction-dominated 'cone of silence'; this validates our long-term practice of using Lighthill-based approaches outside the cone, with extension inside via the Green's function. The function is obtained either experimentally (injected 'point' source) or numerically (computational aeroacoustics). Approximation by unity seems adequate except near the cone and except when there are shrouding jets: in that case the difference from unity quantifies the shielding effect. Further extension yields dipole and monopole source terms (cf. Morfey, Mani, and others) when the mean flow possesses density gradients (e.g., hot jets).
Angular displacement measuring device
NASA Technical Reports Server (NTRS)
Seegmiller, H. Lee B. (Inventor)
1992-01-01
A system for measuring the angular displacement of a point of interest on a structure, such as an aircraft model within a wind tunnel, includes a source of polarized light located at the point of interest. A remote detector arrangement detects the orientation of the plane of the polarized light received from the source and compares this orientation with the initial orientation to determine the amount or rate of angular displacement of the point of interest. The detector arrangement comprises a rotating polarizing filter and a dual filter and light detector unit. The latter unit comprises an inner aligned filter and photodetector assembly which is disposed relative to the periphery of the polarizer so as to receive polarized light passing the polarizing filter and an outer aligned filter and photodetector assembly which receives the polarized light directly, i.e., without passing through the polarizing filter. The purpose of the unit is to compensate for the effects of dust, fog and the like. A polarization preserving optical fiber conducts polarized light from a remote laser source to the point of interest.
Geometrical analysis of an optical fiber bundle displacement sensor
NASA Astrophysics Data System (ADS)
Shimamoto, Atsushi; Tanaka, Kohichi
1996-12-01
The performance of a multifiber optical lever was geometrically analyzed by extending the Cook and Hamm model [Appl. Opt. 34, 5854-5860 (1995)] for a basic seven-fiber optical lever. The generalized relationships of sensitivity and the displacement detection limit to the fiber core radius, illumination irradiance, and coupling angle were obtained by analyzing three types of light source: a parallel-beam light source, an infinite plane light source, and a point light source. The analysis of the point light source was confirmed by a measurement that used a light-emitting diode as the light source. The sensitivity of the fiber-optic lever is inversely proportional to the fiber core radius, whereas the received light power is proportional to the number of illuminating and receiving fibers. Thus, bundling finer fibers in larger numbers of illuminating and receiving fibers is more effective for improving the sensitivity and the displacement detection limit.
Wang, Yandong; Yang, Jun; Liang, Jiping; Qiang, Yanfang; Fang, Shanqi; Gao, Minxue; Fan, Xiaoyu; Yang, Gaihe; Zhang, Baowen; Feng, Yongzhong
2018-08-15
The environmental behavior of farmers plays an important role in exploring the causes of non-point source pollution and taking scientific control and management measures. Based on the theory of planned behavior (TPB), the present study investigated the environmental behavior of farmers in the Water Source Area of the Middle Route of the South-to-North Water Diversion Project in China. Results showed that TPB could explain farmers' environmental behavior (SMC=0.26) and intention (SMC=0.36) well. Furthermore, the farmers' attitude towards behavior (AB), subjective norm (SN), and perceived behavioral control (PBC) positively and significantly influenced their environmental intention; their environmental intention further impacted their behavior. SN proved to be the main factor indirectly influencing the farmers' environmental behavior, while PBC had no significant direct effect. Moreover, a moderated mediation analysis of environmental knowledge on the TPB constructs was conducted, with environmental knowledge as the moderator and gender and age as control variables. The analysis demonstrated that gender had a significant controlling effect on environmental behavior; that is, males engaged in more environmentally friendly behaviors. Age showed a significant negative controlling effect on pro-environmental intention and an opposite effect on pro-environmental behavior. In addition, environmental knowledge negatively moderated the relationship between PBC and environmental intention: PBC had a greater impact on the environmental intention of farmers with poor environmental knowledge than on that of farmers with ample environmental knowledge. Altogether, the present study provides a theoretical basis for non-point source pollution control and management. Copyright © 2018 Elsevier B.V. All rights reserved.
Inferring Models of Bacterial Dynamics toward Point Sources
Jashnsaz, Hossein; Nguyen, Tyler; Petrache, Horia I.; Pressé, Steve
2015-01-01
Experiments have shown that bacteria can be sensitive to small variations in chemoattractant (CA) concentrations. Motivated by these findings, our focus here is on a regime rarely studied in experiments: bacteria tracking point CA sources (such as food patches or even prey). In tracking point sources, the CA detected by bacteria may show very large spatiotemporal fluctuations which vary with distance from the source. We present a general statistical model to describe how bacteria locate point sources of food on the basis of stochastic event detection, rather than CA gradient information. We show how all model parameters can be directly inferred from single cell tracking data even in the limit of high detection noise. Once parameterized, our model recapitulates bacterial behavior around point sources such as the “volcano effect”. In addition, while the search by bacteria for point sources such as prey may appear random, our model identifies key statistical signatures of a targeted search for a point source given any arbitrary source configuration. PMID:26466373
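A minimal sketch, not the authors' model, of the basic idea of stochastic event detection near a point source: with a steady 1/r concentration field, detections are Poisson counts whose relative fluctuations grow with distance (all rates and distances below are assumed):

    import numpy as np

    # Minimal sketch: detection events near a point chemoattractant source. For a
    # steady point source in 3D the mean concentration falls off as 1/r, so detection
    # events are modeled as Poisson counts with rate proportional to 1/r; relative
    # fluctuations seen by the cell grow with distance from the source.
    rng = np.random.default_rng(0)
    source_strength = 50.0            # assumed mean detections per second at r = 1 um
    dt = 1.0                          # observation window (s)

    for r in [1.0, 5.0, 20.0]:        # distance from the source (um)
        rate = source_strength / r
        counts = rng.poisson(rate * dt, size=10000)
        cv = counts.std() / counts.mean()          # relative fluctuation
        print(f"r = {r:4.0f} um: mean detections = {counts.mean():5.1f}, CV = {cv:.2f}")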
NASA Astrophysics Data System (ADS)
Borrello, M. C.; Scribner, M.; Chessin, K.
2013-12-01
A growing body of research draws attention to the negative environmental impacts on surface water from large livestock facilities. These impacts are mostly in the form of excessive nutrient loading resulting in significantly decreased oxygen levels. Over-application of animal waste on fields, as well as direct discharge into surface water from the facilities themselves, have been identified as the main contributors to the development of hypoxic zones in Lake Erie, Chesapeake Bay and the Gulf of Mexico. Some regulators claim enforcement of water quality laws is problematic because of the nature and pervasiveness of non-point source impacts. Any direct discharge by a facility is a violation of permits governed by the Clean Water Act, unless the facility has special dispensation for discharge. Previous research by the principal author and others has shown runoff and underdrain transport are the main mechanisms by which nutrients enter surface water. This study utilized previous work to determine if the effects of non-point source discharge can be distinguished from direct (point-source) discharge using simple nutrient analysis and dissolved oxygen (DO) parameters. Nutrient and DO parameters were measured from three sites: 1. A stream adjacent to a field receiving manure, upstream of a large livestock facility with a history of direct discharge, 2. The same stream downstream of the facility and 3. A stream in an area relatively unimpacted by large-scale agriculture (control site). Results show that calculating simple Pearson correlation coefficients (r) of soluble reactive phosphorus (SRP) versus ammonia over time, as well as of temperature versus DO, distinguishes non-point source from point source discharge into surface water. The r value for SRP and ammonia for the upstream site was 0.01 while the r value for the downstream site was 0.92. The control site had an r value of 0.20. Likewise, r values were calculated for temperature and DO at each site. High negative correlations between temperature and DO are indicative of a relatively unimpacted stream. Results from this analysis are commensurate with the nutrient correlations and are: r = -0.97 for the upstream site, r = -0.21 for the downstream site and r = -0.89 for the control site. Results from every site tested were statistically significant (p ≤ 0.05). These results support previous studies and demonstrate that the simple analytical techniques mentioned provide an effective means for regulatory agencies and community groups to monitor and identify point source discharge from large livestock facilities.
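The screening statistic is simply a Pearson correlation computed over time; a minimal sketch with synthetic monitoring data (concentrations and sample counts are invented, not the study's measurements) is:

    import numpy as np
    from scipy import stats

    # Minimal sketch with synthetic data: a high positive Pearson r between soluble
    # reactive phosphorus (SRP) and ammonia downstream of a facility suggests a common
    # point-source discharge, while r near zero is consistent with diffuse inputs.
    rng = np.random.default_rng(1)
    n = 24                                        # biweekly samples over a year, assumed
    common_signal = rng.lognormal(0, 0.6, n)      # shared point-source pulse

    srp_down = 0.05 + 0.10 * common_signal + rng.normal(0, 0.01, n)
    nh3_down = 0.20 + 0.80 * common_signal + rng.normal(0, 0.05, n)
    srp_up = rng.normal(0.05, 0.02, n)            # upstream: independent variation
    nh3_up = rng.normal(0.20, 0.08, n)

    for label, x, y in [("upstream", srp_up, nh3_up), ("downstream", srp_down, nh3_down)]:
        r, p = stats.pearsonr(x, y)
        print(f"{label}: r = {r:.2f}, p = {p:.3f}")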
Paladino, Ombretta; Moranda, Arianna; Seyedsalehi, Mahdi
2017-01-01
A procedure for assessing harbour pollution by heavy metals and PAH and the possible sources of contamination is proposed. The procedure is based on a ratio-matching method applied to the results of principal component analysis (PCA), and it allows discrimination between point and nonpoint sources. The approach can be adopted when many sources of pollution, both internal and outside but close to the harbour, can contribute within a very narrow coastal ecosystem, and it was used to identify the possible point sources of contamination in a Mediterranean harbour (Port of Vado, Savona, Italy). 235 sediment samples were collected at 81 sampling points during four monitoring campaigns and 28 chemicals were searched for within the collected samples. PCA of the total samples allowed the assessment of 8 main possible point sources, while the refining ratio-matching identified 1 sampling point as a possible PAH source, 2 sampling points as Cd point sources, and 3 sampling points as C > 12 point sources. By a map analysis it was possible to assess two internal sources of pollution directly related to terminal activity. The study is the continuation of a previous work aimed at assessing Savona-Vado Harbour pollution levels and suggests strategies to regulate the harbour activities.
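A minimal sketch of the two screening steps named above, PCA of sediment chemistry followed by a ratio check against a candidate source fingerprint; the analyte list, synthetic concentrations, ratio and threshold are all assumptions, not the Savona-Vado data:

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.preprocessing import StandardScaler

    # Minimal sketch (synthetic data): (1) PCA of sediment chemistry to group
    # co-varying contaminants, (2) a ratio check to flag sampling points whose
    # contaminant ratios match a candidate point-source fingerprint.
    rng = np.random.default_rng(2)
    n_samples, analytes = 81, ["PAH", "Cd", "Pb", "C>12", "Zn"]
    X = rng.lognormal(mean=0.0, sigma=0.5, size=(n_samples, len(analytes)))
    X[:3, 0] *= 8.0                      # three samples enriched in PAH (hot spots)

    scores = PCA(n_components=2).fit_transform(StandardScaler().fit_transform(X))

    source_ratio = 6.0                   # assumed PAH/Pb ratio of the suspected source
    sample_ratio = X[:, 0] / X[:, 2]
    matches = np.where(np.abs(sample_ratio - source_ratio) / source_ratio < 0.3)[0]
    print("PC1 score range:", scores[:, 0].min().round(2), "to", scores[:, 0].max().round(2))
    print("Sampling points matching the source ratio:", matches)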
Non-point source pollution is a diffuse source that is difficult to measure and is highly variable due to different rain patterns and other climatic conditions. In many areas, however, non-point source pollution is the greatest source of water quality degradation. Presently, stat...
Modal Analysis Using the Singular Value Decomposition and Rational Fraction Polynomials
2017-04-06
The programs are designed for experimental datasets with multiple drive and response points and have proven effective even for systems with numerous closely-spaced...
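A minimal sketch of one common SVD-based step in this kind of modal analysis, a complex mode indicator function computed from a synthetic FRF matrix (modal frequencies, damping and matrix dimensions are assumed; the rational fraction polynomial fit itself is not reproduced):

    import numpy as np

    # Minimal sketch (synthetic FRFs): a complex mode indicator function (CMIF) takes
    # the singular values of the frequency response function (FRF) matrix at each
    # frequency line; peaks of the first singular value indicate modes, and closely
    # spaced modes also show up in the second singular value.
    freqs = np.linspace(1, 100, 500)
    modes = [(20.0, 0.02), (22.0, 0.02), (55.0, 0.01)]       # (fn, damping), assumed
    n_out, n_in = 4, 2
    rng = np.random.default_rng(3)
    residues = rng.normal(size=(len(modes), n_out, n_in))

    H = np.zeros((len(freqs), n_out, n_in), dtype=complex)
    for (fn, zeta), R in zip(modes, residues):
        wn, w = 2 * np.pi * fn, 2 * np.pi * freqs
        H += R[None, :, :] / (wn**2 - w[:, None, None]**2 + 2j * zeta * wn * w[:, None, None])

    cmif = np.array([np.linalg.svd(Hf, compute_uv=False) for Hf in H])
    print("Peak of 1st singular value near", freqs[np.argmax(cmif[:, 0])].round(1), "Hz")
    print("Peak of 2nd singular value near", freqs[np.argmax(cmif[:, 1])].round(1), "Hz")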
An analysis of lamp irradiation in ellipsoidal mirror furnaces
NASA Astrophysics Data System (ADS)
Rivas, Damián; Vázquez-Espí, Carlos
2001-03-01
The irradiation generated by halogen lamps in ellipsoidal mirror furnaces is analyzed, in configurations suited to the study of the floating-zone technique for crystal growth in microgravity conditions. A line-source model for the lamp (instead of a point source) is developed, so that the longitudinal extent of the filament is taken into account. With this model the case of defocussed lamps can be handled analytically. In the model the lamp is formed by an aggregate of point-source elements, placed along the axis of the ellipsoid. For these point sources (which, in general, are defocussed) an irradiation model is formulated, within the approximation of geometrical optics. The irradiation profiles obtained (both on the lateral surface and on the inner base of the cylindrical sample) are analyzed. They present singularities related to the caustics formed by the family of reflected rays; these caustics are also analyzed. The lamp model is combined with a conduction-radiation model to study the temperature field in the sample. The effects of defocussing the lamp (common practice in crystal growth) are studied; advantages and some drawbacks are pointed out. Comparison with experimental results is made.
Quantitative identification of riverine nitrogen from point, direct runoff and base flow sources.
Huang, Hong; Zhang, Baifa; Lu, Jun
2014-01-01
We present a methodological example for quantifying the contributions of riverine total nitrogen (TN) from point, direct runoff and base flow sources by combining a recursive digital filter technique and statistical methods. First, we separated daily riverine flow into direct runoff and base flow using a recursive digital filter technique; then, a statistical model was established using daily simultaneous data for TN load, direct runoff rate, base flow rate, and temperature; and finally, the TN loading from direct runoff and base flow sources could be inversely estimated. As a case study, this approach was adopted to identify the TN source contributions in Changle River, eastern China. Results showed that, during 2005-2009, the total annual TN input to the river was 1,700.4±250.2 ton, and the contributions of point, direct runoff and base flow sources were 17.8±2.8%, 45.0±3.6%, and 37.2±3.9%, respectively. The innovation of the approach is that the nitrogen from direct runoff and base flow sources could be separately quantified. The approach is simple but detailed enough to take the major factors into account, providing an effective and reliable method for riverine nitrogen loading estimation and source apportionment.
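The abstract does not name the specific filter, so the sketch below uses the widely applied single-pass Lyne-Hollick recursive digital filter as a stand-in (alpha = 0.925 and the daily flows are assumptions):

    import numpy as np

    # Minimal sketch of a one-pass Lyne-Hollick recursive digital filter, a common
    # choice for separating direct runoff from base flow.
    def baseflow_separation(q, alpha=0.925):
        q = np.asarray(q, dtype=float)
        quick = np.zeros_like(q)                      # filtered direct-runoff component
        for t in range(1, len(q)):
            quick[t] = alpha * quick[t - 1] + 0.5 * (1 + alpha) * (q[t] - q[t - 1])
            quick[t] = min(max(quick[t], 0.0), q[t])  # keep components non-negative
        return q - quick, quick                       # (base flow, direct runoff)

    daily_flow = [5, 5, 30, 80, 55, 30, 18, 12, 9, 7, 6, 5]   # m3/s, synthetic event
    base, direct = baseflow_separation(daily_flow)
    print("Base-flow fraction of total flow:", round(base.sum() / sum(daily_flow), 2))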
NASA Astrophysics Data System (ADS)
Sharan, Maithili; Singh, Amit Kumar; Singh, Sarvesh Kumar
2017-11-01
Estimation of an unknown atmospheric release from a finite set of concentration measurements is considered an ill-posed inverse problem. Besides ill-posedness, the estimation process is influenced by instrumental errors in the measured concentrations and by model representativity errors. The study highlights the effect of minimizing model representativity errors on the source estimation. This is described in an adjoint modelling framework and proceeds in three steps. First, an estimation of point source parameters (location and intensity) is carried out using an inversion technique. Second, a linear regression relationship is established between the measured concentrations and those predicted using the retrieved source parameters. Third, this relationship is utilized to modify the adjoint functions. Further, source estimation is carried out using these modified adjoint functions to analyse the effect of such modifications. The process is tested for two well-known inversion techniques, renormalization and least-squares. The proposed methodology and inversion techniques are evaluated for a real scenario by using concentration measurements from the Idaho diffusion experiment in low wind stable conditions. With both inversion techniques, a significant improvement is observed in the source retrieval after minimizing the representativity errors.
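A minimal sketch of a least-squares style point-source retrieval, not the renormalization technique itself: a grid search over candidate locations with a one-parameter least-squares intensity fit, using a generic 1/r^2 kernel in place of a dispersion model (all coordinates and noise levels are assumed):

    import numpy as np

    # Minimal sketch: grid search over candidate source locations with a linear
    # least-squares fit of the source intensity at each candidate location.
    rng = np.random.default_rng(4)
    receptors = rng.uniform(0, 100, size=(8, 2))              # receptor coordinates (m)
    true_xy, true_q = np.array([40.0, 60.0]), 5.0             # hidden source, unit rate

    def kernel(src, rec):
        r2 = np.sum((rec - src) ** 2, axis=1)
        return 1.0 / np.maximum(r2, 1.0)                      # generic decay, avoid singularity

    obs = true_q * kernel(true_xy, receptors) + rng.normal(0, 1e-4, len(receptors))

    best = None
    for x in np.arange(0, 101, 2.0):
        for y in np.arange(0, 101, 2.0):
            a = kernel(np.array([x, y]), receptors)
            q = a @ obs / (a @ a)                             # least-squares intensity
            resid = np.linalg.norm(obs - q * a)
            if best is None or resid < best[0]:
                best = (resid, x, y, q)

    print("Estimated source: x=%.0f, y=%.0f, intensity=%.2f" % best[1:])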
Gatti, Carlo; Macetti, Giovanni; Boyd, Russell J; Matta, Chérif F
2018-07-05
The source function (SF) decomposes the electron density at any point into contributions from all other points in the molecule, complex, or crystal. The SF "illuminates" those regions in a molecule that most contribute to the electron density at a point of reference. When this point of reference is the bond critical point (BCP), a commonly used surrogate of chemical bonding, then the SF analysis at an atomic resolution within the framework of Bader's Quantum Theory of Atoms in Molecules returns the contribution of each atom in the system to the electron density at that BCP. The SF is used to locate the important regions that control the hydrogen bonds in both Watson-Crick (WC) DNA dimers (adenine:thymine (AT) and guanine:cytosine (GC)), which are studied in their neutral and their singly ionized (radical cationic and anionic) ground states. The atomic contributions to the electron density at the BCPs of the hydrogen bonds in the two dimers are found to be delocalized to various extents. Surprisingly, gaining or losing an electron has similar net effects on some hydrogen bonds, concealing subtle compensations traced to atomic source contributions. Coarser levels of resolution (groups, rings, and/or monomers-in-dimers) reveal that distant groups and rings often have non-negligible effects, especially on the weaker hydrogen bonds such as the third weak CH⋅⋅⋅O hydrogen bond in AT. Interestingly, neither the purine nor the pyrimidine in the neutral or ionized forms dominates any given hydrogen bond, despite the former having more atoms that can act as a source or sink for the density at its BCP. © 2018 Wiley Periodicals, Inc.
Gravitational lensing of quasars as seen by the Hubble Space Telescope Snapshot Survey
NASA Technical Reports Server (NTRS)
Maoz, D.; Bahcall, J. N.; Doxsey, R.; Schneider, D. P.; Bahcall, N. A.; Lahav, O.; Yanny, B.
1992-01-01
Results from the ongoing HST Snapshot Survey are presented, with emphasis on 152 high-luminosity, z greater than 1 quasars. One quasar among those observed, 1208 + 1011, is a candidate lens system with subarcsecond image separation. Six other quasars have point sources within 6 arcsec. Ground-based observations of five of these cases show that the companion point sources are foreground Galactic stars. The predicted lensing frequency of the sample is calculated for a variety of cosmological models. The effect of uncertainties in some of the observational parameters upon the predictions is discussed. No correlation of the drift rate with time, right ascension, declination, or point error is found.
WHAT TO DO IF THERE IS NO REFERENCE SYSTEM
Efficient management and regulation of anthropogenic point source inputs requires a demonstrated and measurable ecological effect. In all ecosystems, the evaluation of anthropogenic effects is confounded by the effects of naturally existing environmental factors and river nonpoin...
LETTER TO EDITOR ON ARTICLE "ARSENIC MEANS BUSINESS"
The letter to the editor was written to point out that different forms of arsenic are found in source waters and that the technologies listed in the article such as POU RO will not necessarily be effective on all waters. The letter pointed out that most technologies are more eff...
An improved DPSM technique for modelling ultrasonic fields in cracked solids
NASA Astrophysics Data System (ADS)
Banerjee, Sourav; Kundu, Tribikram; Placko, Dominique
2007-04-01
In recent years the Distributed Point Source Method (DPSM) has been used for modelling various ultrasonic, electrostatic and electromagnetic field problems. In conventional DPSM several point sources are placed near the transducer face, interfaces and anomaly boundaries. The ultrasonic or electromagnetic field at any point is computed by superimposing the contributions of the different layers of strategically placed point sources. The conventional DPSM modelling technique is modified in this paper so that the contributions of point sources in the shadow region can be removed from the calculations. For this purpose the conventional point sources that radiate in all directions are replaced by Controlled Space Radiation (CSR) sources. CSR sources can take care of the shadow region problem to some extent; complete removal of the shadow region problem can be achieved by introducing artificial interfaces. Numerically synthesized fields obtained with the conventional DPSM technique, which gives no special consideration to point sources in the shadow region, and with the proposed modified technique, which nullifies the contributions of point sources in the shadow region, are compared. One application of this research can be found in the improved modelling of real-time ultrasonic non-destructive evaluation experiments.
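The core superposition step of a DPSM-type calculation can be sketched as a sum of spherical-wave point-source contributions; the frequency, geometry and uniform source strengths below are assumptions, and the CSR and shadow-region corrections proposed in the paper are not modeled:

    import numpy as np

    # Minimal sketch: the field at a target point is the sum of spherical-wave
    # contributions exp(ikr)/r from a layer of point sources distributed over the
    # transducer face.
    freq, c = 1.0e6, 1500.0                    # 1 MHz in water (m/s), assumed
    k = 2 * np.pi * freq / c

    xs = np.linspace(-5e-3, 5e-3, 21)          # point sources along a 10 mm aperture
    sources = np.column_stack([xs, np.zeros_like(xs), np.zeros_like(xs)])
    strengths = np.ones(len(xs))               # uniform source strengths (assumed)

    def field(target):
        r = np.linalg.norm(sources - target, axis=1)
        return np.sum(strengths * np.exp(1j * k * r) / r)

    for z in [5e-3, 20e-3, 50e-3]:             # on-axis observation points
        p = field(np.array([0.0, 0.0, z]))
        print(f"z = {z*1e3:4.0f} mm: |p| = {abs(p):.1f}")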
Sediment delivery to the Gulf of Alaska: source mechanisms along a glaciated transform margin
Dobson, M.R.; O'Leary, D.; Veart, M.
1998-01-01
Sediment delivery to the Gulf of Alaska occurs via four areally extensive deep-water fans, sourced from grounded tidewater glaciers. During periods of climatic cooling, glaciers cross a narrow shelf and discharge sediment down the continental slope. Because the coastal terrain is dominated by fjords and a narrow, high-relief Pacific watershed, deposition is dominated by channellized point-source fan accumulations, the volumes of which are primarily a function of climate. The sediment distribution is modified by a long-term tectonic translation of the Pacific plate to the north along the transform margin. As a result, the deep-water fans are gradually moved away from the climatically controlled point sources. Sets of abandoned channels record the effect of translation during the Plio-Pleistocene.
Huang, Ning; Wang, Hong Ying; Lin, Tao; Liu, Qi Ming; Huang, Yun Feng; Li, Jian Xiong
2016-10-01
Watershed landscape pattern regulation and optimization based on 'source-sink' theory is a cost-effective measure for non-point source pollution control, but it is still at an exploratory stage. Taking the whole watershed as the research object, and building on landscape ecology, related theories and existing research results, this study developed a regulation framework of watershed landscape pattern for non-point source pollution control at two levels based on 'source-sink' theory: 1) at the watershed level, a reasonable basic combination and spatial pattern of 'source-sink' landscape was analyzed, and a holistic method for regulating and optimizing the landscape pattern was constructed; 2) at the landscape patch level, key 'source' landscape was taken as the focus of regulation and optimization. Firstly, four identification criteria for key 'source' landscape were developed: landscape pollutant loading per unit area, landscape slope, long and narrow transfer 'source' landscape, and pollutant loading per unit length of 'source' landscape along the riverbank. Secondly, nine types of regulation and optimization methods for different key 'source' landscapes in rural and urban areas were established, according to three regulation and optimization rules: 'sink' landscape inlay, banding 'sink' landscape supplement, and enhancement of the pollutant capacity of the original 'sink' landscape. Finally, the regulation framework was applied to the watershed of Maluan Bay in Xiamen City. A holistic regulation and optimization mode of the watershed landscape pattern of Maluan Bay, and key 'source' landscape regulation and optimization measures for the three zones, were developed based on GIS technology, remote sensing images and a DEM model.
Evaluation of selective vs. point-source perforating for hydraulic fracturing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Underwood, P.J.; Kerley, L.
1996-12-31
This paper is a case history comparing and evaluating the effects of fracturing the Reef Ridge Diatomite formation in the Midway-Sunset Field, Kern County, California, using "select-fire" and "point-source" perforating completions. A description of the reservoir, production history, and fracturing techniques used leading up to this study is presented. Fracturing treatment analysis and production history matching were used to evaluate the reservoir and fracturing parameters for both completion types. The work showed that single fractures were created with the point-source (PS) completions, and multiple fractures resulted from many of the select-fire (SF) completions. A good correlation was developed between productivity and the product of formation permeability, net fracture height, bottomhole pressure, and propped fracture length. Results supported the continued development of 10 wells using the PS concept with a more efficient treatment design, resulting in substantial cost savings.
Laboratory Measurement of the Brighter-fatter Effect in an H2RG Infrared Detector
NASA Astrophysics Data System (ADS)
Plazas, A. A.; Shapiro, C.; Smith, R.; Huff, E.; Rhodes, J.
2018-06-01
The “brighter-fatter” (BF) effect is a phenomenon—originally discovered in charge-coupled devices—in which the size of the detector point-spread function (PSF) increases with brightness. We present, for the first time, laboratory measurements demonstrating the existence of the effect in a Hawaii-2RG HgCdTe near-infrared (NIR) detector. We use JPL’s Precision Projector Laboratory, a facility for emulating astronomical observations with UV/VIS/NIR detectors, to project about 17,000 point sources onto the detector to stimulate the effect. After calibrating the detector for nonlinearity with flat-fields, we find evidence that charge is nonlinearly shifted from bright pixels to neighboring pixels during exposures of point sources, consistent with the existence of a BF-type effect. NASA's Wide Field Infrared Survey Telescope (WFIRST) will use similar detectors to measure weak gravitational lensing from the shapes of hundreds of millions of galaxies in the NIR. The WFIRST PSF size must be calibrated to ≈0.1% to avoid biased inferences of dark matter and dark energy parameters; therefore further study and calibration of the BF effect in realistic images will be crucial.
McLerran, Larry; Skokov, Vladimir V.
2016-09-19
We modify the McLerran–Venugopalan model to include only a finite number of sources of color charge. In the effective action for such a system of a finite number of sources, there is a point-like interaction and a Coulombic interaction. The point interaction generates the standard fluctuation term in the McLerran–Venugopalan model. The Coulomb interaction generates the charge screening originating from well known evolution in x. Such a model may be useful for computing angular harmonics of flow measured in high energy hadron collisions for small systems. In this study we provide a basic formulation of the problem on a lattice.
Polarization from Thomson scattering of the light of a spherical, limb-darkened star
NASA Technical Reports Server (NTRS)
Rudy, R. J.
1979-01-01
The polarized flux produced by the Thomson scattering of the light of a spherical, limb-darkened star by optically thin, extrastellar regions of electrons is calculated and contrasted to previous models which treated the star as a point source. The point-source approximation is found to be valid for scattering by particles more than a stellar radius from the surface of the star but is inappropriate for those lying closer. The specific effect of limb darkening on the fractional polarization of the total light of a system is explored. If the principal source of light is the unpolarized flux of the star, the polarization is nearly independent of limb darkening.
The distribution of infrared point sources in nearby elliptical galaxies
NASA Astrophysics Data System (ADS)
Gogoi, Rupjyoti; Shalima, P.; Misra, Ranjeev
2018-02-01
Infrared (IR) point sources observed by Spitzer in nearby early-type galaxies should either be bright sources in the galaxy, such as globular clusters, or background sources such as AGNs. These objects are often counterparts of sources in other wavebands such as optical and X-rays, and the IR data provide crucial information regarding their nature. However, many of the IR sources may be background objects, and it is important to identify them or at least quantify the level of background contamination. Moreover, the distribution of these IR point sources in flux, distance from the centre and colour would be useful in understanding their origin. Archival Spitzer IRAC images provide a unique opportunity for such a study, and here we present the results of such an analysis for four nearby galaxies, NGC 1399, NGC 2768, NGC 4365 and NGC 4649. We estimate the background contamination using several blank fields. Our results suggest that IR colours can be effectively used to differentiate between sources in the galaxy and background ones. In particular, we find that sources having AGN-like colours are indeed consistent with being background AGNs. For sources with non-AGN-like colours we compute the distribution of flux and normalised distance from the centre, which is found to be of a power-law form. Although our sample size is small, the power-law indices for the galaxies are different, indicating perhaps that the galaxy environment may be playing a part in their origin and nature.
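A minimal sketch of a generic maximum-likelihood power-law fit of a point-source flux distribution (synthetic fluxes; this is not the fitting procedure used in the paper):

    import numpy as np

    # Minimal sketch (synthetic fluxes): maximum-likelihood estimate of a power-law
    # index, alpha_hat = 1 + n / sum(ln(x/xmin)), with its standard error.
    rng = np.random.default_rng(5)
    xmin, alpha_true, n = 1.0, 2.0, 300
    fluxes = xmin * (1 - rng.random(n)) ** (-1.0 / (alpha_true - 1.0))   # inverse-CDF sampling

    alpha_hat = 1.0 + n / np.sum(np.log(fluxes / xmin))
    err = (alpha_hat - 1.0) / np.sqrt(n)
    print(f"Estimated power-law index: {alpha_hat:.2f} +/- {err:.2f}")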
Determination of flash point in air and pure oxygen using an equilibrium closed bomb apparatus.
Kong, Dehong; am Ende, David J; Brenek, Steven J; Weston, Neil P
2003-08-29
The standard closed testers for flash point measurements may not be feasible for measuring flash point in special atmospheres like oxygen, because the test atmosphere cannot be maintained due to leakage and laboratory safety can be compromised. To address these limitations we developed a new "equilibrium closed bomb" (ECB). The ECB generally gives lower flash point values than standard closed cup testers, as shown by results for six flammable liquids. The present results are generally in good agreement with values calculated from the reported lower flammability limits and vapor pressures. Our measurements show that increased oxygen concentration had little effect on the flash points of the tested flammable liquids. While generally regarded as non-flammable because of the lack of an observed flash point in standard closed cup testers, dichloromethane is known to form flammable mixtures. The flash point of dichloromethane in oxygen measured in the ECB is -7.1 degrees C. The flash point of dichloromethane in air is dependent on the type and energy of the ignition source. Further research is being carried out to establish the relationship between the flash point of dichloromethane and the energy of the ignition source.
The acoustic field of a point source in a uniform boundary layer over an impedance plane
NASA Technical Reports Server (NTRS)
Zorumski, W. E.; Willshire, W. L., Jr.
1986-01-01
The acoustic field of a point source in a boundary layer above an impedance plane is investigated analytically using Obukhov quasi-potential functions, extending the normal-mode theory of Chunchuzov (1984) to account for the effects of finite ground-plane impedance and source height. The solution is found to be asymptotic to the surface-wave term studied by Wenzel (1974) in the limit of vanishing wind speed, suggesting that normal-mode theory can be used to model the effects of an atmospheric boundary layer on infrasonic sound radiation. Model predictions are derived for noise-generation data obtained by Willshire (1985) at the Medicine Bow wind-turbine facility. Long-range downwind propagation is found to behave as a cylindrical wave, with attenuation proportional to the wind speed, the boundary-layer displacement thickness, the real part of the ground admittance, and the square of the frequency.
Ivahnenko, Tamara; Ortiz, Roderick F.; Stogner, Sr., Robert W.
2013-01-01
As a result of continued water-quality concerns in the Arkansas River, including metal contamination from historical mining practices, potential effects associated with storage and movement of water, point- and nonpoint-source contamination, population growth, storm-water flows, and future changes in land and water use, the Arkansas River Basin Regional Resource Planning Group (RRPG) developed a strategy to address these issues. As such, a cooperative strategic approach to address the multiple water-quality concerns within selected reaches of the Arkansas River was developed to (1) identify stream reaches where stream-aquifer interactions have a pronounced effect on water quality and (or) where reactive transport, and physical and (or) chemical alteration of flow during conveyance, is occurring, (2) quantify loading from point sources, and (3) determine source areas and mass loading for selected constituents.
NASA Astrophysics Data System (ADS)
Tong, X. X.; Hu, B.; Xu, W. S.; Liu, J. G.; Zhang, P. C.
2017-12-01
In this paper, the Three Gorges Reservoir Area (TGRA) was chosen as the study area. The export coefficients of different land-use types were calculated through observation experiments and literature review, and the loads of non-point source (NPS) nitrogen and phosphorus from different pollution sources, including farmland, decentralized livestock and poultry breeding, and domestic sources, were then estimated. The results are as follows: the pollution load from dry land is the main source of farmland pollution. The total nitrogen loads of the different pollution sources, from high to low, are livestock breeding pollution, domestic pollution, and land-use pollution, while the phosphorus loads, from high to low, are land-use pollution, livestock breeding pollution, and domestic pollution. Therefore, reasonable farmland management and effective control of dry-land fertilization and of sewage discharge from livestock breeding are the keys to the prevention and control of NPS nitrogen and phosphorus in the TGRA.
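A minimal sketch of an export-coefficient load estimate of the type described above; the coefficients, areas, livestock numbers and population are placeholders, not the values derived for the TGRA:

    # Minimal sketch of an export-coefficient estimate of non-point source (NPS) loads:
    # load = sum over sources of (export coefficient x source size).
    tn_coeff = {"dry land": 29.0, "paddy": 9.0, "forest": 2.4}      # kg N/(ha*yr), assumed
    areas_ha = {"dry land": 12000, "paddy": 8000, "forest": 45000}  # assumed areas

    livestock_units, tn_per_unit = 60000, 0.6      # kg N per unit per yr, assumed
    population, tn_per_person = 250000, 0.4        # kg N per person per yr, assumed

    land_load = sum(tn_coeff[k] * areas_ha[k] for k in tn_coeff)
    livestock_load = livestock_units * tn_per_unit
    domestic_load = population * tn_per_person

    total = land_load + livestock_load + domestic_load
    for name, load in [("land use", land_load), ("livestock", livestock_load),
                       ("domestic", domestic_load)]:
        print(f"{name:10s}: {load/1000:8.1f} t N/yr ({load/total:.0%})")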
THE SOURCE STRUCTURE OF 0642+449 DETECTED FROM THE CONT14 OBSERVATIONS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xu, Ming H.; Wang, Guang L.; Heinkelmann, Robert
2016-11-01
The CONT14 campaign with state-of-the-art very long baseline interferometry (VLBI) data has observed the source 0642+449 with about 1000 observables each day during a continuous observing period of 15 days, providing tens of thousands of closure delays—the sum of the delays around a closed loop of baselines. The closure delay is independent of the instrumental and propagation delays and provides valuable additional information about the source structure. We demonstrate the use of this new “observable” for the determination of the structure in the radio source 0642+449. This source, as one of the defining sources in the second realization of the International Celestial Reference Frame, is found to have two point-like components with a relative position offset of −426 microarcseconds (μas) in R.A. and −66 μas in decl. The two components are almost equally bright, with a flux-density ratio of 0.92. The standard deviation of closure delays for source 0642+449 was reduced from 139 to 90 ps by using this two-component model. Closure delays larger than 1 ns are found to be related to the source structure, demonstrating that structure effects for a source with this simple structure could be up to tens of nanoseconds. The method described in this paper does not rely on a priori source structure information, such as knowledge of source structure determined from direct (Fourier) imaging of the same observations or observations at other epochs. We anticipate our study to be a starting point for more effective determination of the structure effect in VLBI observations.
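A minimal sketch of why closure delays are insensitive to instrumental and propagation effects: station-based delay terms cancel identically around a baseline triangle, so the closure reflects only source structure and noise (all delay values below are synthetic):

    import numpy as np

    # Minimal sketch: the closure delay tau_AB + tau_BC + tau_CA cancels station-based
    # terms (clocks, atmosphere), leaving only structure delays plus measurement noise.
    rng = np.random.default_rng(6)
    clock = {"A": 3.2e-9, "B": -1.1e-9, "C": 0.7e-9}           # station delays (s)
    structure = {("A", "B"): 40e-12, ("B", "C"): -15e-12,      # structure delays (s)
                 ("C", "A"): 60e-12}

    def baseline_delay(i, j):
        return clock[j] - clock[i] + structure[(i, j)] + rng.normal(0, 5e-12)

    closure = (baseline_delay("A", "B") + baseline_delay("B", "C")
               + baseline_delay("C", "A"))
    print(f"Closure delay: {closure*1e12:.1f} ps "
          f"(structure sum = {sum(structure.values())*1e12:.1f} ps)")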
A NEW LOOK AT CUSTOMS UNION THEORY,
is the sole source of any gain in consumers' welfare that might result from a customs union. It accounts for both trade creation and the consumption... In the report the following points are discussed: (1) Analytically, the welfare effect of a customs union - whether trade creating, trade diverting...effect. (3) Using as a point of reference an appropriate policy of nonpreferential protection, a customs union necessarily results in pure trade
Developing a Near Real-time System for Earthquake Slip Distribution Inversion
NASA Astrophysics Data System (ADS)
Zhao, Li; Hsieh, Ming-Che; Luo, Yan; Ji, Chen
2016-04-01
Advances in observational and computational seismology in the past two decades have enabled completely automatic and real-time determinations of the focal mechanisms of earthquake point sources. However, seismic radiations from moderate and large earthquakes often exhibit strong finite-source directivity effect, which is critically important for accurate ground motion estimations and earthquake damage assessments. Therefore, an effective procedure to determine earthquake rupture processes in near real-time is in high demand for hazard mitigation and risk assessment purposes. In this study, we develop an efficient waveform inversion approach for the purpose of solving for finite-fault models in 3D structure. Full slip distribution inversions are carried out based on the identified fault planes in the point-source solutions. To ensure efficiency in calculating 3D synthetics during slip distribution inversions, a database of strain Green tensors (SGT) is established for 3D structural model with realistic surface topography. The SGT database enables rapid calculations of accurate synthetic seismograms for waveform inversion on a regular desktop or even a laptop PC. We demonstrate our source inversion approach using two moderate earthquakes (Mw~6.0) in Taiwan and in mainland China. Our results show that 3D velocity model provides better waveform fitting with more spatially concentrated slip distributions. Our source inversion technique based on the SGT database is effective for semi-automatic, near real-time determinations of finite-source solutions for seismic hazard mitigation purposes.
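A minimal sketch of the linear step of a slip-distribution inversion, with a random matrix standing in for the Green's functions that the SGT database would supply (subfault count, slip values and noise are assumed):

    import numpy as np
    from scipy.optimize import nnls

    # Minimal sketch: observed waveforms d are fit by non-negative subfault slips m,
    # d ~ G m, where G holds synthetics (here a random stand-in for SGT-based
    # Green's functions). Geometry, smoothing and 3D synthetics are not reproduced.
    rng = np.random.default_rng(7)
    n_data, n_subfaults = 200, 12
    G = rng.normal(size=(n_data, n_subfaults))        # stand-in Green's function matrix

    true_slip = np.zeros(n_subfaults)
    true_slip[4:8] = [0.5, 1.2, 0.9, 0.3]             # slip concentrated mid-fault (m)
    d = G @ true_slip + rng.normal(0, 0.05, n_data)   # synthetic "observations"

    slip, resid = nnls(G, d)                          # non-negative least squares
    print("Recovered slip (m):", np.round(slip, 2))
    print("Residual norm:", round(resid, 3))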
Liu, Mei-bing; Chen, Xing-wei; Chen, Ying
2015-07-01
Identification of the critical source areas of non-point source pollution is an important means of controlling non-point source pollution within a watershed. In order to further reveal the impact of multiple time scales on the spatial differentiation characteristics of non-point source nitrogen loss, a SWAT model of the Shanmei Reservoir watershed was developed. Based on the simulated total nitrogen (TN) loss intensity of all 38 subbasins, the spatial distribution characteristics of nitrogen loss and the critical source areas were analyzed at three time scales: yearly average, monthly average, and rainstorm flood process. Furthermore, multiple linear correlation analysis was conducted to analyze the contributions of the natural environment and anthropogenic disturbance to nitrogen loss. The results showed that there were significant spatial differences in TN loss in the Shanmei Reservoir watershed at different time scales, and the degree of spatial differentiation of nitrogen loss was in the order monthly average > yearly average > rainstorm flood process. TN loss load mainly came from the upland Taoxi subbasin, which was identified as the critical source area. At all time scales, land use types (such as farmland and forest) were the dominant factor affecting the spatial distribution of nitrogen loss, whereas precipitation and runoff affected nitrogen loss only in months without fertilization and in several storm flood processes occurring on non-fertilization dates. This was mainly due to the significant spatial variation of land use and fertilization, together with the low spatial variability of precipitation and runoff.
MODELING MINERAL NITROGEN EXPORT FROM A FOREST TERRESTRIAL ECOSYSTEM TO STREAMS
Terrestrial ecosystems are major sources of N pollution to aquatic ecosystems. Predicting N export to streams is a critical goal of non-point source modeling. This study was conducted to assess the effect of terrestrial N cycling on stream N export using long-term monitoring da...
Effect of feed source and pyrolysis conditions on properties and metal sorption by sugarcane biochar
USDA-ARS?s Scientific Manuscript database
Population growth along with urbanization expansion and intensification of arable land management burdens natural systems ability to sustain ecosystem services such as clean waters. Development of low-cost sorbents for use in non-point-source runoff-water infiltration systems is essential for improv...
USDA-ARS?s Scientific Manuscript database
Conservation practices are effective ways to mitigate non-point source pollution, especially when implemented on critical source areas (CSAs) known to be the areas contributing disproportionately to high pollution loads. Although hydrologic models are promising tools to identify CSAs within agricul...
Energy harvesting influences electrochemical performance of microbial fuel cells
NASA Astrophysics Data System (ADS)
Lobo, Fernanda Leite; Wang, Xin; Ren, Zhiyong Jason
2017-07-01
Microbial fuel cells (MFCs) can be effective power sources for remote sensing, wastewater treatment and environmental remediation, but their performance needs significant improvement. This study systematically analyzes how active harvesting using electrical circuits increased MFC system outputs compared with passive resistors, not only at the traditional maximum power point (MPP) but also at other desired operating points such as the maximum current point (MCP) and the maximum voltage point (MVP). Results show that active harvesting at the MPP increased power output by 81-375% and active harvesting at the MCP increased Coulombic efficiency by 207-805% compared with resistors operated at the same points. The cyclic voltammograms revealed redox potential shifts and supported the performance data. The findings demonstrate that active harvesting is a very effective approach for improving MFC performance across different operating points.
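A minimal sketch of a perturb-and-observe controller of the kind used for active harvesting near the maximum power point, with the MFC idealized as a Thevenin source (open-circuit voltage and internal resistance are assumed values):

    # Minimal sketch: perturb-and-observe tracking of the maximum power point of an
    # MFC modeled as an ideal Thevenin source; values are illustrative.
    V_OC, R_INT = 0.6, 50.0            # volts, ohms (assumed)

    def mfc_power(r_load):
        i = V_OC / (R_INT + r_load)    # current drawn at this operating point
        return i * i * r_load          # power delivered to the harvester

    r_load, step, last_p = 10.0, 2.0, 0.0
    for _ in range(50):
        p = mfc_power(r_load)
        if p < last_p:                 # power dropped: reverse the perturbation
            step = -step
        last_p = p
        r_load = max(1.0, r_load + step)

    print(f"Converged near R_load = {r_load:.1f} ohm (MPP at R_INT = {R_INT} ohm), "
          f"P = {mfc_power(r_load)*1e3:.2f} mW")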
Point source emission reference materials from the Emissions Inventory Improvement Program (EIIP). Provides point source guidance on planning, emissions estimation, data collection, inventory documentation and reporting, and quality assurance/quality contr
Answering questions at the point of care: do residents practice EBM or manage information sources?
McCord, Gary; Smucker, William D; Selius, Brian A; Hannan, Scott; Davidson, Elliot; Schrop, Susan Labuda; Rao, Vinod; Albrecht, Paula
2007-03-01
To determine the types of information sources that evidence-based medicine (EBM)-trained, family medicine residents use to answer clinical questions at the point of care, to assess whether the sources are evidence-based, and to provide suggestions for more effective information-management strategies in residency training. In 2005, trained medical students directly observed (for two half-days per physician) how 25 third-year family medicine residents retrieved information to answer clinical questions arising at the point of care and documented the type and name of each source, the retrieval location, and the estimated time spent consulting the source. An end-of-study questionnaire asked 37 full-time faculty and the participating residents about the best information sources available, subscriptions owned, why they use a personal digital assistant (PDA) to practice medicine, and their experience in preventing medical errors using a PDA. Forty-four percent of questions were answered by attending physicians, 23% by consulting PDAs, and 20% from books. Seventy-two percent of questions were answered within two minutes. Residents rated UptoDate as the best source for evidence-based information, but they used this source only five times. PDAs were used because of ease of use, time factors, and accessibility. All examples of medical errors discovered or prevented with PDA programs were medication related. None of the participants' residencies required the use of a specific medical information resource. The results support the Agency for Health Care Research and Quality's call for medical system improvements at the point of care. Additionally, it may be necessary to teach residents better information-management skills in addition to EBM skills.
NASA Astrophysics Data System (ADS)
Zhu, Lei; Song, JinXi; Liu, WanQing
2017-12-01
Huaxian Section is the last hydrological and water quality monitoring section of Weihe River Watershed. Weihe River Watershed above Huaxian Section is taken as the research objective in this paper and COD is chosen as the water quality parameter. According to the discharge characteristics of point source pollution and non-point source pollution, a new method to estimate pollution loads, the characteristic section load (CSLD) method, is suggested, and point source and non-point source pollution loads of Weihe River Watershed above Huaxian Section are calculated for the rainy, normal and dry seasons of 2007. The results show that the monthly point source pollution loads of Weihe River Watershed above Huaxian Section discharge stably, whereas the monthly non-point source pollution loads change greatly, and the non-point source share of the total COD pollution load decreases in the normal, rainy and wet periods in turn.
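The abstract does not give the CSLD equations, so the following is only an illustrative sketch of the underlying idea: treat the stable dry-season load as the point-source baseline and attribute the flow-driven remainder to non-point sources. The monthly loads and the minimum-month baseline rule are assumptions for illustration, not the authors' formulation.

```python
# Illustrative split of a monthly COD load series into a stable point-source baseline
# and a flow-driven non-point remainder (all numbers hypothetical).
import numpy as np

def split_loads(monthly_load_t):
    """monthly_load_t: 12 monthly COD loads (t/month). The point-source part is
    approximated by the minimum monthly load, assumed to discharge stably all year."""
    monthly_load_t = np.asarray(monthly_load_t, dtype=float)
    point = np.full(12, monthly_load_t.min())            # stable baseline, one value per month
    nonpoint = np.clip(monthly_load_t - point, 0.0, None)
    return point, nonpoint

loads = [820, 790, 900, 1400, 2100, 2600, 3100, 2900, 2300, 1500, 1000, 850]  # hypothetical
p, npnt = split_loads(loads)
print(f"point share of annual load = {p.sum() / (p.sum() + npnt.sum()):.0%}")
```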
Calculating NH3-N pollution load of Weihe River Watershed above Huaxian Section using the CSLD method
NASA Astrophysics Data System (ADS)
Zhu, Lei; Song, JinXi; Liu, WanQing
2018-02-01
Huaxian Section is the last hydrological and water quality monitoring section of Weihe River Watershed, so it is taken as the research objective in this paper and NH3-N is chosen as the water quality parameter. According to the discharge characteristics of point source pollution and non-point source pollution, a new method to estimate pollution loads, the characteristic section load (CSLD) method, is suggested, and point source and non-point source pollution loads of Weihe River Watershed above Huaxian Section are calculated for the rainy, normal and dry seasons of 2007. The results show that the monthly point source pollution loads of Weihe River Watershed above Huaxian Section discharge stably while the monthly non-point source pollution loads change greatly. The non-point source share of the total NH3-N pollution load decreases in the normal, rainy and wet periods in turn.
NASA Astrophysics Data System (ADS)
Lakshmi, V.; Sen, I. S.; Mishra, G.
2017-12-01
There has been much discussion amongst biologists, ecologists, chemists, geologists, environmental firms, and science policy makers about the impact of human activities on river health. As a result, multiple river restoration projects are ongoing in many large river basins around the world. In the Indian subcontinent, the Ganges River is the focal point of all restoration actions as it provides food and water security to half a billion people. Serious concerns have been raised about the quality of Ganga water as toxic chemicals and other contaminants enter the river system through point sources, such as direct wastewater discharge to rivers, or non-point sources. Point source pollution can be easily identified and remedial actions can be taken; however, non-point pollution sources are harder to quantify and mitigate. A large non-point pollution source in the Indo-Gangetic floodplain is the network of small floodplain rivers. However, these rivers are rarely studied since they are small in catchment area (~1000-10,000 km²) and discharge (<100 m³/s). As a result, the impact of these small floodplain rivers on the dissolved chemical load of large river systems is not constrained. To fill this knowledge gap we monitored the Pandu River for one year between February 2015 and April 2016. The Pandu River is 242 km long and is a right-bank tributary of the Ganges with a total catchment area of 1495 km². Water samples were collected every month for dissolved major and trace elements. Here we show that the concentrations of heavy metals in the Pandu River are higher than the world river average, and all the dissolved elements show large spatio-temporal variation. We show that the Pandu River exports 192170, 168517, 57802, 32769, 29663, 1043, 279, 241, 225, 162, 97, 28, 25, 22, 20, 8, 4 kg/yr of Ca, Na, Mg, K, Si, Sr, Zn, B, Ba, Mn, Al, Li, Rb, Mo, U, Cu, and Sb, respectively, to the Ganga River, and the exported chemical flux affects the water chemistry of the Ganga River downstream of the confluence point. We further speculate that small floodplain rivers are an important source contributing to the dissolved chemical budget of large river systems, and they must be better monitored to address future challenges in river basin management.
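A minimal sketch of how monthly concentration and discharge pairs translate into the kind of annual export fluxes quoted above; the sampling scheme (one grab sample paired with a monthly mean discharge) and the numbers used are assumptions, not the study's data.

```python
# Annual dissolved export of an element, approximated as sum(C_i * Q_i * dt_i).
import numpy as np

def annual_flux_kg(conc_mg_per_L, q_m3_per_s, days_per_month=30.4):
    """conc_mg_per_L, q_m3_per_s: 12 monthly values. Returns export in kg/yr."""
    c = np.asarray(conc_mg_per_L)          # mg/L is numerically equal to g/m^3
    q = np.asarray(q_m3_per_s)
    seconds = days_per_month * 86400.0     # seconds per month
    return float(np.sum(c * q * seconds) / 1000.0)   # g -> kg

# Hypothetical trace element at ~0.1 mg/L carried by a mean discharge of ~0.07 m^3/s
print(annual_flux_kg([0.1] * 12, [0.07] * 12))       # ~220 kg/yr for this made-up case
```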
Limiting Magnitude, τ, t_eff, and Image Quality in DES Year 1
DOE Office of Scientific and Technical Information (OSTI.GOV)
H. Neilsen, Jr.; Bernstein, Gary; Gruendl, Robert
The Dark Energy Survey (DES) is an astronomical imaging survey being completed with the DECam imager on the Blanco telescope at CTIO. After each night of observing, the DES data management (DM) group performs an initial processing of that night's data, and uses the results to determine which exposures are of acceptable quality, and which need to be repeated. The primary measure by which we declare an image of acceptable quality is τ, a scaling of the exposure time. This is the scale factor that needs to be applied to the open shutter time to reach the same photometric signal to noise ratio for faint point sources under a set of canonical good conditions. These conditions are defined to be seeing resulting in a PSF full width at half maximum (FWHM) of 0.9" and a pre-defined sky brightness which approximates the zenith sky brightness under fully dark conditions. Point source limiting magnitude and signal to noise should therefore vary with τ in the same way they vary with exposure time. Measurements of point sources and τ in the first year of DES data confirm that they do. In the context of DES, the symbol t_eff and the expression "effective exposure time" usually refer to the scaling factor, τ, rather than the actual effective exposure time; the "effective exposure time" in this case refers to the effective duration of one second, rather than the effective duration of an exposure.
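A hedged sketch of the scaling described above: for a faint, background-limited point source, S/N² scales as transmission²·t/(sky·FWHM²), so the factor that maps open-shutter time onto canonical-condition time follows directly. The transmission term and the example numbers are assumptions, not the DES pipeline implementation.

```python
# Sketch of the scaling factor tau under the background-limited assumption stated above.
def tau(fwhm_arcsec, sky_brightness, sky_dark, transmission=1.0, fwhm_canonical=0.9):
    """sky_brightness / sky_dark in the same linear units (e.g. counts/s/pixel);
    `transmission` (relative atmospheric/cloud transparency) is an assumed extra term."""
    return (transmission ** 2
            * (fwhm_canonical / fwhm_arcsec) ** 2
            * (sky_dark / sky_brightness))

# 1.2" seeing and a sky twice as bright as dark sky degrade a 90 s exposure to ~25 s effective
t_exp = 90.0
print(t_exp * tau(fwhm_arcsec=1.2, sky_brightness=2.0, sky_dark=1.0))
```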
Staphylococcus xylosus fermentation of pork fatty waste: raw material for biodiesel production.
Marques, Roger Vasques; Paz, Matheus Francisco da; Duval, Eduarda Hallal; Corrêa, Luciara Bilhalva; Corrêa, Érico Kunde
2016-01-01
The need for cleaner sources of energy has stirred research into utilising alternate fuel sources with favourable emission and sustainability such as biodiesel. However, there are technical constraints that hinder the widespread use of some of the low cost raw materials such as pork fatty wastes. Currently available technology permits the use of lipolytic microorganisms to sustainably produce energy from fat sources; and several microorganisms and their metabolites are being investigated as potential energy sources. Thus, the aim of this study was to characterise the process of Staphylococcus xylosus mediated fermentation of pork fatty waste. We also wanted to explore the possibility of fermentation effecting a modification in the lipid carbon chain to reduce its melting point and thereby act directly on one of the main technical barriers to obtaining biodiesel from this abundant source of lipids. Pork fatty waste was obtained from slaughterhouses in southern Brazil during evisceration of the carcasses and the kidney casing of slaughtered animals was used as feedstock. Fermentation was performed in BHI broth with different concentrations of fatty waste and for different time periods which enabled evaluation of the effect of fermentation time on the melting point of swine fat. The lowest melting point was observed around 46°C, indicating that these chemical and biological reactions can occur under milder conditions, and that such pre-treatment may further facilitate production of biodiesel from fatty animal waste. Copyright © 2016 Sociedade Brasileira de Microbiologia. Published by Elsevier Editora Ltda. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Hongfen, E-mail: wanghongfen11@163.com; Wang, Zhiqi; Chen, Shougang
Molybdenum carbides with surfactants as carbon sources were prepared using the carbothermal reduction of the appropriate precursors (molybdenum oxides deposited on surfactant micelles) at 1023 K under hydrogen gas. The carburized products were characterized using scanning electron microscopy (SEM), X-ray diffraction and BET surface area measurements. From the SEM images, hollow microspherical and rod-like molybdenum carbides were observed. X-ray diffraction patterns showed that the annealing time of carburization had a large effect on the conversion of molybdenum oxides to molybdenum carbides, and BET surface area measurements indicated that the choice of carbon source produced a large difference in the specific surface areas of the molybdenum carbides. - Graphical abstract: Molybdenum carbides having hollow microspherical and hollow rod-like morphologies that are different from the conventional monodispersed platelet-like morphologies. Highlights: • Molybdenum carbides were prepared using surfactants as carbon sources. • The kinds of surfactants affected the morphologies of molybdenum carbides. • The time of heat preservation at 1023 K affected the carburization process. • Molybdenum carbides with hollow structures had larger specific surface areas.
The effect of directivity in a PSHA framework
NASA Astrophysics Data System (ADS)
Spagnuolo, E.; Herrero, A.; Cultrera, G.
2012-09-01
We propose a method to introduce a refined representation of the ground motion in the framework of Probabilistic Seismic Hazard Analysis (PSHA). This study is especially oriented to the incorporation of a priori information about source parameters, focusing on the directivity effect and its influence on seismic hazard maps. Two strategies have been followed. The first considers the seismic source as an extended source, and is valid when the PSHA seismogenetic sources are represented as fault segments. We show that the incorporation of variables related to the directivity effect can lead to variations of up to 20 per cent of the hazard level for dip-slip faults with a uniform distribution of hypocentre locations, in terms of the 5 s spectral acceleration response with a 10 per cent probability of exceedance in 50 yr. The second concerns the more general problem of seismogenetic areas, where each point is a seismogenetic source having the same chance of nucleating a seismic event. In our proposition the point source is associated with rupture-related parameters, defined using a statistical description. As an example, we consider a source point of an area characterized by a strike-slip faulting style. With the introduction of the directivity correction the modulation of the hazard map reaches values up to 100 per cent (for strike-slip, unilateral faults). The introduction of directivity does not increase the hazard level uniformly, but acts more like a redistribution of the estimate that is consistent with the fault orientation. A general increase appears only when no a priori information is available. However, good a priori knowledge now exists on the style of faulting, dip and orientation of the faults associated with the majority of the seismogenetic zones of present seismic hazard maps. The percentage of variation obtained is strongly dependent on the type of model chosen to represent the directivity effect analytically. Therefore, we aim to emphasize the methodology by which all the collected information may be easily converted to obtain a more comprehensive and meaningful probabilistic seismic hazard formulation.
Effects of Processing on MOS Radiation Hardening
1992-09-01
[Abstract garbled in extraction; recoverable fragments discuss impurity inclusion for fluorine sources versus chlorine sources, beneficial effects on point defects, traps and MOS quality, the addition of percent concentrations of chlorine- and fluorine-bearing compounds (e.g., gaseous nitrogen trifluoride) to the silicon process, and computation of the equilibrium partial pressures of the possible species.]
Wiechman, Shelley A; McMullen, Kara; Carrougher, Gretchen J; Fauerbach, James A; Ryan, Colleen M; Herndon, David N; Holavanahalli, Radha; Gibran, Nicole S; Roaten, Kimberly
2017-12-16
To identify important sources of distress among burn survivors at discharge and 6, 12, and 24 months postinjury, and to examine whether the distress related to these sources changed over time. Exploratory. Outpatient burn clinics in 4 sites across the country. Participants who met preestablished criteria for having a major burn injury (N=1009) were enrolled in this multisite study. Participants were given a previously developed list of 12 sources of distress among burn survivors and asked to rate on a 10-point Likert-type scale (0=no distress to 10=high distress) how much distress each of the 12 issues was causing them at the time of each follow-up. The Medical Outcomes Study 12-Item Short-Form Health Survey was administered at each time point as a measure of health-related quality of life. The Satisfaction With Appearance Scale was used to understand the relation between sources of distress and body image. Finally, whether a person returned to work was used to determine the effect of sources of distress on returning to employment. It was encouraging that no symptoms were worsening at 2 years. However, financial concerns and long recovery time had two of the highest mean ratings at all time points. Pain and sleep disturbance had the biggest effect on ability to return to work. These findings can be used to inform burn-specific interventions and to give survivors an understanding of the temporal trajectory for various causes of distress. In particular, it appears that interventions targeted at sleep disturbance and high pain levels can potentially affect distress over financial concerns by allowing a person to return to work more quickly. Copyright © 2017 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Bai, Yang; Wu, Lixin; Zhou, Yuan; Li, Ding
2017-04-01
Nitrogen oxides (NOX) and sulfur dioxide (SO2) emissions from coal combustion, which are oxidized quickly in the atmosphere, resulting in secondary aerosol formation and acid deposition, are the main sources of China's regional fog-haze pollution. Extensive literature has quantitatively estimated the lifetimes and emissions of NO2 and SO2 for large point sources such as coal-fired power plants and cities using satellite measurements. However, few of these methods are suitable for sources located in a heterogeneously polluted background. In this work, we present a simplified emission effective radius extraction model for point sources to study the NO2 and SO2 reduction trend in China with complex polluted sources. First, to find out the time range during which actual emissions could be derived from satellite observations, the spatial distribution characteristics of mean daily, monthly, seasonal and annual concentrations of OMI NO2 and SO2 around a single power plant were analyzed and compared. Then, a 100 km × 100 km geographical grid with a 1 km step was established around the source and the mean concentration of all satellite pixels covering each grid point was calculated by the area-weighted pixel-averaging approach. The emission effective radius is defined by the concentration gradient values near the power plant. Finally, the developed model is employed to investigate the characteristics and evolution of NO2 and SO2 emissions and verify the effectiveness of flue gas desulfurization (FGD) and selective catalytic reduction (SCR) devices applied in coal-fired power plants over the 10-year period from 2006 to 2015. It can be observed that the spatial distribution pattern of NO2 and SO2 concentrations in the vicinity of a large coal-burning source is not only affected by the coal-burning emissions themselves, but is also closely related to the process of pollutant transmission and diffusion caused by meteorological factors in different seasons. Our proposed model can be used to identify the effective operation time of FGD and SCR equipment in coal-fired power plants.
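A hedged sketch of the effective-radius idea: from a concentration field already gridded around the plant, take the azimuthally averaged radial profile and report the distance at which the outward gradient flattens below a threshold. The gradient threshold, the Gaussian test plume and the exact gridding rule are assumptions, not the authors' definitions.

```python
import numpy as np

def effective_radius_km(conc_grid, step_km=1.0, grad_threshold=-0.01):
    """conc_grid: square 2-D array of mean column concentrations centred on the source."""
    n = conc_grid.shape[0]
    yy, xx = np.indices(conc_grid.shape)
    r = np.hypot(xx - n // 2, yy - n // 2) * step_km
    bins = np.arange(0.0, (n // 2) * step_km, step_km)
    profile = np.array([conc_grid[(r >= a) & (r < a + step_km)].mean() for a in bins])
    grad = np.gradient(profile, step_km)           # radial concentration gradient
    steepest = int(np.argmin(grad))                # radius of fastest decay away from the plant
    flat = steepest + np.where(grad[steepest:] > grad_threshold)[0]
    return float(bins[flat[0]]) if flat.size else float(bins[-1])

# Hypothetical plume: Gaussian-shaped NO2 enhancement with a ~15 km scale
n = 100
y, x = np.indices((n, n))
plume = np.exp(-((x - n // 2) ** 2 + (y - n // 2) ** 2) / (2 * 15.0 ** 2))
print(effective_radius_km(plume))                  # ~36 km for this made-up field
```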
NASA Astrophysics Data System (ADS)
Gao, Mingxing; Jing, Hongwei; Cao, Xuedong; Chen, Lin; Yang, Jie
2015-08-01
When using the swing arm profilometer (SAP) to measure aspheric and off-axis aspheric mirrors, the error in the effective arm length of the SAP has an obvious influence on the measurement result. In order to reduce the influence of the effective arm length error and increase the measurement accuracy of the SAP, a laser tracker is adopted to measure the effective arm length. Because the spatial position relationship of the probe system of the SAP needs to be measured before using the laser tracker, the point source microscope (PSM) is used to measure this spatial positional relationship. The measurement principle of the PSM and other applications are introduced; the accuracy and repeatability of this technique are analysed; and the advantages and disadvantages of this technique are summarized.
40 CFR Appendix F to Part 132 - Great Lakes Water Quality Initiative Implementation Procedures
Code of Federal Regulations, 2012 CFR
2012-07-01
... the structure of the aquatic food web and the disequilibrium constant, are different at the site than..., the TMDL shall also indicate the point source effluent flows assumed in the analyses. Mass loading... more proximate sources interact or overlap, the combined effect must be evaluated to ensure that...
40 CFR Appendix F to Part 132 - Great Lakes Water Quality Initiative Implementation Procedures
Code of Federal Regulations, 2014 CFR
2014-07-01
... the structure of the aquatic food web and the disequilibrium constant, are different at the site than..., the TMDL shall also indicate the point source effluent flows assumed in the analyses. Mass loading... more proximate sources interact or overlap, the combined effect must be evaluated to ensure that...
OGLE-2003-BLG-262: Finite-Source Effects from a Point-Mass Lens
NASA Astrophysics Data System (ADS)
Yoo, Jaiyul; DePoy, D. L.; Gal-Yam, A.; Gaudi, B. S.; Gould, A.; Han, C.; Lipkin, Y.; Maoz, D.; Ofek, E. O.; Park, B.-G.; Pogge, R. W.; Mu-Fun Collaboration; Udalski, A.; Soszyński, I.; Wyrzykowski, Ł.; Kubiak, M.; Szymański, M.; Pietrzyński, G.; Szewczyk, O.; Żebruń, K.; OGLE Collaboration
2004-03-01
We analyze OGLE-2003-BLG-262, a relatively short (t_E = 12.5 ± 0.1 day) microlensing event generated by a point-mass lens transiting the face of a K giant source in the Galactic bulge. We use the resulting finite-source effects to measure the angular Einstein radius, θ_E = 195 ± 17 μas, and so constrain the lens mass to the FWHM interval 0.08
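A short sketch of the standard point-lens relation θ_E² = κ·M·π_rel (κ ≈ 8.144 mas per solar mass) that links a measured Einstein radius to the lens mass; the relative parallax used below is purely illustrative, not the value inferred for this event.

```python
# Point-lens mass from the angular Einstein radius, under an assumed relative parallax.
KAPPA_MAS_PER_MSUN = 8.144

def lens_mass_msun(theta_e_mas, pi_rel_mas):
    """theta_e_mas: angular Einstein radius (mas); pi_rel_mas: lens-source relative parallax (mas)."""
    return theta_e_mas ** 2 / (KAPPA_MAS_PER_MSUN * pi_rel_mas)

theta_e = 0.195                                  # 195 microarcsec from the event, in mas
print(lens_mass_msun(theta_e, pi_rel_mas=0.03))  # ~0.16 M_sun for this assumed parallax
```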
Loop Heat Pipe Operation Using Heat Source Temperature for Set Point Control
NASA Technical Reports Server (NTRS)
Ku, Jentung; Paiva, Kleber; Mantelli, Marcia
2011-01-01
The LHP operating temperature is governed by the saturation temperature of its reservoir. Controlling the reservoir saturation temperature is commonly accomplished by cold biasing the reservoir and using electrical heaters to provide the required control power. Using this method, the loop operating temperature can be controlled within ±0.5 K. However, because of the thermal resistance that exists between the heat source and the LHP evaporator, the heat source temperature will vary with its heat output even if the LHP operating temperature is kept constant. Since maintaining a constant heat source temperature is of most interest, a question often raised is whether the heat source temperature can be used for LHP set point temperature control. A test program with a miniature LHP has been carried out to investigate the effects on LHP operation when the control temperature sensor is placed on the heat source instead of the reservoir. In these tests, the LHP reservoir is cold-biased and is heated by a control heater. Test results show that it is feasible to use the heat source temperature for feedback control of LHP operation. Using this method, the heat source temperature can be maintained within a tight range for moderate and high powers. At low powers, however, temperature oscillations may occur due to interactions among the reservoir control heater power, the heat source mass, and the heat output from the heat source. In addition, the heat source temperature could temporarily deviate from its set point during fast thermal transients. The implication is that more sophisticated feedback control algorithms need to be implemented for LHP transient operation when the heat source temperature is used for feedback control.
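A minimal sketch, under stated assumptions, of the kind of feedback loop the tests imply: a proportional-integral controller that trims the reservoir control-heater power using the heat source temperature as the feedback signal. Gains, limits and temperatures are hypothetical; this is not the controller used in the test program.

```python
# PI control of the reservoir heater with the heat-source temperature as feedback.
def make_pi_controller(setpoint_c, kp=2.0, ki=0.05, p_max_w=10.0):
    integral = 0.0
    def update(heat_source_temp_c, dt_s):
        nonlocal integral
        error = setpoint_c - heat_source_temp_c   # positive error: source too cold
        integral += error * dt_s
        power = kp * error + ki * integral        # warm the reservoir to raise the source temperature
        return min(max(power, 0.0), p_max_w)      # heater power clamped to [0, p_max]
    return update

controller = make_pi_controller(setpoint_c=30.0)
for temp in (28.5, 29.2, 29.8, 30.1):             # fake telemetry samples, 10 s apart
    print(f"{temp:5.1f} C -> heater {controller(temp, 10.0):.2f} W")
```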
Kim, Geonha; Hur, Jin
2010-01-01
This research measured the mortality rates of pathogen indicator microorganisms discharged from various point and non-point sources in an urban area. Water samples were collected from a domestic sewer, a combined sewer overflow, the effluent of a wastewater treatment plant, and an urban river. Mortality rates of indicator microorganisms in the sediment of an urban river were also measured. Mortality rates of indicator microorganisms in domestic sewage, estimated by assuming first-order kinetics at 20 °C, were 0.197 day⁻¹, 0.234 day⁻¹, 0.258 day⁻¹ and 0.276 day⁻¹ for total coliform, fecal coliform, Escherichia coli, and fecal streptococci, respectively. The effects of temperature, sunlight irradiation and settlement on the mortality rate were measured. The results of this research can be used as input data for water quality modeling or as design factors for treatment facilities.
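A short sketch of the first-order die-off implied by the rates above, assuming base-e kinetics; if the reported coefficients are base-10, the exponential should be replaced by 10^(-kt).

```python
# First-order die-off N(t) = N0 * exp(-k t), using the E. coli rate reported above.
import math

def survivors(n0, k_per_day, t_days):
    return n0 * math.exp(-k_per_day * t_days)

k_ecoli = 0.258                                   # day^-1, from the study
print(survivors(1e6, k_ecoli, t_days=7))          # ~1.6e5 organisms left after a week
print(math.log(10) / k_ecoli)                     # T90: ~8.9 days to lose 90%
```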
Changing Regulations of COD Pollution Load of Weihe River Watershed above TongGuan Section, China
NASA Astrophysics Data System (ADS)
Zhu, Lei; Liu, WanQing
2018-02-01
TongGuan Section of Weihe River Watershed is a provincial section between Shaanxi Province and Henan Province, China. Weihe River Watershed above TongGuan Section is taken as the research objective in this paper and COD is chosen as the water quality parameter. According to the discharge characteristics of point source pollution and non-point source pollution, a method to estimate pollution loads, the characteristic section load (CSLD) method, is suggested, and point and non-point source pollution loads of Weihe River Watershed above TongGuan Section are calculated for the rainy, normal and dry seasons of 2013. The results show that the monthly point source pollution loads of Weihe River Watershed above TongGuan Section discharge stably, whereas the monthly non-point source pollution loads change greatly, and the non-point source share of the total COD pollution load decreases in the rainy, wet and normal periods in turn.
GARLIC, A SHIELDING PROGRAM FOR GAMMA RADIATION FROM LINE- AND CYLINDER- SOURCES
DOE Office of Scientific and Technical Information (OSTI.GOV)
Roos, M.
1959-06-01
GARLIC is a program for computing the gamma ray flux or dose rate at a shielded isotropic point detector due to a line source or the line equivalent of a cylindrical source. The source strength distribution along the line must be either uniform or an arbitrary part of the positive half-cycle of a cosine function. The line source can be oriented arbitrarily with respect to the main shield and the detector, except that the detector must not be located on the line source or on its extension. The main shield is a homogeneous plane slab in which scattered radiation is accounted for by multiplying each point element of the line source by a point source buildup factor inside the integral over the point elements. Between the main shield and the line source additional shields can be introduced, which are either plane slabs parallel to the main shield, or cylindrical rings coaxial with the line source. Scattered radiation in the additional shields can only be accounted for by constant buildup factors outside the integral. GARLIC-xyz is an extended version particularly suited for the frequently met problem of shielding a room containing a large number of line sources in different positions. The program computes the angles and linear dimensions of a problem for GARLIC when the positions of the detector point and the end points of the line source are given as points in an arbitrary rectangular coordinate system. As an example, the isodose curves in water are presented for a monoenergetic cosine-distributed line source at several source energies and for an operating fuel element of the Swedish reactor R3. (auth)
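A toy sketch of the point-kernel integration GARLIC performs: a uniform line source viewed through a slab shield, with exponential attenuation along the slant path and a crude linear buildup factor. The geometry, buildup model and numbers are simplified assumptions, not the program's actual treatment.

```python
# Point-kernel flux at a detector from a uniform line source behind a slab shield.
import math

def line_source_flux(s_l, length_cm, standoff_cm, slab_cm, mu_cm, n=2000):
    """s_l: source strength per unit length (photons/s/cm); standoff_cm: perpendicular
    distance from the line to the detector; slab_cm: shield thickness along that axis."""
    dl = length_cm / n
    total = 0.0
    for i in range(n):
        x = -length_cm / 2 + (i + 0.5) * dl        # position of the point element along the line
        r = math.hypot(x, standoff_cm)             # element-to-detector distance
        path = slab_cm * r / standoff_cm           # slant path length inside the slab
        buildup = 1.0 + mu_cm * path               # crude linear buildup factor (assumption)
        total += s_l * dl * buildup * math.exp(-mu_cm * path) / (4 * math.pi * r * r)
    return total                                   # flux, photons/cm^2/s

print(line_source_flux(s_l=1e6, length_cm=100, standoff_cm=200, slab_cm=10, mu_cm=0.06))
```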
NASA Astrophysics Data System (ADS)
Mao, Xuefeng; Zhou, Xinlei; Yu, Qingxu
2016-02-01
We describe an operating-point stabilization technique based on a tunable distributed feedback (DFB) laser for quadrature demodulation of interferometric sensors. By introducing automatic quadrature-point locking and periodic wavelength-tuning compensation into the interferometric system, the operating point of the system is stabilized when it suffers various environmental perturbations. To demonstrate the feasibility of this stabilization technique, experiments have been performed using a tunable DFB laser as the light source to interrogate an extrinsic Fabry-Perot interferometric vibration sensor and a diaphragm-based acoustic sensor. Experimental results show that good tracking of the Q-point was effectively realized.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rau, U.; Bhatnagar, S.; Owen, F. N., E-mail: rurvashi@nrao.edu
Many deep wideband wide-field radio interferometric surveys are being designed to accurately measure intensities, spectral indices, and polarization properties of faint source populations. In this paper, we compare various wideband imaging methods to evaluate the accuracy to which intensities and spectral indices of sources close to the confusion limit can be reconstructed. We simulated a wideband single-pointing (C-array, L-Band (1–2 GHz)) and 46-pointing mosaic (D-array, C-Band (4–8 GHz)) JVLA observation using a realistic brightness distribution ranging from 1 μJy to 100 mJy and time-, frequency-, polarization-, and direction-dependent instrumental effects. The main results from these comparisons are (a) errors in the reconstructed intensities and spectral indices are larger for weaker sources even in the absence of simulated noise, (b) errors are systematically lower for joint reconstruction methods (such as Multi-Term Multi-Frequency-Synthesis (MT-MFS)) along with A-Projection for accurate primary beam correction, and (c) use of MT-MFS for image reconstruction eliminates Clean-bias (which is present otherwise). Auxiliary tests include solutions for deficiencies of data partitioning methods (e.g., the use of masks to remove clean bias and hybrid methods to remove sidelobes from sources left un-deconvolved), the effect of sources not at pixel centers, and the consequences of various other numerical approximations within software implementations. This paper also demonstrates the level of detail at which such simulations must be done in order to reflect reality, enable one to systematically identify specific reasons for every trend that is observed, and to estimate scientifically defensible imaging performance metrics and the associated computational complexity of the algorithms/analysis procedures.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lew, Bartosz; Kus, Andrzej; Birkinshaw, Mark
We investigate the effectiveness of blind surveys for radio sources and galaxy cluster thermal Sunyaev-Zel'dovich effects (TSZEs) using the four-pair, beam-switched OCRA-f radiometer on the 32-m radio telescope in Poland. The predictions are based on mock maps that include the cosmic microwave background, TSZEs from hydrodynamical simulations of large scale structure formation, and unresolved radio sources. We validate the mock maps against observational data, and examine the limitations imposed by simplified physics. We estimate the effects of source clustering towards galaxy clusters from NVSS source counts around Planck-selected cluster candidates, and include appropriate correlations in our mock maps. The study allows us to quantify the effects of halo line-of-sight alignments, source confusion, and telescope angular resolution on the detections of TSZEs. We perform a similar analysis for the planned 100-m Hevelius radio telescope (RTH) equipped with a 49-beam radio camera and operating at frequencies up to 22 GHz. We find that RT32/OCRA-f will be suitable for small-field blind radio source surveys, and will detect 33 (+17/−11) new radio sources brighter than 0.87 mJy at 30 GHz in a 1 deg² field at >5σ CL during a one-year, non-continuous, observing campaign, taking account of Polish weather conditions. It is unlikely that any galaxy cluster will be detected at 3σ CL in such a survey. A 60-deg² survey, with field coverage of 2² beams per pixel, at 15 GHz with the RTH, would find <1.5 galaxy clusters per year brighter than 60 μJy (at 3σ CL), and would detect about 3.4 × 10⁴ point sources brighter than 1 mJy at 5σ CL, with confusion causing flux density errors ≲2% (20%) in 68% (95%) of the detected sources. A primary goal of the planned RTH will be a wide-area (π sr) radio source survey at 15 GHz. This survey will detect nearly 3 × 10⁵ radio sources at 5σ CL down to 1.3 mJy, and tens of galaxy clusters, in one year of operation with typical weather conditions. Confusion will affect the measured flux densities by ≲1.5% (16%) for 68% (95%) of the point sources. We also gauge the impact of the RTH by investigating its performance if equipped with the existing RT32 receivers, and the performance of the RT32 equipped with the RTH radio camera.
Exploring X-Ray Binary Populations in Compact Group Galaxies With Chandra
NASA Technical Reports Server (NTRS)
Tzanavaris, P.; Hornschemeier, A. E..; Gallagher, S. C.; Lenkic, L.; Desjardins, T. D.; Walker, L. M.; Johnson, K. E.; Mulchaey, J. S.
2016-01-01
We obtain total galaxy X-ray luminosities, LX, originating from individually detected point sources in a sample of 47 galaxies in 15 compact groups of galaxies (CGs). For the great majority of our galaxies, we find that the detected point sources most likely are local to their associated galaxy, and are thus extragalactic X-ray binaries (XRBs) or nuclear active galactic nuclei (AGNs). For spiral and irregular galaxies, we find that, after accounting for AGNs and nuclear sources, most CG galaxies are either within the ±1σ scatter of the Mineo et al. LX-star formation rate (SFR) correlation or have higher LX than predicted by this correlation for their SFR. We discuss how these "excesses" may be due to low metallicities and high interaction levels. For elliptical and S0 galaxies, after accounting for AGNs and nuclear sources, most CG galaxies are consistent with the Boroson et al. LX-stellar mass correlation for low-mass XRBs, with larger scatter, likely due to residual effects such as AGN activity or hot gas. Assuming non-nuclear sources are low- or high-mass XRBs, we use appropriate XRB luminosity functions to estimate the probability that stochastic effects can lead to such extreme LX values. We find that, although stochastic effects do not in general appear to be important, for some galaxies there is a significant probability that high LX values can be observed due to strong XRB variability.
Point source sulphur dioxide peaks and hospital presentations for asthma.
Donoghue, A M; Thomas, M
1999-04-01
To examine the effect on hospital presentations for asthma of brief exposures to sulphur dioxide (SO2) (within the range 0-8700 micrograms/m3) emanating from two point sources in a remote rural city of 25,000 people. A time series analysis of SO2 concentrations and hospital presentations for asthma was undertaken at Mount Isa where SO2 is released into the atmosphere by a copper smelter and a lead smelter. The study examined 5 minute block mean SO2 concentrations and daily hospital presentations for asthma, wheeze, or shortness of breath. Generalised linear models and generalised additive models based on a Poisson distribution were applied. There was no evidence of any positive relation between peak SO2 concentrations and hospital presentations or admissions for asthma, wheeze, or shortness of breath. Brief exposures to high concentrations of SO2 emanating from point sources at Mount Isa do not cause sufficiently serious symptoms in asthmatic people to require presentation to hospital.
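A hedged sketch of the kind of Poisson regression described above, built with statsmodels on synthetic data; the column names, covariates and data are hypothetical, and the generalised additive variants used in the study are not shown.

```python
# Poisson GLM of daily asthma presentations against daily peak SO2 (synthetic data).
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "peak_so2_ugm3": rng.gamma(2.0, 400.0, size=365),    # daily maximum 5-min SO2, ug/m3
    "temp_c": 25 + 8 * np.sin(np.arange(365) / 58.0),    # a simple seasonal covariate
})
df["asthma_presentations"] = rng.poisson(1.5, size=365)  # daily hospital counts (fake)

X = sm.add_constant(df[["peak_so2_ugm3", "temp_c"]])
model = sm.GLM(df["asthma_presentations"], X, family=sm.families.Poisson()).fit()
print(model.summary().tables[1])                         # rate ratio per unit SO2 = exp(coefficient)
```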
Effect of an overhead shield on gamma-ray skyshine
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stedry, M.H.; Shultis, J.K.; Faw, R.E.
1996-06-01
A hybrid Monte Carlo and integral line-beam method is used to determine the effect of a horizontal slab shield above a gamma-ray source on the resulting skyshine doses. A simplified Monte Carlo procedure is used to determine the energy and angular distribution of photons escaping the source shield into the atmosphere. The escaping photons are then treated as a bare, point, skyshine source, and the integral line-beam method is used to estimate the skyshine dose at various distances from the source. From results for arbitrarily collimated and shielded sources, the skyshine dose is found to depend primarily on the mean-free-path thickness of the shield and only very weakly on the shield material.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yi, Jianbing, E-mail: yijianbing8@163.com; Yang, Xuan, E-mail: xyang0520@263.net; Li, Yan-Ran, E-mail: lyran@szu.edu.cn
2015-10-15
Purpose: Image-guided radiotherapy is an advanced 4D radiotherapy technique that has been developed in recent years. However, respiratory motion causes significant uncertainties in image-guided radiotherapy procedures. To address these issues, an innovative lung motion estimation model based on a robust point matching is proposed in this paper. Methods: An innovative robust point matching algorithm using dynamic point shifting is proposed to estimate patient-specific lung motion during free breathing from 4D computed tomography data. The correspondence of the landmark points is determined from the Euclidean distance between the landmark points and the similarity between the local images that are centered at points at the same time. To ensure that the points in the source image correspond to the points in the target image during other phases, the virtual target points are first created and shifted based on the similarity between the local image centered at the source point and the local image centered at the virtual target point. Second, the target points are shifted by the constrained inverse function mapping the target points to the virtual target points. The source point set and shifted target point set are used to estimate the transformation function between the source image and target image. Results: The performances of the authors’ method are evaluated on two publicly available DIR-lab and POPI-model lung datasets. For computing target registration errors on 750 landmark points in six phases of the DIR-lab dataset and 37 landmark points in ten phases of the POPI-model dataset, the mean and standard deviation by the authors’ method are 1.11 and 1.11 mm, but they are 2.33 and 2.32 mm without considering image intensity, and 1.17 and 1.19 mm with sliding conditions. For the two phases of maximum inhalation and maximum exhalation in the DIR-lab dataset with 300 landmark points of each case, the mean and standard deviation of target registration errors on the 3000 landmark points of ten cases by the authors’ method are 1.21 and 1.04 mm. In the EMPIRE10 lung registration challenge, the authors’ method ranks 24 of 39. According to the index of the maximum shear stretch, the authors’ method is also efficient to describe the discontinuous motion at the lung boundaries. Conclusions: By establishing the correspondence of the landmark points in the source phase and the other target phases combining shape matching and image intensity matching together, the mismatching issue in the robust point matching algorithm is adequately addressed. The target registration errors are statistically reduced by shifting the virtual target points and target points. The authors’ method with consideration of sliding conditions can effectively estimate the discontinuous motion, and the estimated motion is natural. The primary limitation of the proposed method is that the temporal constraints of the trajectories of voxels are not introduced into the motion model. However, the proposed method provides satisfactory motion information, which results in precise tumor coverage by the radiation dose during radiotherapy.
40 CFR 51.35 - How can my state equalize the emission inventory effort from year to year?
Code of Federal Regulations, 2012 CFR
2012-07-01
... approach: (1) Each year, collect and report data for all Type A (large) point sources (this is required for all Type A point sources). (2) Each year, collect data for one-third of your sources that are not Type... save 3 years of data and then report all emissions from the sources that are not Type A point sources...
40 CFR 51.35 - How can my state equalize the emission inventory effort from year to year?
Code of Federal Regulations, 2010 CFR
2010-07-01
... approach: (1) Each year, collect and report data for all Type A (large) point sources (this is required for all Type A point sources). (2) Each year, collect data for one-third of your sources that are not Type... save 3 years of data and then report all emissions from the sources that are not Type A point sources...
40 CFR 51.35 - How can my state equalize the emission inventory effort from year to year?
Code of Federal Regulations, 2014 CFR
2014-07-01
... approach: (1) Each year, collect and report data for all Type A (large) point sources (this is required for all Type A point sources). (2) Each year, collect data for one-third of your sources that are not Type... save 3 years of data and then report all emissions from the sources that are not Type A point sources...
Liu, X; Zhai, Z
2008-02-01
Indoor pollution jeopardizes human health and welfare and may even cause serious morbidity and mortality under extreme conditions. Effectively controlling and improving indoor environmental quality requires immediate interpretation of pollutant sensor readings and accurate identification of the indoor pollution history and source characteristics (e.g. source location and release time). This procedure is complicated by non-uniform and dynamic indoor contaminant dispersion behaviors as well as diverse sensor network distributions. This paper introduces a probability-concept-based inverse modeling method that is able to identify the source location for an instantaneous point source placed in an enclosed environment with known source release time. The study presents the mathematical models that address three different sensing scenarios: sensors without concentration readings, sensors with spatial concentration readings, and sensors with temporal concentration readings. The paper demonstrates the inverse modeling method and algorithm with two case studies: air pollution in an office space and in an aircraft cabin. The predictions were successfully verified against the forward simulation settings, indicating good capability of the method in finding indoor pollutant sources. The research lays a solid ground for further study of the method for more complicated indoor contamination problems. The method developed can help track indoor contaminant source locations with limited sensor outputs. This will ensure an effective and prompt execution of building control strategies and thus achieve a healthy and safe indoor environment. The method can also assist the design of optimal sensor networks.
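A hedged sketch of the grid-search idea behind probability-based inverse source identification: score each candidate location by how well a precomputed forward prediction of sensor concentrations matches the measured readings. The Gaussian-error likelihood and the response library stand in for the CFD-based dispersion models used in the paper.

```python
# Grid-based candidate scoring for an instantaneous indoor point source.
import numpy as np

def locate_source(response_library, measured, sigma=0.05):
    """response_library: (n_candidates, n_sensors) predicted concentrations for a unit
    release at each candidate location; measured: (n_sensors,) readings.
    Returns a normalised posterior probability over the candidate locations."""
    resid = response_library - measured                    # broadcast over candidates
    log_like = -0.5 * np.sum((resid / sigma) ** 2, axis=1)
    post = np.exp(log_like - log_like.max())               # unnormalised, numerically stable
    return post / post.sum()

# Toy example: 4 candidate locations, 3 sensors; the "truth" is candidate 2
library = np.array([[0.9, 0.1, 0.0],
                    [0.4, 0.5, 0.1],
                    [0.1, 0.6, 0.3],
                    [0.0, 0.2, 0.8]])
readings = np.array([0.12, 0.58, 0.31])
print(locate_source(library, readings))                    # peaks at index 2
```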
Overlapped optics induced perfect coherent effects.
Li, Jian Jie; Zang, Xiao Fei; Mao, Jun Fa; Tang, Min; Zhu, Yi Ming; Zhuang, Song Lin
2013-12-20
For traditional coherent effects, two separated identical point sources can interfere with each other only when the optical path difference is an integer number of wavelengths, leading to alternating dark and bright fringes for different optical path differences. For hundreds of years, this coherence condition has seemed insurmountable. However, in this paper, based on transformation optics, two separated in-phase identical point sources can induce perfect interference with each other without satisfying the traditional coherence condition. This shifting illusion medium is realized by an inductor-capacitor transmission line network. Theoretical analysis, numerical simulations and experimental results are presented to confirm this kind of perfect coherent effect, and it is found that the total radiation power of a multiple-element system can be greatly enhanced. Our investigation may be applicable to the National Ignition Facility (NIF), Inertial Confined Fusion (ICF) of China, LED lighting technology, terahertz communication, and so on.
NASA Astrophysics Data System (ADS)
Chen, Li-si; Hu, Zhong-wen
2017-10-01
The image evaluation of an optical system is the core of optical design. Based on the analysis and comparison of the PSSN (Normalized Point Source Sensitivity) proposed in the image evaluation of the TMT (Thirty Meter Telescope) and the common image evaluation methods, the application of PSSN in the TMT WFOS (Wide Field Optical Spectrometer) is studied. It includes an approximate simulation of the atmospheric seeing, the effect of the displacement of M3 in the TMT on the PSSN of the system, the effect of the displacement of collimating mirror in the WFOS on the PSSN of the system, the relations between the PSSN and the zenith angle under different conditions of atmospheric turbulence, and the relation between the PSSN and the wavefront aberration. The results show that the PSSN is effective for the image evaluation of the TMT under a limited atmospheric seeing.
Chen, Andrew C N; Liu, Feng-Jun; Wang, Li; Arendt-Nielsen, Lars
2006-02-15
This study determined: (a) if acupuncture stimulation at a traditional site might modulate ongoing EEG as compared with stimulation of a control site; (b) if high-frequency vs. low-frequency stimulation could exert differential effects of acupuncture; (c) if the observed effects of acupuncture were specific to certain EEG bands; and (d) if the acupuncture effect could be isolated at a specific scalp field, with its putative underlying intracranial source. Twelve healthy male volunteers (age range 22-35) participated in two experimental sessions separated by 1 week, which involved transcutaneous acupoint stimulation at a selected acupoint (Li 4, HeGu) vs. a mock point at the fourth interosseous muscle area on the left hand, with high- (HF: 100 Hz) vs. low-frequency (LF: 2 Hz) stimulation in counter-balanced order. 124-channel EEG data were used to analyze the Delta, Theta, Alpha-1, Alpha-2, Beta, and Gamma bands. The absolute EEG powers (μV²) at focal maxima across three stages (baseline, stimulation, post) were examined by two-way (condition, stage) repeated measures ANOVA. The Theta power significantly decreased (P = 0.02), compared with control, during HF but not LF acupoint stimulation; however, there was no such effect at the mock point. The decrease in Theta EEG power was prominent at the frontal midline sites (FCz, Fz) and the contralateral right hemisphere frontal site (FCC2h). In contrast, the Theta power under low-frequency stimulation showed an increase from baseline, as in both control mock-point stimulations. The observed high-frequency acupoint stimulation effects on Theta EEG were only present during, but not after, stimulation. The topographic Theta activity was tentatively identified as originating from an intracranial current source in the cingulate cortex, likely ACC. It is likely that short-term cortical plasticity occurs during high-frequency but not low-frequency stimulation at the HeGu point, but not at the mock point. We suggest that HeGu acupuncture stimulation modulates the limbic cingulum by a frequency-dependent modulation mode, which may then damp nociceptive processing in the brain.
A double-correlation tremor-location method
NASA Astrophysics Data System (ADS)
Li, Ka Lok; Sgattoni, Giulia; Sadeghisorkhani, Hamzeh; Roberts, Roland; Gudmundsson, Olafur
2017-02-01
A double-correlation method is introduced to locate tremor sources based on stacks of complex, doubly-correlated tremor records of multiple triplets of seismographs back projected to hypothetical source locations in a geographic grid. Peaks in the resulting stack of moduli are inferred source locations. The stack of the moduli is a robust measure of energy radiated from a point source or point sources even when the velocity information is imprecise. Application to real data shows how double correlation focuses the source mapping compared to the common single correlation approach. Synthetic tests demonstrate the robustness of the method and its resolution limitations which are controlled by the station geometry, the finite frequency of the signal, the quality of the used velocity information and noise level. Both random noise and signal or noise correlated at time shifts that are inconsistent with the assumed velocity structure can be effectively suppressed. Assuming a surface wave velocity, we can constrain the source location even if the surface wave component does not dominate. The method can also in principle be used with body waves in 3-D, although this requires more data and seismographs placed near the source for depth resolution.
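A hedged sketch of the single-correlation back-projection the method builds on; the double-correlation step, which correlates pairs of these correlations over station triplets before stacking moduli, is omitted for brevity, and the constant surface-wave velocity and toy data are assumptions.

```python
# Back-project station-pair cross-correlations onto a geographic grid.
import numpy as np

def backproject(cc, pairs, stations, grid, fs, velocity):
    """cc: dict {(i, j): cross-correlation array, zero lag at the centre}; pairs: list of
    (i, j); stations, grid: arrays of (x, y) in km; fs: sampling rate (Hz); velocity: km/s."""
    image = np.zeros(len(grid))
    for (i, j) in pairs:
        corr = cc[(i, j)]
        centre = len(corr) // 2
        for g, (gx, gy) in enumerate(grid):
            d_i = np.hypot(gx - stations[i][0], gy - stations[i][1])
            d_j = np.hypot(gx - stations[j][0], gy - stations[j][1])
            lag = int(round((d_i - d_j) / velocity * fs))    # predicted differential travel time
            if 0 <= centre + lag < len(corr):
                image[g] += corr[centre + lag]               # stack correlation amplitude
    return image                                             # peaks at candidate tremor locations

# Toy check: one pair, a delta-like correlation at the lag predicted for grid node 2
fs, v = 100.0, 2.5
stations = np.array([[0.0, 0.0], [10.0, 0.0]])
grid = np.array([[2.0, 3.0], [5.0, 4.0], [8.0, 1.0]])
true = grid[2]
lag_true = int(round((np.hypot(*(true - stations[0])) - np.hypot(*(true - stations[1]))) / v * fs))
corr = np.zeros(801); corr[400 + lag_true] = 1.0
print(backproject({(0, 1): corr}, [(0, 1)], stations, grid, fs, v))   # largest at index 2
```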
NASA Astrophysics Data System (ADS)
Dupas, Rémi; Tittel, Jörg; Jordan, Phil; Musolff, Andreas; Rode, Michael
2018-05-01
A common assumption in phosphorus (P) load apportionment studies is that P loads in rivers consist of flow-independent point source emissions (mainly of domestic and industrial origin) and flow-dependent diffuse source emissions (mainly of agricultural origin). Hence, rivers dominated by point sources will exhibit their highest P concentrations during low flow, when the flow dilution capacity is minimal, whereas rivers dominated by diffuse sources will exhibit their highest P concentrations during high flow, when land-to-river hydrological connectivity is maximal. Here, we show that Soluble Reactive P (SRP) concentrations in three forested catchments free of point sources exhibited seasonal maxima during the summer low-flow period, i.e. a pattern expected in point source dominated areas. A load apportionment model (LAM) is used to show how the point source contribution may have been overestimated in previous studies, because of a biogeochemical process mimicking a point source signal. Almost twenty-two years (March 1995-September 2016) of monthly monitoring data for SRP, dissolved iron (Fe) and nitrate-N (NO3) were used to investigate the underlying mechanisms: SRP and Fe exhibited similar seasonal patterns, opposite to that of NO3. We hypothesise that reductive dissolution of Fe oxyhydroxides might be the cause of SRP release during the summer period, and that NO3 might act as a redox buffer controlling the seasonality of SRP release. We conclude that LAMs may overestimate the contribution of P point sources, especially during the summer low-flow period, when eutrophication risk is maximal.
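A hedged sketch of a typical load apportionment model of the kind discussed above: concentration modelled as a flow-diluted point term plus a flow-mobilised diffuse term, C(Q) = A·Q^(B-1) + D·Q^(E-1) with B < 1 and E > 1. The parameters below are illustrative rather than fitted; in practice they would be estimated from paired concentration-discharge data (e.g. with scipy.optimize.curve_fit).

```python
# Two-term load apportionment model evaluated with illustrative parameters.
import numpy as np

def lam_concentration(q, a, b, d, e):
    """Point term a*q**(b-1) (diluted as flow rises, b < 1) plus
    diffuse term d*q**(e-1) (mobilised as flow rises, e > 1)."""
    return a * q ** (b - 1.0) + d * q ** (e - 1.0)

q = np.array([0.05, 0.1, 0.5, 1.0, 2.0, 5.0])       # discharge, m3/s (hypothetical)
a, b, d, e = 0.01, 0.7, 0.05, 1.4                    # illustrative parameters, not fitted here
point_share = (a * q ** (b - 1.0)) / lam_concentration(q, a, b, d, e)
print(np.round(point_share, 2))                      # apparent point-source fraction falls as flow rises
```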
Catchment-wide impacts on water quality: the use of 'snapshot' sampling during stable flow
NASA Astrophysics Data System (ADS)
Grayson, R. B.; Gippel, C. J.; Finlayson, B. L.; Hart, B. T.
1997-12-01
Water quality is usually monitored on a regular basis at only a small number of locations in a catchment, generally focused at the catchment outlet. This integrates the effect of all the point and non-point source processes occurring throughout the catchment. However, effective catchment management requires data which identify major sources and processes. As part of a wider study aimed at providing technical information for the development of integrated catchment management plans for a 5000 km² catchment in south eastern Australia, a 'snapshot' of water quality was undertaken during stable summer flow conditions. These low flow conditions exist for long periods, so water quality at these flow levels is an important constraint on the health of in-stream biological communities. Over a 4 day period, a study of the low flow water quality characteristics throughout the Latrobe River catchment was undertaken. Sixty-four sites were chosen to enable a longitudinal profile of water quality to be established. All tributary junctions and sites along major tributaries, as well as all major industrial inputs, were included. Samples were analysed for a range of parameters including total suspended solids concentration, pH, dissolved oxygen, electrical conductivity, turbidity, flow rate and water temperature. Filtered and unfiltered samples were taken from 27 sites along the main stream and tributary confluences for analysis of total N, NH4, oxidised N, total P and dissolved reactive P concentrations. The data are used to illustrate the utility of this sampling methodology for establishing specific sources and estimating non-point source loads of phosphorus, total suspended solids and total dissolved solids. The methodology enabled several new insights into system behaviour, including quantification of unknown point discharges, identification of key in-stream sources of suspended material and the extent to which biological activity (phytoplankton growth) affects water quality. The costs and benefits of the sampling exercise are reviewed.
AN IMAGE-PLANE ALGORITHM FOR JWST'S NON-REDUNDANT APERTURE MASK DATA
DOE Office of Scientific and Technical Information (OSTI.GOV)
Greenbaum, Alexandra Z.; Pueyo, Laurent; Sivaramakrishnan, Anand
2015-01-10
The high angular resolution technique of non-redundant masking (NRM) or aperture masking interferometry (AMI) has yielded images of faint protoplanetary companions of nearby stars from the ground. AMI on the James Webb Space Telescope (JWST)'s Near Infrared Imager and Slitless Spectrograph (NIRISS) has a lower thermal background than ground-based facilities and does not suffer from atmospheric instability. NIRISS AMI images are likely to have 90%-95% Strehl ratio between 2.77 and 4.8 μm. In this paper we quantify factors that limit the raw point source contrast of JWST NRM. We develop an analytic model of the NRM point spread function which includes different optical path delays (pistons) between mask holes and fit the model parameters with image plane data. It enables a straightforward way to exclude bad pixels, is suited to limited fields of view, and can incorporate effects such as intra-pixel sensitivity variations. We simulate various sources of noise to estimate their effect on the standard deviation of closure phase, σ_CP (a proxy for binary point source contrast). If σ_CP < 10⁻⁴ radians (a contrast ratio of 10 mag), young accreting gas giant planets (e.g., in the nearby Taurus star-forming region) could be imaged with JWST NIRISS. We show the feasibility of using NIRISS' NRM with the sub-Nyquist sampled F277W filter, which would enable some exoplanet chemistry characterization. In the presence of small piston errors, the dominant sources of closure phase error (depending on pixel sampling and filter bandwidth) are flat field errors and unmodeled variations in intra-pixel sensitivity. The in-flight stability of NIRISS will determine how well these errors can be calibrated by observing a point source. Our results help develop efficient observing strategies for space-based NRM.
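A short sketch of the closure-phase quantity that σ_CP summarises: for a triangle of mask holes the closure phase is the argument of the bispectrum V_ij·V_jk·V_ki, in which the per-hole piston errors cancel. The fringe fitting that produces the complex visibilities is assumed to have been done already.

```python
# Closure phases from complex visibilities over hole triangles.
import numpy as np

def closure_phases(vis, triangles):
    """vis: dict {(i, j): complex visibility}; triangles: list of (i, j, k) hole indices."""
    cps = []
    for i, j, k in triangles:
        bispectrum = vis[(i, j)] * vis[(j, k)] * np.conj(vis[(i, k)])   # conj(V_ik) = V_ki
        cps.append(np.angle(bispectrum))      # radians; the scatter of these approximates sigma_CP
    return np.array(cps)

# Toy check: pure per-hole pistons phi_i produce zero closure phase
phi = np.array([0.3, -0.1, 0.7])
vis = {(0, 1): np.exp(1j * (phi[0] - phi[1])),
       (1, 2): np.exp(1j * (phi[1] - phi[2])),
       (0, 2): np.exp(1j * (phi[0] - phi[2]))}
print(closure_phases(vis, [(0, 1, 2)]))       # ~[0.]
```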
Thermal behavior of the Medicina 32-meter radio telescope
NASA Astrophysics Data System (ADS)
Pisanu, Tonino; Buffa, Franco; Morsiani, Marco; Pernechele, Claudio; Poppi, Sergio
2010-07-01
We studied the thermal effects on the 32 m diameter radio telescope managed by the Institute of Radio Astronomy (IRA), Medicina, Bologna, Italy. The preliminary results show that thermal gradients deteriorate the pointing performance of the antenna. Data have been collected by using: a) two inclinometers mounted near the elevation bearing and on the central part of the alidade structure; b) a non-contact laser alignment optical system capable of measuring the secondary mirror position; c) twenty thermal sensors mounted on the alidade trusses. Two series of measurements were made: the first series was performed with the antenna in the stow position, and the second series was performed while tracking a circumpolar astronomical source. When the antenna was in the stow position we observed a strong correlation between the inclinometer measurements and the differential temperature. The latter was measured with the sensors located on the South and North sides of the alidade, thus indicating that the inclinometers track well the thermal deformation of the alidade. When the antenna pointed at the source we measured the pointing errors, the inclination of the alidade, the temperature of the alidade components and the subreflector position. The pointing errors measured on-source were 15-20 arcsec greater than those measured with the inclinometer.
A Theorem and its Application to Finite Tampers
DOE R&D Accomplishments Database
Feynman, R. P.
1946-08-15
A theorem is derived which is useful in the analysis of neutron problems in which all neutrons have the same velocity. It is applied to determine extrapolated end-points, the asymptotic amplitude from a point source, and the neutron density at the surface of a medium. Formulas for the effect of finite tampers are derived by its aid, and their accuracy discussed.
On butterfly effect in higher derivative gravities
NASA Astrophysics Data System (ADS)
Alishahiha, Mohsen; Davody, Ali; Naseh, Ali; Taghavi, Seyed Farid
2016-11-01
We study the butterfly effect in D-dimensional gravitational theories containing terms quadratic in the Ricci scalar and Ricci tensor. One observes that, due to the higher order derivatives in the corresponding equations of motion, there are two butterfly velocities. The velocities are determined by the dimension of the operators whose sources are provided by the metric. The three-dimensional TMG model is also studied, where we find two butterfly velocities at a generic point of the moduli space of parameters. At the critical point the two velocities coincide.
Design of TIR collimating lens for ordinary differential equation of extended light source
NASA Astrophysics Data System (ADS)
Zhan, Qianjing; Liu, Xiaoqin; Hou, Zaihong; Wu, Yi
2017-10-01
LED sources are widely used in daily life. The intensity distribution of a single LED is Lambertian, which does not satisfy many application requirements, so the light must be redistributed to change the LED's angular intensity distribution. The most common way to do this is with a freeform surface. Ordinary differential equations are generally used to calculate freeform surfaces only for a point source, which leads to large errors for an extended source. This paper proposes an LED collimating lens based on an ordinary differential equation combined with the LED's light distribution curve, adopting the method of calculating the center of gravity of the extended light to obtain the surface normal vector. According to Snell's law, the ordinary differential equations are constructed, and the Runge-Kutta method is used to solve them, yielding the coordinates of points on the surface curve. The edge point data of the lens are then imported into the optical simulation software TracePro. For a 1 mm × 1 mm Lambertian source, the collimated beam divergence approaches ±3°, and the energy utilization rate is higher than 85%. For comparison, a lens designed with the point-source differential equation method is also simulated; the proposed approach improves the collimation by about 1 degree.
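A minimal sketch of the numerical step described above: a classical fourth-order Runge-Kutta integrator marching a lens-profile ODE dr/dθ = f(θ, r) outward from the source. The slope function `profile_slope` is only a placeholder for the Snell's-law condition derived in the paper (not reproduced here), so both its form and the parameter values are assumptions for illustration; the resulting (θ, r) points would then be exported to ray-tracing software such as TracePro.

```python
import numpy as np

def profile_slope(theta, r):
    """Placeholder for the Snell's-law slope condition dr/dtheta = f(theta, r).
    The real right-hand side follows from the refraction condition that sends
    a ray emitted at angle theta into the collimated direction."""
    n = 1.49                                 # assumed lens refractive index (PMMA-like)
    return r * np.tan(theta / (2.0 * n))     # illustrative form only

def rk4_profile(theta0, theta1, r0, steps=500):
    """March the surface profile r(theta) with classical RK4."""
    thetas = np.linspace(theta0, theta1, steps + 1)
    h = (theta1 - theta0) / steps
    r = np.empty_like(thetas)
    r[0] = r0
    for i in range(steps):
        t, ri = thetas[i], r[i]
        k1 = profile_slope(t, ri)
        k2 = profile_slope(t + h / 2, ri + h * k1 / 2)
        k3 = profile_slope(t + h / 2, ri + h * k2 / 2)
        k4 = profile_slope(t + h, ri + h * k3)
        r[i + 1] = ri + h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
    return thetas, r

# Example: profile from 0 to 60 degrees, starting radius 5 mm (assumed values)
thetas, radii = rk4_profile(0.0, np.radians(60.0), 5.0)
```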
Measurement of Phased Array Point Spread Functions for Use with Beamforming
NASA Technical Reports Server (NTRS)
Bahr, Chris; Zawodny, Nikolas S.; Bertolucci, Brandon; Woolwine, Kyle; Liu, Fei; Li, Juan; Sheplak, Mark; Cattafesta, Louis
2011-01-01
Microphone arrays can be used to localize and estimate the strengths of acoustic sources present in a region of interest. However, the array measurement of a region, or beam map, is not an accurate representation of the acoustic field in that region. The true acoustic field is convolved with the array's sampling response, or point spread function (PSF). Many techniques exist to remove the PSF's effect on the beam map via deconvolution. Currently these methods use a theoretical estimate of the array point spread function and perhaps account for installation offsets via determination of the microphone locations. This methodology fails to account for any reflections or scattering in the measurement setup and still requires both microphone magnitude and phase calibration, as well as a separate shear layer correction in an open-jet facility. The research presented seeks to investigate direct measurement of the array's PSF using a non-intrusive acoustic point source generated by a pulsed laser system. Experimental PSFs of the array are computed for different conditions to evaluate features such as shift-invariance, shear layers and model presence. Results show that experimental measurements trend with theory with regard to source offset. The source shows expected behavior due to shear layer refraction when observed in a flow, and application of a measured PSF to NACA 0012 aeroacoustic trailing-edge noise data shows a promising alternative to a classic shear layer correction method.
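To illustrate the deconvolution step the abstract refers to, here is a minimal sketch of Richardson-Lucy-style iterative deconvolution of a beam map with a measured PSF. The shift-invariance assumption, non-negative data, and the fixed iteration count are simplifications for illustration; DAMAS-type solvers used in practice handle shift-variant PSFs and are not shown here.

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(beam_map, psf, iterations=50, eps=1e-12):
    """Iteratively deconvolve a measured beam map with a (shift-invariant) PSF."""
    beam_map = np.asarray(beam_map, dtype=float)
    psf = np.asarray(psf, dtype=float)
    psf = psf / psf.sum()                       # normalize PSF to unit volume
    psf_mirror = psf[::-1, ::-1]                # flipped kernel for the correction step
    estimate = np.full(beam_map.shape, beam_map.mean())
    for _ in range(iterations):
        blurred = fftconvolve(estimate, psf, mode="same")
        ratio = beam_map / np.maximum(blurred, eps)
        estimate *= fftconvolve(ratio, psf_mirror, mode="same")
    return estimate
```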
NASA Astrophysics Data System (ADS)
Karimi, Khadijeh; Taheri Shahraiyni, Hamid; Habibi Nokhandan, Majid; Hafezi Moghaddas, Naser; Sanaeifar, Melika
2011-11-01
Dust storms occur in the Middle East with very high frequency. Given their effects, it is vital to study dust storms in the Middle East. The first step in such studies is the enhancement of dust storms in satellite imagery and the determination of their point sources. In this paper, a new false color composite (FCC) map for dust storm enhancement and point source determination in the Middle East has been developed. Twenty-eight Terra-MODIS images from 2008 and 2009 were utilized in this study. We tried to replace the Red, Green and Blue bands in RGB maps with bands or maps that enhance the dust storms. Hence, well-known indices for dust storm detection (NDDI, D and BTD) were generated using different bands of the MODIS images. These indices, together with some MODIS bands, were combined in different ways to generate FCC maps. Among the different combinations, the four best FCC maps were selected and compared using visual interpretation. The results showed that the best FCC map for dust storm enhancement in the Middle East is a particular combination of the three indices (Red: D, Green: BTD and Blue: NDDI). This new FCC method was therefore used for the enhancement of dust storms and the determination of point sources in the Middle East.
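As a sketch of the proposed composite, the three index maps can be stretched to [0, 1] and stacked as RGB channels (Red: D, Green: BTD, Blue: NDDI). The percentile stretch below is an assumption; the abstract does not specify what contrast stretch, if any, was applied.

```python
import numpy as np

def stretch(band, lo=2, hi=98):
    """Percentile-stretch a single index map to the range [0, 1]."""
    vmin, vmax = np.nanpercentile(band, [lo, hi])
    return np.clip((band - vmin) / (vmax - vmin), 0.0, 1.0)

def dust_fcc(d_index, btd, nddi):
    """Build the false colour composite: Red = D, Green = BTD, Blue = NDDI."""
    return np.dstack([stretch(d_index), stretch(btd), stretch(nddi)])

# fcc = dust_fcc(D, BTD, NDDI)  # each argument a 2-D array derived from MODIS bands
```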
Baeza, A; Corbacho, J A; Guillén, J; Salas, A; Mora, J C
2011-05-01
The present work studied the radioactivity impact of a coal-fired power plant (CFPP), a NORM industry, on the water of the Regallo river, which the plant uses for cooling. Downstream, this river passes through an important irrigated farming area, and it is a tributary of the Ebro, one of Spain's largest rivers. Although no alteration of the (210)Po or (232)Th content was detected, the (234,238)U and (226)Ra contents of the water were significantly greater immediately below the CFPP's discharge point. The (226)Ra concentration decreased progressively downstream from the discharge point, but the uranium content increased significantly again at two sampling points 8 km downstream from the CFPP's effluent. This suggested the presence of another, unexpected uranium source term different from the CFPP. The input from this second uranium source term was even greater than that from the CFPP. Different hypotheses were tested (a reservoir used for irrigation, remobilization from sediments, and the effect of fertilizers used in the area), with it finally being demonstrated that the source was the fertilizers used in the adjacent farming areas. Copyright © 2011 Elsevier Ltd. All rights reserved.
Hybrid Skyshine Calculations for Complex Neutron and Gamma-Ray Sources
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shultis, J. Kenneth
2000-10-15
A two-step hybrid method is described for computationally efficient estimation of neutron and gamma-ray skyshine doses far from a shielded source. First, the energy and angular dependence of radiation escaping into the atmosphere from a source containment is determined by a detailed transport model such as MCNP. Then, an effective point source with this energy and angular dependence is used in the integral line-beam method to transport the radiation through the atmosphere up to 2500 m from the source. An example spent-fuel storage cask is analyzed with this hybrid method and compared to detailed MCNP skyshine calculations.
An evaluation of catchment-scale phosphorus mitigation using load apportionment modelling.
Greene, S; Taylor, D; McElarney, Y R; Foy, R H; Jordan, P
2011-05-01
Functional relationships between phosphorus (P) discharge and concentration mechanisms were explored using a load apportionment model (LAM) developed for use in a freshwater catchment in Ireland with fourteen years of data (1995-2008). The aim of model conceptualisation was to infer changes in point and diffuse sources from catchment P loading during P mitigation, based upon a dataset comprising geospatial and water quality data from a 256 km² lake catchment in an intensively farmed drumlin region of the midlands of Ireland. The model was calibrated using river total P (TP), molybdate reactive P (MRP) and runoff data from seven subcatchments. Temporal and spatial heterogeneity of P sources existed within and between subcatchments; these were attributed to differences in agricultural intensity, soil type and anthropogenically-sourced effluent P loading. Catchment rivers were sensitive to flow regime, which can result in eutrophication of rivers during summer and lake enrichment from frequent flood events. For one sewage-impacted river, the LAM estimated that point-source P contributed up to 90% of the annual MRP load delivered during a hydrological year, and in this river point P sources dominated flows up to 92% of days. In the other rivers, despite diffuse P forming a majority of the annual P exports, point sources of P dominated flows for up to 64% of a hydrological year. The calibrated model demonstrated that lower P export rates followed specific P mitigation measures. The LAM estimated up to 80% decreases in point MRP load after enhanced P removal at wastewater treatment plants in urban subcatchments and the implementation of septic tank and agricultural bye-laws in rural subcatchments. The LAM approach provides a way to assess the long-term effectiveness of further measures to reduce P loadings in EU (International) River Basin Districts and subcatchments. Copyright © 2011 Elsevier B.V. All rights reserved.
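The abstract does not give the model's exact formulation, but a widely used LAM form (e.g., Bowes et al.) expresses river P concentration as a flow-diluted point-source term plus a flow-driven diffuse term, C(Q) = A·Q^(B-1) + C·Q^(D-1) with B ≤ 1 and D > 1; assuming that form, a fit could look like the sketch below, with the flow and concentration arrays as placeholders.

```python
import numpy as np
from scipy.optimize import curve_fit

def lam_concentration(Q, A, B, C, D):
    """Load apportionment model: point-source term A*Q**(B-1) (diluted as flow
    rises) plus diffuse term C*Q**(D-1) (mobilized as flow rises)."""
    return A * Q**(B - 1.0) + C * Q**(D - 1.0)

# Q: daily mean flow (m3/s); P: matching MRP concentration (mg/L) -- hypothetical arrays
# popt, _ = curve_fit(lam_concentration, Q, P, p0=[1.0, 1.0, 0.01, 2.0],
#                     bounds=([0, 0, 0, 1], [np.inf, 1, np.inf, 5]))
# point_share = lam_concentration(Q, popt[0], popt[1], 0.0, popt[3]) / lam_concentration(Q, *popt)
```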
Hansman, Jan; Mrdja, Dusan; Slivka, Jaroslav; Krmar, Miodrag; Bikit, Istvan
2015-05-01
The activity of environmental samples is usually measured by high resolution HPGe gamma spectrometers. In this work a set-up with a 9 in. × 9 in. NaI well detector of 3 in. thickness and a 3 in. × 3 in. plug detector in a 15-cm-thick lead shield is considered as an alternative (Hansman, 2014). In spite of its much poorer resolution, it requires shorter measurement times and may possibly give better detection limits. In order to determine the U-238, Th-232, and K-40 content in the samples by this NaI(Tl) detector, the corresponding photopeak efficiencies must be known. These efficiencies can be found for a certain source matrix and geometry by Geant4 simulation. We found discrepancies of 5-50% between the simulated and experimental efficiencies, mainly attributable to light-collection effects within the detector volume, which were not taken into account in the simulations. The influence of random coincidence summing on detection efficiency for radionuclide activities in the range 130-4000 Bq was negligible. This paper also describes how the detection efficiency depends on the position of the radioactive point source. To avoid large dead time, relatively weak Mn-54, Co-60 and Na-22 point sources of a few kBq were used. Results for single gamma lines and also for coincidence summing gamma lines are presented. Copyright © 2015 Elsevier Ltd. All rights reserved.
Development and Validation of a UAV Based System for Air Pollution Measurements.
Villa, Tommaso Francesco; Salimi, Farhad; Morton, Kye; Morawska, Lidia; Gonzalez, Felipe
2016-12-21
Air quality data collection near pollution sources is difficult, particularly when sites are complex, have physical barriers, or are themselves moving. Small Unmanned Aerial Vehicles (UAVs) offer new approaches to air pollution and atmospheric studies. However, there are a number of critical design decisions which need to be made to enable representative data collection, in particular the location of the air sampler or air sensor intake. The aim of this research was to establish the best mounting point for four gas sensors and a Particle Number Concentration (PNC) monitor onboard a hexacopter, so as to develop a UAV system capable of measuring point source emissions. The research included two different tests: (1) evaluate the air flow behavior of a hexacopter, its downwash and upwash effect, by measuring air speed along three axes to determine the location where the sensors should be mounted; (2) evaluate the use of gas sensors for CO₂, CO, NO₂ and NO, and the PNC monitor (DISCmini) to assess the efficiency and performance of the UAV based system by measuring emissions from a diesel engine. The air speed behavior map produced by test 1 shows the best mounting point for the sensors to be alongside the UAV. This position is less affected by the propeller downwash effect. Test 2 results demonstrated that the UAV propellers cause a dispersion effect, shown by the decrease in gas and PN concentrations measured in real time. A Linear Regression model was used to estimate how the sensor position, relative to the UAV center, affects pollutant concentration measurements when the propellers are turned on. This research establishes guidelines on how to develop a UAV system to measure point source emissions. Such research should be undertaken before any UAV system is developed for real-world data collection.
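A minimal sketch of the regression step described above, relating a measured concentration to the sensor mounting distance from the UAV centre with propellers on. The distances, concentration values, and choice of CO₂ as the example pollutant are placeholders, not the authors' data.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical measurements: distance of the inlet from the UAV centre (m)
# and the corresponding CO2 concentration (ppm) recorded with propellers on.
distance_m = np.array([[0.1], [0.2], [0.3], [0.4], [0.5], [0.6]])
co2_ppm = np.array([402.0, 410.0, 418.0, 424.0, 431.0, 437.0])

model = LinearRegression().fit(distance_m, co2_ppm)
print(f"slope = {model.coef_[0]:.1f} ppm/m, intercept = {model.intercept_:.1f} ppm")
print("R^2 =", model.score(distance_m, co2_ppm))
```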
Convex relaxations of spectral sparsity for robust super-resolution and line spectrum estimation
NASA Astrophysics Data System (ADS)
Chi, Yuejie
2017-08-01
We consider recovering the amplitudes and locations of spikes in a point source signal from its low-pass spectrum that may suffer from missing data and arbitrary outliers. We first review and provide a unified view of several recently proposed convex relaxations that characterize and capitalize on the spectral sparsity of the point source signal without discretization under the framework of atomic norms. Next we propose a new algorithm for the case when the spikes are known a priori to be positive, motivated by applications such as neural spike sorting and fluorescence microscopy imaging. Numerical experiments are provided to demonstrate the effectiveness of the proposed approach.
Environmental monitoring of Galway Bay: fusing data from remote and in-situ sources
NASA Astrophysics Data System (ADS)
O'Connor, Edel; Hayes, Jer; Smeaton, Alan F.; O'Connor, Noel E.; Diamond, Dermot
2009-09-01
Changes in sea surface temperature can be used as an indicator of water quality. In-situ sensors are being used for continuous autonomous monitoring. However, these sensors have limited spatial resolution as they are in effect single-point sensors. Satellite remote sensing can be used to provide better spatial coverage at good temporal scales. However, in-situ sensors have a richer temporal scale for a particular point of interest. Work carried out in Galway Bay has combined data from multiple satellite sources and in-situ sensors and investigated the benefits and drawbacks of using multiple sensing modalities for monitoring a marine location.
Wu, Hao; Zhang, Yan; Yu, Qi; Ma, Weichun
2018-04-01
In this study, the authors endeavored to develop an effective framework for improving local urban air quality on meso-micro scales in cities in China that are experiencing rapid urbanization. Within this framework, the integrated Weather Research and Forecasting (WRF)/CALPUFF modeling system was applied to simulate the concentration distributions of typical pollutants (particulate matter with an aerodynamic diameter <10 μm [PM10], sulfur dioxide [SO2], and nitrogen oxides [NOx]) in the urban area of Benxi. Statistical analyses were performed to verify the credibility of this simulation, including the meteorological fields and concentration fields. The sources were then categorized using two different classification methods (the district-based and type-based methods), and the contributions to the pollutant concentrations from each source category were computed to provide a basis for appropriate control measures. The statistical indexes showed that CALMET had sufficient ability to predict the meteorological conditions, such as the wind fields and temperatures, which provided meteorological data for the subsequent CALPUFF run. The simulated concentrations from CALPUFF showed considerable agreement with the observed values but were generally underestimated. The spatial-temporal concentration pattern revealed that the maximum concentrations tended to appear in the urban centers and during the winter. In terms of their contributions to pollutant concentrations, the districts of Xihu, Pingshan, and Mingshan all affected the urban air quality to different degrees. According to the type-based classification, which categorized the pollution sources as belonging to the Bengang Group, large point sources, small point sources, and area sources, the source apportionment showed that the Bengang Group, the large point sources, and the area sources had considerable impacts on urban air quality. Finally, combined with the industrial characteristics, detailed control measures were proposed with which local policy makers could improve the urban air quality in Benxi. In summary, the results of this study showed that this framework has credibility for effectively improving urban air quality, based on the source apportionment of atmospheric pollutants. The authors endeavored to build up an effective framework based on the integrated WRF/CALPUFF to improve the air quality in many cities on meso-micro scales in China. Within this framework, the integrated modeling tool is used to study the characteristics of meteorological fields, concentration fields, and source apportionments of pollutants in the target area. The impacts of classified sources on air quality, together with the industrial characteristics, can provide more effective control measures for improving air quality. Through the case study, the technical framework developed in this study, particularly the source apportionment, could provide important data and technical support for policy makers to assess air pollution on the scale of a city in China or even the world.
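The statistical verification mentioned above typically relies on standard model-performance indexes; the sketch below computes three common ones (mean bias, RMSE, and Willmott's index of agreement) for paired simulated and observed concentrations. The specific indexes used by the authors are not listed in the abstract, so this selection, and the example values, are assumptions.

```python
import numpy as np

def performance_indexes(sim, obs):
    """Common model-evaluation statistics for paired simulated/observed values."""
    sim, obs = np.asarray(sim, float), np.asarray(obs, float)
    mean_bias = np.mean(sim - obs)
    rmse = np.sqrt(np.mean((sim - obs) ** 2))
    # Willmott index of agreement (1 = perfect)
    denom = np.sum((np.abs(sim - obs.mean()) + np.abs(obs - obs.mean())) ** 2)
    ioa = 1.0 - np.sum((sim - obs) ** 2) / denom
    return {"mean_bias": mean_bias, "rmse": rmse, "ioa": ioa}

# Example with hypothetical hourly SO2 concentrations (ug/m3)
print(performance_indexes([12.0, 18.5, 25.1], [15.0, 20.0, 22.0]))
```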
Ghost imaging with bucket detection and point detection
NASA Astrophysics Data System (ADS)
Zhang, De-Jian; Yin, Rao; Wang, Tong-Biao; Liao, Qing-Hua; Li, Hong-Guo; Liao, Qinghong; Liu, Jiang-Tao
2018-04-01
We experimentally investigate ghost imaging with bucket detection and point detection in which three types of illuminating sources are applied: (a) a pseudo-thermal light source; (b) an amplitude-modulated true thermal light source; (c) an amplitude-modulated laser source. Experimental results show that the quality of ghost images reconstructed with true thermal light or a laser beam is insensitive to the use of a bucket or point detector; however, the quality of ghost images reconstructed with pseudo-thermal light in the bucket detector case is better than in the point detector case. Our theoretical analysis shows that the reason for this is the first-order transverse coherence of the illuminating source.
Distinguishing dark matter from unresolved point sources in the Inner Galaxy with photon statistics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, Samuel K.; Lisanti, Mariangela; Safdi, Benjamin R., E-mail: samuelkl@princeton.edu, E-mail: mlisanti@princeton.edu, E-mail: bsafdi@princeton.edu
2015-05-01
Data from the Fermi Large Area Telescope suggests that there is an extended excess of GeV gamma-ray photons in the Inner Galaxy. Identifying potential astrophysical sources that contribute to this excess is an important step in verifying whether the signal originates from annihilating dark matter. In this paper, we focus on the potential contribution of unresolved point sources, such as millisecond pulsars (MSPs). We propose that the statistics of the photons—in particular, the flux probability density function (PDF) of the photon counts below the point-source detection threshold—can potentially distinguish between the dark-matter and point-source interpretations. We calculate the flux PDF via the method of generating functions for these two models of the excess. Working in the framework of Bayesian model comparison, we then demonstrate that the flux PDF can potentially provide evidence for an unresolved MSP-like point-source population.
STATISTICS OF GAMMA-RAY POINT SOURCES BELOW THE FERMI DETECTION LIMIT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Malyshev, Dmitry; Hogg, David W., E-mail: dm137@nyu.edu
2011-09-10
An analytic relation between the statistics of photons in pixels and the number counts of multi-photon point sources is used to constrain the distribution of gamma-ray point sources below the Fermi detection limit at energies above 1 GeV and at latitudes below and above 30 deg. The derived source-count distribution is consistent with the distribution found by the Fermi Collaboration based on the first Fermi point-source catalog. In particular, we find that the contribution of resolved and unresolved active galactic nuclei (AGNs) to the total gamma-ray flux is below 20%-25%. In the best-fit model, the AGN-like point-source fraction is 17% ± 2%. Using the fact that the Galactic emission varies across the sky while the extragalactic diffuse emission is isotropic, we put a lower limit of 51% on Galactic diffuse emission and an upper limit of 32% on the contribution from extragalactic weak sources, such as star-forming galaxies. Possible systematic uncertainties are discussed.
Magner, J A; Brooks, K N
2008-03-01
Section 303(d) of the Clean Water Act requires States and Tribes to list waters not meeting water quality standards. A total maximum daily load must be prepared for waters identified as impaired with respect to water quality standards. Historically, the management of pollution in Minnesota has been focused on point-source regulation. Regulatory effort in Minnesota has improved water quality over the last three decades. Non-point source pollution has become the largest driver of conventional 303(d) listings in the 21st century. Conventional pollutants, i.e., organic, sediment and nutrient imbalances can be identified with poor land use management practices. However, the cause and effect relationship can be elusive because of natural watershed-system influences that vary with scale. Elucidation is complex because the current water quality standards in Minnesota were designed to work best with water quality permits to control point sources of pollution. This paper presents a sentinel watershed-systems approach (SWSA) to the monitoring and assessment of Minnesota waterbodies. SWSA integrates physical, chemical, and biological data over space and time using advanced technologies at selected small watersheds across Minnesota to potentially improve understanding of natural and anthropogenic watershed processes and the management of point and non-point sources of pollution. Long-term, state-of-the-art monitoring and assessment is needed to advance and improve water quality standards. Advanced water quality or ecologically-based standards that integrate physical, chemical, and biological numeric criteria offer the potential to better understand, manage, protect, and restore Minnesota's waterbodies.
Dong, Yang; Liu, Yi; Chen, Jining
2014-01-01
Urban expansion is a major driving force changing regional hydrology and nonpoint source pollution. The Haihe River Basin, the political, economic, and cultural center of northeastern China, has undergone rapid urbanization in recent decades. To investigate the consequences of future urban sprawl on nonpoint source water pollutant emissions in the river basin, the urban sprawl in 2030 was estimated, and the annual runoff and nonpoint source pollution in the Haihe River basin were simulated. The Integrated Model of Non-Point Sources Pollution Processes (IMPULSE) was used to simulate the effects of urban sprawl on nonpoint source pollution emissions. The outcomes indicated that the urban expansion through 2030 increased the nonpoint source total nitrogen (TN), total phosphorus (TP), and chemical oxygen demand (COD) emissions by 8.08, 0.14, and 149.57 kg/km², respectively. Compared to 2008, the total nonpoint emissions rose by 15.33, 0.57, and 12.39%, respectively. Twelve percent of the 25 cities in the basin would see increases of more than 50% in nonpoint source TN and COD emissions in 2030. In particular, the nonpoint source TN emissions in Xinxiang, Jiaozuo, and Puyang would rise by 73.31, 67.25, and 58.61%, and the nonpoint source COD emissions in these cities would rise by 74.02, 51.99, and 53.27%, respectively. The point source pollution emissions in 2008 and 2030 were also estimated to explore the effects of urban sprawl on total water pollution loads. Urban sprawl through 2030 would bring significant structural changes in the total TN, TP, and COD emissions for each city in the area. The results of this study could provide insights into the effects of urbanization in the study area, and the methods could help to recognize the role that future urban sprawl plays in the total water pollution loads in the water quality management process.
NutrientNet: An Internet-Based Approach to Teaching Market-Based Policy for Environmental Management
ERIC Educational Resources Information Center
Nguyen, To N.; Woodward, Richard T.
2009-01-01
NutrientNet is an Internet-based environment in which a class can simulate a market-based approach for improving water quality. In NutrientNet, each student receives a role as either a point source or a nonpoint source polluter, and then the participants are allowed to trade water quality credits to cost-effectively reduce pollution in a…
NASA Astrophysics Data System (ADS)
Marturano, Aldo
2008-11-01
Historical sources have recorded earthquake shocks, their effects and difficulties that local inhabitants experienced before the AD 79 Pompeii eruption. Archaeological studies pointed out the effects of such seismicity, and have also evidenced that several water crises were occurring at Pompeii in that period. Indeed numerous sources show that, at the time of eruption, and probably some time before, the civic aqueduct, having ceased to be supplied by the regional one, was out of order and that a new one was being built. Since Roman aqueducts were usually built with a recommended minimum mean slope of 20 cm/km and Pompeii's aqueduct sloped from the nearby Apennines toward the town, this slope could have been easily cancelled by uplift that occurred in the area even if this was only moderate. For the crustal deformations a volcanic origin is proposed and a point source model is used to explain the observations. Simple analysis of the available data suggests that the ground deformations were caused by a < 2 km³ volumetric change at a depth of ˜ 8 km that happened over the course of several decades.
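A point-source (Mogi-type) deformation model of the kind referred to above predicts surface uplift from a volume change ΔV at depth d; the sketch below evaluates the standard elastic half-space formula. The Poisson ratio and the specific ΔV value are chosen only for illustration (the abstract quotes ΔV < 2 km³ at a depth of about 8 km), and the abstract does not state which point-source formulation the author used.

```python
import numpy as np

def mogi_uplift(r, depth, dV, nu=0.25):
    """Vertical surface displacement (m) of a Mogi point source in an elastic
    half-space: uz(r) = (1 - nu) * dV * depth / (pi * (depth**2 + r**2)**1.5).
    r and depth in metres; dV (volume change) in cubic metres."""
    return (1.0 - nu) * dV * depth / (np.pi * (depth**2 + r**2) ** 1.5)

# Illustration: 0.5 km^3 volume change at 8 km depth, uplift sampled out to 20 km
r = np.linspace(0.0, 20e3, 5)
print(mogi_uplift(r, depth=8e3, dV=0.5e9))
```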
MODELING PHOTOCHEMISTRY AND AEROSOL FORMATION IN POINT SOURCE PLUMES WITH THE CMAQ PLUME-IN-GRID
Emissions of nitrogen oxides and sulfur oxides from the tall stacks of major point sources are important precursors of a variety of photochemical oxidants and secondary aerosol species. Plumes released from point sources exhibit rather limited dimensions and their growth is gradu...
Can earthquake source inversion benefit from rotational ground motion observations?
NASA Astrophysics Data System (ADS)
Igel, H.; Donner, S.; Reinwald, M.; Bernauer, M.; Wassermann, J. M.; Fichtner, A.
2015-12-01
With the prospect of instruments able to observe rotational ground motions over a wide frequency and amplitude range in the near future, we engage with the question of how this type of ground motion observation can be used to solve seismic inverse problems. Here, we focus on the question whether point or finite source inversions can benefit from additional observations of rotational motions. In an attempt to be fair, we compare observations from a surface seismic network with N 3-component translational sensors (classic seismometers) with those obtained with N/2 6-component sensors (with additional colocated 3-component rotational motions). Thus we keep the overall number of traces constant. Synthetic seismograms are calculated for known point- or finite-source properties. The corresponding inverse problem is posed in a probabilistic way using the Shannon information content as a measure of how well the observations constrain the seismic source properties. The results show that with the 6-C subnetworks the source properties are not only equally well recovered (even that would be beneficial because of the substantially reduced logistics of installing N/2 sensors), but some source properties are, with statistical significance, almost always better resolved. We assume that this can be attributed to the fact that the (in particular vertical) gradient information is contained in the additional rotational motion components. We compare these effects for strike-slip and normal-faulting type sources. Thus the answer to the question raised is a definite "yes". The challenge now is to demonstrate these effects on real data.
An Improved Statistical Point-source Foreground Model for the Epoch of Reionization
NASA Astrophysics Data System (ADS)
Murray, S. G.; Trott, C. M.; Jordan, C. H.
2017-08-01
We present a sophisticated statistical point-source foreground model for low-frequency radio Epoch of Reionization (EoR) experiments using the 21 cm neutral hydrogen emission line. Motivated by our understanding of the low-frequency radio sky, we enhance the realism of two model components compared with existing models: the source count distributions as a function of flux density and spatial position (source clustering), extending current formalisms for the foreground covariance of 2D power-spectral modes in 21 cm EoR experiments. The former we generalize to an arbitrarily broken power law, and the latter to an arbitrary isotropically correlated field. This paper presents expressions for the modified covariance under these extensions, and shows that for a more realistic source spatial distribution, extra covariance arises in the EoR window that was previously unaccounted for. Failure to include this contribution can yield bias in the final power-spectrum and under-estimate uncertainties, potentially leading to a false detection of signal. The extent of this effect is uncertain, owing to ignorance of physical model parameters, but we show that it is dependent on the relative abundance of faint sources, to the effect that our extension will become more important for future deep surveys. Finally, we show that under some parameter choices, ignoring source clustering can lead to false detections on large scales, due to both the induced bias and an artificial reduction in the estimated measurement uncertainty.
X-ray Point Source Populations in Spiral and Elliptical Galaxies
NASA Astrophysics Data System (ADS)
Colbert, E.; Heckman, T.; Weaver, K.; Ptak, A.; Strickland, D.
2001-12-01
In the years of the Einstein and ASCA satellites, it was known that the total hard X-ray luminosity from non-AGN galaxies was fairly well correlated with the total blue luminosity. However, the origin of this hard component was not well understood. Some possibilities that were considered included X-ray binaries, extended upscattered far-infrared light via the inverse-Compton process, extended hot 10^7 K gas (especially in elliptical galaxies), or even an active nucleus. Now, for the first time, we know from Chandra images that a significant amount of the total hard X-ray emission comes from individual X-ray point sources. We present here spatial and spectral analyses of Chandra data for X-ray point sources in a sample of ~40 galaxies, including both spiral galaxies (starbursts and non-starbursts) and elliptical galaxies. We shall discuss the relationship between the X-ray point source population and the properties of the host galaxies. We show that the slopes of the point-source X-ray luminosity functions are different for different host galaxy types and discuss possible reasons why. We also present detailed X-ray spectral analyses of several of the most luminous X-ray point sources (i.e., IXOs, a.k.a. ULXs), and discuss various scenarios for the origin of the X-ray point sources.
Feaster, Toby D.; Conrads, Paul; Guimaraes, Wladmir B.; Sanders, Curtis L.; Bales, Jerad D.
2003-01-01
Time-series plots of dissolved-oxygen concentrations were determined for various simulated hydrologic and point-source loading conditions along a free-flowing section of the Catawba River from Lake Wylie Dam to the headwaters of Fishing Creek Reservoir in South Carolina. The U.S. Geological Survey one-dimensional dynamic-flow model, BRANCH, was used to simulate hydrodynamic data for the Branched Lagrangian Transport Model. Water-quality data were used to calibrate the Branched Lagrangian Transport Model and included concentrations of nutrients, chlorophyll a, and biochemical oxygen demand in water samples collected during two synoptic sampling surveys at 10 sites along the main stem of the Catawba River and at 3 tributaries; and continuous water temperature and dissolved-oxygen concentrations measured at 5 locations along the main stem of the Catawba River. A sensitivity analysis of the simulated dissolved-oxygen concentrations to model coefficients and data inputs indicated that the simulated dissolved-oxygen concentrations were most sensitive to water-temperature boundary data due to the effect of temperature on reaction kinetics and the solubility of dissolved oxygen. Of the model coefficients, the simulated dissolved-oxygen concentration was most sensitive to the biological oxidation rate of nitrite to nitrate. To demonstrate the utility of the Branched Lagrangian Transport Model for the Catawba River, the model was used to simulate several water-quality scenarios to evaluate the effect on the 24-hour mean dissolved-oxygen concentrations at selected sites for August 24, 1996, as simulated during the model calibration period of August 23-27, 1996. The first scenario included three loading conditions of the major effluent discharges along the main stem of the Catawba River: (1) current load (as sampled in August 1996); (2) no load (all point-source loads were removed from the main stem of the Catawba River; loads from the main tributaries were not removed); and (3) fully loaded (in accordance with South Carolina Department of Health and Environmental Control National Discharge Elimination System permits). Results indicate that the 24-hour mean and minimum dissolved-oxygen concentrations for August 24, 1996, changed from the no-load condition within a range of -0.33 to 0.02 milligram per liter and -0.48 to 0.00 milligram per liter, respectively. Fully permitted loading conditions changed the 24-hour mean and minimum dissolved-oxygen concentrations from -0.88 to 0.04 milligram per liter and -1.04 to 0.00 milligram per liter, respectively. A second scenario included the addition of a point-source discharge of 25 million gallons per day to the August 1996 calibration conditions. The discharge was added at S.C. Highway 5 or at a location near Culp Island (about 4 miles downstream from S.C. Highway 5) and had no significant effect on the daily mean and minimum dissolved-oxygen concentration. A third scenario evaluated the phosphorus loading into Fishing Creek Reservoir; four loading conditions of phosphorus into the Catawba River were simulated. The four conditions included fully permitted and actual loading conditions, removal of all point sources from the Catawba River, and removal of all point and nonpoint sources from Sugar Creek. Removing the point-source inputs on the Catawba River and the point and nonpoint sources in Sugar Creek reduced the organic phosphorus and orthophosphate loadings to Fishing Creek Reservoir by 78 and 85 percent, respectively.
NASA Astrophysics Data System (ADS)
Keisman, J.; Sekellick, A.; Blomquist, J.; Devereux, O. H.; Hively, W. D.; Johnston, M.; Moyer, D.; Sweeney, J.
2014-12-01
Chesapeake Bay is a eutrophic ecosystem with periodic hypoxia and anoxia, algal blooms, diminished submerged aquatic vegetation, and degraded stocks of marine life. Knowledge of the effectiveness of actions taken across the watershed to reduce nitrogen (N) and phosphorus (P) loads to the bay (i.e. "best management practices" or BMPs) is essential to its restoration. While nutrient inputs from point sources (e.g. wastewater treatment plants and other industrial and municipal operations) are tracked, inputs from nonpoint sources, including atmospheric deposition, farms, lawns, septic systems, and stormwater, are difficult to measure. Estimating reductions in nonpoint source inputs attributable to BMPs requires compilation and comparison of data on water quality, climate, land use, point source discharges, and BMP implementation. To explore the relation of changes in nonpoint source inputs and BMP implementation to changes in water quality, a subset of small watersheds (those containing at least 10 years of water quality monitoring data) within the Chesapeake Watershed were selected for study. For these watersheds, data were compiled on geomorphology, demographics, land use, point source discharges, atmospheric deposition, and agricultural practices such as livestock populations, crop acres, and manure and fertilizer application. In addition, data on BMP implementation for 1985-2012 were provided by the Environmental Protection Agency Chesapeake Bay Program Office (CBPO) and the U.S. Department of Agriculture. A spatially referenced nonlinear regression model (SPARROW) provided estimates attributing N and P loads associated with receiving waters to different nutrient sources. A recently developed multiple regression technique ("Weighted Regressions on Time, Discharge and Season" or WRTDS) provided an enhanced understanding of long-term trends in N and P loads and concentrations. A suite of deterministic models developed by the CBPO was used to estimate expected nutrient load reductions attributable to BMPs. Further quantification of the relation of land-based nutrient sources and BMPs to water quality in the bay and its tributaries must account for inconsistency in BMP data over time and uncertainty regarding BMP locations and effectiveness.
NASA Astrophysics Data System (ADS)
Sarangapani, R.; Jose, M. T.; Srinivasan, T. K.; Venkatraman, B.
2017-07-01
Methods for the determination of the efficiency of an aged high purity germanium (HPGe) detector for gaseous sources are presented in this paper. X-ray radiography of the detector has been performed to obtain the detector dimensions for computational purposes. The dead layer thickness of the HPGe detector has been ascertained from experiments and Monte Carlo computations. Experimental work with standard point and liquid sources in several cylindrical geometries has been undertaken to obtain the energy-dependent efficiency. Monte Carlo simulations have been performed for computing efficiencies for point, liquid and gaseous sources. Self-absorption correction factors have been obtained using mathematical equations for volume sources and MCNP simulations. Self-absorption correction and point source methods have been used to estimate the efficiency for gaseous sources. The efficiencies determined from the present work have been used to estimate the activity of a cover gas sample of a fast reactor.
NASA Astrophysics Data System (ADS)
Jeffery, David J.; Mazzali, Paolo A.
2007-08-01
Giant steps is a technique to accelerate Monte Carlo radiative transfer in optically-thick cells (which are isotropic and homogeneous in matter properties and into which astrophysical atmospheres are divided) by greatly reducing the number of Monte Carlo steps needed to propagate photon packets through such cells. In an optically-thick cell, packets starting from any point (which can be regarded a point source) well away from the cell wall act essentially as packets diffusing from the point source in an infinite, isotropic, homogeneous atmosphere. One can replace many ordinary Monte Carlo steps that a packet diffusing from the point source takes by a randomly directed giant step whose length is slightly less than the distance to the nearest cell wall point from the point source. The giant step is assigned a time duration equal to the time for the RMS radius for a burst of packets diffusing from the point source to have reached the giant step length. We call assigning giant-step time durations this way RMS-radius (RMSR) synchronization. Propagating packets by series of giant steps in giant-steps random walks in the interiors of optically-thick cells constitutes the technique of giant steps. Giant steps effectively replaces the exact diffusion treatment of ordinary Monte Carlo radiative transfer in optically-thick cells by an approximate diffusion treatment. In this paper, we describe the basic idea of giant steps and report demonstration giant-steps flux calculations for the grey atmosphere. Speed-up factors of order 100 are obtained relative to ordinary Monte Carlo radiative transfer. In practical applications, speed-up factors of order ten and perhaps more are possible. The speed-up factor is likely to be significantly application-dependent and there is a trade-off between speed-up and accuracy. This paper and past work suggest that giant-steps error can probably be kept to a few percent by using sufficiently large boundary-layer optical depths while still maintaining large speed-up factors. Thus, giant steps can be characterized as a moderate accuracy radiative transfer technique. For many applications, the loss of some accuracy may be a tolerable price to pay for the speed-ups gained by using giant steps.
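A minimal sketch of the giant-steps idea in an isotropic, homogeneous cube cell: while a packet is far from the nearest wall, it takes one randomly directed giant step of length just under the wall distance and is charged the RMS-radius diffusion time t ≈ L²/(2cℓ) (i.e., L²/6D with D = cℓ/3); near the wall it falls back to ordinary exponential steps. The mean free path, cell size, boundary-layer depth, and the 0.9 step-length factor are illustrative assumptions, not the values used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_direction():
    """Isotropic unit vector."""
    mu = rng.uniform(-1.0, 1.0)
    phi = rng.uniform(0.0, 2.0 * np.pi)
    s = np.sqrt(1.0 - mu * mu)
    return np.array([s * np.cos(phi), s * np.sin(phi), mu])

def propagate(pos, cell_half, mfp, c=1.0, boundary_mfps=5.0):
    """Walk one packet through a cube cell of half-width cell_half, combining
    giant steps deep inside the cell with ordinary steps near the wall."""
    t = 0.0
    while np.max(np.abs(pos)) < cell_half:
        wall_dist = cell_half - np.max(np.abs(pos))   # distance to the nearest face
        if wall_dist > boundary_mfps * mfp:
            L = 0.9 * wall_dist                       # giant step, just short of the wall
            t += L * L / (2.0 * c * mfp)              # RMS-radius (RMSR) time assignment
        else:
            L = -mfp * np.log(rng.uniform())          # ordinary exponential free path
            t += L / c
        pos = pos + L * random_direction()
    return pos, t

pos, t = propagate(np.zeros(3), cell_half=100.0, mfp=1.0)
```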
Discrimination between diffuse and point sources of arsenic at Zimapán, Hidalgo state, Mexico.
Sracek, Ondra; Armienta, María Aurora; Rodríguez, Ramiro; Villaseñor, Guadalupe
2010-01-01
There are two principal sources of arsenic in Zimapán. Point sources are linked to mining and smelting activities and especially to mine tailings. Diffuse sources are not well defined and are linked to regional flow systems in carbonate rocks. Both sources are caused by the oxidation of arsenic-rich sulfidic mineralization. Point sources are characterized by a Ca-SO4-HCO3 ground water type and relatively enriched values of δD, δ18O, and δ34S(SO4). Diffuse sources are characterized by a Ca-Na-HCO3 type of ground water and more depleted values of δD, δ18O, and δ34S(SO4). Values of δD and δ18O indicate a similar altitude of recharge for both arsenic sources and a stronger impact of evaporation for point sources in mine tailings. There are also different values of δ34S(SO4) for the two sources, presumably due to different types of mineralization or isotopic zonality in deposits. In Principal Component Analysis (PCA), the principal component 1 (PC1), which describes the impact of sulfide oxidation and neutralization by the dissolution of carbonates, has higher values in samples from point sources. In spite of similar concentrations of As in ground water affected by diffuse sources and point sources (mean values 0.21 mg L⁻¹ and 0.31 mg L⁻¹, respectively, in the years from 2003 to 2008), the diffuse sources have more impact on the health of the population in Zimapán. This is caused by the extraction of ground water from wells tapping the regional flow system. In contrast, wells located in the proximity of mine tailings are not generally used for water supply.
NASA Astrophysics Data System (ADS)
Chhetri, R.; Ekers, R. D.; Morgan, J.; Macquart, J.-P.; Franzen, T. M. O.
2018-06-01
We use Murchison Widefield Array observations of interplanetary scintillation (IPS) to determine the source counts of point (<0.3 arcsecond extent) sources and of all sources with some subarcsecond structure, at 162 MHz. We have developed the methodology to derive these counts directly from the IPS observables, while taking into account changes in sensitivity across the survey area. The counts of sources with compact structure follow the behaviour of the dominant source population above ˜3 Jy but below this they show Euclidean behaviour. We compare our counts to those predicted by simulations and find good agreement for our counts of sources with compact structure, but significant disagreement for point source counts. Using low radio frequency SEDs from the GLEAM survey, we classify point sources as Compact Steep-Spectrum (CSS), flat spectrum, or peaked. If we consider the CSS sources to be the more evolved counterparts of the peaked sources, the two categories combined comprise approximately 80% of the point source population. We calculate densities of potential calibrators brighter than 0.4 Jy at low frequencies and find 0.2 sources per square degree for point sources, rising to 0.7 sources per square degree if sources with more complex arcsecond structure are included. We extrapolate to estimate 4.6 sources per square degree at 0.04 Jy. We find that a peaked spectrum is an excellent predictor of compactness at low frequencies, increasing the number of good calibrators by a factor of three compared to the usual flat spectrum criterion.
BIOLOGICAL EFFECTS OF UTAH VALLEY PARTICLES: A REVIEW
The Utah Valley provided a unique opportunity to evaluate the health effects of particulate matter (PM) in humans. The area has had intermittently high particle levels with the principal point source being a steel mill. Due to a labor dispute, the mill was shut down. The closu...
NASA Technical Reports Server (NTRS)
Maskew, B.
1976-01-01
A discrete singularity method has been developed for calculating the potential flow around two-dimensional airfoils. The objective was to calculate velocities at any arbitrary point in the flow field, including points that approach the airfoil surface. That objective was achieved and is demonstrated here on a Joukowski airfoil. The method used combined vortices and sources "submerged" a small distance below the airfoil surface and incorporated a near-field subvortex technique developed earlier. When a velocity calculation point approached the airfoil surface, the number of discrete singularities effectively increased (but only locally) to keep the point just outside the error region of the submerged singularity discretization. The method could be extended to three dimensions, and should improve nonlinear methods, which calculate interference effects between multiple wings, and which include the effects of force-free trailing vortex sheets. The capability demonstrated here would extend the scope of such calculations to allow the close approach of wings and vortex sheets (or vortices).
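As a sketch of the discrete-singularity evaluation described above, the induced velocity at an arbitrary field point is the sum of contributions from 2-D point vortices and point sources placed (here, hypothetically) a small distance below the surface. The singularity strengths and positions are placeholders; the subvortex refinement near the surface is not reproduced.

```python
import numpy as np

def induced_velocity(field_pt, vortex_pts, gammas, source_pts, strengths):
    """Velocity (u, v) at field_pt from 2-D point vortices (circulations gammas)
    and 2-D point sources (strengths = volume flux per unit span)."""
    u = v = 0.0
    for (x0, y0), g in zip(vortex_pts, gammas):
        dx, dy = field_pt[0] - x0, field_pt[1] - y0
        r2 = dx * dx + dy * dy
        u += -g * dy / (2.0 * np.pi * r2)   # tangential (counter-clockwise) velocity
        v += g * dx / (2.0 * np.pi * r2)
    for (x0, y0), m in zip(source_pts, strengths):
        dx, dy = field_pt[0] - x0, field_pt[1] - y0
        r2 = dx * dx + dy * dy
        u += m * dx / (2.0 * np.pi * r2)    # radial velocity away from the source
        v += m * dy / (2.0 * np.pi * r2)
    return u, v

# Hypothetical singularities submerged slightly below a surface point near (0, 0):
print(induced_velocity((0.0, 0.05), [(0.0, -0.02)], [1.0], [(0.1, -0.02)], [0.2]))
```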
NASA Astrophysics Data System (ADS)
Konca, A. O.; Ji, C.; Helmberger, D. V.
2004-12-01
We observed the effect of fault finiteness in the Pnl waveforms at regional distances (4° to 12°) for the Mw 6.5 San Simeon Earthquake of 22 December 2003. We aimed to include more of the high frequencies (2 seconds and longer periods) than the studies that use regional data for focal solutions (5 to 8 seconds and longer periods). We calculated 1-D synthetic seismograms for the Pnl portion for both a point source and a finite fault solution. The comparison of the point source and finite fault waveforms with data shows that the first several seconds of the point source synthetics have considerably higher amplitude than the data, while the finite fault does not have a similar problem. This can be explained by reversely polarized depth phases overlapping with the P waves from the later portion of the fault, causing smaller amplitudes in the beginning portion of the seismogram. This is clearly a finite fault phenomenon; therefore, it cannot be explained by point source calculations. Moreover, the point source synthetics, which are calculated with a focal solution from a long period regional inversion, overestimate the amplitude by three to four times relative to the data, while the finite fault waveforms have amplitudes similar to the data. Hence, a moment estimation based only on the point source solution of the regional data could have been wrong by half a magnitude unit. We have also calculated the shifts of the synthetics relative to the data to fit the seismograms. Our results reveal that the paths from Central California to the south are faster than the paths to the east and north. The P wave arrival at the TUC station in Arizona is 4 seconds earlier than predicted by the Southern California model, while most stations to the east are delayed by around 1 second. The observed higher uppermost mantle velocities to the south are consistent with some recent tomographic models. Synthetics generated with these models significantly improve the fits and the timing at most stations. This means that regional waveform data can be used to help locate and establish source complexities for future events.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aartsen, M. G.; Abraham, K.; Ackermann, M.
Observation of a point source of astrophysical neutrinos would be a “smoking gun” signature of a cosmic-ray accelerator. While IceCube has recently discovered a diffuse flux of astrophysical neutrinos, no localized point source has been observed. Previous IceCube searches for point sources in the southern sky were restricted by either an energy threshold above a few hundred TeV or poor neutrino angular resolution. Here we present a search for southern sky point sources with greatly improved sensitivities to neutrinos with energies below 100 TeV. By selecting charged-current ν_μ interactions inside the detector, we reduce the atmospheric background while retaining efficiency for astrophysical neutrino-induced events reconstructed with sub-degree angular resolution. The new event sample covers three years of detector data and leads to a factor of 10 improvement in sensitivity to point sources emitting below 100 TeV in the southern sky. No statistically significant evidence of point sources was found, and upper limits are set on neutrino emission from individual sources. A posteriori analysis of the highest-energy (∼100 TeV) starting event in the sample found that this event alone represents a 2.8 σ deviation from the hypothesis that the data consists only of atmospheric background.
NASA Technical Reports Server (NTRS)
Miles, J. H.; Wasserbauer, C. A.; Krejsa, E. A.
1983-01-01
Pressure-temperature cross spectra are necessary for predicting noise propagation in regions of velocity gradients downstream of combustors if the effect of convective entropy disturbances is included. Pressure-temperature cross spectra and coherences were measured at spatially separated points in a combustion rig fueled with hydrogen. Temperature-temperature and pressure-pressure cross spectra and coherences between the spatially separated points, as well as temperature and pressure autospectra, were measured. These test results were compared with previous results obtained in the same combustion rig using Jet A fuel in order to investigate their dependence on the type of combustion process. The phase relationships are not consistent with a simple source model that assumes that pressure and temperature are in phase at a point in the combustor and at all other points downstream are related to one another by only a time delay due to convection of temperature disturbances. Thus these test results indicate that a more complex model of the source is required.
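For reference, cross spectra and coherences of the kind quoted above can be estimated from two simultaneously sampled signals with Welch-type routines. The synthetic pressure/temperature traces, the sample rate, and the roughly 5 ms convection delay below are placeholders chosen only to make the example self-contained.

```python
import numpy as np
from scipy.signal import csd, coherence

fs = 2048.0                              # assumed sample rate, Hz
t = np.arange(0, 4.0, 1.0 / fs)
rng = np.random.default_rng(1)
# Hypothetical traces: a common 120 Hz disturbance plus independent noise,
# with the downstream "temperature" delayed by about 5 ms to mimic convection.
common = np.sin(2 * np.pi * 120 * t)
pressure = common + 0.5 * rng.standard_normal(t.size)
temperature = np.roll(common, int(0.005 * fs)) + 0.5 * rng.standard_normal(t.size)

f, Pxy = csd(pressure, temperature, fs=fs, nperseg=1024)        # cross spectrum
phase = np.angle(Pxy)                                           # cross-spectral phase
f2, Cxy = coherence(pressure, temperature, fs=fs, nperseg=1024)
idx = np.argmin(np.abs(f - 120))
print("phase at 120 Hz:", phase[idx], "coherence at 120 Hz:", Cxy[idx])
```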
Medium effect on the characteristics of the coupled seismic and electromagnetic signals.
Huang, Qinghua; Ren, Hengxin; Zhang, Dan; Chen, Y John
2015-01-01
Recently developed numerical simulation technique can simulate the coupled seismic and electromagnetic signals for a double couple point source or a finite fault planar source. Besides the source effect, the simulation results showed that both medium structure and medium property could affect the coupled seismic and electromagnetic signals. The waveform of coupled signals for a layered structure is more complicated than that for a simple uniform structure. Different from the seismic signals, the electromagnetic signals are sensitive to the medium properties such as fluid salinity and fluid viscosity. Therefore, the co-seismic electromagnetic signals may be more informative than seismic signals.
An empirical formula to calculate the full energy peak efficiency of scintillation detectors.
Badawi, Mohamed S; Abd-Elzaher, Mohamed; Thabet, Abouzeid A; El-khatib, Ahmed M
2013-04-01
This work provides an empirical formula to calculate the FEPE for different detectors using the effective solid angle ratio derived from experimental measurements. The full energy peak efficiency (FEPE) curves of the 2″ × 2″ NaI(Tl) detector at seven different axial distances from the detector were obtained over a wide energy range from 59.53 to 1408 keV using standard point sources. The distinction was based on the effects of the source energy and the source-to-detector distance. A good agreement was noticed between the measured and calculated efficiency values for source-to-detector distances of 20, 25, 30, 35, 40, 45 and 50 cm. Copyright © 2012 Elsevier Ltd. All rights reserved.
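A common way to obtain such an empirical efficiency curve is to fit log(FEPE) as a low-order polynomial in log(E) to the point-source calibration data; the sketch below does this with hypothetical efficiency values at typical standard-source energies. The functional form and the numbers are assumptions, not the formula derived in the paper.

```python
import numpy as np

# Hypothetical measured full-energy-peak efficiencies at one source-detector distance
energy_keV = np.array([59.53, 661.66, 1173.23, 1332.50, 1408.01])
fepe = np.array([1.2e-3, 4.1e-4, 2.6e-4, 2.3e-4, 2.2e-4])

# Fit log(eff) as a 2nd-order polynomial in log(E)
coeffs = np.polyfit(np.log(energy_keV), np.log(fepe), deg=2)

def efficiency(E_keV):
    """Interpolated FEPE at energy E_keV from the fitted empirical curve."""
    return np.exp(np.polyval(coeffs, np.log(E_keV)))

print(efficiency(1000.0))
```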
Compound simulator IR radiation characteristics test and calibration
NASA Astrophysics Data System (ADS)
Li, Yanhong; Zhang, Li; Li, Fan; Tian, Yi; Yang, Yang; Li, Zhuo; Shi, Rui
2015-10-01
Hardware-in-the-loop simulation can reproduce, in the laboratory, the physical radiation of the target/interference and the interception process of a product in flight. Simulating the environment is particularly difficult when the radiation energy is high and the interference model is complicated. Here, a development in IR scene generation produced by a fiber array imaging transducer with circumferential lamp spot sources is introduced. The IR simulation capability includes effective simulation of aircraft signatures and point-source IR countermeasures. Two point sources acting as interference can move in random directions in two dimensions. To simulate the process of interference release, the radiation and motion characteristics are tested. Through zero calibration of the simulator's optical axis, the radiation can be well projected onto the product detector. The test and calibration results show that the new type of compound simulator can be used in hardware-in-the-loop simulation trials.
Classifying bent radio galaxies from a mixture of point-like/extended images with Machine Learning.
NASA Astrophysics Data System (ADS)
Bastien, David; Oozeer, Nadeem; Somanah, Radhakrishna
2017-05-01
The hypothesis that bent radio sources are expected to be found in rich, massive galaxy clusters, together with the availability of huge amounts of data from radio surveys, motivated us to use Machine Learning (ML) to identify bent radio sources and use them as tracers for galaxy clusters. Shapelet analysis allowed us to decompose radio images into 256 features that could be fed into the ML algorithm. Additionally, ideas from the field of neuro-psychology led us to train the machine to identify bent galaxies at different orientations. From our analysis, we found that the Random Forest algorithm was the most effective, with an accuracy of 92% for the classification of point and extended sources and an accuracy of 80% for bent versus unbent classification.
Code of Federal Regulations, 2010 CFR
2010-07-01
... (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS SUGAR PROCESSING POINT SOURCE CATEGORY Beet Sugar Processing... a point source where the sugar beet processing capacity of the point source does not exceed 1090 kkg... results, in whole or in part, from barometric condensing operations and any other beet sugar processing...
Code of Federal Regulations, 2011 CFR
2011-07-01
... (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS SUGAR PROCESSING POINT SOURCE CATEGORY Beet Sugar Processing... a point source where the sugar beet processing capacity of the point source does not exceed 1090 kkg... results, in whole or in part, from barometric condensing operations and any other beet sugar processing...
Code of Federal Regulations, 2013 CFR
2013-07-01
... (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS SUGAR PROCESSING POINT SOURCE CATEGORY Beet Sugar Processing... a point source where the sugar beet processing capacity of the point source does not exceed 1090 kkg... results, in whole or in part, from barometric condensing operations and any other beet sugar processing...
Code of Federal Regulations, 2014 CFR
2014-07-01
... (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS SUGAR PROCESSING POINT SOURCE CATEGORY Beet Sugar Processing... a point source where the sugar beet processing capacity of the point source does not exceed 1090 kkg... results, in whole or in part, from barometric condensing operations and any other beet sugar processing...
Code of Federal Regulations, 2012 CFR
2012-07-01
... (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS SUGAR PROCESSING POINT SOURCE CATEGORY Beet Sugar Processing... a point source where the sugar beet processing capacity of the point source does not exceed 1090 kkg... results, in whole or in part, from barometric condensing operations and any other beet sugar processing...
User's guide for RAM. Volume II. Data preparation and listings
DOE Office of Scientific and Technical Information (OSTI.GOV)
Turner, D.B.; Novak, J.H.
1978-11-01
The information presented in this user's guide is directed to air pollution scientists having an interest in applying air quality simulation models. RAM is a method of estimating short-term dispersion using the Gaussian steady-state model. These algorithms can be used for estimating air quality concentrations of relatively nonreactive pollutants for averaging times from an hour to a day from point and area sources. The algorithms are applicable for locations with level or gently rolling terrain where a single wind vector for each hour is a good approximation to the flow over the source area considered. Calculations are performed for each hour. Hourly meteorological data required are wind direction, wind speed, temperature, stability class, and mixing height. Emission information required of point sources consists of source coordinates, emission rate, physical height, stack diameter, stack gas exit velocity, and stack gas temperature. Emission information required of area sources consists of southwest corner coordinates, source side length, total area emission rate and effective area source-height. Computation time is kept to a minimum by the manner in which concentrations from area sources are estimated using a narrow plume hypothesis and using the area source squares as given rather than breaking down all sources into an area of uniform elements. Options are available to the user to allow use of three different types of receptor locations: (1) those whose coordinates are input by the user, (2) those whose coordinates are determined by the model and are downwind of significant point and area sources where maxima are likely to occur, and (3) those whose coordinates are determined by the model to give good area coverage of a specific portion of the region. Computation time is also decreased by keeping the number of receptors to a minimum. Volume II presents RAM example outputs, typical run streams, variable glossaries, and Fortran source codes.
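As a reference for the model described above, the sketch below evaluates the Gaussian steady-state point-source plume formula that RAM-type models are built on. The dispersion coefficients are passed in directly as placeholders; RAM's own stability-class curves and plume-rise handling are not reproduced.

```python
import math

def gaussian_plume(q_g_s, u_m_s, y_m, z_m, h_eff_m, sigma_y, sigma_z):
    """Steady-state Gaussian point-source concentration (g/m^3) with ground
    reflection: q is the emission rate, u the wind speed, h_eff the effective
    stack height, sigma_y/sigma_z the crosswind/vertical dispersion (m)."""
    lateral = math.exp(-y_m ** 2 / (2.0 * sigma_y ** 2))
    vertical = (math.exp(-(z_m - h_eff_m) ** 2 / (2.0 * sigma_z ** 2)) +
                math.exp(-(z_m + h_eff_m) ** 2 / (2.0 * sigma_z ** 2)))
    return q_g_s / (2.0 * math.pi * u_m_s * sigma_y * sigma_z) * lateral * vertical

# Hypothetical 100 g/s stack, 50 m effective height, ground-level receptor
# on the plume centerline; sigma values are placeholders for ~2 km downwind.
print(gaussian_plume(100.0, 5.0, 0.0, 0.0, 50.0, sigma_y=160.0, sigma_z=60.0))
```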
Methane - quick fix or tough target? New methods to reduce emissions.
NASA Astrophysics Data System (ADS)
Nisbet, E. G.; Lowry, D.; Fisher, R. E.; Brownlow, R.
2016-12-01
Methane is a cost-effective target for greenhouse gas reduction efforts. The UK's MOYA project is designed to improve understanding of the global methane budget and to point to new methods to reduce future emissions. Since 2007, methane has been increasing rapidly: in 2014 and 2015 growth was at rates last seen in the 1980s. Unlike 20th-century growth, primarily driven by fossil fuel emissions in northern industrial nations, isotopic evidence implies present growth is driven by tropical biogenic sources such as wetlands and agriculture. Discovering why methane is rising is important. Schaefer et al. (Science, 2016) pointed out the potential clash between methane reduction efforts and the food needs of a rising, better-fed (physically larger) human population. Our own work suggests tropical wetlands are major drivers of growth, responding to weather changes since 2007, but there is no acceptable way to reduce wetland emissions. Just as sea-ice decline indicates Arctic warming, methane may be the most obvious tracker of climate change in the wet tropics. Technical advances in instrumentation can do much to help cut urban and industrial methane emissions. Mobile systems can be mounted on vehicles, while drone sampling can provide a 3D view to locate sources. Urban land planning often means large but different point sources are typically clustered (e.g. landfill or sewage plant near incinerator; gas wells next to cattle). High-precision grab-sample isotopic characterisation, using Keeling plots, can separate source signals to identify specific emitters, even where they are closely juxtaposed. Our mobile campaigns in the UK, Kuwait, Hong Kong and E. Australia show the importance of major single sources, such as abandoned old wells, pipe leaks, or unregulated landfills. If such point sources can be individually identified, even when clustered, they will allow effective reduction efforts: these can be profitable and/or improve industrial safety, for example in the case of gas leaks. Fossil fuels, landfills, waste, and biomass burning emit about 200 Tg/yr, or 35-40% of global methane emissions. Using inexpensive 3D mobile surveys coupled with high-precision isotopic measurement, it should be possible to cut emissions sharply, substantially reducing the methane burden even if tropical biogenic sources increase.
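To illustrate the Keeling-plot characterisation mentioned above, the sketch below regresses δ13C against the inverse methane mole fraction for a set of made-up grab samples; the intercept at infinite concentration approximates the isotopic signature of the added source. All numbers, including the implied background, are hypothetical.

```python
import numpy as np

# Hypothetical grab samples taken downwind of a single methane source:
ch4_ppm = np.array([2.0, 2.5, 3.2, 4.1, 5.5])            # mole fraction
d13c    = np.array([-47.6, -50.2, -52.6, -54.4, -56.1])   # per mil, with scatter

# Keeling plot: delta vs 1/concentration; extrapolating to 1/C -> 0 gives
# the isotopic signature of the added (source) methane.
slope, intercept = np.polyfit(1.0 / ch4_ppm, d13c, 1)
print(f"estimated source d13C ~ {intercept:.1f} per mil")
```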
Oxidative potential and inflammatory impacts of source apportioned ambient air pollution in Beijing.
Liu, Qingyang; Baumgartner, Jill; Zhang, Yuanxun; Liu, Yanju; Sun, Yongjun; Zhang, Meigen
2014-11-04
Air pollution exposure is associated with a range of adverse health impacts. Knowledge of the chemical components and sources of air pollution most responsible for these health effects could lead to an improved understanding of the mechanisms of such effects and more targeted risk reduction strategies. We measured daily ambient fine particulate matter (<2.5 μm in aerodynamic diameter; PM2.5) for 2 months in peri-urban and central Beijing, and assessed the contribution of its chemical components to the oxidative potential of ambient air pollution using the dithiothreitol (DTT) assay. The composition data were applied to a multivariate source apportionment model to determine the PM contributions of six sources or factors: a zinc factor, an aluminum factor, a lead point factor, a secondary source (e.g., SO4(2-), NO3(-)), an iron source, and a soil dust source. Finally, we assessed the relationship between reactive oxygen species (ROS) activity-related PM sources and inflammatory responses in human bronchial epithelial cells. In peri-urban Beijing, the soil dust source accounted for the largest fraction (47%) of measured ROS variability. In central Beijing, a secondary source explained the greatest fraction (29%) of measured ROS variability. The ROS activities of PM collected in central Beijing were exponentially associated with in vitro inflammatory responses in epithelial cells (R2=0.65-0.89). We also observed a high correlation between three ROS-related PM sources (a lead point factor, a zinc factor, and a secondary source) and expression of an inflammatory marker (r=0.45-0.80). Our results suggest large differences in the contribution of different PM sources to ROS variability at the central versus peri-urban study sites in Beijing and that secondary sources may play an important role in PM2.5-related oxidative potential and inflammatory health impacts.
Li, Li-Guan; Yin, Xiaole; Zhang, Tong
2018-05-24
Antimicrobial resistance (AMR) has become a worldwide public health concern. Current widespread AMR pollution poses a major challenge to accurately disentangling source-sink relationships, which are further confounded by point and non-point sources, as well as endogenous and exogenous cross-reactivity under complicated environmental conditions. Because traditional antibiotic resistance gene (ARG) signature-based source-tracking methods cannot identify source-sink relationships within a quantitative framework, they are hardly a practical solution. By combining broad-spectrum ARG profiling with the machine-learning classifier SourceTracker, here we present a novel way to address the question in the era of high-throughput sequencing. Its potential for extensive application was first validated with 656 global-scale samples covering diverse environmental types (e.g., human/animal gut, wastewater, soil, ocean) and broad geographical regions (e.g., China, USA, Europe, Peru). Its potential and limitations in source prediction, as well as the effect of parameter adjustment, were then rigorously evaluated with artificial configurations of representative source proportions. When applying SourceTracker in region-specific analysis, excellent performance was achieved by ARG profiles in two sample types with obviously different source compositions, i.e., influent and effluent of a wastewater treatment plant. Two environmental metagenomic datasets along an anthropogenic interference gradient further supported its potential in practical application. To complement general-profile-based source tracking in distinguishing continuous gradient pollution, a few generalist and specialist indicator ARGs across ecotypes were identified in this study. We demonstrated for the first time that the developed source-tracking platform, when coupled with proper experimental design and efficient metagenomic analysis tools, has significant implications for assessing AMR pollution. Following the predicted source contribution status, risk ranking of different sources in ARG dissemination becomes possible, paving the way for establishing priorities in mitigating ARG spread and designing effective control strategies.
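SourceTracker itself is a Bayesian classifier; purely as an illustration of the underlying idea of expressing a sink ARG profile as a mixture of candidate source profiles, the sketch below uses a non-negative least-squares mixing model with made-up profiles. It is not the authors' pipeline.

```python
import numpy as np
from scipy.optimize import nnls

# Columns = hypothetical normalized ARG profiles of candidate sources
# (human gut, WWTP influent, soil); rows = relative abundance of each ARG.
sources = np.array([
    [0.40, 0.10, 0.05],
    [0.30, 0.50, 0.10],
    [0.20, 0.30, 0.15],
    [0.10, 0.10, 0.70],
])
sink = np.array([0.28, 0.33, 0.24, 0.15])    # ARG profile of the sink sample

weights, _ = nnls(sources, sink)             # non-negative mixing weights
proportions = weights / weights.sum()        # crude source contributions
print(dict(zip(["gut", "wwtp", "soil"], proportions.round(2))))
```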
NASA Technical Reports Server (NTRS)
Weistrop, D.; Shaffer, D. B.; Mushotzky, R. F.; Reitsma, H. J.; Smith, B. A.
1981-01-01
Visual and far-red surface photometry was obtained for two X-ray-emitting BL Lacertae objects, 1218+304 (2A1219+305) and 1727+50 (Izw 187), as well as the highly variable object 1219+28 (ON 231, W Com). The intensity distribution for 1727+50 can be modeled using a central point source plus a de Vaucouleurs intensity law for an underlying galaxy. The broad-band spectral energy distribution so derived is consistent with what is expected for an elliptical galaxy. The spectral index of the point source is alpha = 0.97. Additional VLBI and X-ray data are also reported for 1727+50. There is nebulosity associated with the recently discovered object 1218+304. No nebulosity is found associated with 1219+28. A comparison of the results with observations at X-ray and radio frequencies suggests that all the emission from 1727+50 and 1218+304 can be interpreted as due solely to direct synchrotron emission. If this is the case, the data further imply the existence of relativistic motion effects and continuous particle injection.
NASA Astrophysics Data System (ADS)
Halloran, Siobhan; Ristenpart, William
2013-11-01
Virologists and other researchers who test pathogens for airborne disease transmissibility often place a test animal downstream from an inoculated animal and later determine whether the test animal became infected. Despite the crucial role of the airflow in pathogen transmission between the animals, to date the infectious disease community has paid little attention to the effect of airspeed or turbulent intensity on the probability of transmission. Here we present measurements of the turbulent dispersivity under conditions relevant to experimental tests of airborne disease transmissibility between laboratory animals. We used time lapse photography to visualize the downstream transport and turbulent dispersion of smoke particulates released from a point source downstream of an axial fan, thus mimicking the release and transport of expiratory aerosols exhaled by an inoculated animal. We show that for fan-generated turbulence the plume width is invariant with the mean airspeed and, close to the point source, increases linearly with downstream position. Importantly, the turbulent dispersivity is insensitive to the presence of meshes placed downstream from the point source, indicating that the fan length scale dictates the turbulent intensity and corresponding dispersivity.
A program to calculate pulse transmission responses through transversely isotropic media
NASA Astrophysics Data System (ADS)
Li, Wei; Schmitt, Douglas R.; Zou, Changchun; Chen, Xiwei
2018-05-01
We provide a program (AOTI2D) to model the responses of ultrasonic pulse transmission measurements through arbitrarily oriented transversely isotropic rocks. The program is built with the distributed point source method, which treats the transducers as a series of point sources. The response of each point source is calculated according to the ray-tracing theory of elastic plane waves. The program offers basic wave parameters, including phase and group velocities, polarization, anisotropic reflection coefficients and directivity patterns, and models the wave fields, static wave beam, and observed signals for pulse transmission measurements, taking into account the material's elastic stiffnesses and orientation, the sample dimensions, and the size and positions of the transmitters and receivers. The program can be used to illustrate ultrasonic beam behavior in anisotropic media, such as the skew and diffraction of ultrasonic beams, and to analyze their effect on pulse transmission measurements. The program is a useful tool for designing the experimental configuration and interpreting the results of ultrasonic pulse transmission measurements through either isotropic or transversely isotropic rock samples.
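The distributed point source idea, treating the transducer face as many point sources whose contributions are summed, can be illustrated with a scalar far-field sketch like the one below. The speed, frequency and aperture are arbitrary, and the elastic, transversely isotropic treatment of the actual program is not attempted.

```python
import numpy as np

# Scalar far-field directivity of a circular transducer modelled as a dense
# grid of in-phase point sources (a much-simplified stand-in for the
# distributed point source method used by the program).
c, f = 3000.0, 1.0e6           # hypothetical wave speed (m/s) and frequency (Hz)
k = 2 * np.pi * f / c
radius = 0.005                  # 5 mm transducer face

xs = np.linspace(-radius, radius, 61)
X, Y = np.meshgrid(xs, xs)
inside = X ** 2 + Y ** 2 <= radius ** 2
px = X[inside]                  # x-coordinates of the point sources; only the
                                # in-plane component sets the x-z far-field phase

angles = np.radians(np.linspace(0.0, 30.0, 61))
pattern = np.array([abs(np.exp(1j * k * px * np.sin(a)).sum()) for a in angles])
pattern /= pattern[0]           # normalize to the on-axis response

for deg in (0, 5, 10, 20):
    i = int(np.argmin(abs(np.degrees(angles) - deg)))
    print(f"{deg:2d} deg: relative amplitude {pattern[i]:.2f}")
```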
NASA Astrophysics Data System (ADS)
Mahanthesh, B.; Gireesha, B. J.; Shashikumar, N. S.; Hayat, T.; Alsaedi, A.
2018-06-01
The present work investigates the features of an exponential space-dependent heat source (ESHS) and cross-diffusion effects in Marangoni convective heat and mass transfer flow due to an infinite disk. The flow analysis incorporates magnetohydrodynamics (MHD). The effects of Joule heating, viscous dissipation and solar radiation are also included. The thermal and solute fields on the disk surface vary in a quadratic manner. The governing ordinary differential equations are obtained using Von Kármán transformations. The resulting problem is solved numerically via a Runge-Kutta-Fehlberg-based shooting scheme. The effects of the pertinent flow parameters are explored through graphical illustrations. Results show that the ESHS effect dominates the thermally dependent heat source effect on thermal boundary layer growth. The concentration and temperature distributions and their associated layer thicknesses are enhanced by the Marangoni effect.
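The Runge-Kutta-Fehlberg shooting approach referred to above can be illustrated on a toy boundary-value problem: guess the missing initial slope, integrate with an adaptive Runge-Kutta scheme, and adjust the guess until the far boundary condition is satisfied. The ODE below is deliberately trivial and has nothing to do with the paper's similarity equations.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

# Toy BVP: y'' = -y, y(0) = 0, y(1) = 1.  Shooting: guess the missing slope
# y'(0) = s, integrate, and adjust s until the far boundary condition is met.
def shoot(s):
    sol = solve_ivp(lambda t, y: [y[1], -y[0]], (0.0, 1.0), [0.0, s],
                    method="RK45", rtol=1e-8)
    return sol.y[0, -1] - 1.0          # residual at the far boundary

s_star = brentq(shoot, 0.1, 5.0)       # root of the residual in s
print(f"required slope y'(0) = {s_star:.4f} (exact 1/sin(1) = {1/np.sin(1):.4f})")
```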
NASA Astrophysics Data System (ADS)
Mishra, Neha; Sriram Kumar, D.; Jha, Pranav Kumar
2017-06-01
In this paper, we investigate the performance of dual-hop free space optical (FSO) communication systems under the effect of strong atmospheric turbulence together with misalignment effects (pointing error). We consider a relay-assisted link using the decode-and-forward (DF) relaying protocol between source and destination, with the assumption that channel state information is available at both the transmitting and receiving terminals. The atmospheric turbulence channels are modeled by the K-distribution with pointing error impairment. Exact closed-form expressions are derived for the outage probability and bit error rate and illustrated through numerical plots. BER results are further compared for different modulation schemes.
NASA Astrophysics Data System (ADS)
Vink, Rona; Behrendt, Horst
2002-11-01
Pollutant transport and management in the Rhine and Elbe basins is still of international concern, since certain target levels set by the international committees for protection of both rivers have not been reached. The analysis of the chain of emissions of point and diffuse sources to river loads will provide policy makers with a tool for effective management of river basins. The analysis of large river basins such as the Elbe and Rhine requires information on the spatial and temporal characteristics of both emissions and physical information of the entire river basin. In this paper, an analysis has been made of heavy metal emissions from various point and diffuse sources in the Rhine and Elbe drainage areas. Different point and diffuse pathways are considered in the model, such as inputs from industry, wastewater treatment plants, urban areas, erosion, groundwater, atmospheric deposition, tile drainage, and runoff. In most cases the measured heavy metal loads at monitoring stations are lower than the sum of the heavy metal emissions. This behaviour in large river systems can largely be explained by retention processes (e.g. sedimentation) and is dependent on the specific runoff of a catchment. Independent of the method used to estimate emissions, the source apportionment analysis of observed loads was used to determine the share of point and diffuse sources in the heavy metal load at a monitoring station by establishing a discharge dependency. The results from both the emission analysis and the source apportionment analysis of observed loads were compared and gave similar results. Between 51% (for Hg) and 74% (for Pb) of the total transport in the Elbe basin is supplied by inputs from diffuse sources. In the Rhine basin diffuse source inputs dominate the total transport and deliver more than 70% of the total transport. The diffuse hydrological pathways with the highest share are erosion and urban areas.
VizieR Online Data Catalog: Polarized point sources in LOTSS-HETDEX (Van Eck+, 2018)
NASA Astrophysics Data System (ADS)
van Eck, C. L.; Haverkorn, M.; Alves, M. I. R.; Beck, R.; Best, P.; Carretti, E.; Chyzy, K. T.; Farnes, J. S.; Ferriere, K.; Hardcastle, M. J.; Heald, G.; Horellou, C.; Iacobelli, M.; Jelic, V.; Mulcahy, D. D.; O'Sullivan, S. P.; Polderman, I. M.; Reich, W.; Riseley, C. J.; Rottgering, H.; Schnitzeler, D. H. F. M.; Shimwell, T. W.; Vacca, V.; Vink, J.; White, G. J.
2018-06-01
Visibility data were taken from LOTSS, imaged in polarization, and processed with RM synthesis. The resulting RM spectra were searched for polarization peaks. Detected peaks that were determined not to be foreground or instrumental effects were collected in this catalog. Source locations (for peak searches) were selected from TGSS-ADR1 (J/A+A/598/A78). Due to overlap between fields, some sources were detected multiple times, as recorded in the Ndet column. Polarized sources were cross-matched with the high-resolution LOTSS images (Shimwell+, in prep), and WISE and PanSTARRS images, which were used to determine the source classification and morphology. (1 data file).
Point and Condensed Hα Sources in the Interior of M33
NASA Astrophysics Data System (ADS)
Moody, J. Ward; Hintz, Eric G.; Roming, Peter; Joner, Michael D.; Bucklein, Brian
2017-01-01
A variety of interesting objects such as Wolf-Rayet stars, tight OB associations, planetary nebulae, X-ray binaries, etc. can be discovered as point or condensed sources in Hα surveys. How these objects are distributed through a galaxy sheds light on the galaxy's star formation rate and history, mass distribution, and dynamics. The nearby galaxy M33 is an excellent place to study the distribution of Hα-bright point sources in a flocculent spiral galaxy. We have reprocessed an archived WIYN continuum-subtracted Hα image of the inner 6.5' of M33 and, employing both eye and machine searches, have tabulated sources with a flux greater than 1 × 10^-15 erg cm^-2 s^-1. We have identified 152 unresolved point sources and 122 marginally resolved condensed sources, 38 of which have not been previously cataloged. We present a map of these sources and discuss their probable identifications.
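A minimal machine-search sketch in the spirit of the tabulation described above: threshold a toy continuum-subtracted image, label the connected pixels, and call small footprints "point" and larger ones "condensed". The image, threshold and size cut are all made up; this is not the authors' detection pipeline.

```python
import numpy as np
from scipy import ndimage

# Toy continuum-subtracted H-alpha image: Gaussian noise plus two injected
# sources, one unresolved (narrow) and one slightly extended.
rng = np.random.default_rng(0)
img = rng.normal(0.0, 1.0, (200, 200))
yy, xx = np.mgrid[0:200, 0:200]
img += 30 * np.exp(-((xx - 60) ** 2 + (yy - 60) ** 2) / (2 * 1.0 ** 2))    # point-like
img += 15 * np.exp(-((xx - 140) ** 2 + (yy - 150) ** 2) / (2 * 3.0 ** 2))  # condensed

mask = img > 5.0                               # flux threshold (arbitrary units)
labels, n = ndimage.label(mask)                # connected-component labelling
sizes = ndimage.sum(mask, labels, range(1, n + 1))
for i, npix in enumerate(sizes, start=1):
    kind = "point" if npix < 15 else "condensed"
    print(f"source {i}: {int(npix)} pixels above threshold -> {kind}")
```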
A clustering algorithm for sample data based on environmental pollution characteristics
NASA Astrophysics Data System (ADS)
Chen, Mei; Wang, Pengfei; Chen, Qiang; Wu, Jiadong; Chen, Xiaoyun
2015-04-01
Environmental pollution has become an issue of serious international concern in recent years. Among the receptor-oriented pollution models, CMB, PMF, UNMIX, and PCA are widely used as source apportionment models. To improve the accuracy of source apportionment and classify the sample data for these models, this study proposes an easy-to-use, high-dimensional EPC algorithm that not only organizes all of the sample data into different groups according to the similarities in pollution characteristics such as pollution sources and concentrations but also simultaneously detects outliers. The main clustering process consists of selecting the first unlabelled point as the cluster centre, then assigning each data point in the sample dataset to its most similar cluster centre according to both the user-defined threshold and the value of similarity function in each iteration, and finally modifying the clusters using a method similar to k-Means. The validity and accuracy of the algorithm are tested using both real and synthetic datasets, which makes the EPC algorithm practical and effective for appropriately classifying sample data for source apportionment models and helpful for better understanding and interpreting the sources of pollution.
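A compressed sketch of the clustering pass described above, under the assumption that similarity is measured as 1/(1 + Euclidean distance): the first unlabelled sample seeds a cluster, samples join their most similar centre only if the similarity exceeds the user-defined threshold (otherwise they seed a new cluster), and centres are then recomputed k-means style. The data and threshold are hypothetical, and the paper's outlier handling is omitted.

```python
import numpy as np

def epc_like(samples, threshold, passes=10):
    """Simplified threshold clustering: assign each sample to its most similar
    centre if similarity >= threshold, otherwise start a new cluster, then
    recompute centres as cluster means (k-means style)."""
    centres = [samples[0]]
    labels = np.zeros(len(samples), dtype=int)
    for _ in range(passes):
        for idx, s in enumerate(samples):
            sims = [1.0 / (1.0 + np.linalg.norm(s - c)) for c in centres]
            best = int(np.argmax(sims))
            if sims[best] >= threshold:
                labels[idx] = best
            else:
                centres.append(s)               # sample seeds a new cluster
                labels[idx] = len(centres) - 1
        centres = [samples[labels == k].mean(axis=0) if np.any(labels == k)
                   else centres[k] for k in range(len(centres))]
    return labels, np.array(centres)

# Hypothetical 2-D pollution-characteristic vectors (e.g. two species):
data = np.array([[1.0, 1.1], [0.9, 1.0], [5.0, 5.2], [5.1, 4.9], [9.0, 0.1]])
labels, centres = epc_like(data, threshold=0.4)
print(labels)                                    # e.g. [0 0 1 1 2]
```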
A guide to differences between stochastic point-source and stochastic finite-fault simulations
Atkinson, G.M.; Assatourians, K.; Boore, D.M.; Campbell, K.; Motazedian, D.
2009-01-01
Why do stochastic point-source and finite-fault simulation models not agree on the predicted ground motions for moderate earthquakes at large distances? This question was posed by Ken Campbell, who attempted to reproduce the Atkinson and Boore (2006) ground-motion prediction equations for eastern North America using the stochastic point-source program SMSIM (Boore, 2005) in place of the finite-source stochastic program EXSIM (Motazedian and Atkinson, 2005) that was used by Atkinson and Boore (2006) in their model. His comparisons suggested that a higher stress drop is needed in the context of SMSIM to produce an average match, at larger distances, with the model predictions of Atkinson and Boore (2006) based on EXSIM; this is so even for moderate magnitudes, which should be well-represented by a point-source model. Why? The answer to this question is rooted in significant differences between point-source and finite-source stochastic simulation methodologies, specifically as implemented in SMSIM (Boore, 2005) and EXSIM (Motazedian and Atkinson, 2005) to date. Point-source and finite-fault methodologies differ in general in several important ways: (1) the geometry of the source; (2) the definition and application of duration; and (3) the normalization of finite-source subsource summations. Furthermore, the specific implementation of the methods may differ in their details. The purpose of this article is to provide a brief overview of these differences, their origins, and implications. This sets the stage for a more detailed companion article, "Comparing Stochastic Point-Source and Finite-Source Ground-Motion Simulations: SMSIM and EXSIM," in which Boore (2009) provides modifications and improvements in the implementations of both programs that narrow the gap and result in closer agreement. These issues are important because both SMSIM and EXSIM have been widely used in the development of ground-motion prediction equations and in modeling the parameters that control observed ground motions.
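For readers unfamiliar with the stochastic point-source ingredients being compared, the sketch below assembles an omega-square acceleration spectrum with a Brune corner frequency, 1/R geometric spreading, anelastic attenuation and a kappa rolloff. The constants are textbook-style simplifications, not the SMSIM or EXSIM parameterizations.

```python
import numpy as np

def point_source_accel_spectrum(f, m0_dyne_cm, stress_drop_bars,
                                beta_km_s=3.5, r_km=50.0, q=600.0, kappa=0.04):
    """Shape of an omega-square stochastic point-source acceleration spectrum:
    Brune source, 1/R geometric spreading, anelastic attenuation and a kappa
    high-frequency filter (overall scaling constants omitted)."""
    fc = 4.9e6 * beta_km_s * (stress_drop_bars / m0_dyne_cm) ** (1.0 / 3.0)
    source = m0_dyne_cm * (2 * np.pi * f) ** 2 / (1.0 + (f / fc) ** 2)
    path = np.exp(-np.pi * f * r_km / (q * beta_km_s)) / r_km
    site = np.exp(-np.pi * kappa * f)
    return source * path * site

f = np.logspace(-1, 1.3, 100)
spec = point_source_accel_spectrum(f, m0_dyne_cm=1e24, stress_drop_bars=100.0)
print(f"spectrum peaks near {f[np.argmax(spec)]:.1f} Hz for this moment/stress drop")
```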
NASA Astrophysics Data System (ADS)
Nozu, A.
2013-12-01
A new simplified source model is proposed to explain strong ground motions from a mega-thrust earthquake. The proposed model is simpler, and involves fewer model parameters, than the conventional characterized source model, which itself is a simplified expression of the actual earthquake source. In the proposed model, the spatio-temporal distribution of slip within a subevent is not modeled. Instead, the source spectrum associated with the rupture of a subevent is modeled and is assumed to follow the omega-square model. By multiplying the source spectrum by the path effect and the site amplification factor, the Fourier amplitude at a target site can be obtained. Then, combining it with the Fourier phase characteristics of a smaller event, the time history of strong ground motions from the subevent can be calculated. Finally, by summing up the contributions from the subevents, strong ground motions from the entire rupture can be obtained. The source model consists of six parameters for each subevent, namely, longitude, latitude, depth, rupture time, seismic moment and corner frequency of the subevent. The finite size of the subevent can be taken into account in the model, because the corner frequency of the subevent is included in the model, which is inversely proportional to the length of the subevent. Thus, the proposed model is referred to as the 'pseudo point-source model'. To examine the applicability of the model, a pseudo point-source model was developed for the 2011 Tohoku earthquake. The model comprises nine subevents, located off Miyagi Prefecture through Ibaraki Prefecture. The velocity waveforms (0.2-1 Hz), the velocity envelopes (0.2-10 Hz) and the Fourier spectra (0.2-10 Hz) at 15 sites calculated with the pseudo point-source model agree well with the observed ones, indicating the applicability of the model. The results were then compared with the results of a super-asperity (SPGA) model of the same earthquake (Nozu, 2012, AGU), which can be considered an example of a characterized source model. Although the pseudo point-source model involves far fewer model parameters than the super-asperity model, the errors associated with the former model were comparable to those for the latter for velocity waveforms and envelopes. Furthermore, the errors associated with the former model were much smaller than those for the latter for Fourier spectra. These results indicate the usefulness of the pseudo point-source model. [Figure caption: Comparison of the observed (black) and synthetic (red) Fourier spectra; the spectra are the composition of two horizontal components, smoothed with a Parzen window with a bandwidth of 0.05 Hz.]
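A bare-bones sketch of the synthesis step described above: an omega-square subevent amplitude spectrum is combined with the Fourier phase of a smaller event recorded at the site and inverse-transformed. Here band-limited random noise stands in for the small-event record, and the path and site factors are reduced to a single rolloff; it illustrates the recipe, not the author's implementation.

```python
import numpy as np

n, dt = 4096, 0.01
freq = np.fft.rfftfreq(n, dt)

fc = 0.5                                     # subevent corner frequency (Hz), hypothetical
amp = (2 * np.pi * freq) ** 2 / (1.0 + (freq / fc) ** 2)   # omega-square amplitude shape
amp *= np.exp(-np.pi * 0.04 * freq)          # crude stand-in for path/site rolloff

rng = np.random.default_rng(1)
small_event = rng.normal(size=n)             # stand-in for an observed small-event record
phase = np.angle(np.fft.rfft(small_event))   # empirical phase characteristics

waveform = np.fft.irfft(amp * np.exp(1j * phase), n)   # subevent time history
print(waveform.shape, waveform.std())
```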
Dewji, Shaheen Azim; Bellamy, Michael B.; Hertel, Nolan E.; ...
2015-09-01
The U.S. Nuclear Regulatory Commission (USNRC) initiated a contract with Oak Ridge National Laboratory (ORNL) to calculate radiation dose rates to members of the public that may result from exposure to patients recently administered iodine-131 (131I) as part of medical therapy. The main purpose was to compare dose rate estimates based on a point source and target with values derived from more realistic simulations that considered the time-dependent distribution of 131I in the patient and attenuation of emitted photons by the patient’s tissues. The external dose rate estimates were derived using Monte Carlo methods and two representations of the Phantom with Movable Arms and Legs, previously developed by ORNL and the USNRC, to model the patient and a nearby member of the public. Dose rates to tissues and effective dose rates were calculated for distances ranging from 10 to 300 cm between the phantoms and compared to estimates based on the point-source method, as well as to results of previous studies that estimated exposure from 131I patients. The point-source method overestimates dose rates to members of the public in very close proximity to an 131I patient but is a broadly accurate method of dose rate estimation at separation distances of 300 cm or more at times closer to administration.
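The point-source method referred to above amounts to an inverse-square estimate, dose rate = Γ·A/d². The sketch below applies it with an approximate dose-rate constant for 131I used purely as a placeholder; an authoritative constant (and shielding/occupancy factors) should be used for any real assessment.

```python
def point_source_dose_rate(activity_mbq, gamma_const, distance_m):
    """Unshielded point-source estimate: dose rate falls off as 1/d^2.
    gamma_const is the dose-rate constant in mSv*m^2/(MBq*h)."""
    return gamma_const * activity_mbq / distance_m ** 2

# Illustrative numbers only (the gamma constant below is an approximate
# placeholder, not an authoritative value for 131I): 5550 MBq administered.
GAMMA_131I = 6.0e-5            # mSv*m^2/(MBq*h), approximate
for d in (0.3, 1.0, 3.0):
    print(f"{d:4.1f} m : {point_source_dose_rate(5550, GAMMA_131I, d):.3f} mSv/h")
```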
X-ray Point Source Populations in Spiral and Elliptical Galaxies
NASA Astrophysics Data System (ADS)
Colbert, E.; Heckman, T.; Weaver, K.; Strickland, D.
2002-01-01
The hard-X-ray luminosity of non-active galaxies has been known to be fairly well correlated with the total blue luminosity since the days of the Einstein satellite. However, the origin of this hard component was not well understood. Some possibilities that were considered included X-ray binaries, extended upscattered far-infrared light via the inverse-Compton process, extended hot 10^7 K gas (especially in elliptical galaxies), or even an active nucleus. Chandra images of normal, elliptical and starburst galaxies now show that a significant amount of the total hard X-ray emission comes from individual point sources. We present here spatial and spectral analyses of the point sources in a small sample of Chandra observations of starburst galaxies, and compare with Chandra point source analyses from comparison galaxies (elliptical, Seyfert and normal galaxies). We discuss possible relationships between the number and total hard luminosity of the X-ray point sources and various measures of the galaxy star formation rate, and discuss possible options for the numerous compact sources that are observed.
NASA Technical Reports Server (NTRS)
Hunt, Mitchell; Sayyah, Rana; Mitchell, Cody; Laws, Crystal; MacLeod, Todd C.; Ho, Fat D.
2013-01-01
Collected data for both common-source and common-gate amplifiers are presented in this paper. Characterizations of the two amplifier circuits using metal-ferroelectric-semiconductor field effect transistors (MFSFETs) are developed with wider input frequency ranges and varying device sizes compared to earlier characterizations. The effects of the ferroelectric layer's capacitance, and of variations in load, quiescent point, and input signal, on each circuit are discussed. Comparisons between MFSFET and MOSFET circuit operation and performance are discussed at length, as well as applications and advantages of MFSFETs.
NASA Technical Reports Server (NTRS)
Fares, Nabil; Li, Victor C.
1986-01-01
An image-method algorithm is presented for the derivation of elastostatic solutions for point sources in bonded half-spaces, assuming the corresponding infinite-space point-source solution is known. Specific cases were worked out and shown to coincide with well-known solutions in the literature.
Code of Federal Regulations, 2010 CFR
2010-07-01
... subcategory of direct discharge point sources that do not use end-of-pipe biological treatment. 414.100... AND STANDARDS ORGANIC CHEMICALS, PLASTICS, AND SYNTHETIC FIBERS Direct Discharge Point Sources That Do Not Use End-of-Pipe Biological Treatment § 414.100 Applicability; description of the subcategory of...
Better Assessment Science Integrating Point and Non-point Sources (BASINS)
Better Assessment Science Integrating Point and Nonpoint Sources (BASINS) is a multipurpose environmental analysis system designed to help regional, state, and local agencies perform watershed- and water quality-based studies.
Unidentified point sources in the IRAS minisurvey
NASA Technical Reports Server (NTRS)
Houck, J. R.; Soifer, B. T.; Neugebauer, G.; Beichman, C. A.; Aumann, H. H.; Clegg, P. E.; Gillett, F. C.; Habing, H. J.; Hauser, M. G.; Low, F. J.
1984-01-01
Nine bright, point-like 60 micron sources have been selected from the sample of 8709 sources in the IRAS minisurvey. These sources have no counterparts in a variety of catalogs of nonstellar objects. Four objects have no visible counterparts, while five have faint stellar objects visible in the error ellipse. These sources do not resemble objects previously known to be bright infrared sources.
Ferdous, Jannatul; Sultana, Rebeca; Rashid, Ridwan B; Tasnimuzzaman, Md; Nordland, Andreas; Begum, Anowara; Jensen, Peter K M
2018-01-01
Bangladesh is a cholera endemic country with a population at high risk of cholera. Toxigenic and non-toxigenic Vibrio cholerae (V. cholerae) can cause cholera and cholera-like diarrheal illness and outbreaks. Drinking water is one of the primary routes of cholera transmission in Bangladesh. The aim of this study was to conduct a comparative assessment of the presence of V. cholerae between point-of-drinking water and source water, and to investigate the variability of virulence profiles using molecular methods, in a densely populated low-income settlement of Dhaka, Bangladesh. Water samples were collected and tested for V. cholerae from "point-of-drinking" and "source" in 477 study households in routine visits at 6-week intervals over a period of 14 months. We studied the virulence profiles of V. cholerae-positive water samples using 22 different virulence gene markers present in toxigenic O1/O139 and non-O1/O139 V. cholerae using polymerase chain reaction (PCR). A total of 1,463 water samples were collected, with 1,082 samples from point-of-drinking water in 388 households and 381 samples from 66 water sources. V. cholerae was detected in 10% of point-of-drinking water samples and in 9% of source water samples. Twenty-three percent of households and 38% of the sources were positive for V. cholerae in at least one visit. Samples collected from point-of-drinking and linked sources in a 7 day interval showed significantly higher odds (P < 0.05) of V. cholerae presence in point-of-drinking water compared to source water [OR = 17.24 (95% CI = 7.14-42.89)]. Based on the 7 day interval data, 53% (17/32) of source water samples were negative for V. cholerae while linked point-of-drinking water samples were positive. There were significantly higher odds (P < 0.05) of the presence of V. cholerae O1 [OR = 9.13 (95% CI = 2.85-29.26)] and V. cholerae O139 [OR = 4.73 (95% CI = 1.19-18.79)] in source water samples than in point-of-drinking water samples. Contamination of water at the point-of-drinking is less likely to depend on the contamination at the water source. Hygiene education interventions and programs should focus on water at the point-of-drinking, including repeated cleaning of drinking vessels, which is of paramount importance in preventing cholera.
Characterization of mercury contamination in the Androscoggin River, Coos County, New Hampshire
Chalmers, Ann; Marvin-DiPasquale, Mark C.; Degnan, James R.; Coles, James; Agee, Jennifer L.; Luce, Darryl
2013-01-01
Concentrations of total mercury (THg) and MeHg in sediment, pore water, and biota in the Androscoggin River were elevated downstream from the former chloralkali facility compared with those upstream from reference sites. Sequential extraction of surface sediment showed a distinct difference in Hg speciation upstream compared with downstream from the contamination site. An upstream site was dominated by potassium hydroxide-extractable forms (for example, organic-Hg or particle-bound Hg(II)), whereas sites downstream from the point source were dominated by more chemically recalcitrant forms (largely concentrated nitric acid-extractable), indicative of elemental mercury or mercurous chloride. At all sites, only a minor fraction (less than 0.1 percent) of THg existed in chemically labile forms (for example, water extractable or weak acid extractable). All metrics indicated that a greater percentage of mercury at an upstream site was available for Hg(II)-methylation compared with sites downstream from the point source, but the absolute concentration of bioavailable Hg(II) was greater downstream from the point source. In addition, the concentration of tin-reducible inorganic reactive mercury, a surrogate measure of bioavailable Hg(II) generally increased with distance downstream from the point source. Whereas concentrations of mercury species on a sediment-dry-weight basis generally reflected the relative location of the sample to the point source, river-reach integrated mercury-species inventories and MeHg production potential (MPP) rates reflected the amount of fine-grained sediment in a given reach. THg concentrations in biota were significantly higher downstream from the point source compared with upstream reference sites for smallmouth bass, white sucker, crayfish, oligochaetes, bat fur, nestling tree swallow blood and feathers, adult tree swallow blood, and tree swallow eggs. As with tin-reducible inorganic reactive mercury, THg in smallmouth bass also increased with distance downstream from the point source. Toxicity tests and invertebrate community assessments suggested that invertebrates were not impaired at the current (2009 and 2010) levels of mercury contamination downstream from the point source. Concentrations of THg and MeHg in most water and sediment samples from the Androscoggin River were below U.S. Environmental Protection Agency (USEPA), the Canadian Council of Ministers of the Environment, and probable effects level guidelines. Surface-water and sediment samples from the Androscoggin River had similar THg concentrations but lower MeHg concentrations compared with other rivers in the region. Concentrations of THg in fish tissue were all above regional and U.S. Environmental Protection Agency guidelines. Moreover, median THg concentrations in smallmouth bass from the Androscoggin River were significantly higher than those reported in regional surveys of river and streams nationwide and in the Northeastern United States and Canada. The higher concentrations of mercury in smallmouth bass suggest conditions may be more favorable for Hg(II)-methylation and bioaccumulation in the Androscoggin River compared with many other rivers in the United States and Canada.
NASA Astrophysics Data System (ADS)
Rau, U.; Bhatnagar, S.; Owen, F. N.
2016-11-01
Many deep wideband wide-field radio interferometric surveys are being designed to accurately measure intensities, spectral indices, and polarization properties of faint source populations. In this paper, we compare various wideband imaging methods to evaluate the accuracy to which intensities and spectral indices of sources close to the confusion limit can be reconstructed. We simulated a wideband single-pointing (C-array, L-Band (1-2 GHz)) and 46-pointing mosaic (D-array, C-Band (4-8 GHz)) JVLA observation using a realistic brightness distribution ranging from 1 μJy to 100 mJy and time-, frequency-, polarization-, and direction-dependent instrumental effects. The main results from these comparisons are (a) errors in the reconstructed intensities and spectral indices are larger for weaker sources even in the absence of simulated noise, (b) errors are systematically lower for joint reconstruction methods (such as Multi-Term Multi-Frequency-Synthesis (MT-MFS)) along with A-Projection for accurate primary beam correction, and (c) use of MT-MFS for image reconstruction eliminates Clean-bias (which is present otherwise). Auxiliary tests include solutions for deficiencies of data partitioning methods (e.g., the use of masks to remove clean bias and hybrid methods to remove sidelobes from sources left un-deconvolved), the effect of sources not at pixel centers, and the consequences of various other numerical approximations within software implementations. This paper also demonstrates the level of detail at which such simulations must be done in order to reflect reality, enable one to systematically identify specific reasons for every trend that is observed, and to estimate scientifically defensible imaging performance metrics and the associated computational complexity of the algorithms/analysis procedures. The National Radio Astronomy Observatory is a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc.
3D Seismic Imaging using Marchenko Methods
NASA Astrophysics Data System (ADS)
Lomas, A.; Curtis, A.
2017-12-01
Marchenko methods are novel, data driven techniques that allow seismic wavefields from sources and receivers on the Earth's surface to be redatumed to construct wavefields with sources in the subsurface - including complex multiply-reflected waves, and without the need for a complex reference model. In turn, this allows subsurface images to be constructed at any such subsurface redatuming points (image or virtual receiver points). Such images are then free of artefacts from multiply-scattered waves that usually contaminate migrated seismic images. Marchenko algorithms require as input the same information as standard migration methods: the full reflection response from sources and receivers at the Earth's surface, and an estimate of the first arriving wave between the chosen image point and the surface. The latter can be calculated using a smooth velocity model estimated using standard methods. The algorithm iteratively calculates a signal that focuses at the image point to create a virtual source at that point, and this can be used to retrieve the signal between the virtual source and the surface. A feature of these methods is that the retrieved signals are naturally decomposed into up- and down-going components. That is, we obtain both the signal that initially propagated upwards from the virtual source and arrived at the surface, separated from the signal that initially propagated downwards. Figure (a) shows a 3D subsurface model with a variable density but a constant velocity (3000m/s). Along the surface of this model (z=0) in both the x and y directions are co-located sources and receivers at 20-meter intervals. The redatumed signal in figure (b) has been calculated using Marchenko methods from a virtual source (1200m, 500m and 400m) to the surface. For comparison the true solution is given in figure (c), and shows a good match when compared to figure (b). While these 2D redatuming and imaging methods are still in their infancy having first been developed in 2012, we have extended them to 3D media and wavefields. We show that while the wavefield effects may be more complex in 3D, Marchenko methods are still valid, and 3D images that are free of multiple-related artefacts, are a realistic possibility.
Long Term Temporal and Spectral Evolution of Point Sources in Nearby Elliptical Galaxies
NASA Astrophysics Data System (ADS)
Durmus, D.; Guver, T.; Hudaverdi, M.; Sert, H.; Balman, Solen
2016-06-01
We present the results of an archival study of all the point sources detected in the lines of sight of the elliptical galaxies NGC 4472, NGC 4552, NGC 4649, M32, Maffei 1, NGC 3379, IC 1101, M87, NGC 4477, NGC 4621, and NGC 5128, with both the Chandra and XMM-Newton observatories. Specifically, we studied the temporal and spectral evolution of these point sources over the course of the observations of the galaxies, mostly covering the 2000 - 2015 period. In this poster we present the first results of this study, which allows us to further constrain the X-ray source population in nearby elliptical galaxies and also better understand the nature of individual point sources.
Very Luminous X-ray Point Sources in Starburst Galaxies
NASA Astrophysics Data System (ADS)
Colbert, E.; Heckman, T.; Ptak, A.; Weaver, K. A.; Strickland, D.
Extranuclear X-ray point sources in external galaxies with luminosities above 10^39.0 erg/s are quite common in elliptical, disk and dwarf galaxies, with an average of ~0.5 sources per galaxy. These objects may be a new class of object, perhaps accreting intermediate-mass black holes, or beamed stellar-mass black hole binaries. Starburst galaxies tend to have a larger number of these intermediate-luminosity X-ray objects (IXOs), as well as a large number of lower-luminosity (10^37 - 10^39 erg/s) point sources. These point sources dominate the total hard X-ray emission in starburst galaxies. We present a review of both types of objects and discuss possible schemes for their formation.
NASA Astrophysics Data System (ADS)
Karl, S.; Neuberg, J.
2011-12-01
Volcanoes exhibit a variety of seismic signals. One specific type, the so-called long-period (LP) or low-frequency event, has proven to be crucial for understanding the internal dynamics of the volcanic system. These long period (LP) seismic events have been observed at many volcanoes around the world, and are thought to be associated with resonating fluid-filled conduits or fluid movements (Chouet, 1996; Neuberg et al., 2006). While the seismic wavefield is well established, the actual trigger mechanism of these events is still poorly understood. Neuberg et al. (2006) proposed a conceptual model for the trigger of LP events at Montserrat involving the brittle failure of magma in the glass transition in response to the upwards movement of magma. In an attempt to gain a better quantitative understanding of the driving forces of LPs, inversions for the physical source mechanisms have become increasingly common. Previous studies have assumed a point source for waveform inversion. Knowing that applying a point source model to synthetic seismograms representing an extended source process does not yield the real source mechanism, it can, however, still lead to apparent moment tensor elements which then can be compared to previous results in the literature. Therefore, this study follows the proposed concepts of Neuberg et al. (2006), modelling the extended LP source as an octagonal arrangement of double couples approximating a circular ringfault bounding the circumference of the volcanic conduit. Synthetic seismograms were inverted for the physical source mechanisms of LPs using the moment tensor inversion code TDMTISO_INVC by Dreger (2003). Here, we will present the effects of changing the source parameters on the apparent moment tensor elements. First results show that, due to negative interference, the amplitude of the seismic signals of a ringfault structure is greatly reduced when compared to a single double couple source. Furthermore, best inversion results yield a solution comprised of positive isotropic and compensated linear vector dipole components. Thus, the physical source mechanisms of volcano seismic signals may be misinterpreted as opening shear or tensile cracks when wrongly assuming a point source. In order to approach the real physical sources with our models, inversions based on higher-order tensors might have to be considered in the future. An inversion technique where the point source is replaced by a so-called moment tensor density would allow inversions of volcano seismic signals for sources that can then be temporally and spatially extended.
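The negative-interference argument can be made concrete by summing the moment tensors of eight vertical dip-slip double couples arranged around an octagon (a crude ring-fault stand-in). Using the standard strike/dip/rake formulas, the summed tensor cancels almost completely, which is why a single point-source inversion of such a source is easily misread. This numpy sketch is illustrative only; it is not the TDMTISO_INVC workflow or the authors' full waveform modelling.

```python
import numpy as np

def dc_moment_tensor(strike, dip, rake, m0=1.0):
    """Double-couple moment tensor in (North, East, Down) coordinates from
    strike/dip/rake in degrees, standard Aki & Richards convention."""
    phi, delta, lam = np.radians([strike, dip, rake])
    sd, cd = np.sin(delta), np.cos(delta)
    s2d, c2d = np.sin(2 * delta), np.cos(2 * delta)
    sl, cl = np.sin(lam), np.cos(lam)
    sp, cp = np.sin(phi), np.cos(phi)
    s2p, c2p = np.sin(2 * phi), np.cos(2 * phi)
    mxx = -m0 * (sd * cl * s2p + s2d * sl * sp ** 2)
    mxy = m0 * (sd * cl * c2p + 0.5 * s2d * sl * s2p)
    mxz = -m0 * (cd * cl * cp + c2d * sl * sp)
    myy = m0 * (sd * cl * s2p - s2d * sl * cp ** 2)
    myz = -m0 * (cd * cl * sp - c2d * sl * cp)
    mzz = m0 * s2d * sl
    return np.array([[mxx, mxy, mxz], [mxy, myy, myz], [mxz, myz, mzz]])

# Eight vertical dip-slip double couples around an octagonal ring (conduit
# wall): the summed tensor is essentially zero -- the negative interference
# that suppresses the net point-source signal.
ring = sum(dc_moment_tensor(strike, dip=90.0, rake=90.0)
           for strike in np.arange(0.0, 360.0, 45.0))
print(np.round(ring, 6))
```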
The resolution of point sources of light as analyzed by quantum detection theory
NASA Technical Reports Server (NTRS)
Helstrom, C. W.
1972-01-01
The resolvability of point sources of incoherent light is analyzed by quantum detection theory in terms of two hypothesis-testing problems. In the first, the observer must decide whether there are two sources of equal radiant power at given locations, or whether there is only one source of twice the power located midway between them. In the second problem, either one, but not both, of two point sources is radiating, and the observer must decide which it is. The decisions are based on optimum processing of the electromagnetic field at the aperture of an optical instrument. In both problems the density operators of the field under the two hypotheses do not commute. The error probabilities, determined as functions of the separation of the points and the mean number of received photons, characterize the ultimate resolvability of the sources.
NASA Astrophysics Data System (ADS)
Ju, H.; Bae, C.; Kim, B. U.; Kim, H. C.; Kim, S.
2017-12-01
Large point sources in the Chungnam area have received nation-wide attention in South Korea because the area is located southwest of the Seoul Metropolitan Area, whose population is over 22 million, and the prevalent summertime winds in the area are northeastward. Therefore, emissions from the large point sources in the Chungnam area were one of the major observation targets during KORUS-AQ 2016, including aircraft measurements. In general, the horizontal grid resolutions of Eulerian photochemical models have profound effects on estimated air pollutant concentrations. This is due to the formulation of grid models: emissions in a grid cell are assumed to be well mixed below the planetary boundary layer regardless of grid cell size. In this study, we performed a series of simulations with the Comprehensive Air Quality Model with eXtensions (CAMx). For the 9-km and 3-km simulations, we used meteorological fields obtained from the Weather Research and Forecasting model, while utilizing the "Flexi-nesting" option in CAMx for the 1-km simulation. In "Flexi-nesting" mode, CAMx interpolates or assigns model inputs from the immediate parent grid. We compared modeled concentrations with ground observation data as well as aircraft measurements to quantify variations in model bias and error depending on horizontal grid resolution.
Double frequency of difference frequency signals for optical Doppler effect measuring velocity
NASA Astrophysics Data System (ADS)
Yang, Xiufang; Zhou, Renkui; Wei, W. L.; Wang, Xiaoming
2005-12-01
In this paper, a mathematical model is established for the non-contact measurement of the velocity of moving objects (including fluids, rolled steel in steelworks, turbulent flows, vibrating bodies, etc.) using the light-wave Doppler effect. Given the concrete conditions of different optical circuits, and substituting the relevant parameters, the corresponding velocity-measurement formulas are readily obtained. An optical circuit layout for differential Doppler velocity measurement is proposed in this paper. The fine laser beam is divided into two parallel beams by a beam splitter and a mirror, and both are focused onto the object point p by a condenser lens. The object point p becomes a diffuse source, scattering rays in all directions. Some of the rays scattered by the diffuse source p are collected by a lens and received by a photoelectric detector. This optical circuit layout realizes frequency doubling of the difference-frequency signals in a novel way.
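For reference, the differential (two-beam) arrangement described above obeys the standard laser-Doppler relation f_D = 2 v sin(θ/2)/λ, so the velocity follows directly from the measured difference frequency. The sketch below evaluates it with made-up numbers; the paper's frequency-doubling scheme would further scale the measured beat frequency, which is not modelled here.

```python
import math

def ldv_velocity(f_doppler_hz, wavelength_m, beam_angle_deg):
    """Differential (two-beam) laser Doppler relation: the beat frequency of
    light scattered from the beam-crossing point is f_D = 2*v*sin(theta/2)/lambda,
    so v = f_D * lambda / (2*sin(theta/2))."""
    half_angle = math.radians(beam_angle_deg) / 2.0
    return f_doppler_hz * wavelength_m / (2.0 * math.sin(half_angle))

# Hypothetical numbers: 632.8 nm He-Ne laser, beams crossing at 10 degrees,
# measured difference frequency of 1 MHz.
print(f"{ldv_velocity(1.0e6, 632.8e-9, 10.0):.3f} m/s")
```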
A NEW METHOD FOR FINDING POINT SOURCES IN HIGH-ENERGY NEUTRINO DATA
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fang, Ke; Miller, M. Coleman
The IceCube collaboration has reported the first detection of high-energy astrophysical neutrinos, including ∼50 high-energy starting events, but no individual sources have been identified. It is therefore important to develop the most sensitive and efficient possible algorithms to identify the point sources of these neutrinos. The most popular current method works by exploring a dense grid of possible directions to individual sources, and identifying the single direction with the maximum probability of having produced multiple detected neutrinos. This method has numerous strengths, but it is computationally intensive and, because it focuses on the single best location for a point source, additional point sources are not included in the evidence. We propose a new maximum likelihood method that uses the angular separations between all pairs of neutrinos in the data. Unlike existing autocorrelation methods for this type of analysis, which also use angular separations between neutrino pairs, our method incorporates information about the point-spread function and can identify individual point sources. We find that if the angular resolution is a few degrees or better, then this approach reduces both false positive and false negative errors compared to the current method, and is also more computationally efficient up to, potentially, hundreds of thousands of detected neutrinos.
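A toy illustration of working with pairwise angular separations, under the assumption of a Gaussian point-spread function: all event pairs are formed, their great-circle separations computed, and each pair weighted by the probability (up to a constant) that both events share a common origin. This is only a sketch of the ingredients, not the authors' maximum likelihood construction.

```python
import numpy as np

def angular_sep(ra1, dec1, ra2, dec2):
    """Great-circle separation (radians) between unit-sphere directions."""
    return np.arccos(np.clip(
        np.sin(dec1) * np.sin(dec2) +
        np.cos(dec1) * np.cos(dec2) * np.cos(ra1 - ra2), -1.0, 1.0))

rng = np.random.default_rng(42)
n = 200
ra = rng.uniform(0.0, 2 * np.pi, n)            # isotropic background sky
dec = np.arcsin(rng.uniform(-1.0, 1.0, n))
ra[:5] = 1.0 + rng.normal(0.0, 0.02, 5)        # five events from a fake source
dec[:5] = 0.3 + rng.normal(0.0, 0.02, 5)

sigma = np.radians(1.0)                        # assumed angular resolution
i, j = np.triu_indices(n, k=1)                 # all event pairs
sep = angular_sep(ra[i], dec[i], ra[j], dec[j])

# Each pair weighted by the chance (up to normalization) that both events
# came from a single point, for a Gaussian PSF of width sigma.
weights = np.exp(-sep ** 2 / (4.0 * sigma ** 2))
print(f"PSF-weighted pair sum: {weights.sum():.2f}; "
      f"pairs within 3*sigma: {(sep < 3 * sigma).sum()}")
```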
SKYSHINEIII. Calculating Effects of Structure Design on Neutron Dose Rates in Air
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lampley, C.M.; Andrews, C.M.; Wells, M.B.
1988-12-01
SKYSHINE was designed to aid in the evaluation of the effects of structure geometry on the gamma-ray dose rate at given detector positions outside of a building housing gamma-ray sources. The program considers a rectangular structure enclosed by four walls and a roof. Each of the walls and the roof of the building may be subdivided into up to nine different areas, representing different materials or different thicknesses of the same material for those portions of the wall or roof. Basic sets of iron and concrete slab transmission and reflection data for 6.2 MeV gamma rays are part of the SKYSHINE block data. These data, as well as parametric air transport data for line-beam sources at a number of energies between 0.6 MeV and 6.2 MeV and ranges to 3750 ft, are used to estimate the various components of the gamma-ray dose rate at positions outside of the building. The gamma-ray source is assumed to be a 6.2 MeV point-isotropic source. SKYSHINE-III provides an increase in versatility over the original SKYSHINE code in that it addresses both neutron and gamma-ray point sources. In addition, the emitted radiation may be characterized by an energy emission spectrum defined by the user. A new SKYSHINE data base is also included.
Processing UAV and LIDAR Point Clouds in GRASS GIS
NASA Astrophysics Data System (ADS)
Petras, V.; Petrasova, A.; Jeziorska, J.; Mitasova, H.
2016-06-01
Today's methods of acquiring Earth surface data, namely lidar and unmanned aerial vehicle (UAV) imagery, non-selectively collect or generate large amounts of points. Point clouds from different sources vary in their properties such as number of returns, density, or quality. We present a set of tools with applications for different types of point clouds obtained by a lidar scanner, the structure from motion technique (SfM), and a low-cost 3D scanner. To take advantage of the vertical structure of multiple-return lidar point clouds, we demonstrate tools to process them using 3D raster techniques which allow, for example, the development of custom vegetation classification methods. Dense point clouds obtained from UAV imagery, often containing redundant points, can be decimated using various techniques before further processing. We implemented and compared several decimation techniques in regard to their performance and the final digital surface model (DSM). Finally, we describe the processing of a point cloud from a low-cost 3D scanner, namely Microsoft Kinect, and its application for interaction with physical models. All the presented tools are open source and integrated in GRASS GIS, a multi-purpose open source GIS with remote sensing capabilities. The tools integrate with other open source projects, specifically the Point Data Abstraction Library (PDAL), the Point Cloud Library (PCL), and the OpenKinect libfreenect2 library, to benefit from the open source point cloud ecosystem. The implementation in GRASS GIS ensures long-term maintenance and reproducibility by the scientific community as well as by the original authors themselves.
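As a simple illustration of the decimation step mentioned above, the sketch below thins a dense cloud by keeping one point per 3-D grid cell. It is a generic numpy stand-in, not the GRASS GIS modules themselves, and the cell size and cloud are made up.

```python
import numpy as np

def grid_decimate(points, cell=1.0):
    """Keep one point per 3-D grid cell: a simple density-independent
    thinning of a dense point cloud before building a DSM."""
    keys = np.floor(points / cell).astype(np.int64)      # cell index per point
    _, keep = np.unique(keys, axis=0, return_index=True) # first point in each cell
    return points[np.sort(keep)]

rng = np.random.default_rng(3)
cloud = rng.uniform(0.0, 50.0, size=(100000, 3))         # hypothetical dense UAV cloud
thinned = grid_decimate(cloud, cell=2.0)
print(f"{len(cloud)} -> {len(thinned)} points")
```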
Accuracy enhancement of point triangulation probes for linear displacement measurement
NASA Astrophysics Data System (ADS)
Kim, Kyung-Chan; Kim, Jong-Ahn; Oh, SeBaek; Kim, Soo Hyun; Kwak, Yoon Keun
2000-03-01
Point triangulation probes (PTBs) fall into a general category of noncontact height or displacement measurement devices. PTBs are widely used for their simple structure, high resolution, and long operating range. However, there are several factors that must be taken into account in order to obtain high accuracy and reliability: measurement errors from inclinations of the object surface, probe signal fluctuations generated by speckle effects, power variation of the light source, electronic noise, and so on. In this paper, we propose a novel signal processing algorithm, named EASDF (expanded average square difference function), for a newly designed PTB which is composed of an incoherent source (LED), a line scan array detector, a specially selected diffuse reflecting surface, and several optical components. The EASDF, which is a modified correlation function, is able to calculate the displacement between the probe and the object surface effectively even in the presence of inclination, power fluctuation, and noise.
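The exact EASDF formulation is not given in the abstract; as an illustrative stand-in, the sketch below implements a plain average square difference function (ASDF) that estimates the integer-pixel shift of a measured line-scan profile relative to a reference template, which is the kind of correlation-style peak location such a probe relies on. All function names and test values are assumptions.

import numpy as np

def asdf_shift(reference, signal):
    """Estimate the integer-pixel shift of `signal` relative to `reference`
    by minimizing the average square difference over all overlapping lags."""
    n = len(reference)
    lags = range(-n + 1, n)
    scores = []
    for lag in lags:
        if lag >= 0:
            a, b = signal[lag:], reference[:n - lag]
        else:
            a, b = signal[:n + lag], reference[-lag:]
        scores.append(np.mean((a - b) ** 2))
    return list(lags)[int(np.argmin(scores))]

# Usage: a Gaussian spot template shifted by 7 pixels plus detector noise
x = np.arange(256)
template = np.exp(-0.5 * ((x - 128) / 6.0) ** 2)
measured = np.roll(template, 7) + 0.02 * np.random.default_rng(2).normal(size=256)
print(asdf_shift(template, measured))   # expect a value near 7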
Paper focuses on trading schemes in which regulated point sources are allowed to avoid upgrading their pollution control technology to meet water quality-based effluent limits if they pay for equivalent (or greater) reductions in nonpoint source pollution.
The Microbial Source Module (MSM) estimates microbial loading rates to land surfaces from non-point sources, and to streams from point sources for each subwatershed within a watershed. A subwatershed, the smallest modeling unit, represents the common basis for information consume...
NASA Astrophysics Data System (ADS)
Huang, Jyun-Yan; Wen, Kuo-Liang; Lin, Che-Min; Kuo, Chun-Hsiang; Chen, Chun-Te; Chang, Shuen-Chiang
2017-05-01
In this study, an empirical transfer function (ETF), which is the spectrum difference in Fourier amplitude spectra between observed strong ground motion and synthetic motion obtained by a stochastic point-source simulation technique, is constructed for the Taipei Basin, Taiwan. The basis stochastic point-source simulations can be treated as reference rock site conditions in order to consider site effects. The parameters of the stochastic point-source approach related to source and path effects are collected from previous well-verified studies. A database of shallow, small-magnitude earthquakes is selected to construct the ETFs so that the point-source approach for synthetic motions might be more widely applicable. The high-frequency synthetic motion obtained from the ETF procedure is site-corrected in the strong site-response area of the Taipei Basin. The site-response characteristics of the ETF show similar responses as in previous studies, which indicates that the base synthetic model is suitable for the reference rock conditions in the Taipei Basin. The dominant frequency contour corresponds to the shape of the bottom of the geological basement (the top of the Tertiary period), which is the Sungshan formation. Two clear high-amplification areas are identified in the deepest region of the Sungshan formation, as shown by an amplification contour of 0.5 Hz. Meanwhile, a high-amplification area was shifted to the basin's edge, as shown by an amplification contour of 2.0 Hz. Three target earthquakes with different kinds of source conditions, including shallow small-magnitude events, shallow and relatively large-magnitude events, and deep small-magnitude events relative to the ETF database, are tested to verify site correction. The results indicate that ETF-based site correction is effective for shallow earthquakes, even those with higher magnitudes, but is not suitable for deep earthquakes. Finally, one of the most significant shallow large-magnitude earthquakes (the 1999 Chi-Chi earthquake in Taiwan) is verified in this study. A finite fault stochastic simulation technique is applied, owing to the complexity of the fault rupture process for the Chi-Chi earthquake, and the ETF-based site-correction function is multiplied to obtain a precise simulation of high-frequency (up to 10 Hz) strong motions. The high-frequency prediction has good agreement in both time and frequency domain in this study, and the prediction level is the same as that predicted by the site-corrected ground motion prediction equation.
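A minimal sketch of the spectral-ratio idea, assuming the ETF can be approximated by the smoothed ratio of observed to synthetic Fourier amplitude spectra; the smoothing window, function names, and the usage comment are illustrative assumptions rather than the study's actual processing chain.

import numpy as np

def empirical_transfer_function(observed, synthetic, dt, smooth_width=5):
    """Smoothed spectral ratio |FFT(observed)| / |FFT(synthetic)| as a crude ETF.

    observed, synthetic : equal-length time series for the same station and event
    dt                  : sample interval in seconds
    smooth_width        : length of a moving-average window applied to both spectra
    """
    freqs = np.fft.rfftfreq(len(observed), d=dt)
    obs_amp = np.abs(np.fft.rfft(observed))
    syn_amp = np.abs(np.fft.rfft(synthetic))
    kernel = np.ones(smooth_width) / smooth_width
    obs_s = np.convolve(obs_amp, kernel, mode="same")
    syn_s = np.convolve(syn_amp, kernel, mode="same")
    etf = obs_s / np.maximum(syn_s, 1e-12)   # avoid division by zero
    return freqs, etf

# Usage idea: multiply the amplitude spectrum of a new synthetic by the ETF
# (interpolated to its frequency axis) to apply the empirical site correction.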
Environmental contaminants in bald eagle eggs from the Aleutian archipelago
Anthony, R.G.; Miles, A.K.; Ricca, M.A.; Estes, J.A.
2007-01-01
We collected 136 fresh and unhatched eggs from bald eagle (Haliaeetus leucocephalus) nests and assessed productivity on eight islands in the Aleutian archipelago, 2000 to 2002. Egg contents were analyzed for a broad spectrum of organochlorine (OC) contaminants, mercury (Hg), and stable isotopes of carbon (δ13C) and nitrogen (δ15N). Concentrations of polychlorinated biphenyls (ΣPCBs), p,p′-dichlorodiphenyldichloroethylene (DDE), and Hg in bald eagle eggs were elevated throughout the archipelago, but the patterns of distribution differed among the various contaminants. Total PCBs were highest in areas of past military activities on Adak and Amchitka Islands, indicating local point sources of these compounds. Concentrations of DDE and Hg were higher on Amchitka Island, which was subjected to much military activity during World War II and the middle of the 20th century. Concentrations of ΣPCBs also were elevated on islands with little history of military activity (e.g., Amlia, Tanaga, Buldir), suggesting non-point sources of PCBs in addition to point sources. Concentrations of DDE and Hg were highest in eagle eggs from the most western Aleutian Islands (e.g., Buldir, Kiska) and decreased eastward along the Aleutian chain. This east-to-west increase suggested a Eurasian source of contamination, possibly through global transport and atmospheric distillation and/or from migratory seabirds. Eggshell thickness and productivity of bald eagles were normal and indicative of healthy populations because concentrations of most contaminants were below threshold levels for effects on reproduction. Contrary to our predictions, contaminant concentrations were not correlated with stable isotopes of carbon (δ13C) or nitrogen (δ15N) in eggs. These latter findings indicate that contaminant concentrations were influenced more by point sources and geographic location than trophic status of eagles among the different islands. © 2007 SETAC.
Code of Federal Regulations, 2010 CFR
2010-07-01
... ORGANIC CHEMICALS, PLASTICS, AND SYNTHETIC FIBERS Direct Discharge Point Sources That Use End-of-Pipe... subcategory of direct discharge point sources that use end-of-pipe biological treatment. 414.90 Section 414.90... that use end-of-pipe biological treatment. The provisions of this subpart are applicable to the process...
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 29 2010-07-01 2010-07-01 false BAT and NSPS Effluent Limitations for Priority Pollutants for Direct Discharge Point Sources That use End-of-Pipe Biological Treatment 4 Table 4... Limitations for Priority Pollutants for Direct Discharge Point Sources That use End-of-Pipe Biological...
Multi-rate, real time image compression for images dominated by point sources
NASA Technical Reports Server (NTRS)
Huber, A. Kris; Budge, Scott E.; Harris, Richard W.
1993-01-01
An image compression system recently developed for compression of digital images dominated by point sources is presented. Encoding consists of minimum-mean removal, vector quantization, adaptive threshold truncation, and modified Huffman encoding. Simulations are presented showing that the peaks corresponding to point sources can be transmitted losslessly for low signal-to-noise ratios (SNR) and high point source densities while maintaining a reduced output bit rate. Encoding and decoding hardware has been built and tested which processes 552,960 12-bit pixels per second at compression rates of 10:1 and 4:1. Simulation results are presented for the 10:1 case only.
NASA Technical Reports Server (NTRS)
Tibbetts, J. G.
1979-01-01
Methods for predicting noise at any point on an aircraft while the aircraft is in a cruise flight regime are presented. Developed for use in laminar flow control (LFC) noise effects analyses, they can be used in any case where aircraft generated noise needs to be evaluated at a location on an aircraft while under high altitude, high speed conditions. For each noise source applicable to the LFC problem, a noise computational procedure is given in algorithm format, suitable for computerization. Three categories of noise sources are covered: (1) propulsion system, (2) airframe, and (3) LFC suction system. In addition, procedures are given for noise modifications due to source soundproofing and the shielding effects of the aircraft structure wherever needed. Sample cases, for each of the individual noise source procedures, are provided to familiarize the user with typical input and computed data.
A novel solution for LED wall lamp design and simulation
NASA Astrophysics Data System (ADS)
Ge, Rui; Hong, Weibin; Li, Kuangqi; Liang, Pengxiang; Zhao, Fuli
2014-11-01
A model of an LED wall-washer lamp and a practical illumination application are established with a new lens design that meets the uniform-illumination requirement for wall-washer lamps based on Lambertian light sources. The secondary optical design of a freeform-surface lens for the LED wall-washer lamp, based on energy conservation and Snell's law, improves the lighting effect toward uniform illumination. Using the mapping between the lens surface and the target surface, a large number of discrete points on the freeform profile curve were obtained through an iterative method. After importing these data into our modeling program, the optical entity was obtained. Finally, to verify the feasibility of the algorithm, the model was simulated in specialized software with both an LED Lambertian point-source model and an LED panel-source model.
Chen, Yanxi; Niu, Zhiguang; Zhang, Hongwei
2013-06-01
Landscape lakes in the city suffer high eutrophication risk because of their special characteristics and functions in the water circulation system. Using HMLA, a landscape lake located in Tianjin City, North China, that receives a mixture of point source (PS) and non-point source (NPS) pollution, we explored a methodology coupling Fluent and AQUATOX to simulate and predict the state of HMLA, and a trophic index was used to assess the eutrophication state. We then used water compensation optimization and three scenarios to determine the optimal management methodology. The three scenarios were an ecological restoration scenario, a best management practices (BMPs) scenario, and a scenario combining both. Our results suggest that maintaining a healthy ecosystem through ecoremediation is necessary and that BMPs have a far-reaching effect on water reuse and NPS pollution control. This study has implications for eutrophication control and management under ongoing urbanization in China.
Jiang, Mengzhen; Chen, Haiying; Chen, Qinghui
2013-11-01
With the purpose of providing a scientific basis for environmental planning on non-point source pollution prevention and control, and of improving pollution-control efficiency, this paper established a Grid Landscape Contrast Index based on the Location-weighted Landscape Contrast Index according to the "source-sink" theory. The spatial distribution of non-point source pollution in the Jiulongjiang Estuary could be worked out by utilizing high-resolution remote sensing images. The results showed that the area of the nitrogen and phosphorus "source" in the Jiulongjiang Estuary was 534.42 km² in 2008, and that of the "sink" was 172.06 km². The "source" of non-point source pollution was distributed mainly over Xiamen Island, most of Haicang, the east of Jiaomei, and the river banks of Gangwei and Shima; the "sink" was distributed over the southwest of Xiamen Island and the west of Shima. Generally speaking, the intensity of the "source" weakens with increasing distance from the sea boundary, while the "sink" strengthens. Copyright © 2013 Elsevier Ltd. All rights reserved.
Reaching nearby sources: comparison between real and virtual sound and visual targets
Parseihian, Gaëtan; Jouffrais, Christophe; Katz, Brian F. G.
2014-01-01
Sound localization studies over the past century have predominantly been concerned with directional accuracy for far-field sources. Few studies have examined the condition of near-field sources and distance perception. The current study concerns localization and pointing accuracy by examining source positions in the peripersonal space, specifically those associated with a typical tabletop surface. Accuracy is studied with respect to the reporting hand (dominant or secondary) for auditory sources. Results show no effect of the reporting hand, with azimuthal errors increasing equally for the most extreme source positions. Distance errors show a consistent compression toward the center of the reporting area. A second evaluation is carried out comparing auditory and visual stimuli to examine any bias in reporting protocol or biomechanical difficulties. No common bias error was observed between auditory and visual stimuli, indicating that reporting errors were not due to biomechanical limitations in the pointing task. A final evaluation compares real auditory sources and anechoic-condition virtual sources created using binaural rendering. Results showed increased azimuthal errors, with virtual source positions being consistently overestimated to more lateral positions, while no significant distance perception was observed, indicating a deficiency in the binaural rendering condition relative to the real stimuli situation. Various potential reasons for this discrepancy are discussed with several proposals for improving distance perception in peripersonal virtual environments. PMID:25228855
Situational Strength Cues from Social Sources at Work: Relative Importance and Mediated Effects
Alaybek, Balca; Dalal, Reeshad S.; Sheng, Zitong; Morris, Alexander G.; Tomassetti, Alan J.; Holland, Samantha J.
2017-01-01
Situational strength is considered one of the most important situational forces at work because it can attenuate the personality–performance relationship. Although organizational scholars have studied the consequences of situational strength, they have paid little attention to its antecedents. To address this gap, the current study focused on situational strength cues from different social sources as antecedents of overall situational strength at work. Specifically, we examined how employees combine situational strength cues emanating from three social sources (i.e., coworkers, the immediate supervisor, and top management). Based on field theory, we hypothesized that the effect of situational strength from coworkers and immediate supervisors (i.e., proximal sources of situational strength) on employees' perceptions of overall situational strength on the job would be greater than the effect of situational strength from the top management (i.e., the distal source of situational strength). We also hypothesized that the effect of situational strength from the distal source would be mediated by the effects of situational strength from the proximal sources. Data from 363 full-time employees were collected at two time points with a cross-lagged panel design. The former hypothesis was supported for one of the two situational strength facets studied. The latter hypothesis was fully supported. PMID:28928698
NASA Astrophysics Data System (ADS)
Jones, K. R.; Arrowsmith, S.; Whitaker, R. W.
2012-12-01
The overall mission of the National Center for Nuclear Security (NCNS) Source Physics Experiment at the National Nuclear Security Site (SPE-N) near Las Vegas, Nevada is to improve upon and develop new physics-based models for underground nuclear explosions using scaled, underground chemical explosions as proxies. To this end, we use the Rayleigh integral as an approximation to the Helmholtz-Kirchhoff integral [Whitaker, 2007; Arrowsmith et al., 2011] to model infrasound generation in the far field. Infrasound generated by single-point explosive sources above ground can typically be treated as coming from monopole point sources. While the source is relatively simple, the research needed to model above-ground point sources is complicated by path effects related to the propagation of the acoustic signal and is outside the scope of this study. In contrast, for explosions that occur below ground, including the SPE explosions, the source region is more complicated but the observation distances are much closer (< 5 km), thus greatly reducing the complication of path effects. In this case, elastic energy from the explosions radiates upward and spreads out, depending on depth, to a more distributed region at the surface. Due to this broad surface perturbation of the atmosphere we cannot model the source as a simple monopole point source. Instead, we use the analogy of a piston mounted in a rigid, infinite baffle, where the surface area that moves as a result of the explosion is the piston and the surrounding region is the baffle. The area of the "piston" is determined by the depth and explosive yield of the event. In this study we look at data from SPE-N-2 and SPE-N-3. Both shots had an explosive yield of 1 ton at a depth of 45 m. We collected infrasound data with up to eight stations and 32 sensors within a 5 km radius of ground zero. To determine the area of the surface acceleration, we used data from twelve surface accelerometers installed within a 100 m radius about ground zero. With the accelerometer data defining the vertical motion of the surface, we use the Rayleigh integral method [Whitaker, 2007; Arrowsmith et al., 2011] to generate a synthetic infrasound pulse to compare to the observed data. Because the phase across the "piston" is not necessarily uniform, constructive and destructive interference will change the shape of the acoustic pulse if observed directly above the source (on-axis) or perpendicular to the source (off-axis). Comparing the observed data to the synthetic data, we note that the overall structure of the pulse agrees well and that the differences can be attributed to a number of possibilities, including the sensors used, topography, meteorological conditions, etc. One other potential source of error between the observed and calculated data is that we use a flat, symmetric source region for the "piston" whereas in reality the source region is not flat and not perfectly symmetric. A primary goal of this work is to better understand and model the relationships between surface area, depth, and yield of underground explosions.
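A minimal discretized form of the baffled-piston (Rayleigh integral) calculation, assuming vertical surface acceleration histories are given on a set of surface patches; the patch geometry, air properties, and test pulse below are illustrative assumptions, not the SPE configuration.

import numpy as np

def rayleigh_pressure(receiver, patch_xy, patch_area, accel, t, c=340.0, rho0=1.2):
    """Pressure from a baffled, vertically accelerating surface (discrete Rayleigh integral).

    receiver   : (3,) receiver position in metres
    patch_xy   : (M, 3) centre coordinates of M surface patches (z ~ 0 plane)
    patch_area : (M,) patch areas in m^2
    accel      : (M, len(t)) vertical acceleration history of each patch
    t          : (N,) common time axis in seconds
    """
    p = np.zeros_like(t)
    for xi, dS, a in zip(patch_xy, patch_area, accel):
        R = np.linalg.norm(receiver - xi)
        # each patch contributes with a retarded time R/c and 1/R spreading
        delayed = np.interp(t - R / c, t, a, left=0.0, right=0.0)
        p += delayed * dS / R
    return rho0 / (2.0 * np.pi) * p

# Usage: a single 10 m x 10 m patch accelerating as a short Gaussian pulse,
# observed 1 km away (numbers purely illustrative)
t = np.linspace(0.0, 5.0, 5001)
a = np.exp(-0.5 * ((t - 0.5) / 0.05) ** 2)[None, :]
print(rayleigh_pressure(np.array([1000.0, 0.0, 1.0]),
                        np.array([[0.0, 0.0, 0.0]]), np.array([100.0]), a, t).max())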
Point-particle effective field theory I: classical renormalization and the inverse-square potential
NASA Astrophysics Data System (ADS)
Burgess, C. P.; Hayman, Peter; Williams, M.; Zalavári, László
2017-04-01
Singular potentials (the inverse-square potential, for example) arise in many situations and their quantum treatment leads to well-known ambiguities in choosing boundary conditions for the wave-function at the position of the potential's singularity. These ambiguities are usually resolved by developing a self-adjoint extension of the original problem; a non-unique procedure that leaves undetermined which extension should apply in specific physical systems. We take the guesswork out of this picture by using techniques of effective field theory to derive the required boundary conditions at the origin in terms of the effective point-particle action describing the physics of the source. In this picture ambiguities in boundary conditions boil down to the allowed choices for the source action, but casting them in terms of an action provides a physical criterion for their determination. The resulting extension is self-adjoint if the source action is real (and involves no new degrees of freedom), and not otherwise (as can also happen for reasonable systems). We show how this effective-field picture provides a simple framework for understanding well-known renormalization effects that arise in these systems, including how renormalization-group techniques can resum non-perturbative interactions that often arise, particularly for non-relativistic applications. In particular we argue why the low-energy effective theory tends to produce a universal RG flow of this type and describe how this can lead to the phenomenon of reaction catalysis, in which physical quantities (like scattering cross sections) can sometimes be surprisingly large compared to the underlying scales of the source in question. We comment in passing on the possible relevance of these observations to the phenomenon of the catalysis of baryon-number violation by scattering from magnetic monopoles.
Ferdous, Jannatul; Sultana, Rebeca; Rashid, Ridwan B.; Tasnimuzzaman, Md.; Nordland, Andreas; Begum, Anowara; Jensen, Peter K. M.
2018-01-01
Bangladesh is a cholera endemic country with a population at high risk of cholera. Toxigenic and non-toxigenic Vibrio cholerae (V. cholerae) can cause cholera and cholera-like diarrheal illness and outbreaks. Drinking water is one of the primary routes of cholera transmission in Bangladesh. The aim of this study was to conduct a comparative assessment of the presence of V. cholerae between point-of-drinking water and source water, and to investigate the variability of virulence profile using molecular methods of a densely populated low-income settlement of Dhaka, Bangladesh. Water samples were collected and tested for V. cholerae from “point-of-drinking” and “source” in 477 study households in routine visits at 6 week intervals over a period of 14 months. We studied the virulence profiles of V. cholerae positive water samples using 22 different virulence gene markers present in toxigenic O1/O139 and non-O1/O139 V. cholerae using polymerase chain reaction (PCR). A total of 1,463 water samples were collected, with 1,082 samples from point-of-drinking water in 388 households and 381 samples from 66 water sources. V. cholerae was detected in 10% of point-of-drinking water samples and in 9% of source water samples. Twenty-three percent of households and 38% of the sources were positive for V. cholerae in at least one visit. Samples collected from point-of-drinking and linked sources in a 7 day interval showed significantly higher odds (P < 0.05) of V. cholerae presence in point-of-drinking compared to source [OR = 17.24 (95% CI = 7.14–42.89)] water. Based on the 7 day interval data, 53% (17/32) of source water samples were negative for V. cholerae while linked point-of-drinking water samples were positive. There were significantly higher odds (p < 0.05) of the presence of V. cholerae O1 [OR = 9.13 (95% CI = 2.85–29.26)] and V. cholerae O139 [OR = 4.73 (95% CI = 1.19–18.79)] in source water samples than in point-of-drinking water samples. Contamination of water at the point-of-drinking is less likely to depend on the contamination at the water source. Hygiene education interventions and programs should focus and emphasize on water at the point-of-drinking, including repeated cleaning of drinking vessels, which is of paramount importance in preventing cholera. PMID:29616005
New approach to calculate the true-coincidence effect of HpGe detector
NASA Astrophysics Data System (ADS)
Alnour, I. A.; Wagiran, H.; Ibrahim, N.; Hamzah, S.; Siong, W. B.; Elias, M. S.
2016-01-01
The corrections for true-coincidence effects in HpGe detectors are important, especially at low source-to-detector distances. This work established an approach to calculate the true-coincidence effects experimentally for HpGe detectors of type Canberra GC3018 and Ortec GEM25-76-XLB-C, which are in operation at the neutron activation analysis lab of the Malaysian Nuclear Agency (NM). The correction for true-coincidence effects was performed close to the detector at distances of 2 and 5 cm using 57Co, 60Co, 133Ba and 137Cs as standard point sources. The correction factors ranged between 0.93-1.10 at 2 cm and 0.97-1.00 at 5 cm for the Canberra HpGe detector, whereas those for the Ortec HpGe detector ranged between 0.92-1.13 and 0.95-1.00 at 2 and 5 cm, respectively. The change in the efficiency calibration curve of the detector at 2 and 5 cm after correction was found to be less than 1%. Moreover, the polynomial parameter functions were computed through a computer program, MATLAB, in order to find an accurate fit to the experimental data points.
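As an analogue of the MATLAB polynomial fit mentioned above, the sketch below fits a low-order polynomial in log energy to hypothetical correction-factor data with NumPy; the energies and factors are invented stand-ins, not the measured values from this work.

import numpy as np

# Hypothetical (energy, correction factor) pairs standing in for the measured
# true-coincidence corrections at 2 cm.
energy = np.array([122.0, 356.0, 662.0, 1173.0, 1332.0])   # keV
factor = np.array([1.10, 1.05, 1.00, 0.95, 0.93])

# Least-squares polynomial in log(E), a common form for efficiency-type curves
coeffs = np.polyfit(np.log(energy), factor, deg=2)
fit = np.poly1d(coeffs)
print(fit(np.log(500.0)))    # interpolated correction at 500 keV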
Modeling of Pixelated Detector in SPECT Pinhole Reconstruction.
Feng, Bing; Zeng, Gengsheng L
2014-04-10
A challenge for the pixelated detector is that the detector response of a gamma-ray photon varies with the incident angle and the incident location within a crystal. The normalization map obtained by measuring the flood of a point-source at a large distance can lead to artifacts in reconstructed images. In this work, we investigated a method of generating normalization maps by ray-tracing through the pixelated detector based on the imaging geometry and the photo-peak energy for the specific isotope. The normalization is defined for each pinhole as the normalized detector response for a point-source placed at the focal point of the pinhole. Ray-tracing is used to generate the ideal flood image for a point-source. Each crystal pitch area on the back of the detector is divided into 60 × 60 sub-pixels. Lines are obtained by connecting between a point-source and the centers of sub-pixels inside each crystal pitch area. For each line ray-tracing starts from the entrance point at the detector face and ends at the center of a sub-pixel on the back of the detector. Only the attenuation by NaI(Tl) crystals along each ray is assumed to contribute directly to the flood image. The attenuation by the silica (SiO2) reflector is also included in the ray-tracing. To calculate the normalization for a pinhole, we need to calculate the ideal flood for a point-source at 360 mm distance (where the point-source was placed for the regular flood measurement) and the ideal flood image for the point-source at the pinhole focal point, together with the flood measurement at 360 mm distance. The normalizations are incorporated in the iterative OSEM reconstruction as a component of the projection matrix. Applications to single-pinhole and multi-pinhole imaging showed that this method greatly reduced the reconstruction artifacts.
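A minimal sketch of the Beer-Lambert attenuation evaluated along each traced ray, assuming a single NaI(Tl) segment; the attenuation coefficient and ray endpoints are illustrative, and the reflector segments and sub-pixel bookkeeping of the full method are omitted.

import numpy as np

def ray_transmission(entry, exit_point, mu_crystal):
    """Fraction of photons surviving a straight ray through the crystal
    (Beer-Lambert law); reflector or gap segments would use their own mu."""
    path = np.linalg.norm(np.asarray(exit_point) - np.asarray(entry))   # cm
    return np.exp(-mu_crystal * path)

def interaction_fraction(entry, exit_point, mu_crystal):
    """Probability that the photon interacts inside the crystal segment."""
    return 1.0 - ray_transmission(entry, exit_point, mu_crystal)

# Usage: a photon crossing ~1.2 cm of NaI(Tl); mu value is illustrative only
print(interaction_fraction([0.0, 0.0, 0.0], [0.3, 0.2, 1.2], mu_crystal=2.6))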
Developing Real-Time Emissions Estimates for Enhanced Air Quality Forecasting
Exploring the relationship between ambient temperature, energy demand, and electric generating unit point source emissions and potential techniques for incorporating real-time information on the modulating effects of these variables using the Mid-Atlantic/Northeast Visibility Uni...
Point-source inversion techniques
NASA Astrophysics Data System (ADS)
Langston, Charles A.; Barker, Jeffrey S.; Pavlin, Gregory B.
1982-11-01
A variety of approaches for obtaining source parameters from waveform data using moment-tensor or dislocation point source models have been investigated and applied to long-period body and surface waves from several earthquakes. Generalized inversion techniques have been applied to data for long-period teleseismic body waves to obtain the orientation, time function and depth of the 1978 Thessaloniki, Greece, event, of the 1971 San Fernando event, and of several events associated with the 1963 induced seismicity sequence at Kariba, Africa. The generalized inversion technique and a systematic grid testing technique have also been used to place meaningful constraints on mechanisms determined from very sparse data sets; a single station with high-quality three-component waveform data is often sufficient to discriminate faulting type (e.g., strike-slip, etc.). Sparse data sets for several recent California earthquakes, for a small regional event associated with the Koyna, India, reservoir, and for several events at the Kariba reservoir have been investigated in this way. Although linearized inversion techniques using the moment-tensor model are often robust, even for sparse data sets, there are instances where the simplifying assumption of a single point source is inadequate to model the data successfully. Numerical experiments utilizing synthetic data and actual data for the 1971 San Fernando earthquake graphically demonstrate that severe problems may be encountered if source finiteness effects are ignored. These techniques are generally applicable to on-line processing of high-quality digital data, but source complexity and inadequacy of the assumed Green's functions are major problems which are yet to be fully addressed.
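A minimal sketch of the linearized step, assuming the problem is cast as d = G m for the six independent moment-tensor components and solved by least squares; the Green's-function matrix below is a random stand-in rather than computed synthetics, and the noise level and sizes are illustrative.

import numpy as np

def invert_moment_tensor(G, d):
    """Least-squares point-source moment tensor from waveform samples.

    G : (n_samples, 6) Green's-function derivatives for the six independent
        moment-tensor components (assumed precomputed for the trial depth)
    d : (n_samples,) observed displacement samples, all stations concatenated
    """
    m, residual, rank, _ = np.linalg.lstsq(G, d, rcond=None)
    return m, residual

# Usage with random stand-in matrices (a real G comes from synthetic seismograms)
rng = np.random.default_rng(3)
G = rng.normal(size=(600, 6))
m_true = np.array([1.0, -0.4, -0.6, 0.2, 0.1, -0.3])     # hypothetical tensor
d = G @ m_true + 0.01 * rng.normal(size=600)
m_est, _ = invert_moment_tensor(G, d)
print(np.round(m_est, 2))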
NASA Astrophysics Data System (ADS)
Tam, Kai-Chung; Lau, Siu-Kit; Tang, Shiu-Keung
2016-07-01
A microphone array signal processing method for locating a stationary point source over a locally reactive ground and for estimating ground impedance is examined in detail in the present study. A non-linear least square approach using the Levenberg-Marquardt method is proposed to overcome the problem of unknown ground impedance. The multiple signal classification method (MUSIC) is used to give the initial estimation of the source location, while the technique of forward-backward spatial smoothing is adopted as a pre-processor of the source localization to minimize the effects of source coherence. The accuracy and robustness of the proposed signal processing method are examined. Results show that source localization in the horizontal direction by MUSIC is satisfactory. However, source coherence drastically reduces the accuracy in estimating the source height. The further application of the Levenberg-Marquardt method with the results from MUSIC as the initial inputs significantly improves the accuracy of source height estimation. The present proposed method provides effective and robust estimation of the ground surface impedance.
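A minimal narrowband MUSIC sketch for a uniform linear array with a single simulated source; the forward-backward smoothing and the ground-impedance estimation step are omitted, and the array geometry, frequency, and noise level are illustrative assumptions.

import numpy as np

def music_spectrum(snapshots, n_sources, freq, spacing, c=343.0, angles_deg=None):
    """Narrowband MUSIC pseudospectrum for a uniform linear array.

    snapshots : (n_mics, n_snapshots) complex narrowband samples at `freq`
    """
    if angles_deg is None:
        angles_deg = np.arange(-90, 90.5, 0.5)
    n_mics = snapshots.shape[0]
    R = snapshots @ snapshots.conj().T / snapshots.shape[1]      # covariance
    eigval, eigvec = np.linalg.eigh(R)                           # ascending order
    En = eigvec[:, : n_mics - n_sources]                         # noise subspace
    k = 2 * np.pi * freq / c
    positions = np.arange(n_mics) * spacing
    spectrum = []
    for th in np.radians(angles_deg):
        a = np.exp(-1j * k * positions * np.sin(th))             # steering vector
        denom = np.linalg.norm(En.conj().T @ a) ** 2
        spectrum.append(1.0 / max(denom, 1e-12))
    return np.asarray(angles_deg), np.asarray(spectrum)

# Usage: one 1 kHz source at +20 degrees, 8 mics at 5 cm spacing, white noise
rng = np.random.default_rng(4)
n_mics, n_snap, f, d = 8, 200, 1000.0, 0.05
k = 2 * np.pi * f / 343.0
a_true = np.exp(-1j * k * np.arange(n_mics) * d * np.sin(np.radians(20.0)))
s = rng.normal(size=n_snap) + 1j * rng.normal(size=n_snap)
X = np.outer(a_true, s) + 0.1 * (rng.normal(size=(n_mics, n_snap))
                                 + 1j * rng.normal(size=(n_mics, n_snap)))
ang, P = music_spectrum(X, n_sources=1, freq=f, spacing=d)
print(ang[np.argmax(P)])    # should be close to 20 degrees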
Advanced Acoustic Model Technical Reference and User Manual
2009-05-01
the source directed from the source to the receiver. Aspread = geometrical spherical spreading loss (point source). Aatm = ANSI/ISO atmospheric absorption. [Tabulated standard-atmosphere values and a running page header are omitted here.] Absorption of sound by molecular relaxation processes in the atmosphere is computed according to the current ANSI/ISO standard [28]. Examples of the weather effects
NASA Astrophysics Data System (ADS)
Hu, Xiang; Zhang, Jing; Hou, Hongxun
2018-01-01
The aim of this study was to investigate the effects of two different external carbon sources (acetate and ethanol) on nitrous oxide (N2O) emissions during denitrification in biological nutrient removal processes. Results showed that the external carbon source significantly influenced N2O emissions during the denitrification process. When acetate served as the external carbon source, 0.49 mg N/L and 0.85 mg N/L of N2O were produced during the denitrification processes in the anoxic and anaerobic/anoxic experiments, giving ratios of N2O-N production to TN removal of 2.37% and 4.96%, respectively. Compared with acetate, the amount of N2O produced was negligible when ethanol was used as the external carbon source. This suggests that, from the point of view of N2O emissions, ethanol is a potential alternative to acetate as an external carbon source.
Searches for point sources in the Galactic Center region
NASA Astrophysics Data System (ADS)
di Mauro, Mattia; Fermi-LAT Collaboration
2017-01-01
Several groups have demonstrated the existence of an excess in the gamma-ray emission around the Galactic Center (GC) with respect to the predictions from a variety of Galactic Interstellar Emission Models (GIEMs) and point source catalogs. The origin of this excess, peaked at a few GeV, is still under debate. A possible interpretation is that it comes from a population of unresolved Millisecond Pulsars (MSPs) in the Galactic bulge. We investigate the detection of point sources in the GC region using new tools which the Fermi-LAT Collaboration is developing in the context of searches for Dark Matter (DM) signals. These new tools perform very fast scans, iteratively testing for additional point sources at each of the pixels of the region of interest. We also show how to discriminate between point sources and structural residuals from the GIEM. We apply these methods to the GC region considering different GIEMs and testing the DM and MSP interpretations for the GC excess. Additionally, we create a list of promising MSP candidates that could represent the brightest sources of a MSP bulge population.
NASA Astrophysics Data System (ADS)
Khun, Josef; Scholtz, Vladimír; Hozák, Pavel; Fitl, Přemysl; Julák, Jaroslav
2018-06-01
The appearance of several types of ballast serial impedance-stabilized DC-driven electric corona discharges in the point-to-plane configuration is described. In addition to well-known corona discharges, new ones were observed, namely curved transient spark, interrupted channel and branched transient spark. Their properties are described by volt-ampere characteristics and UV-vis emission spectra. Their bactericidal ability for two bacterial species is also given.
NASA Astrophysics Data System (ADS)
Fang, Huaiyang; Lu, Qingshui; Gao, Zhiqiang; Shi, Runhe; Gao, Wei
2013-09-01
China's economy has grown rapidly since 1978, driving fast growth in fertilizer and pesticide consumption. A significant portion of these fertilizers and pesticides entered the water and caused water quality degradation. At the same time, rapid economic growth also caused increasing point source pollution to be discharged into the water. Eutrophication has become a major threat to the water bodies. Worsening environmental problems forced governments to take measures to control water pollution. We extracted land cover from Landsat TM images, calculated point source pollution with the export coefficient method, and then ran the SWAT model to simulate non-point source pollution. We found that the annual TP load from industrial pollution into rivers is 115.0 t over the entire watershed. Average annual TP loads from each sub-basin ranged from 0 to 189.4 t. Higher TP loads from livestock and human activities occur mainly in areas far from large towns or cities, where the TP loads from industry are relatively low. The mean annual TP load delivered to the streams was 246.4 t; the highest TP loads occurred in the northern part of the area, and the lowest mainly in the middle part. Therefore, point source pollution accounts for a large proportion in this area, and governments should take measures to control it.
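A minimal sketch of an export-coefficient load estimate, assuming the annual load is the sum of coefficient times area over source classes; the coefficients and areas below are invented for illustration and are not values from this study.

# Hypothetical export-coefficient calculation: annual total phosphorus (TP) load
# as the sum over land-use (or source) classes of coefficient x area.
export_coeff_kg_per_km2 = {"cropland": 60.0, "urban": 110.0, "forest": 8.0}
area_km2 = {"cropland": 420.0, "urban": 55.0, "forest": 310.0}

tp_load_kg = sum(export_coeff_kg_per_km2[k] * area_km2[k] for k in area_km2)
print(f"Estimated annual TP load: {tp_load_kg / 1000:.1f} t")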
Podsakoff, Nathan P; Whiting, Steven W; Welsh, David T; Mai, Ke Michael
2013-09-01
Despite the increased attention paid to biases attributable to common method variance (CMV) over the past 50 years, researchers have only recently begun to systematically examine the effect of specific sources of CMV in previously published empirical studies. Our study contributes to this research by examining the extent to which common rater, item, and measurement context characteristics bias the relationships between organizational citizenship behaviors and performance evaluations using a mixed-effects analytic technique. Results from 173 correlations reported in 81 empirical studies (N = 31,146) indicate that even after controlling for study-level factors, common rater and anchor point number similarity substantially biased the focal correlations. Indeed, these sources of CMV (a) led to estimates that were between 60% and 96% larger when comparing measures obtained from a common rater, versus different raters; (b) led to 39% larger estimates when a common source rated the scales using the same number, versus a different number, of anchor points; and (c) when taken together with other study-level predictors, accounted for over half of the between-study variance in the focal correlations. We discuss the implications for researchers and practitioners and provide recommendations for future research. PsycINFO Database Record (c) 2013 APA, all rights reserved
A Spectroscopic and Photometric Study of Gravitational Microlensing Events
NASA Astrophysics Data System (ADS)
Kane, Stephen R.
2000-08-01
Gravitational microlensing has generated a great deal of scientific interest over recent years. This has been largely due to the realization of its wide-reaching applications, such as the search for dark matter, the detection of planets, and the study of Galactic structure. A significant observational advance has been that most microlensing events can be identified in real-time while the source is still being lensed. More than 400 microlensing events have now been detected towards the Galactic bulge and Magellanic Clouds by the microlensing survey teams EROS, MACHO, OGLE, DUO, and MOA. The real-time detection of these events allows detailed follow-up observations with much denser sampling, both photometrically and spectroscopically. The research undertaken in this project on photometric studies of gravitational microlensing events has been performed as a member of the PLANET (Probing Lensing Anomalies NETwork) collaboration. This is a worldwide collaboration formed in the early part of 1995 to study microlensing anomalies - departures from an achromatic point source, point lens light curve - through rapidly-sampled, multi-band, photometry. PLANET has demonstrated that it can achieve 1% photometry under ideal circumstances, making PLANET observations sensitive to detection of Earth-mass planets which require characterization of 1%--2% deviations from a standard microlensing light curve. The photometric work in this project involved over 5 months using the 1.0 m telescope at Canopus Observatory in Australia, and 3 separate observing runs using the 0.9 m telescope at the Cerro Tololo Inter-American Observatory (CTIO) in Chile. Methods were developed to reduce the vast amount of photometric data using the image analysis software MIDAS and the photometry package DoPHOT. Modelling routines were then written to analyse a selection of the resulting light curves in order to detect any deviation from an achromatic point source - point lens light curve. The photometric results presented in this thesis are from observations of 34 microlensing events over three consecutive bulge seasons. These results are presented along with a discussion of the observations and the data reduction procedures. The colour-magnitude diagrams indicate that the microlensed sources are main sequence and red clump giant stars. Most of the events appear to exhibit standard Paczynski point source - point lens curves whilst a few deviate significantly from the standard model. Various microlensing models that include anomalous structure are fitted to a selection of the observed events resulting in the discovery of a possible binary source event. These fitted events are used to estimate the sensitivity to extra-solar planets and it is found that the sampling rate for these events was insufficient by about a factor of 7.5 for detecting a Jupiter-mass planet. This result assumes that deviations of 5% can be reliably detected. If microlensing is caused predominantly by bulge stars, as has been suggested by Kiraga and Paczynski, the lensed stars should have larger extinction than other observed stars since they would preferentially be located at the far side of the Galactic bulge. Hence, spectroscopy of Galactic microlensing events may be used as a tool for studying the kinematics and extinction effects in the Galactic bulge. The spectroscopic work in this project involved using Kurucz model spectra to create theoretical extinction effects for various spectral classes towards the Galactic centre. 
These extinction effects are then used to interpret spectroscopic data taken with the 3.6 m ESO telescope. These data consist of a sample of microlensed stars towards the Galactic bulge and are used to derive the extinction offsets of the lensed source with respect to the average population and a measurement of the fraction of bulge-bulge lensing is made. Hence, it is shown statistically that the microlensed sources are generally located on the far side of the Galactic bulge. Measurements of the radial velocities of these sources are used to determine the kinematic properties of the far side of the Galactic bulge.
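For reference, the standard achromatic point-source, point-lens (Paczynski) magnification against which the anomalous deviations are measured can be written in a few lines; the event parameters below are illustrative only.

import numpy as np

def paczynski_magnification(t, t0, tE, u0):
    """Standard point-source, point-lens magnification A(t).

    t0 : time of closest approach, tE : Einstein-radius crossing time,
    u0 : impact parameter in Einstein-radius units.
    """
    u = np.sqrt(u0 ** 2 + ((t - t0) / tE) ** 2)
    return (u ** 2 + 2.0) / (u * np.sqrt(u ** 2 + 4.0))

# Usage: a 40-day event; the peak magnification approaches 1/u0 for small u0
t = np.linspace(-60.0, 60.0, 601)
A = paczynski_magnification(t, t0=0.0, tE=40.0, u0=0.1)
print(A.max())   # about 10 for u0 = 0.1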
Minet, E P; Goodhue, R; Meier-Augenstein, W; Kalin, R M; Fenton, O; Richards, K G; Coxon, C E
2017-11-01
Excessive nitrate (NO3-) concentration in groundwater raises health and environmental issues that must be addressed by all European Union (EU) member states under the Nitrates Directive and the Water Framework Directive. The identification of NO3- sources is critical to efficiently control or reverse NO3- contamination that affects many aquifers. In that respect, the use of stable isotope ratios 15N/14N and 18O/16O in NO3- (expressed as δ15N-NO3- and δ18O-NO3-, respectively) has long shown its value. However, limitations exist in complex environments where multiple nitrogen (N) sources coexist. This two-year study explores a method for improved NO3- source investigation in a shallow unconfined aquifer with mixed N inputs and a long-established NO3- problem. In this tillage-dominated area of free-draining soil and subsoil, suspected NO3- sources were diffuse applications of artificial fertiliser and organic point sources (septic tanks and farmyards). Bearing in mind that artificial diffuse sources were ubiquitous, groundwater samples were first classified according to a combination of two indicators relevant to point source contamination: presence/absence of organic point sources (i.e. septic tank and/or farmyard) near sampling wells and exceedance/non-exceedance of a contamination threshold value for sodium (Na+) in groundwater. This classification identified three contamination groups: agricultural diffuse source but no point source (D+P-), agricultural diffuse and point source (D+P+) and agricultural diffuse but point source occurrence ambiguous (D+P±). Thereafter δ15N-NO3- and δ18O-NO3- data were superimposed on the classification. As δ15N-NO3- was plotted against δ18O-NO3-, comparisons were made between the different contamination groups. Overall, both δ variables were significantly and positively correlated (p < 0.0001, rs = 0.599, slope of 0.5), which was indicative of denitrification. An inspection of the contamination groups revealed that denitrification did not occur in the absence of point source contamination (group D+P-). In fact, strong significant denitrification lines occurred only in the D+P+ and D+P± groups (p < 0.0001, rs > 0.6, 0.53 ≤ slope ≤ 0.76), i.e. where point source contamination was characterised or suspected. These lines originated from the 2-6‰ range for δ15N-NO3-, which suggests that i) NO3- contamination was dominated by an agricultural diffuse N source (most likely the large organic matter pool that has incorporated 15N-depleted nitrogen from artificial fertiliser in agricultural soils and whose nitrification is stimulated by ploughing and fertilisation) rather than point sources and ii) denitrification was possibly favoured by high dissolved organic content (DOC) from point sources. Combining contamination indicators and a large stable isotope dataset collected over a large study area could therefore improve our understanding of the NO3- contamination processes in groundwater for better land use management. We hypothesise that in future research, additional contamination indicators (e.g. pharmaceutical molecules) could also be combined to disentangle NO3- contamination from animal and human wastes. Copyright © 2017 Elsevier Ltd. All rights reserved.
Development and Characterization of a Laser-Induced Acoustic Desorption Source.
Huang, Zhipeng; Ossenbrüggen, Tim; Rubinsky, Igor; Schust, Matthias; Horke, Daniel A; Küpper, Jochen
2018-03-20
A laser-induced acoustic desorption source, developed for use at central facilities, such as free-electron lasers, is presented. It features prolonged measurement times and a fixed interaction point. A novel sample deposition method using aerosol spraying provides a uniform sample coverage and hence stable signal intensity. Utilizing strong-field ionization as a universal detection scheme, the produced molecular plume is characterized in terms of number density, spatial extent, fragmentation, temporal distribution, translational velocity, and translational temperature. The effect of desorption laser intensity on these plume properties is evaluated. While the translational velocity is invariant for different desorption laser intensities, pointing to a nonthermal desorption mechanism, the translational temperature increases significantly and higher fragmentation is observed with increased desorption laser fluence.
Probabilities for gravitational lensing by point masses in a locally inhomogeneous universe
NASA Technical Reports Server (NTRS)
Isaacson, Jeffrey A.; Canizares, Claude R.
1989-01-01
Probability functions for gravitational lensing by point masses that incorporate Poisson statistics and flux conservation are formulated in the Dyer-Roeder construction. Optical depths to lensing for distant sources are calculated using both the method of Press and Gunn (1973) which counts lenses in an otherwise empty cone, and the method of Ehlers and Schneider (1986) which projects lensing cross sections onto the source sphere. These are then used as parameters of the probability density for lensing in the case of a critical (q0 = 1/2) Friedmann universe. A comparison of the probability functions indicates that the effects of angle-averaging can be well approximated by adjusting the average magnification along a random line of sight so as to conserve flux.
Skyshine at neutron energies less than or equal to 400 MeV
DOE Office of Scientific and Technical Information (OSTI.GOV)
Alsmiller, A.G. Jr.; Barish, J.; Childs, R.L.
1980-10-01
The dose equivalent at an air-ground interface as a function of distance from an assumed azimuthally symmetric point source of neutrons can be calculated as a double integral. The integration is over the source strength as a function of energy and polar angle weighted by an importance function that depends on the source variables and on the distance from the source to the field point. The neutron importance function for a source 15 m above the ground emitting only into the upper hemisphere has been calculated using the two-dimensional discrete ordinates code, DOT, and the first collision source code, GRTUNCL, in the adjoint mode. This importance function is presented for neutron energies less than or equal to 400 MeV, for source cosine intervals of 1 to .8, .8 to .6, .6 to .4, .4 to .2, and .2 to 0, and for various distances from the source to the field point. As part of the adjoint calculations a photon importance function is also obtained. This importance function for photon energies less than or equal to 14 MeV and for various source cosine intervals and source-to-field point distances is also presented. These importance functions may be used to obtain skyshine dose equivalent estimates for any known source energy-angle distribution.
Outlier Resistant Predictive Source Encoding for a Gaussian Stationary Nominal Source.
1987-09-18
breakdown point and influence function . The proposed sequence of predictive encoders attains strictly positive breakdown point and uniformly bounded... influence function , at the expense of increased mean difference-squared distortion and differential entropy, at the Gaussian nominal source.
Mao, Xinrui; Wang, Yujuan; Wu, Yanhong; Guo, Chunyan
2017-01-01
Directed forgetting (DF) assists in preventing outdated information from interfering with cognitive processing. Previous studies pointed that self-referential items alleviated DF effects due to the elaboration of encoding processes. However, the retrieval mechanism of this phenomenon remains unknown. Based on the dual-process framework of recognition, the retrieval of self-referential information was involved in familiarity and recollection. Using source memory tasks combined with event-related potential (ERP) recording, our research investigated the retrieval processes of alleviative DF effects elicited by self-referential information. The FN400 (frontal negativity at 400 ms) is a frontal potential at 300–500 ms related to familiarity and the late positive complex (LPC) is a later parietal potential at 500–800 ms related to recollection. The FN400 effects of source memory suggested that familiarity processes were promoted by self-referential effects without the modulation of to-be-forgotten (TBF) instruction. The ERP results of DF effects were involved with LPCs of source memory, which indexed retrieval processing of recollection. The other-referential source memory of TBF instruction caused the absence of LPC effects, while the self-referential source memory of TBF instruction still elicited the significant LPC effects. Therefore, our neural findings suggested that self-referential processing improved both familiarity and recollection. Furthermore, the self-referential processing advantage which was caused by the autobiographical retrieval alleviated retrieval inhibition of DF, supporting that the self-referential source memory alleviated DF effects. PMID:29066962
A computational framework for automation of point defect calculations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Goyal, Anuj; Gorai, Prashun; Peng, Haowei
We have developed a complete and rigorously validated open-source Python framework to automate point defect calculations using density functional theory. Furthermore, the framework provides an effective and efficient method for defect structure generation, and creation of simple yet customizable workflows to analyze defect calculations. This package provides the capability to compute widely-accepted correction schemes to overcome finite-size effects, including (1) potential alignment, (2) image-charge correction, and (3) band filling correction to shallow defects. Using Si, ZnO and In2O3 as test examples, we demonstrate the package capabilities and validate the methodology.
Effects of phonon broadening on x-ray near-edge spectra in molecular crystals
NASA Astrophysics Data System (ADS)
Vinson, John; Jach, Terrence; Elam, Tim; Denlinger, Jonathon
2014-03-01
Calculations of near-edge x-ray spectra are often carried out using the average atomic coordinates from x-ray or neutron scattering experiments or from density functional theory (DFT) energy minimization. This neglects disorder from thermal and zero-point vibrations. Here we look at the nitrogen K-edge of ammonium chloride and ammonium nitrate, comparing Bethe-Salpeter calculations of absorption and fluorescence to experiment. We find that intra-molecular vibrational effects lead to significant, non-uniform broadening of the spectra, and that for some features zero-point motion is the primary source of the observed shape.
NASA Technical Reports Server (NTRS)
Tam, Christopher K. W.; Webb, Jay C.
1994-01-01
In this paper finite-difference solutions of the Helmholtz equation in an open domain are considered. By using a second-order central difference scheme and the Bayliss-Turkel radiation boundary condition, reasonably accurate solutions can be obtained when the number of grid points per acoustic wavelength used is large. However, when a smaller number of grid points per wavelength is used, excessive reflections occur which tend to overwhelm the computed solutions. Excessive reflections are due to the incompatibility between the governing finite difference equation and the Bayliss-Turkel radiation boundary condition. The Bayliss-Turkel radiation boundary condition was developed from the asymptotic solution of the partial differential equation. To obtain compatibility, the radiation boundary condition should be constructed from the asymptotic solution of the finite difference equation instead. Examples are provided using the improved radiation boundary condition based on the asymptotic solution of the governing finite difference equation. The computed results are free of reflections even when only five grid points per wavelength are used. The improved radiation boundary condition has also been tested for problems with complex acoustic sources and sources embedded in a uniform mean flow. The present method of developing a radiation boundary condition is also applicable to higher order finite difference schemes. In all these cases no reflected waves could be detected. The use of finite difference approximation inevitably introduces anisotropy into the governing field equation. The effect of anisotropy is to distort the directional distribution of the amplitude and phase of the computed solution. It can be quite large when the number of grid points per wavelength used in the computation is small. A way to correct this effect is proposed. The correction factor developed from the asymptotic solutions is source independent and, hence, can be determined once and for all. The effectiveness of the correction factor in providing improvements to the computed solution is demonstrated in this paper.
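A minimal sketch of the second-order five-point Helmholtz discretization, closed here with a simple first-order outgoing boundary condition rather than the Bayliss-Turkel condition or the improved condition developed in the paper; the grid size, wavenumber, source scaling, and crude corner handling are illustrative assumptions.

import numpy as np

def helmholtz_2d(n=40, k=2.0 * np.pi, h=0.05):
    """Five-point discretization of u_xx + u_yy + k^2 u = f on an n-by-n grid
    with spacing h, closed with du/dn - i*k*u = 0 on the boundary."""
    N = n * n

    def idx(i, j):
        return i * n + j

    A = np.zeros((N, N), dtype=complex)
    b = np.zeros(N, dtype=complex)
    for i in range(n):
        for j in range(n):
            p = idx(i, j)
            if i in (0, n - 1) or j in (0, n - 1):
                # one-sided difference for the outgoing-wave condition
                ii = 1 if i == 0 else n - 2 if i == n - 1 else i
                jj = 1 if j == 0 else n - 2 if j == n - 1 else j
                A[p, p] = 1.0 / h - 1j * k
                A[p, idx(ii, jj)] = -1.0 / h
            else:
                A[p, p] = -4.0 / h ** 2 + k ** 2
                for q in (idx(i - 1, j), idx(i + 1, j),
                          idx(i, j - 1), idx(i, j + 1)):
                    A[p, q] = 1.0 / h ** 2
    b[idx(n // 2, n // 2)] = 1.0   # unit point source term (scaling illustrative)
    return np.linalg.solve(A, b).reshape(n, n)

u = helmholtz_2d()            # 20 grid points per wavelength at these settings
print(abs(u).max())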
Mapping algorithm for freeform construction using non-ideal light sources
NASA Astrophysics Data System (ADS)
Li, Chen; Michaelis, D.; Schreiber, P.; Dick, L.; Bräuer, A.
2015-09-01
Using conventional mapping algorithms for the construction of illumination freeform optics, arbitrary target patterns can be obtained for idealized sources, e.g. collimated light or point sources. Each freeform surface element generates an image point at the target, and the light intensity of an image point corresponds to the area of the freeform surface element that generates it. For sources with a pronounced extension and ray divergence, e.g. an LED at a small source-freeform distance, the image points are blurred and the blurred patterns may differ between points. Besides, due to Fresnel losses and vignetting, the relationship between the light intensity of image points and the area of freeform surface elements becomes complicated. These individual light distributions of each freeform element are taken into account in a mapping algorithm. To this end, steepest-descent procedures are used to adapt the mapping goal. A structured target pattern for an optics system with an ideal source is computed by applying corresponding linear optimization matrices. Special weighting and smoothing factors are included in the procedures to achieve certain edge conditions and to ensure the manufacturability of the freeform surface. The corresponding linear optimization matrices, which are the lighting distribution patterns of each of the freeform surface elements, are obtained by conventional ray tracing with a realistic source. Nontrivial source geometries, like LED irregularities due to bonding or source fine structures, and complex ray divergence behavior can be easily considered. Additionally, Fresnel losses, vignetting and even stray light are taken into account. After optimization iterations with a realistic source, the initial mapping goal can be achieved by the optics system that provides a structured target pattern with an ideal source. The algorithm is applied to several design examples. A few simple tasks are presented to discuss the ability and limitations of this method. Also presented is a homogeneous LED-illumination system design in which, with a strongly tilted incident direction, a homogeneous distribution is achieved with a rather compact optics system and a short working distance using a relatively large LED source. It is shown that the lighting distribution patterns from the freeform surface elements can differ significantly from one another. The generation of a structured target pattern, applying the weighting and smoothing factors, is discussed. Finally, freeform designs for much more complex sources, like clusters of LED sources, are presented.
Song, Min-Ho; Choi, Jung-Woo; Kim, Yang-Hann
2012-02-01
A focused source can provide an auditory illusion of a virtual source placed between the loudspeaker array and the listener. When a focused source is generated by a time-reversed acoustic focusing solution, its use as a virtual source is limited by artifacts caused by convergent waves traveling towards the focusing point. This paper proposes an array activation method to reduce these artifacts for a selected listening point inside an array of arbitrary shape. Results show that the energy of convergent waves can be reduced by up to 60 dB over a large region including the selected listening point. © 2012 Acoustical Society of America
Chandra Observations of M31
NASA Technical Reports Server (NTRS)
Garcia, Michael; Lavoie, Anthony R. (Technical Monitor)
2000-01-01
We report on Chandra observations of the nearest spiral galaxy, M31. The nuclear source seen with previous X-ray observatories is resolved into five point sources. One of these sources is within 1 arcsec of the M31 central super-massive black hole. As compared to the other point sources in M31, this nuclear source has an unusually soft spectrum. Based on the spatial coincidence and the unusual spectrum, we identify this source with the central black hole. A bright transient is detected 26 arcsec to the west of the nucleus, which may be associated with a stellar-mass black hole. We will report on a comparison of the X-ray spectrum of the diffuse emission and point sources seen in the central few arcmin.
Waveform inversion of volcano-seismic signals for an extended source
Nakano, M.; Kumagai, H.; Chouet, B.; Dawson, P.
2007-01-01
We propose a method to investigate the dimensions and oscillation characteristics of the source of volcano-seismic signals based on waveform inversion for an extended source. An extended source is realized by a set of point sources distributed on a grid surrounding the centroid of the source in accordance with the source geometry and orientation. The source-time functions for all point sources are estimated simultaneously by waveform inversion carried out in the frequency domain. We apply a smoothing constraint to suppress short-scale noisy fluctuations of source-time functions between adjacent sources. The strength of the smoothing constraint we select is that which minimizes the Akaike Bayesian Information Criterion (ABIC). We perform a series of numerical tests to investigate the capability of our method to recover the dimensions of the source and reconstruct its oscillation characteristics. First, we use synthesized waveforms radiated by a kinematic source model that mimics the radiation from an oscillating crack. Our results demonstrate almost complete recovery of the input source dimensions and source-time function of each point source, but also point to a weaker resolution of the higher modes of crack oscillation. Second, we use synthetic waveforms generated by the acoustic resonance of a fluid-filled crack, and consider two sets of waveforms dominated by the modes with wavelengths 2L/3 and 2W/3, or L and 2L/5, where W and L are the crack width and length, respectively. Results from these tests indicate that the oscillating signatures of the 2L/3 and 2W/3 modes are successfully reconstructed. The oscillating signature of the L mode is also well recovered, in contrast to results obtained for a point source, for which the moment tensor description is inadequate. However, the oscillating signature of the 2L/5 mode is poorly recovered owing to weaker resolution of short-scale crack wall motions. The triggering excitations of the oscillating cracks are successfully reconstructed. Copyright 2007 by the American Geophysical Union.
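A schematic of the inversion machinery at a single frequency, assuming a Green's-function matrix G (receivers × grid sources) and observed spectra d: a smoothed least-squares solve in which the roughness penalty stands in for the smoothing constraint whose weight the authors select by minimizing ABIC (that selection step is not shown).

```python
# Sketch of a smoothed frequency-domain least-squares inversion for the source
# spectra of gridded point sources at one frequency (illustrative only).
import numpy as np

def invert_one_frequency(G, d, smooth=1e-2):
    n = G.shape[1]
    # 1-D Laplacian as a simple stand-in for the smoothing operator between
    # adjacent grid sources; a real implementation would use the grid adjacency,
    # and the weight would be chosen by minimizing ABIC.
    L = -2.0 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)
    lhs = G.conj().T @ G + smooth * (L.T @ L)
    rhs = G.conj().T @ d
    return np.linalg.solve(lhs, rhs)        # complex source spectra of all point sources

rng = np.random.default_rng(1)
G = rng.normal(size=(20, 9)) + 1j * rng.normal(size=(20, 9))
m_true = np.sin(np.linspace(0, np.pi, 9))   # smooth "crack-like" amplitude distribution
d = G @ m_true
print(np.round(np.abs(invert_one_frequency(G, d)), 2))
```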
A COMPUTATIONAL FRAMEWORK FOR EVALUATION OF NPS MANAGEMENT SCENARIOS: ROLE OF PARAMETER UNCERTAINTY
Utility of complex distributed-parameter watershed models for evaluation of the effectiveness of non-point source sediment and nutrient abatement scenarios such as Best Management Practices (BMPs) often follows the traditional {calibrate ---> validate ---> predict} procedure. Des...
Managing Conflict for Productive Results: A Critical Leadership Skill.
ERIC Educational Resources Information Center
Simerly, Robert G.
1998-01-01
Describes sources of conflict in organizations and five effective management strategies: identify points of view, let parties articulate what they want, buy time, attempt negotiation, and ask parties to agree to arbitration. Provides a conflict management analysis sheet. (SK)
Distributed optimization system and method
Hurtado, John E.; Dohrmann, Clark R.; Robinett, III, Rush D.
2003-06-10
A search system and method for controlling multiple agents to optimize an objective using distributed sensing and cooperative control. The search agent can be one or more physical agents, such as a robot, and can be software agents for searching cyberspace. The objective can be: chemical sources, temperature sources, radiation sources, light sources, evaders, trespassers, explosive sources, time dependent sources, time independent sources, function surfaces, maximization points, minimization points, and optimal control of a system such as a communication system, an economy, a crane, and a multi-processor computer.
Distributed Optimization System
Hurtado, John E.; Dohrmann, Clark R.; Robinett, III, Rush D.
2004-11-30
A search system and method for controlling multiple agents to optimize an objective using distributed sensing and cooperative control. The search agent can be one or more physical agents, such as a robot, and can be software agents for searching cyberspace. The objective can be: chemical sources, temperature sources, radiation sources, light sources, evaders, trespassers, explosive sources, time dependent sources, time independent sources, function surfaces, maximization points, minimization points, and optimal control of a system such as a communication system, an economy, a crane, and a multi-processor computer.
Subdiffraction incoherent optical imaging via spatial-mode demultiplexing: Semiclassical treatment
NASA Astrophysics Data System (ADS)
Tsang, Mankei
2018-02-01
I present a semiclassical analysis of a spatial-mode demultiplexing (SPADE) measurement scheme for far-field incoherent optical imaging under the effects of diffraction and photon shot noise. Building on previous results that assume two point sources or the Gaussian point-spread function, I generalize SPADE for a larger class of point-spread functions and evaluate its errors in estimating the moments of an arbitrary subdiffraction object. Compared with the limits to direct imaging set by the Cramér-Rao bounds, the results show that SPADE can offer far superior accuracy in estimating second- and higher-order moments.
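A toy semiclassical sketch of the idea, under the Gaussian-PSF assumption used as the baseline case above: photons from a point emitter offset by x from the alignment axis excite the q-th Hermite-Gaussian mode with Poisson weights exp(-Q) Q^q / q!, Q = x^2/(4 sigma^2), so the fraction of photons counted in the q = 1 mode estimates the object's second moment. The small-offset inversion and all numbers are illustrative, not the paper's general treatment.

```python
# Toy semiclassical illustration: estimate the second moment of a subdiffraction
# object from photon counts in the first Hermite-Gaussian (HG) mode, assuming a
# Gaussian PSF of width sigma and Poissonian mode weights with mean Q = x^2/(4 sigma^2).
import numpy as np

rng = np.random.default_rng(2)
sigma = 1.0
offsets = rng.uniform(-0.2, 0.2, size=5)     # subdiffraction object: 5 point emitters
n_photons = 200_000                          # photons per emitter (shot-noise limited)

q1_counts = 0
for x in offsets:
    Q = x**2 / (4 * sigma**2)
    # probability that a photon lands in the q = 1 mode: P(1) = Q * exp(-Q)
    q1_counts += rng.binomial(n_photons, Q * np.exp(-Q))

total = n_photons * len(offsets)
second_moment_est = 4 * sigma**2 * q1_counts / total   # invert Q ~ <x^2>/(4 sigma^2)
print(second_moment_est, np.mean(offsets**2))          # estimate vs. true second moment
```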
Dorazio, Robert M.; Martin, Julien; Edwards, Holly H.
2013-01-01
The class of N-mixture models allows abundance to be estimated from repeated, point count surveys while adjusting for imperfect detection of individuals. We developed an extension of N-mixture models to account for two commonly observed phenomena in point count surveys: rarity and lack of independence induced by unmeasurable sources of variation in the detectability of individuals. Rarity increases the number of locations with zero detections in excess of those expected under simple models of abundance (e.g., Poisson or negative binomial). Correlated behavior of individuals and other phenomena, though difficult to measure, increases the variation in detection probabilities among surveys. Our extension of N-mixture models includes a hurdle model of abundance and a beta-binomial model of detectability that accounts for additional (extra-binomial) sources of variation in detections among surveys. As an illustration, we fit this model to repeated point counts of the West Indian manatee, which was observed in a pilot study using aerial surveys. Our extension of N-mixture models provides increased flexibility. The effects of different sets of covariates may be estimated for the probability of occurrence of a species, for its mean abundance at occupied locations, and for its detectability.
Dorazio, Robert M; Martin, Julien; Edwards, Holly H
2013-07-01
The class of N-mixture models allows abundance to be estimated from repeated, point count surveys while adjusting for imperfect detection of individuals. We developed an extension of N-mixture models to account for two commonly observed phenomena in point count surveys: rarity and lack of independence induced by unmeasurable sources of variation in the detectability of individuals. Rarity increases the number of locations with zero detections in excess of those expected under simple models of abundance (e.g., Poisson or negative binomial). Correlated behavior of individuals and other phenomena, though difficult to measure, increases the variation in detection probabilities among surveys. Our extension of N-mixture models includes a hurdle model of abundance and a beta-binomial model of detectability that accounts for additional (extra-binomial) sources of variation in detections among surveys. As an illustration, we fit this model to repeated point counts of the West Indian manatee, which was observed in a pilot study using aerial surveys. Our extension of N-mixture models provides increased flexibility. The effects of different sets of covariates may be estimated for the probability of occurrence of a species, for its mean abundance at occupied locations, and for its detectability.
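A rough illustrative sketch (not the authors' implementation) of the per-site likelihood such a model implies: a hurdle (occupancy) component, a zero-truncated Poisson for abundance at occupied sites, and beta-binomial detection to absorb extra-binomial variation. The truncation of the abundance sum and the parameterization of the beta-binomial by a mean detection probability and an overdispersion parameter are assumptions of this sketch.

```python
# Illustrative per-site likelihood for a hurdle N-mixture model with beta-binomial
# detection, marginalizing abundance up to a finite upper bound (sketch only).
import numpy as np
from scipy.stats import poisson, betabinom

def site_likelihood(counts, psi, lam, p, rho, n_max=100):
    """counts: repeated point counts at one site.
    psi: probability the site is occupied (hurdle part).
    lam: mean abundance at occupied sites (zero-truncated Poisson).
    p, rho: mean detection probability and overdispersion, reparameterized as
            alpha = p*(1-rho)/rho, beta = (1-p)*(1-rho)/rho."""
    a = p * (1 - rho) / rho
    b = (1 - p) * (1 - rho) / rho
    lik_unocc = (1 - psi) * float(all(c == 0 for c in counts))
    lik_occ = 0.0
    for n in range(max(max(counts), 1), n_max + 1):
        pr_n = poisson.pmf(n, lam) / (1 - poisson.pmf(0, lam))   # zero-truncated abundance
        pr_y = np.prod([betabinom.pmf(c, n, a, b) for c in counts])
        lik_occ += pr_n * pr_y
    return lik_unocc + psi * lik_occ

print(site_likelihood([2, 0, 3], psi=0.6, lam=4.0, p=0.4, rho=0.2))
```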
Modeling diffuse phosphorus emissions to assist in best management practice designing
NASA Astrophysics Data System (ADS)
Kovacs, Adam; Zessner, Matthias; Honti, Mark; Clement, Adrienne
2010-05-01
A diffuse emission modeling tool has been developed which is appropriate to support decision-making in watershed management. The PhosFate (Phosphorus Fate) tool allows planning best management practices (BMPs) in catchments and simulating their possible impacts on phosphorus (P) loads. PhosFate is a simple fate model to calculate diffuse P emissions and their transport within a catchment. The model is a semi-empirical, catchment-scale, distributed-parameter and long-term (annual) average model. It has two main parts: (a) the emission and (b) the transport model. The main input data of the model are digital maps (elevation, soil types and land use categories), statistical data (crop yields, animal numbers, fertilizer amounts and precipitation distribution) and point information (precipitation, meteorology, soil humus content, point source emissions and reservoir data). The emission model calculates the diffuse P emissions at their source. It computes the basic elements of the hydrology as well as the soil loss. The model determines the accumulated P surplus of the topsoil and distinguishes the dissolved and the particulate P forms. Emissions are calculated according to the different pathways (surface runoff, erosion and leaching). The main outputs are the spatial distribution (cell values) of the runoff components, the soil loss and the P emissions within the catchment. The transport model joins the independent cells based on the flow tree and follows the further fate of the emitted P from each cell to the catchment outlets. Surface runoff and P fluxes are accumulated along the tree, and the field and in-stream retention of the particulate forms are computed. In the case of base flow and subsurface P loads, only the channel transport is taken into account, due to the poorly known hydrogeological conditions. During the channel transport, point sources and reservoirs are also considered. The main results of the transport algorithm are the discharge, dissolved and sediment-bound P load values at any arbitrary point within the catchment. Finally, a simple design procedure has been built up to plan BMPs in the catchments, simulate their possible impacts on diffuse P fluxes and calculate their approximate costs. Both source and transport controlling measures have been involved in the planning procedure. The model also allows examining the impacts of changes in fertilizer application, point source emissions and climate on the river loads. Besides this, a simple optimization algorithm has been developed to select the most effective source areas (real hot spots), which should be targeted by the interventions. The fate model performed well in Hungarian pilot catchments. Using the calibrated and validated model, different management scenarios were worked out and their effects and costs evaluated and compared to each other. The results show that the approach is suitable to effectively design BMP measures at the local scale. Combined application of the source and transport controlling BMPs can result in high P reduction efficiency. Optimization of the interventions can remarkably reduce the area demand of the necessary BMPs, and consequently the establishment costs can be decreased. The model can be coupled with a larger-scale catchment model to form a "screening and planning" modeling system.
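A minimal sketch of the transport step described above, assuming only a flow tree (each cell drains to one downstream cell) and a per-cell retention fraction; the hypothetical route_loads helper and the toy retention values stand in for PhosFate's actual field and in-stream retention formulations.

```python
# Minimal sketch: route cell emissions down a flow tree with per-cell retention.
# 'downstream' maps each cell to the cell it drains to (None = catchment outlet).
from collections import deque

def route_loads(emission, downstream, retention):
    # topological (upstream-first) ordering of the flow tree
    indeg = {c: 0 for c in emission}
    for c, d in downstream.items():
        if d is not None:
            indeg[d] += 1
    queue = deque(c for c in emission if indeg[c] == 0)
    load = {c: 0.0 for c in emission}
    while queue:
        c = queue.popleft()
        load[c] += emission[c]
        d = downstream[c]
        if d is not None:
            load[d] += load[c] * (1.0 - retention[c])   # pass on the non-retained fraction
            indeg[d] -= 1
            if indeg[d] == 0:
                queue.append(d)
    return load

# Toy catchment: cells 1 and 2 drain into cell 3, which drains to the outlet (None).
emission   = {1: 2.0, 2: 1.0, 3: 0.5}
downstream = {1: 3, 2: 3, 3: None}
retention  = {1: 0.3, 2: 0.3, 3: 0.1}
print(route_loads(emission, downstream, retention))      # accumulated load at each cell
```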
Finite Element modelling of deformation induced by interacting volcanic sources
NASA Astrophysics Data System (ADS)
Pascal, Karen; Neuberg, Jürgen; Rivalta, Eleonora
2010-05-01
The displacement field due to magma movements in the subsurface is commonly modelled using the solutions for a point source (Mogi, 1958), a finite spherical source (McTigue, 1987), or a dislocation source (Okada, 1992) embedded in a homogeneous elastic half-space. When the magmatic system comprises more than one source, the assumption of homogeneity in the half-space is violated when several sources are combined and their respective deformation fields summed. We have investigated the effects of neglecting the interaction between sources on the surface deformation field. To do so, we calculated the vertical and horizontal displacements for models with adjacent sources and tested them against the solutions of corresponding numerical 3D finite element models. We implemented several models combining spherical pressure sources and dislocation sources, varying their relative position. Furthermore, we considered the impact of topography, loading, and magma compressibility. To quantify the discrepancies and compare the various models, we calculated the difference between analytical and numerical maximum horizontal or vertical surface displacements. We will demonstrate that under certain conditions combining analytical sources can cause an error of up to 20%. References: McTigue, D. F. (1987), Elastic Stress and Deformation Near a Finite Spherical Magma Body: Resolution of the Point Source Paradox, J. Geophys. Res. 92, 12931-12940. Mogi, K. (1958), Relations between the eruptions of various volcanoes and the deformations of the ground surfaces around them, Bull. Earthquake Res. Inst., Univ. Tokyo 36, 99-134. Okada, Y. (1992), Internal Deformation Due to Shear and Tensile Faults in a Half-Space, Bulletin of the Seismological Society of America 82(2), 1018-1040.
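For reference, a short sketch of the analytical benchmark being compared against: the standard Mogi point-source surface displacements for a pressurized spherical source at depth in a homogeneous elastic half-space (textbook form; parameter values are placeholders, and this is not the authors' finite element setup).

```python
# Surface displacements of a Mogi (1958) point pressure source at depth d,
# radius a, pressure change dP, in a homogeneous elastic half-space.
# Standard textbook form, shown only as the analytical benchmark that finite
# element models of interacting sources are tested against; values are placeholders.
import numpy as np

def mogi_displacements(r, d, a, dP, mu=30e9, nu=0.25):
    """r: radial distance(s) from the source axis at the free surface [m]."""
    R3 = (r**2 + d**2) ** 1.5
    c = (1.0 - nu) * dP * a**3 / mu
    uz = c * d / R3        # vertical displacement (uplift)
    ur = c * r / R3        # horizontal (radial) displacement
    return ur, uz

r = np.linspace(0.0, 10e3, 5)                       # profile out to 10 km
ur, uz = mogi_displacements(r, d=3e3, a=500.0, dP=10e6)
print(np.round(uz * 1e3, 2))                        # uplift in mm
```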
NASA Technical Reports Server (NTRS)
Panda, Jayanta; Seasholtz, Richard G.; Elam, Kristie A.
2002-01-01
To locate noise sources in high-speed jets, the sound pressure fluctuations p', measured at far-field locations, were correlated with each of the radial velocity v, density rho, and rho v(exp 2) fluctuations measured at various points in the jet plumes. The experiments follow the cause-and-effect method of sound source identification, where
Area Source Emission Measurements Using EPA OTM 10
Measurement of air pollutant emissions from area and non-point sources is an emerging environmental concern. Due to the spatial extent and non-homogenous nature of these sources, assessment of fugitive emissions using point sampling techniques can be difficult. To help address th...
BACTERIA SOURCE TRACKING AND HOST SPECIES SPECIFICITY ANALYSIS
Point and non-point pollution sources of fecal pollution on a watershed adversely impact the quality of drinking source waters and recreational waters. States are required to develop total maximum daily loads (TMDLs) and devise best management practices (BMPs) to reduce the pollu...
NASA Astrophysics Data System (ADS)
Scordo, A.; Curceanu, C.; Miliucci, M.; Shi, H.; Sirghi, F.; Zmeskal, J.
2018-04-01
Bragg spectroscopy is one of the best established experimental methods for high energy resolution X-ray measurements and has been widely used in several fields, ranging from fundamental physics and quantum mechanics tests to synchrotron radiation and X-FEL applications, astronomy, medicine and industry. However, this technique is limited to the measurement of photons produced by well collimated or point-like sources and becomes quite inefficient for photons coming from extended and diffused sources like those, for example, emitted in the radiative transitions of exotic atoms. The goal of the VOXES project is to realise a prototype of a high resolution and high precision X-ray spectrometer, using Highly Annealed Pyrolytic Graphite (HAPG) crystals in the Von Hamos configuration, that also works for extended sources. The aim is to deliver a cost-effective system having an energy resolution at the eV level for X-ray energies from about 2 keV up to tens of keV, able to perform sub-eV precision measurements with non point-like sources. In this paper, the working principle of VOXES, together with first results, is presented.
Periodic diffraction correlation imaging without a beam-splitter.
Li, Hu; Chen, Zhipeng; Xiong, Jin; Zeng, Guihua
2012-01-30
In this paper, we propose and demonstrate a new correlation imaging mechanism based on the periodic diffraction effect. In this effect, a periodic intensity pattern is generated at the output surface of a periodic point-source array. This novel correlation imaging mechanism can realize super-resolution imaging, Nth-order ghost imaging without a beam-splitter, and correlation microscopy.
Modeling the contribution of point sources and non-point sources to Thachin River water pollution.
Schaffner, Monika; Bader, Hans-Peter; Scheidegger, Ruth
2009-08-15
Major rivers in developing and emerging countries suffer increasingly from severe degradation of water quality. The current study uses a mathematical Material Flow Analysis (MMFA) as a complementary approach to address the degradation of river water quality due to nutrient pollution in the Thachin River Basin in Central Thailand. This paper gives an overview of the origins and flow paths of the various point and non-point pollution sources in the Thachin River Basin (in terms of nitrogen and phosphorus) and quantifies their relative importance within the system. The key parameters influencing the main nutrient flows are determined and possible mitigation measures discussed. The results show that aquaculture (as a point source) and rice farming (as a non-point source) are the key nutrient sources in the Thachin River Basin. Other point sources such as pig farms, households and industries, which were previously cited as the most relevant pollution sources in terms of organic pollution, play less significant roles in comparison. This order of importance shifts when considering the model results at the provincial level. Crosschecks with secondary data and field studies confirm the plausibility of our simulations. Specific nutrient loads for the pollution sources are derived; these can be used for a first broad quantification of nutrient pollution in comparable river basins. Based on an identification of the sensitive model parameters, possible mitigation scenarios are determined and their potential to reduce the nutrient load evaluated. A comparison of simulated nutrient loads with measured nutrient concentrations shows that nutrient retention in the river system may be significant. Sedimentation in the slow-flowing surface water network as well as nitrogen emission to the air from the warm, oxygen-deficient waters are certainly partly responsible, but wetlands along the river banks could also play an important role as nutrient sinks.
NASA Technical Reports Server (NTRS)
Panda, Jayanta; Seasholtz, Richard G.
2003-01-01
Noise sources in high-speed jets were identified by directly correlating flow density fluctuation (cause) to far-field sound pressure fluctuation (effect). The experimental study was performed in a nozzle facility at the NASA Glenn Research Center in support of NASA's initiative to reduce the noise emitted by commercial airplanes. Previous efforts to use this correlation method have failed because the tools for measuring jet turbulence were intrusive. In the present experiment, a molecular Rayleigh-scattering technique was used that depends on laser light scattering by gas molecules in air. The technique allowed accurate measurement of air density fluctuations at different points in the plume. The study was conducted in shock-free, unheated jets of Mach numbers 0.95, 1.4, and 1.8. The turbulent motion, as evident from the density fluctuation spectra, was remarkably similar in all three jets, whereas the noise sources were significantly different. The correlation study was conducted by keeping a microphone at a fixed location (at the peak noise emission angle of 30° to the jet axis and 50 nozzle diameters away) while moving the laser probe volume from point to point in the flow. The following figure shows maps of the nondimensional coherence value measured at different Strouhal frequencies (frequency × diameter/jet speed) in the supersonic Mach 1.8 and subsonic Mach 0.95 jets. The higher the coherence, the stronger the source was.
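A small sketch of the underlying diagnostic, using synthetic stand-ins for the Rayleigh-scattering and microphone time series: the magnitude-squared coherence between a flow fluctuation (cause) and the far-field pressure (effect), computed from Welch-averaged spectra.

```python
# Sketch of the cause-and-effect diagnostic: magnitude-squared coherence between a
# flow fluctuation signal and the far-field pressure. Signals below are synthetic.
import numpy as np
from scipy.signal import coherence

fs = 100_000.0                                # sample rate [Hz]
t = np.arange(0, 1.0, 1.0 / fs)
rng = np.random.default_rng(3)
common = rng.normal(size=t.size)              # shared "source" component
rho = common + 0.5 * rng.normal(size=t.size)                   # density-like signal
p_far = np.roll(common, 250) + 2.0 * rng.normal(size=t.size)   # delayed, noisier pressure

f, Cxy = coherence(rho, p_far, fs=fs, nperseg=4096)
print(f[np.argmax(Cxy)], Cxy.max())           # frequency of peak coherence and its value
```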
Triangulation in aetiological epidemiology
Lawlor, Debbie A; Tilling, Kate; Davey Smith, George
2016-01-01
Triangulation is the practice of obtaining more reliable answers to research questions through integrating results from several different approaches, where each approach has different key sources of potential bias that are unrelated to each other. With respect to causal questions in aetiological epidemiology, if the results of different approaches all point to the same conclusion, this strengthens confidence in the finding. This is particularly the case when the key sources of bias of some of the approaches would predict that findings would point in opposite directions if they were due to such biases. Where there are inconsistencies, understanding the key sources of bias of each approach can help to identify what further research is required to address the causal question. The aim of this paper is to illustrate how triangulation might be used to improve causal inference in aetiological epidemiology. We propose a minimum set of criteria for use in triangulation in aetiological epidemiology, summarize the key sources of bias of several approaches and describe how these might be integrated within a triangulation framework. We emphasize the importance of being explicit about the expected direction of bias within each approach, whenever this is possible, and seeking to identify approaches that would be expected to bias the true causal effect in different directions. We also note the importance, when comparing results, of taking account of differences in the duration and timing of exposures. We provide three examples to illustrate these points. PMID:28108528
Investigation of Calcium Sulfate’s Contribution to Chemical Off Flavor in Baked Items
2013-09-30
...studies if any calcium additive is needed. If shelf life and texture are not adversely affected, it may prove to be a cost savings to eliminate... point Quality scale to assess the overall aroma and flavor quality. The 9-point Quality scale is based on the Hedonic scale developed by David Peryam and
Holas, J; Hrncir, M
2002-01-01
An agricultural watershed involves manipulation of soil, water and other natural resources, and it has profound impacts on ecosystems. To manage these complex issues, we must understand causes, consequences and interactions related to the transport of pollutants, the quality of the environment, mitigation measures and policy measures. A ten-year period of economic changes has been analysed with respect to sustainable development concerning the Zelivka drinking water reservoir and its watershed, where agriculture and forestry are the main human activities. It is recommended that all land users within a catchment area should receive payments for their contribution to water cycle management. Setting up prevention principles and best management practices financially subsidized by a local water company has been found very effective in both point and non-point source pollution abatement, and the newly prepared Clean Water Programme actively involves local municipal authorities as well. The first step, based on systems analysis, was to propose effective strategies and select alternative measures and ways of financing them. Long-term monitoring of nutrient loads entering the reservoir and hazardous events statistics resulted in maps characterising the territory, including vulnerable zones and risk factors. Financing involves providing annual payments to farmers who undertake to manage specified areas of their land in a particular way, and one-off payments to implement proposed measures ensuring soil conservation and watershed ecosystem benefits.
NASA Astrophysics Data System (ADS)
Salha, A. A.; Stevens, D. K.
2015-12-01
Distributed watershed models are essential for quantifying sediment and nutrient loads that originate from point and nonpoint sources. Such models are a primary means of generating pollutant estimates in ungaged watersheds and respond well at watershed scales by capturing the variability in soils, climatic conditions, land uses/covers and management conditions over extended periods of time. This effort evaluates the performance of the Soil and Water Assessment Tool (SWAT) model as a watershed-level tool to investigate, manage, and characterize the transport and fate of nutrients in the Lower Bear Malad River (LBMR) watershed (Subbasin HUC 16010204) in Utah. Water quality concerns have been documented and are primarily attributed to high phosphorus and total suspended sediment concentrations caused by agricultural and farming practices, along with identified point sources (WWTPs). Input data such as a Digital Elevation Model (DEM), land use/land cover (LULC), soils, and climate data for 10 years (2000-2010) are used to quantify the LBMR streamflow. Such modeling is useful in developing required water quality regulations such as Total Maximum Daily Loads (TMDLs). Measured nutrient concentrations were closely captured by simulated monthly nutrient concentrations based on the R2 and Nash-Sutcliffe fitness criteria. The model is expected to be able to identify contaminant non-point sources, identify areas of high pollution risk, locate optimal monitoring sites, and evaluate best management practices to cost-effectively reduce pollution and improve water quality as required by the LBMR watershed's TMDL.
Super-Resolution Imagery by Frequency Sweeping.
1980-08-15
IMAGE RETRIEVAL The above considerations of multiwavelength holography have led us to determining a means by which the 3-D Fourier space of the...it at a distant bright point source. The point source used need not be derived from a laser. In fact, it is preferable for safety purposes to use an LED ...noise and therefore higher reconstructed image quality can be attained by using non-laser point sources in the reconstruction, such as an LED or miniature
Improving the seismic small-scale modelling by comparison with numerical methods
NASA Astrophysics Data System (ADS)
Pageot, Damien; Leparoux, Donatienne; Le Feuvre, Mathieu; Durand, Olivier; Côte, Philippe; Capdeville, Yann
2017-10-01
The potential of experimental seismic modelling at reduced scale provides an intermediate step between numerical tests and geophysical campaigns on field sites. Recent technologies such as laser interferometers offer the opportunity to acquire data without any coupling effects. This kind of device is used in the Mesures Ultrasonores Sans Contact (MUSC) measurement bench, for which an automated support system makes it possible to generate multisource and multireceiver seismic data at laboratory scale. Experimental seismic modelling would become a great tool providing a value-added stage in the imaging process validation if (1) the experimental measurement chain is perfectly mastered, and thus if the experimental data are perfectly reproducible with a numerical tool, and if (2) the effective source is reproducible along the measurement setup. These two aspects of a quantitative validation, for devices with piezoelectric sources and a laser interferometer, have not yet been quantitatively studied in published work. Thus, as a new stage for the experimental modelling approach, these two key issues are tackled in the proposed paper in order to precisely define the quality of the experimental small-scale data provided by the MUSC bench, which are available to the scientific community. These two steps of quantitative validation are dealt with apart from any imaging technique, in order to offer geophysicists who want to use such data (delivered as free data) the opportunity to know their quality precisely before testing any imaging technique. First, in order to avoid the 2-D-3-D correction usually applied in seismic processing when comparing 2-D numerical data with 3-D experimental measurements, we quantitatively refined the comparison between numerical and experimental data by generating accurate experimental line sources, avoiding the need for a geometrical spreading correction of the 3-D point-source data. The comparison with 2-D and 3-D numerical modelling is based on the Spectral Element Method. The approach shows the relevance of building a line source by sampling several source points, except for boundary effects at later arrival times. Indeed, the experimental results reproduce the amplitude behaviour and the π/4 phase delay of a line source in the same manner as the numerical data. In contrast, the 2-D corrections applied to the 3-D data showed discrepancies that are higher for experimental data than for numerical ones, due to the source wavelet shape and interferences between different arrivals. The experimental results from the approach proposed here show that these discrepancies are avoided, especially for the reflected echoes. Concerning the second point, which aims to assess the experimental reproducibility of the source, correlation coefficients of recordings from a repeated source impact on a homogeneous model are calculated. The quality of the results, higher than 0.98, allows a mean source wavelet to be calculated by inversion of a mean data set. Results obtained on a more realistic model simulating clays over limestones confirmed the reproducibility of the source impact.
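A brief numerical sketch of the line-source idea mentioned above (illustrative parameters only): summing sampled 3-D point-source Green's functions exp(ikR)/(4πR) along a line approaches the 2-D line-source response (i/4)H0^(1)(kr), which carries the π/4 phase shift and the weaker geometrical spreading, so no 2-D/3-D spreading correction is needed.

```python
# Sketch: a line source built by summing sampled 3-D point sources approaches the
# 2-D line-source Green's function, the idea used to avoid 2-D/3-D spreading corrections.
import numpy as np
from scipy.special import hankel1

k = 2 * np.pi / 50.0                    # wavenumber for a 50 m wavelength
r = 200.0                               # receiver offset from the line [m]
y = np.arange(-2000.0, 2000.0, 5.0)     # source points sampled along the line, dy = 5 m
R = np.sqrt(r**2 + y**2)

g_sum = np.sum(np.exp(1j * k * R) / (4 * np.pi * R)) * 5.0   # discrete line integral
g_2d = 1j / 4.0 * hankel1(0, k * r)                          # exact 2-D Green's function
print(abs(g_sum) / abs(g_2d), np.angle(g_sum) - np.angle(g_2d))  # ratio ~1, phase diff ~0
```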
The VLITE Post-Processing Pipeline
NASA Astrophysics Data System (ADS)
Richards, Emily E.; Clarke, Tracy; Peters, Wendy; Polisensky, Emil; Kassim, Namir E.
2018-01-01
A post-processing pipeline to adaptively extract and catalog point sources is being developed to enhance the scientific value and accessibility of data products generated by the VLA Low-band Ionosphere and Transient Experiment (VLITE;
NASA Astrophysics Data System (ADS)
Stark, Dominic; Launet, Barthelemy; Schawinski, Kevin; Zhang, Ce; Koss, Michael; Turp, M. Dennis; Sartori, Lia F.; Zhang, Hantian; Chen, Yiru; Weigel, Anna K.
2018-06-01
The study of unobscured active galactic nuclei (AGN) and quasars depends on the reliable decomposition of the light from the AGN point source and the extended host galaxy light. The problem is typically approached using parametric fitting routines using separate models for the host galaxy and the point spread function (PSF). We present a new approach using a Generative Adversarial Network (GAN) trained on galaxy images. We test the method using Sloan Digital Sky Survey r-band images with artificial AGN point sources added that are then removed using the GAN and with parametric methods using GALFIT. When the AGN point source is more than twice as bright as the host galaxy, we find that our method, PSFGAN, can recover point source and host galaxy magnitudes with smaller systematic error and a lower average scatter (49 per cent). PSFGAN is more tolerant to poor knowledge of the PSF than parametric methods. Our tests show that PSFGAN is robust against a broadening in the PSF width of ± 50 per cent if it is trained on multiple PSFs. We demonstrate that while a matched training set does improve performance, we can still subtract point sources using a PSFGAN trained on non-astronomical images. While initial training is computationally expensive, evaluating PSFGAN on data is more than 40 times faster than GALFIT fitting two components. Finally, PSFGAN is more robust and easy to use than parametric methods as it requires no input parameters.
Automated Mounting Bias Calibration for Airborne LIDAR System
NASA Astrophysics Data System (ADS)
Zhang, J.; Jiang, W.; Jiang, S.
2012-07-01
Mounting bias is the major error source of an airborne LIDAR system. In this paper, an automated calibration method for estimating LIDAR system mounting parameters is introduced. The LIDAR direct geo-referencing model is used to calculate systematic errors. Because LIDAR footprints are discretely sampled, truly corresponding laser points hardly exist between different strips, so the traditional corresponding-point methodology does not apply to LIDAR strip registration. We propose a Virtual Corresponding Point Model (VCPM) to resolve the correspondence problem among discrete laser points. Each VCPM contains a corresponding point and three real laser footprints. Two rules are defined to calculate tie point coordinates from real laser footprints. The Scale Invariant Feature Transform (SIFT) is used to extract corresponding points in LIDAR strips, and the automatic workflow of LIDAR system calibration based on the VCPM is described in detail. Practical examples illustrate the feasibility and effectiveness of the proposed calibration method.
Study on Huizhou architecture of point cloud registration based on optimized ICP algorithm
NASA Astrophysics Data System (ADS)
Zhang, Runmei; Wu, Yulu; Zhang, Guangbin; Zhou, Wei; Tao, Yuqian
2018-03-01
Current point cloud registration software has high hardware requirements and a heavy workload, requires extensive interactive definition, and the source code of packages with better processing results is not open. In view of this, a two-step registration method based on normal vector distribution features and a coarse-feature-based iterative closest point (ICP) algorithm is proposed in this paper. The method combines the fast point feature histogram (FPFH) algorithm with the definition of the adjacency region of the point cloud and a calculation model of the distribution of normal vectors; a local coordinate system is set up for each key point, and the transformation matrix is obtained to finish the rough registration. The rough registration results of the two stations are then accurately registered using the ICP algorithm. Experimental results show that, compared with the traditional ICP algorithm, the method used in this paper has obvious time and precision advantages for large point clouds.
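A minimal point-to-point ICP sketch for the fine-registration stage only (nearest neighbours via a k-d tree, rigid transform via SVD); the FPFH/normal-vector coarse alignment described in the paper is not reproduced, and the clouds are assumed to start roughly aligned.

```python
# Minimal point-to-point ICP (fine registration only); coarse alignment assumed done.
import numpy as np
from scipy.spatial import cKDTree

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst (Kabsch)."""
    cs, cd = src.mean(0), dst.mean(0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:               # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cd - R @ cs

def icp(src, dst, n_iter=30):
    tree = cKDTree(dst)
    cur = src.copy()
    for _ in range(n_iter):
        _, idx = tree.query(cur)           # nearest neighbours as correspondences
        R, t = best_rigid_transform(cur, dst[idx])
        cur = cur @ R.T + t
    return cur

rng = np.random.default_rng(4)
dst = rng.normal(size=(500, 3))
theta = 0.1                                # small known rotation about z, plus a shift
Rz = np.array([[np.cos(theta), -np.sin(theta), 0],
               [np.sin(theta),  np.cos(theta), 0],
               [0.0, 0.0, 1.0]])
src = dst @ Rz.T + np.array([0.05, -0.02, 0.01])
aligned = icp(src, dst)
print(np.abs(aligned - dst).mean())        # mean residual after alignment
```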
Interventions to improve water quality for preventing diarrhoea
Clasen, Thomas F; Alexander, Kelly T; Sinclair, David; Boisson, Sophie; Peletz, Rachel; Chang, Howard H; Majorin, Fiona; Cairncross, Sandy
2015-01-01
Background Diarrhoea is a major cause of death and disease, especially among young children in low-income countries. In these settings, many infectious agents associated with diarrhoea are spread through water contaminated with faeces. In remote and low-income settings, source-based water quality improvement includes providing protected groundwater (springs, wells, and bore holes), or harvested rainwater as an alternative to surface sources (rivers and lakes). Point-of-use water quality improvement interventions include boiling, chlorination, flocculation, filtration, or solar disinfection, mainly conducted at home. Objectives To assess the effectiveness of interventions to improve water quality for preventing diarrhoea. Search methods We searched the Cochrane Infectious Diseases Group Specialized Register (11 November 2014), CENTRAL (the Cochrane Library, 7 November 2014), MEDLINE (1966 to 10 November 2014), EMBASE (1974 to 10 November 2014), and LILACS (1982 to 7 November 2014). We also handsearched relevant conference proceedings, contacted researchers and organizations working in the field, and checked references from identified studies through 11 November 2014. Selection criteria Randomized controlled trials (RCTs), quasi-RCTs, and controlled before-and-after studies (CBA) comparing interventions aimed at improving the microbiological quality of drinking water with no intervention in children and adults. Data collection and analysis Two review authors independently assessed trial quality and extracted data. We used meta-analyses to estimate pooled measures of effect, where appropriate, and investigated potential sources of heterogeneity using subgroup analyses. We assessed the quality of evidence using the GRADE approach. Main results Forty-five cluster-RCTs, two quasi-RCTs, and eight CBA studies, including over 84,000 participants, met the inclusion criteria. Most included studies were conducted in low- or middle-income countries (LMICs) (50 studies) with unimproved water sources (30 studies) and unimproved or unclear sanitation (34 studies). The primary outcome in most studies was self-reported diarrhoea, which is at high risk of bias due to the lack of blinding in over 80% of the included studies. Source-based water quality improvements There is currently insufficient evidence to know if source-based improvements such as protected wells, communal tap stands, or chlorination/filtration of community sources consistently reduce diarrhoea (one cluster-RCT, five CBA studies, very low quality evidence). We found no studies evaluating reliable piped-in water supplies delivered to households. Point-of-use water quality interventions On average, distributing water disinfection products for use at the household level may reduce diarrhoea by around one quarter (Home chlorination products: RR 0.77, 95% CI 0.65 to 0.91; 14 trials, 30,746 participants, low quality evidence; flocculation and disinfection sachets: RR 0.69, 95% CI 0.58 to 0.82, four trials, 11,788 participants, moderate quality evidence). However, there was substantial heterogeneity in the size of the effect estimates between individual studies. Point-of-use filtration systems probably reduce diarrhoea by around a half (RR 0.48, 95% CI 0.38 to 0.59, 18 trials, 15,582 participants, moderate quality evidence). 
Important reductions in diarrhoea episodes were shown with ceramic filters, biosand systems and LifeStraw® filters; (Ceramic: RR 0.39, 95% CI 0.28 to 0.53; eight trials, 5763 participants, moderate quality evidence; Biosand: RR 0.47, 95% CI 0.39 to 0.57; four trials, 5504 participants, moderate quality evidence; LifeStraw®: RR 0.69, 95% CI 0.51 to 0.93; three trials, 3259 participants, low quality evidence). Plumbed in filters have only been evaluated in high-income settings (RR 0.81, 95% CI 0.71 to 0.94, three trials, 1056 participants, fixed effects model). In low-income settings, solar water disinfection (SODIS) by distribution of plastic bottles with instructions to leave filled bottles in direct sunlight for at least six hours before drinking probably reduces diarrhoea by around a third (RR 0.62, 95% CI 0.42 to 0.94; four trials, 3460 participants, moderate quality evidence). In subgroup analyses, larger effects were seen in trials with higher adherence, and trials that provided a safe storage container. In most cases, the reduction in diarrhoea shown in the studies was evident in settings with improved and unimproved water sources and sanitation. Authors' conclusions Interventions that address the microbial contamination of water at the point-of-use may be important interim measures to improve drinking water quality until homes can be reached with safe, reliable, piped-in water connections. The average estimates of effect for each individual point-of-use intervention generally show important effects. Comparisons between these estimates do not provide evidence of superiority of one intervention over another, as such comparisons are confounded by the study setting, design, and population. Further studies assessing the effects of household connections and chlorination at the point of delivery will help improve our knowledge base. As evidence suggests effectiveness improves with adherence, studies assessing programmatic approaches to optimising coverage and long-term utilization of these interventions among vulnerable populations could also help strategies to improve health outcomes. PLAIN LANGUAGE SUMMARY Interventions to improve water quality and prevent diarrhoea This Cochrane Review summarizes trials evaluating different interventions to improve water quality and prevent diarrhoea. After searching for relevant trials up to 11 November 2014, we included 55 studies enrolling over 84,000 participants. Most included studies were conducted in low- or middle-income countries (LMICs) (50 studies), with unimproved water sources (30 studies), and unimproved or unclear sanitation (34 studies). What causes diarrhoea and what water quality interventions might prevent diarrhoea? Diarrhoea is a major cause of death and disease, especially among young children in low-income countries where the most common causes are faecally contaminated water and food, or poor hygiene practices. In remote and low-income settings, source-based water quality improvement may include providing protected groundwater (springs, wells, and bore holes) or harvested rainwater as an alternative to surface sources (rivers and lakes). Alternatively water may be treated at the point-of-use in people's homes by boiling, chlorination, flocculation, filtration, or solar disinfection. These point-of-use interventions have the potential to overcome both contaminated sources and recontamination of safe water in the home. 
What the research says There is currently insufficient evidence to know if source-based improvements in water supplies, such as protected wells and communal tap stands or treatment of communal supplies, consistently reduce diarrhoea in low-income settings (very low quality evidence). We found no trials evaluating reliable piped-in water supplies to people's homes. On average, distributing disinfection products for use in the home may reduce diarrhoea by around one quarter in the case of chlorine products (low quality evidence), and around a third in the case of flocculation and disinfection sachets (moderate quality evidence). Water filtration at home probably reduces diarrhoea by around a half (moderate quality evidence), and effects were consistently seen with ceramic filters (moderate quality evidence), biosand systems (moderate quality evidence) and LifeStraw® filters (low quality evidence). Plumbed-in filtration has only been evaluated in high-income settings (low quality evidence). In low-income settings, distributing plastic bottles with instructions to leave filled bottles in direct sunlight for at least six hours before drinking probably reduces diarrhoea by around a third (moderate quality evidence). Research assessing the effects of household connections and chlorination at the point of delivery will help improve our knowledge base. Evidence indicates the more people use the various interventions for improving water quality, the larger the effects, so research into practical approaches to increase coverage and help assure long term use of them in poor groups will help improve impact. PMID:26488938
Field demonstration of foam injection to confine a chlorinated solvent source zone.
Portois, Clément; Essouayed, Elyess; Annable, Michael D; Guiserix, Nathalie; Joubert, Antoine; Atteia, Olivier
2018-05-01
A novel approach using foam to manage hazardous waste was successfully demonstrated under active site conditions. The purpose of the foam was to divert groundwater flow that would normally enter the source zone area, in order to reduce dissolved contaminant release to the aquifer. During the demonstration, foam was pre-generated and directly injected surrounding the chlorinated solvent source zone. Despite the constraints related to the industrial activities and the non-optimal position of the injection points, the applicability and effectiveness of the approach have been highlighted using multiple metrics. A combination of measurements and modelling allowed definition of the foam extent surrounding each injection point, and this appears to be the critical metric for defining the success of the foam injection approach. Information on the transport of chlorinated solvents in groundwater showed a decrease of contaminant flux by a factor of 4.4 downstream of the confined area. The effective permeability reduction was maintained over a period of three months. The successful containment provides evidence for consideration of the use of foam to improve traditional flushing techniques, by increasing the targeting of contaminants by remedial agents. Copyright © 2018 Elsevier B.V. All rights reserved.
DNA BASED MOLECULAR METHODS FOR BACTERIAL SOURCE TRACKING IN WATERSHEDS
Point and non-point pollution sources of fecal pollution on a watershed adversely impact the quality of drinking source waters and recreational waters. States are required to develop total maximum daily loads (TMDLs) and devise best management practices (BMPs) to reduce the po...
AIR TOXICS ASSESSMENT REFINEMENT IN RAPCA'S JURISDICTION - DAYTON, OH AREA
RAPCA has received two grants to conduct this project. As part of the original project, RAPCA improved and expanded its point source inventory by converting the following area sources to point sources: dry cleaners, gasoline throughput processes and halogenated solvent clea...
Effects of atmospheric variations on acoustic system performance
NASA Technical Reports Server (NTRS)
Nation, Robert; Lang, Stephen; Olsen, Robert; Chintawongvanich, Prasan
1993-01-01
Acoustic propagation over medium to long ranges in the atmosphere is subject to many complex, interacting effects. Of particular interest at this point is modeling low frequency (less than 500 Hz) propagation for the purpose of predicting ranges and bearing accuracies at which acoustic sources can be detected. A simple means of estimating how much of the received signal power propagated directly from the source to the receiver and how much was received by turbulent scattering was developed. The correlations between the propagation mechanism and detection thresholds, beamformer bearing estimation accuracies, and beamformer processing gain of passive acoustic signal detection systems were explored.
Probing dim point sources in the inner Milky Way using PCAT
NASA Astrophysics Data System (ADS)
Daylan, Tansu; Portillo, Stephen K. N.; Finkbeiner, Douglas P.
2017-01-01
Poisson regression of the Fermi-LAT data in the inner Milky Way reveals an extended gamma-ray excess. An important question is whether the signal is coming from a collection of unresolved point sources, possibly old recycled pulsars, or constitutes a truly diffuse emission component. Previous analyses have relied on non-Poissonian template fits or wavelet decomposition of the Fermi-LAT data, which find evidence for a population of dim point sources just below the 3FGL flux limit. In order to be able to draw conclusions about the flux distribution of point sources at the dim end, we employ a Bayesian trans-dimensional MCMC framework by taking samples from the space of catalogs consistent with the observed gamma-ray emission in the inner Milky Way. The software implementation, PCAT (Probabilistic Cataloger), is designed to efficiently explore that catalog space in the crowded field limit such as in the galactic plane, where the model PSF, point source positions and fluxes are highly degenerate. We thus generate fair realizations of the underlying MSP population in the inner galaxy and constrain the population characteristics such as the radial and flux distribution of such sources.
NASA Astrophysics Data System (ADS)
Williams, Benjamin F.; Wold, Brian; Haberl, Frank; Garofali, Kristen; Blair, William P.; Gaetz, Terrance J.; Kuntz, K. D.; Long, Knox S.; Pannuti, Thomas G.; Pietsch, Wolfgang; Plucinsky, Paul P.; Winkler, P. Frank
2015-05-01
We have obtained a deep 8-field XMM-Newton mosaic of M33 covering the galaxy out to the D25 isophote and beyond to a limiting 0.2-4.5 keV unabsorbed flux of 5 × 10^-16 erg cm^-2 s^-1 (L > 4 × 10^34 erg s^-1 at the distance of M33). These data allow complete coverage of the galaxy with high sensitivity to soft sources such as diffuse hot gas and supernova remnants (SNRs). Here, we describe the methods we used to identify and characterize 1296 point sources in the 8 fields. We compare our resulting source catalog to the literature, note variable sources, construct hardness ratios, classify soft sources, analyze the source density profile, and measure the X-ray luminosity function (XLF). As a result of the large effective area of XMM-Newton below 1 keV, the survey contains many new soft X-ray sources. The radial source density profile and XLF for the sources suggest that only ~15% of the 391 bright sources with L > 3.6 × 10^35 erg s^-1 are likely to be associated with M33, and more than a third of these are known SNRs. The log(N)-log(S) distribution, when corrected for background contamination, is a relatively flat power law with a differential index of 1.5, which suggests that many of the other M33 sources may be high-mass X-ray binaries. Finally, we note the discovery of an interesting new transient X-ray source, which we are unable to classify.
Transient Point Infiltration In The Unsaturated Zone
NASA Astrophysics Data System (ADS)
Buecker-Gittel, M.; Mohrlok, U.
The risk assessment of leaking sewer pipes is becoming increasingly important for urban groundwater management and for environmental as well as health safety. It requires the quantification and balancing of transport and transformation processes based on the water flow in the unsaturated zone. The water flow from a single sewer leak can be described as a point infiltration with time-varying hydraulic conditions, both external and internal. External variations are caused by the discharge in the sewer pipe as well as the state of the leak itself; internal variations result from microbiological clogging effects associated with the transformation processes. Technical-scale as well as small-scale laboratory experiments were conducted in order to investigate the water transport from a transient point infiltration. The technical-scale experiment gave evidence that the water flow takes place under transient conditions when sewage infiltrates into an unsaturated soil, whereas the small-scale experiments investigated in detail the hydraulics of the water transport and the associated solute and particle transport in unsaturated soils. The small-scale experiment was a two-dimensional representation of such a point infiltration source, where the distributed water transport could be measured by several tensiometers in the soil as well as by a selective measurement of the discharge at the bottom of the experimental setup. Several series of experiments were conducted, varying the boundary and initial conditions, in order to derive the important parameters controlling the infiltration of pure water from the point source. The results showed that there is a significant difference between the infiltration rate at the point source and the discharge rate at the bottom, which could be explained by storage processes due to an outflow resistance at the bottom. This effect is overlain by a water content that decreases over time, correlated with a decreasing infiltration rate. As expected, the initial conditions mainly affect the time scale of the water transport. Additionally, an influence of preferential flow paths on the discharge distribution could be identified, due to the heterogeneities caused by the filling and compaction process of the sandy soil.
Simulation of conservation practices using the APEX model
USDA-ARS?s Scientific Manuscript database
Information on agricultural Best Management Practices (BMPs) and their effectiveness in controlling agricultural non-point source pollution is crucial in developing Clean Water Act programs such as the Total Maximum Daily Loads for impaired watersheds. A modeling study was conducted to evaluate var...
Human disturbance alters key attributes of aquatic ecosystems such as water quality, habitat structure, hydrological regime, energy flow, and biological interactions. In great rivers, this is particularly evident because they are disproportionately degraded by habitat alteration...
Distributed-parameter watershed models are often utilized for evaluating the effectiveness of sediment and nutrient abatement strategies through the traditional {calibrate→ validate→ predict} approach. The applicability of the method is limited due to modeling approximations. In ...
Extending the Search for Neutrino Point Sources with IceCube above the Horizon
NASA Astrophysics Data System (ADS)
Abbasi, R.; Abdou, Y.; Abu-Zayyad, T.; Adams, J.; Aguilar, J. A.; Ahlers, M.; Andeen, K.; Auffenberg, J.; Bai, X.; Baker, M.; Barwick, S. W.; Bay, R.; Alba, J. L. Bazo; Beattie, K.; Beatty, J. J.; Bechet, S.; Becker, J. K.; Becker, K.-H.; Benabderrahmane, M. L.; Berdermann, J.; Berghaus, P.; Berley, D.; Bernardini, E.; Bertrand, D.; Besson, D. Z.; Bissok, M.; Blaufuss, E.; Boersma, D. J.; Bohm, C.; Botner, O.; Bradley, L.; Braun, J.; Breder, D.; Carson, M.; Castermans, T.; Chirkin, D.; Christy, B.; Clem, J.; Cohen, S.; Cowen, D. F.; D'Agostino, M. V.; Danninger, M.; Day, C. T.; de Clercq, C.; Demirörs, L.; Depaepe, O.; Descamps, F.; Desiati, P.; de Vries-Uiterweerd, G.; Deyoung, T.; Díaz-Vélez, J. C.; Dreyer, J.; Dumm, J. P.; Duvoort, M. R.; Edwards, W. R.; Ehrlich, R.; Eisch, J.; Ellsworth, R. W.; Engdegård, O.; Euler, S.; Evenson, P. A.; Fadiran, O.; Fazely, A. R.; Feusels, T.; Filimonov, K.; Finley, C.; Foerster, M. M.; Fox, B. D.; Franckowiak, A.; Franke, R.; Gaisser, T. K.; Gallagher, J.; Ganugapati, R.; Gerhardt, L.; Gladstone, L.; Goldschmidt, A.; Goodman, J. A.; Gozzini, R.; Grant, D.; Griesel, T.; Groß, A.; Grullon, S.; Gunasingha, R. M.; Gurtner, M.; Ha, C.; Hallgren, A.; Halzen, F.; Han, K.; Hanson, K.; Hasegawa, Y.; Helbing, K.; Herquet, P.; Hickford, S.; Hill, G. C.; Hoffman, K. D.; Homeier, A.; Hoshina, K.; Hubert, D.; Huelsnitz, W.; Hülß, J.-P.; Hulth, P. O.; Hultqvist, K.; Hussain, S.; Imlay, R. L.; Inaba, M.; Ishihara, A.; Jacobsen, J.; Japaridze, G. S.; Johansson, H.; Joseph, J. M.; Kampert, K.-H.; Kappes, A.; Karg, T.; Karle, A.; Kelley, J. L.; Kemming, N.; Kenny, P.; Kiryluk, J.; Kislat, F.; Klein, S. R.; Knops, S.; Kohnen, G.; Kolanoski, H.; Köpke, L.; Koskinen, D. J.; Kowalski, M.; Kowarik, T.; Krasberg, M.; Krings, T.; Kroll, G.; Kuehn, K.; Kuwabara, T.; Labare, M.; Lafebre, S.; Laihem, K.; Landsman, H.; Lauer, R.; Lehmann, R.; Lennarz, D.; Lundberg, J.; Lünemann, J.; Madsen, J.; Majumdar, P.; Maruyama, R.; Mase, K.; Matis, H. S.; McParland, C. P.; Meagher, K.; Merck, M.; Mészáros, P.; Meures, T.; Middell, E.; Milke, N.; Miyamoto, H.; Montaruli, T.; Morse, R.; Movit, S. M.; Nahnhauer, R.; Nam, J. W.; Nießen, P.; Nygren, D. R.; Odrowski, S.; Olivas, A.; Olivo, M.; Ono, M.; Panknin, S.; Patton, S.; Paul, L.; de Los Heros, C. Pérez; Petrovic, J.; Piegsa, A.; Pieloth, D.; Pohl, A. C.; Porrata, R.; Potthoff, N.; Price, P. B.; Prikockis, M.; Przybylski, G. T.; Rawlins, K.; Redl, P.; Resconi, E.; Rhode, W.; Ribordy, M.; Rizzo, A.; Rodrigues, J. P.; Roth, P.; Rothmaier, F.; Rott, C.; Roucelle, C.; Rutledge, D.; Ruzybayev, B.; Ryckbosch, D.; Sander, H.-G.; Sarkar, S.; Schatto, K.; Schlenstedt, S.; Schmidt, T.; Schneider, D.; Schukraft, A.; Schulz, O.; Schunck, M.; Seckel, D.; Semburg, B.; Seo, S. H.; Sestayo, Y.; Seunarine, S.; Silvestri, A.; Slipak, A.; Spiczak, G. M.; Spiering, C.; Stamatikos, M.; Stanev, T.; Stephens, G.; Stezelberger, T.; Stokstad, R. G.; Stoufer, M. C.; Stoyanov, S.; Strahler, E. A.; Straszheim, T.; Sullivan, G. W.; Swillens, Q.; Taboada, I.; Tamburro, A.; Tarasova, O.; Tepe, A.; Ter-Antonyan, S.; Terranova, C.; Tilav, S.; Toale, P. A.; Tooker, J.; Tosi, D.; Turčan, D.; van Eijndhoven, N.; Vandenbroucke, J.; van Overloop, A.; van Santen, J.; Voigt, B.; Walck, C.; Waldenmaier, T.; Wallraff, M.; Walter, M.; Wendt, C.; Westerhoff, S.; Whitehorn, N.; Wiebe, K.; Wiebusch, C. H.; Wiedemann, A.; Wikström, G.; Williams, D. R.; Wischnewski, R.; Wissing, H.; Woschnagg, K.; Xu, C.; Xu, X. W.; Yodh, G.; Yoshida, S.
2009-11-01
Point source searches with the IceCube neutrino telescope have been restricted to one hemisphere, due to the exclusive selection of upward going events as a way of rejecting the atmospheric muon background. We show that the region above the horizon can be included by suppressing the background through energy-sensitive cuts. This improves the sensitivity above PeV energies, previously not accessible for declinations of more than a few degrees below the horizon due to the absorption of neutrinos in Earth. We present results based on data collected with 22 strings of IceCube, extending its field of view and energy reach for point source searches. No significant excess above the atmospheric background is observed in a sky scan and in tests of source candidates. Upper limits are reported, which for the first time cover point sources in the southern sky up to EeV energies.
Complex Pupil Masks for Aberrated Imaging of Closely Spaced Objects
NASA Astrophysics Data System (ADS)
Reddy, A. N. K.; Sagar, D. K.; Khonina, S. N.
2017-12-01
The current approach demonstrates the suppression of optical sidelobes and the contraction of the main lobe in the composite image of two object points of an optical system under the influence of defocus when asymmetric phase edges are imposed over the apodized circular aperture. The resolution of two point sources having different intensity ratios is discussed in terms of the modified Sparrow criterion, as a function of the degree of coherence of the illumination, the intensity difference, and the degree of asymmetric phase masking. Here we introduce and explore the effects of focus aberration (defect of focus) on the two-point resolution of optical systems. Results on the aberrated composite image of closely spaced objects with an amplitude mask and asymmetric phase masks form a significant contribution to astronomical and microscopic observations.
A Computational Framework for Automation of Point Defect Calculations
NASA Astrophysics Data System (ADS)
Goyal, Anuj; Gorai, Prashun; Peng, Haowei; Lany, Stephan; Stevanovic, Vladan; National Renewable Energy Laboratory, Golden, Colorado 80401 Collaboration
A complete and rigorously validated open-source Python framework to automate point defect calculations using density functional theory has been developed. The framework provides an effective and efficient method for defect structure generation, and creation of simple yet customizable workflows to analyze defect calculations. The package provides the capability to compute widely accepted correction schemes to overcome finite-size effects, including (1) potential alignment, (2) image-charge correction, and (3) band-filling correction for shallow defects. Using Si, ZnO and In2O3 as test examples, we demonstrate the package capabilities and validate the methodology. We believe that a robust automated tool like this will enable the materials-by-design community to assess the impact of point defects on materials performance.
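The three corrections enter the usual charged-defect formation-energy expression. The sketch below is not the published package's API; every function name and number is a placeholder, and it only illustrates how such a term-by-term assembly might look.

```python
# Illustrative sketch of a charged-defect formation energy with the three
# finite-size corrections named in the abstract. All numbers are placeholders;
# this is not the published package's API.

def makov_payne_correction(q, L, epsilon, alpha=2.8373):
    """Leading-order image-charge correction (eV) for charge q in a cubic
    supercell of edge L (angstrom) with dielectric constant epsilon.
    alpha is the cubic-lattice Madelung constant."""
    e2_over_4pieps0 = 14.3996  # eV * angstrom
    return alpha * q**2 * e2_over_4pieps0 / (2.0 * epsilon * L)

def formation_energy(E_defect, E_host, mu_terms, q, E_fermi, E_vbm,
                     delta_V, E_ic, E_bf):
    """E_f = E_def - E_host - sum(n_i * mu_i) + q(E_F + E_VBM + dV) + corrections."""
    chem = sum(n * mu for n, mu in mu_terms)
    return (E_defect - E_host - chem
            + q * (E_fermi + E_vbm + delta_V)   # (1) potential alignment via delta_V
            + E_ic                               # (2) image-charge correction
            + E_bf)                              # (3) band-filling correction

E_ic = makov_payne_correction(q=+2, L=10.9, epsilon=8.9)   # e.g. a ZnO-like cell
Ef = formation_energy(E_defect=-2051.3, E_host=-2060.1,
                      mu_terms=[(-1, -3.5)],               # one atom removed, placeholder mu
                      q=+2, E_fermi=0.5, E_vbm=2.1,
                      delta_V=0.05, E_ic=E_ic, E_bf=0.0)
print(f"formation energy ~ {Ef:.2f} eV")
```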
Household Effectiveness vs. Laboratory Efficacy of Point-of-use Chlorination
Levy, Karen; Anderson, Larissa; Robb, Katharine A.; Cevallos, William; Trueba, Gabriel; Eisenberg, Joseph N.S.
2014-01-01
Treatment of water at the household level offers a promising approach to combat the global burden of diarrheal diseases. In particular, chlorination of drinking water has been a widely promoted strategy due to persistence of residual chlorine after initial treatment. However, the degree to which chlorination can reduce microbial levels in a controlled setting (efficacy) or in a household setting (effectiveness) can vary as a function of chlorine characteristics, source water characteristics, and household conditions. To gain more understanding of these factors, we carried out an observational study within households in rural communities of northern coastal Ecuador. We found that the efficacy of chlorine treatment under controlled conditions was significantly better than its effectiveness when evaluated both by ability to meet microbiological safety standards and by log reductions. Water treated with chlorine achieved levels of microbial contamination considered safe for human consumption after 24 hours of storage in the household only 39-51% of the time, depending on chlorine treatment regimen. Chlorine treatment would not be considered protective against diarrheal disease according to WHO log reduction standards. Factors that explain the observed compromised effectiveness include: source water turbidity, source water baseline contamination levels, and in-home contamination. Water in 38% of the households that had low turbidity source water (< 10 NTU) met the safe water standard as compared with only 17% of the households that had high turbidity source water (> 10 NTU). A 10 MPN/100 mL increase in baseline E. coli levels was associated with a 2.2% increase in failure to meet the E. coli standard. Higher mean microbial contamination levels were seen in 54% of household samples in comparison to their matched controls, which is likely the result of in-home contamination during storage. Container characteristics (size of the container mouth) did not influence chlorine effectiveness. We found no significant differences between chlorine treatment regimens in ability to meet the safe water standards or in overall log reductions, although chlorine dosage did modify the effect of source conditions. These results underscore the importance of measuring both source water and household conditions to determine appropriate chlorine levels, as well as to evaluate the appropriateness of chlorine treatment and other point-of-use water quality improvement interventions. PMID:24561887
A double-observer approach for estimating detection probability and abundance from point counts
Nichols, J.D.; Hines, J.E.; Sauer, J.R.; Fallon, F.W.; Fallon, J.E.; Heglund, P.J.
2000-01-01
Although point counts are frequently used in ornithological studies, basic assumptions about detection probabilities often are untested. We apply a double-observer approach developed to estimate detection probabilities for aerial surveys (Cook and Jacobson 1979) to avian point counts. At each point count, a designated 'primary' observer indicates to another ('secondary') observer all birds detected. The secondary observer records all detections of the primary observer as well as any birds not detected by the primary observer. Observers alternate primary and secondary roles during the course of the survey. The approach permits estimation of observer-specific detection probabilities and bird abundance. We developed a set of models that incorporate different assumptions about sources of variation (e.g. observer, bird species) in detection probability. Seventeen field trials were conducted, and models were fit to the resulting data using program SURVIV. Single-observer point counts generally miss varying proportions of the birds actually present, and observer and bird species were found to be relevant sources of variation in detection probabilities. Overall detection probabilities (probability of being detected by at least one of the two observers) estimated using the double-observer approach were very high (>0.95), yielding precise estimates of avian abundance. We consider problems with the approach and recommend possible solutions, including restriction of the approach to fixed-radius counts to reduce the effect of variation in the effective radius of detection among various observers and to provide a basis for using spatial sampling to estimate bird abundance on large areas of interest. We believe that most questions meriting the effort required to carry out point counts also merit serious attempts to estimate detection probabilities associated with the counts. The double-observer approach is a method that can be used for this purpose.
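For intuition, the arithmetic behind a double-observer abundance estimate can be sketched as follows. This simplified version assumes independent observers (Lincoln-Petersen style) rather than the dependent-observer likelihood the authors fit with program SURVIV, and the counts are invented.

```python
# Toy double-observer abundance estimate under the simplifying assumption of
# independent observers; the paper's dependent-observer protocol uses a
# different likelihood fit in program SURVIV.

def double_observer_estimate(x1, x2, x_both):
    """x1, x2: birds detected by observers 1 and 2; x_both: detected by both."""
    p1 = x_both / x2          # P(obs 1 detects), estimated from obs 2's detections
    p2 = x_both / x1
    p_any = 1.0 - (1.0 - p1) * (1.0 - p2)    # detected by at least one observer
    n_detected = x1 + x2 - x_both
    n_hat = n_detected / p_any                # abundance estimate
    return p1, p2, p_any, n_hat

p1, p2, p_any, n_hat = double_observer_estimate(x1=38, x2=35, x_both=30)
print(f"p1={p1:.2f} p2={p2:.2f} p(any)={p_any:.2f} N_hat={n_hat:.1f}")
```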
40 CFR 428.96 - Pretreatment standards for new sources.
Code of Federal Regulations, 2011 CFR
2011-07-01
... GUIDELINES AND STANDARDS RUBBER MANUFACTURING POINT SOURCE CATEGORY Pan, Dry Digestion, and Mechanical... pollutant properties, controlled by this section, which may be discharged to a publicly owned treatment works by a new point source subject to the provisions of this subpart: Pollutant or pollutant property...
FECAL BACTERIA SOURCE TRACKING AND BACTEROIDES SPP. HOST SPECIES SPECIFICITY ANALYSIS
Point and non-point pollution sources of fecal pollution on a watershed adversely impact the quality of drinking source waters and recreational waters. States are required to develop total maximum daily loads (TMDLs) and devise best management practices (BMPs) to reduce the po...
Statistical signatures of a targeted search by bacteria
NASA Astrophysics Data System (ADS)
Jashnsaz, Hossein; Anderson, Gregory G.; Pressé, Steve
2017-12-01
Chemoattractant gradients are rarely well-controlled in nature and recent attention has turned to bacterial chemotaxis toward typical bacterial food sources such as food patches or even bacterial prey. In environments with localized food sources reminiscent of a bacterium’s natural habitat, striking phenomena—such as the volcano effect or banding—have been predicted or expected to emerge from chemotactic models. However, in practice, from limited bacterial trajectory data it is difficult to distinguish targeted searches from an untargeted search strategy for food sources. Here we use a theoretical model to identify statistical signatures of a targeted search toward point food sources, such as prey. Our model is constructed on the basis that bacteria use temporal comparisons to bias their random walk, exhibit finite memory and are subject to random (Brownian) motion as well as signaling noise. The advantage with using a stochastic model-based approach is that a stochastic model may be parametrized from individual stochastic bacterial trajectories but may then be used to generate a very large number of simulated trajectories to explore average behaviors obtained from stochastic search strategies. For example, our model predicts that a bacterium’s diffusion coefficient increases as it approaches the point source and that, in the presence of multiple sources, bacteria may take substantially longer to locate their first source giving the impression of an untargeted search strategy.
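A minimal run-and-tumble sketch of the kind of model described (temporal comparison over a finite memory, signaling noise, and Brownian jitter) might look like the following; the parameters and the concentration field are invented, so this is not the authors' parametrization.

```python
import numpy as np

# Minimal run-and-tumble sketch (not the authors' model): a walker compares the
# chemoattractant signal now vs. a short memory ago, with signaling noise and
# Brownian jitter, and tumbles less often when the signal is improving.

rng = np.random.default_rng(0)
SOURCE = np.array([50.0, 0.0])

def concentration(pos):
    return 100.0 / (1.0 + np.linalg.norm(pos - SOURCE))   # point-source-like field

def simulate(steps=5000, dt=0.05, speed=20.0, memory=10,
             base_tumble=0.2, bias=5.0, sig_noise=0.5, brownian=0.5):
    pos = np.zeros(2)
    heading = rng.uniform(0, 2 * np.pi)
    history = [concentration(pos)]
    traj = [pos.copy()]
    for _ in range(steps):
        c_now = concentration(pos) + rng.normal(0, sig_noise)   # noisy sensing
        dc = c_now - history[0]                                  # temporal comparison
        p_tumble = base_tumble * np.exp(-bias * max(dc, 0.0))    # fewer tumbles uphill
        if rng.random() < p_tumble:
            heading = rng.uniform(0, 2 * np.pi)
        step = speed * dt * np.array([np.cos(heading), np.sin(heading)])
        pos = pos + step + rng.normal(0, brownian * np.sqrt(dt), 2)  # Brownian motion
        history.append(c_now)
        if len(history) > memory:                                # finite memory
            history.pop(0)
        traj.append(pos.copy())
    return np.array(traj)

traj = simulate()
print("final distance to source:", np.linalg.norm(traj[-1] - SOURCE))
```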
NASA Astrophysics Data System (ADS)
Chu, Zhigang; Yang, Yang; He, Yansong
2015-05-01
Spherical Harmonics Beamforming (SHB) with solid spherical arrays has become a particularly attractive tool for acoustic source identification in cabin environments. However, it presents some intrinsic limitations, specifically poor spatial resolution and severe sidelobe contamination. This paper focuses on overcoming these limitations effectively by deconvolution. First and foremost, a new formulation is proposed, which expresses SHB's output as a convolution of the true source strength distribution and the point spread function (PSF), defined as SHB's response to a unit-strength point source. Additionally, the typical deconvolution methods initially suggested for planar arrays, the deconvolution approach for the mapping of acoustic sources (DAMAS), nonnegative least-squares (NNLS), Richardson-Lucy (RL) and CLEAN, are successfully adapted to SHB and are capable of producing highly resolved and deblurred maps. Finally, the merits of the deconvolution methods are validated and the relationships of source strength and pressure contribution reconstructed by the deconvolution methods vs. focus distance are explored both with computer simulations and experimentally. Several interesting results have emerged from this study: (1) compared with SHB, DAMAS, NNLS, RL and CLEAN all can not only improve the spatial resolution dramatically but also reduce or even eliminate the sidelobes effectively, allowing clear and unambiguous identification of a single source or incoherent sources. (2) The applicability of RL to coherent sources is highest, followed by DAMAS and NNLS, and that of CLEAN is lowest due to its failure in suppressing sidelobes. (3) Whether or not the real distance from the source to the array center equals the assumed one, referred to as the focus distance, the previous two results hold. (4) The true source strength can be recovered by dividing the reconstructed one by a coefficient that is the square of the focus distance divided by the real distance from the source to the array center. (5) The reconstructed pressure contribution is almost unaffected by the focus distance, always approximating the true one. This study will be of great significance to the accurate localization and quantification of acoustic sources in cabin environments.
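The deconvolution step can be illustrated generically. The sketch below runs a Richardson-Lucy iteration on a synthetic one-dimensional map whose PSF is a stand-in Gaussian, not the spherical-array PSF derived in the paper.

```python
import numpy as np

# Generic Richardson-Lucy deconvolution of a beamforming-style map b = A @ q,
# where each column of A is the (synthetic) point spread function of a unit
# source at that grid point. Stand-in data, not the spherical-array PSF.

rng = np.random.default_rng(1)
n = 64
x = np.arange(n)
A = np.exp(-0.5 * ((x[:, None] - x[None, :]) / 3.0) ** 2)   # Gaussian PSF columns

q_true = np.zeros(n)
q_true[[20, 40]] = [1.0, 0.6]                                # two incoherent sources
b = A @ q_true + 1e-3 * rng.random(n)                        # "SHB output" map

q = np.full(n, b.mean())                                     # nonnegative initial guess
for _ in range(200):
    # RL update: q <- q * [A^T (b / (A q))] / [A^T 1]
    q *= (A.T @ (b / (A @ q + 1e-12))) / (A.sum(axis=0) + 1e-12)

print("recovered peak locations:", sorted(np.argsort(q)[-2:]))
```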
NASA Astrophysics Data System (ADS)
Saleh, D.; Domagalski, J. L.
2012-12-01
Sources and factors affecting the transport of total nitrogen are being evaluated for a study area that covers most of California and some areas in Oregon and Nevada, by using the SPARROW model (SPAtially Referenced Regression On Watershed attributes) developed by the U.S. Geological Survey. Mass loads of total nitrogen calculated for monitoring sites at stream gauging stations are regressed against land-use factors affecting nitrogen transport, including fertilizer use, recharge, atmospheric deposition, stream characteristics, and other factors to understand how total nitrogen is transported under average conditions. SPARROW models have been used successfully in other parts of the country to understand how nutrients are transported, and how management strategies can be formulated, such as with Total Maximum Daily Load (TMDL) assessments. Fertilizer use, atmospheric deposition, and climatic data were obtained for 2002, and loads for that year were calculated for monitored streams and point sources (mostly from wastewater treatment plants). The stream loads were calculated by using the adjusted maximum likelihood estimation method (AMLE). River discharge and nitrogen concentrations were de-trended in these calculations in order to eliminate the effect of temporal changes on stream load. Effluent discharge information as well as total nitrogen concentrations from point sources were obtained from USEPA databases and from facility records. The model indicates that atmospheric deposition and fertilizer use account for a large percentage of the total nitrogen load in many of the larger watersheds throughout the study area. Point sources, on the other hand, are generally localized around large cities, are considered insignificant sources, and account for a small percentage of the total nitrogen loads throughout the study area.
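The bookkeeping behind a SPARROW-type prediction, in which source terms are scaled by land-to-water delivery and attenuated in-stream, can be sketched roughly as below; the coefficients are placeholders, not the calibrated California model.

```python
import numpy as np

# Illustrative SPARROW-style load bookkeeping (placeholder coefficients, not the
# calibrated model): each source mass is scaled by a source coefficient and a
# land-to-water delivery factor, then attenuated by first-order in-stream decay.

def reach_load(fertilizer_kg, atm_dep_kg, point_source_kg,
               beta=(0.08, 0.15, 1.0),        # source coefficients (point sources ~fully delivered)
               delivery=0.6,                   # land-to-water delivery factor
               decay_per_day=0.05, travel_days=2.0):
    land = (beta[0] * fertilizer_kg + beta[1] * atm_dep_kg) * delivery
    instream = np.exp(-decay_per_day * travel_days)          # in-stream attenuation
    return (land + beta[2] * point_source_kg) * instream

print(f"predicted TN load ~ {reach_load(5.0e5, 1.2e5, 2.0e4):.0f} kg/yr")
```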
Coronal bright points at 6cm wavelength
NASA Technical Reports Server (NTRS)
Fu, Qijun; Kundu, M. R.; Schmahl, E. J.
1988-01-01
Results are presented from observations of bright points at a wavelength of 6-cm using the VLA with a spatial resolution of 1.2 arcsec. During two hours of observations, 44 sources were detected with brightness temperatures between 2000 and 30,000 K. Of these sources, 27 are associated with weak dark He 10830 A features at distances less than 40 arcsecs. Consideration is given to variations in the source parameters and the relationship between ephemeral regions and bright points.
Innovations in the Analysis of Chandra-ACIS Observations
NASA Astrophysics Data System (ADS)
Broos, Patrick S.; Townsley, Leisa K.; Feigelson, Eric D.; Getman, Konstantin V.; Bauer, Franz E.; Garmire, Gordon P.
2010-05-01
As members of the instrument team for the Advanced CCD Imaging Spectrometer (ACIS) on NASA's Chandra X-ray Observatory and as Chandra General Observers, we have developed a wide variety of data analysis methods that we believe are useful to the Chandra community, and have constructed a significant body of publicly available software (the ACIS Extract package) addressing important ACIS data and science analysis tasks. This paper seeks to describe these data analysis methods for two purposes: to document the data analysis work performed in our own science projects and to help other ACIS observers judge whether these methods may be useful in their own projects (regardless of what tools and procedures they choose to implement those methods). The ACIS data analysis recommendations we offer here address much of the workflow in a typical ACIS project, including data preparation, point source detection via both wavelet decomposition and image reconstruction, masking point sources, identification of diffuse structures, event extraction for both point and diffuse sources, merging extractions from multiple observations, nonparametric broadband photometry, analysis of low-count spectra, and automation of these tasks. Many of the innovations presented here arise from several, often interwoven, complications that are found in many Chandra projects: large numbers of point sources (hundreds to several thousand), faint point sources, misaligned multiple observations of an astronomical field, point source crowding, and scientifically relevant diffuse emission.
Do Birds Avoid Railroads as Has Been Found for Roads?
Wiącek, Jarosław; Polak, Marcin; Filipiuk, Maciej; Kucharczyk, Marek; Bohatkiewicz, Janusz
2015-09-01
The construction of railway lines usually has a negative effect on the natural environment: habitats are destroyed, collisions with trains cause deaths, and the noise and vibrations associated with rail traffic disturb the lives of animals. Cases are known, however, where the opposite holds true: a railway line has a positive effect on the fauna in its vicinity. In this study, we attempted to define the influence of a busy railway line on a breeding community of woodland birds. Birds were counted using the point method at 45 observation points located at three different distances (30, 280, 530 m) from the tracks. At each point, we determined the habitat parameters and the intensity of noise. In total, 791 individual birds of 42 species were recorded on the study plot. Even though the noise level fell distinctly with increasing distance from the tracks, the abundance of birds and the number of species were the highest near the railway line. Moreover, insectivorous species displayed a clear preference for the vicinity of the line. The noise from the trains did not adversely affect the birds on the study plot. The environmental conditions created by the edge effect meant that the birds preferred the neighborhood of the tracks: the more diverse habitats near the tracks supplied attractive nesting and foraging niches for many species of birds. Trains passing at clear intervals acted as point sources of noise and did not elicit any negative reactions on the part of the birds; this stands in contrast to busy roads, where the almost continuous flow of traffic in practice constitutes a linear source of noise.
International migration and welfare in the source country.
Basu, B; Bhattacharyya, G
1991-12-01
"In a recent paper, Rivera-Batiz (1982) points out that the economic effects of migration should be studied in the presence of non-traded goods in the source country.... We will show that Rivera-Batiz's explanations do not consider all the aspects of ownership and transfer of inputs by migrants. Consequently, we will show that if...additional issues are taken into account, the non-migrants can turn out to be actually better off as a result of emigration." excerpt
DOE Office of Scientific and Technical Information (OSTI.GOV)
Phillips, William Scott
This seminar presentation describes amplitude models and yield estimations that look at the data in order to inform legislation. The following points were brought forth in the summary: global models that will predict three-component amplitudes (R-T-Z) were produced; Q models match regional geology; corrected source spectra can be used for discrimination and yield estimation; three-component data increase coverage and reduce scatter in source spectral estimates; three-component efforts must include distance-dependent effects; a community effort on instrument calibration is needed.
NASA Astrophysics Data System (ADS)
Li, Xuxu; Li, Xinyang; Wang, Caixia
2018-03-01
This paper proposes an efficient approach to decrease the computational costs of correlation-based centroiding methods used for point source Shack-Hartmann wavefront sensors. Four typical similarity functions have been compared, i.e. the absolute difference function (ADF), ADF square (ADF2), square difference function (SDF), and cross-correlation function (CCF) using the Gaussian spot model. By combining them with fast search algorithms, such as three-step search (TSS), two-dimensional logarithmic search (TDL), cross search (CS), and orthogonal search (OS), computational costs can be reduced drastically without affecting the accuracy of centroid detection. Specifically, OS reduces calculation consumption by 90%. A comprehensive simulation indicates that CCF exhibits a better performance than other functions under various light-level conditions. Besides, the effectiveness of fast search algorithms has been verified.
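A compact sketch of the idea, a similarity function evaluated only at the offsets visited by a coarse-to-fine search, is given below; the spot model, ADF-based cost, and three-step schedule are illustrative choices, not the paper's exact implementation.

```python
import numpy as np

# Sketch of correlation-based centroiding: match a reference spot template to a
# subaperture image with a similarity function, refining the shift with a
# three-step-style coarse-to-fine search. Toy Gaussian spots, illustrative only.

def gaussian_spot(shape, center, sigma=2.0):
    y, x = np.indices(shape)
    return np.exp(-((x - center[0])**2 + (y - center[1])**2) / (2 * sigma**2))

def adf(img, tmpl, dx, dy):                      # absolute difference function
    shifted = np.roll(np.roll(tmpl, dy, axis=0), dx, axis=1)
    return np.abs(img - shifted).sum()

def three_step_search(img, tmpl, start_step=4):
    best = (0, 0)
    step = start_step
    while step >= 1:                              # steps 4, 2, 1
        candidates = [(best[0] + i * step, best[1] + j * step)
                      for i in (-1, 0, 1) for j in (-1, 0, 1)]
        best = min(candidates, key=lambda s: adf(img, tmpl, *s))
        step //= 2
    return best                                   # integer shift of the spot

shape = (32, 32)
tmpl = gaussian_spot(shape, (16, 16))
img = gaussian_spot(shape, (19, 14))              # spot displaced by (+3, -2)
print("estimated shift:", three_step_search(img, tmpl))
```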
Numerical modeling of subsurface communication
NASA Astrophysics Data System (ADS)
Burke, G. J.; Dease, C. G.; Didwall, E. M.; Lytle, R. J.
1985-02-01
Techniques are described for numerical modeling of through-the-Earth communication. The basic problem considered is evaluation of the field at a surface or airborne station due to an antenna buried in the Earth. Equations are given for the field of a point source in a homogeneous or stratified earth. These expressions involve infinite integrals over wave number, sometimes known as Sommerfeld integrals. Numerical techniques used for evaluating these integrals are outlined. The problem of determining the current on a real antenna in the Earth, including the effect of insulation, is considered. Results are included for the fields of a point source in homogeneous and stratified earths and the field of a finite insulated dipole. The results are for electromagnetic propagation in the ELF-VLF range, but the codes also can address propagation problems at higher frequencies.
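One common way to handle such infinite wave-number integrals is to break the axis at successive zeros of the Bessel function and sum the resulting segments; the sketch below does this for a toy kernel with a known closed form, not a real layered-earth integrand.

```python
import numpy as np
from scipy import integrate, special

# Toy evaluation of a Sommerfeld-type integral  I(r) = int_0^inf f(k) J0(k r) dk
# by integrating between successive zeros of J0(k r) and summing the segments.
# f(k) below is a simple decaying stand-in, not a real layered-earth kernel.

def sommerfeld_integral(f, r, n_segments=60):
    zeros = special.jn_zeros(0, n_segments) / r       # break points where J0(k r) = 0
    edges = np.concatenate(([0.0], zeros))
    total = 0.0
    for a, b in zip(edges[:-1], edges[1:]):
        seg, _ = integrate.quad(lambda k: f(k) * special.j0(k * r), a, b)
        total += seg
        if abs(seg) < 1e-12 * max(abs(total), 1.0):    # remaining tail is negligible
            break
    return total

f = lambda k: np.exp(-k)                               # toy spectral kernel
r = 2.0
numeric = sommerfeld_integral(f, r)
exact = 1.0 / np.sqrt(1.0 + r**2)                      # known: int e^{-k} J0(kr) dk
print(numeric, exact)
```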
COST-EFFECTIVE ALLOCATION OF WATERSHED MANAGEMENT PRACTICES USING A GENETIC ALGORITHM
Implementation of conservation programs is perceived as crucial for restoring and protecting waters and watersheds from non-point source pollution. Success of these programs depends to a great extent on planning tools that can assist the watershed management process. Here-...
A stepwise, multi-objective, multi-variable parameter optimization method for the APEX model
USDA-ARS?s Scientific Manuscript database
Proper parameterization enables hydrological models to make reliable estimates of non-point source pollution for effective control measures. The automatic calibration of hydrologic models requires significant computational power limiting its application. The study objective was to develop and eval...
Bru, Juan; Berger, Christopher A
2012-01-01
Background: Point-of-care electronic medical records (EMRs) are a key tool to manage chronic illness. Several EMRs have been developed for use in treating HIV and tuberculosis, but their applicability to primary care, technical requirements and clinical functionalities are largely unknown. Objectives: This study aimed to address the needs of clinicians from resource-limited settings without reliable internet access who are considering adopting an open-source EMR. Study eligibility criteria: Open-source point-of-care EMRs suitable for use in areas without reliable internet access. Study appraisal and synthesis methods: The authors conducted a comprehensive search of all open-source EMRs suitable for sites without reliable internet access. The authors surveyed clinician users and technical implementers from a single site and technical developers of each software product. The authors evaluated availability, cost and technical requirements. Results: The hardware and software for all six systems is easily available, but they vary considerably in proprietary components, installation requirements and customisability. Limitations: This study relied solely on self-report from informants who developed and who actively use the included products. Conclusions and implications of key findings: Clinical functionalities vary greatly among the systems, and none of the systems yet meet minimum requirements for effective implementation in a primary care resource-limited setting. The safe prescribing of medications is a particular concern with current tools. The dearth of fully functional EMR systems indicates a need for a greater emphasis by global funding agencies to move beyond disease-specific EMR systems and develop a universal open-source health informatics platform. PMID:22763661
Ockenden, M C; Quinton, J N; Favaretto, N; Deasy, C; Surridge, B
2014-07-01
Surface water quality in the UK and much of Western Europe has improved in recent decades, in response to better point source controls and the regulation of fertilizer, manure and slurry use. However, diffuse sources of pollution, such as leaching or runoff of nutrients from agricultural fields, and micro-point sources including farmyards, manure heaps and septic tank sewerage systems, particularly systems without soil adsorption beds, are now hypothesised to contribute a significant proportion of the nutrients delivered to surface watercourses. Tackling such sources in an integrated manner is vital, if improvements in freshwater quality are to continue. In this research, we consider the combined effect of constructing small field wetlands and improving a septic tank system on stream water quality within an agricultural catchment in Cumbria, UK. Water quality in the ditch-wetland system was monitored by manual sampling at fortnightly intervals (April-October 2011 and February-October 2012), with the septic tank improvement taking place in February 2012. Reductions in nutrient concentrations were observed through the catchment, by up to 60% when considering total phosphorus (TP) entering and leaving a wetland with a long residence time. Average fluxes of TP, soluble reactive phosphorus (SRP) and ammonium-N (NH4-N) at the head of the ditch system in 2011 (before septic tank improvement) compared to 2012 (after septic tank improvement) were reduced by 28%, 9% and 37% respectively. However, TP concentration data continue to show a clear dilution with increasing flow, indicating that the system remained point source dominated even after the septic tank improvement.
Clark-Reyna, Stephanie E.; Grineski, Sara E.; Collins, Timothy W.
2015-01-01
Children in low-income neighborhoods tend to be disproportionately exposed to environmental toxicants. This is cause for concern because exposure to environmental toxicants negatively affects health, which can impair academic success. To date, it is unknown whether associations between air toxics and academic performance found in previous school-level studies persist when studying individual children. By pairing National Air Toxics Assessment (NATA) risk estimates for respiratory and diesel particulate matter risk, disaggregated by source, with individual-level data collected through a mail survey, this paper examines the effects of exposure to residential environmental toxics on academic performance for individual children for the first time and adjusts for school-level effects using generalized estimating equations. We find that higher levels of residential air toxics, especially those from non-road mobile sources, are statistically significantly associated with lower grade point averages among fourth and fifth grade school children in El Paso (Texas, USA). PMID:27034529
NASA Astrophysics Data System (ADS)
Smart, Philip D.; Quinn, Jonathan A.; Jones, Christopher B.
The combination of mobile communication technology with location and orientation aware digital cameras has introduced increasing interest in the exploitation of 3D city models for applications such as augmented reality and automated image captioning. The effectiveness of such applications is, at present, severely limited by the often poor quality of semantic annotation of the 3D models. In this paper, we show how freely available sources of georeferenced Web 2.0 information can be used for automated enrichment of 3D city models. Point referenced names of prominent buildings and landmarks mined from Wikipedia articles and from the OpenStreetMaps digital map and Geonames gazetteer have been matched to the 2D ground plan geometry of a 3D city model. In order to address the ambiguities that arise in the associations between these sources and the city model, we present procedures to merge potentially related buildings and implement fuzzy matching between reference points and building polygons. An experimental evaluation demonstrates the effectiveness of the presented methods.
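The polygon-matching step can be sketched with a containment test followed by a nearest-footprint search within a tolerance; the footprints, names, and 25 m threshold below are invented, and this is not the paper's fuzzy-matching procedure.

```python
from shapely.geometry import Point, Polygon

# Sketch of matching point-referenced place names to building footprints:
# take a containing polygon if one exists, otherwise the nearest footprint
# within a tolerance. Footprints, names and the 25 m threshold are made up.

footprints = {
    "city_hall": Polygon([(0, 0), (40, 0), (40, 30), (0, 30)]),
    "museum":    Polygon([(60, 10), (90, 10), (90, 40), (60, 40)]),
}

def match_name_to_building(name, point, tolerance_m=25.0):
    containing = [bid for bid, poly in footprints.items() if poly.contains(point)]
    if containing:
        return name, containing[0], 0.0
    bid, dist = min(((bid, poly.distance(point)) for bid, poly in footprints.items()),
                    key=lambda t: t[1])
    return (name, bid, dist) if dist <= tolerance_m else (name, None, dist)

print(match_name_to_building("Town Hall", Point(20, 15)))    # inside -> city_hall
print(match_name_to_building("Old Museum", Point(55, 20)))   # near -> museum (5 m)
```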
Groundwater flow to a horizontal or slanted well in an unconfined aquifer
NASA Astrophysics Data System (ADS)
Zhan, Hongbin; Zlotnik, Vitaly A.
2002-07-01
New semianalytical solutions for evaluation of the drawdown near horizontal and slanted wells with finite length screens in unconfined aquifers are presented. These fully three-dimensional solutions consider instantaneous drainage or delayed yield and aquifer anisotropy. As a basis, solution for the drawdown created by a point source in a uniform anisotropic unconfined aquifer is derived in Laplace domain. Using superposition, the point source solution is extended to the cases of the horizontal and slanted wells. The previous solutions for vertical wells can be described as a special case of the new solutions. Numerical Laplace inversion allows effective evaluation of the drawdown in real time. Examples illustrate the effects of well geometry and the aquifer parameters on drawdown. Results can be used to generate type curves from observations in piezometers and partially or fully penetrating observation wells. The proposed solutions and software are useful for the parameter identification, design of remediation systems, drainage, and mine dewatering.
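The numerical Laplace inversion step can be illustrated with the Gaver-Stehfest algorithm, a common choice for well-hydraulics solutions; the transform inverted below is a textbook example standing in for the paper's point-source drawdown solution.

```python
import math

# Gaver-Stehfest numerical inversion of a Laplace-domain solution F(s):
#   f(t) ~ (ln 2 / t) * sum_i V_i * F(i ln 2 / t),  N even.
# Illustrated on F(s) = 1/(s + 1), whose inverse is exp(-t); a drawdown
# solution in the Laplace domain would simply replace F.

def stehfest_weights(N=12):
    V = []
    for i in range(1, N + 1):
        s = 0.0
        for k in range((i + 1) // 2, min(i, N // 2) + 1):
            s += (k ** (N // 2) * math.factorial(2 * k) /
                  (math.factorial(N // 2 - k) * math.factorial(k) *
                   math.factorial(k - 1) * math.factorial(i - k) *
                   math.factorial(2 * k - i)))
        V.append((-1) ** (i + N // 2) * s)
    return V

def stehfest_invert(F, t, N=12):
    ln2_t = math.log(2.0) / t
    V = stehfest_weights(N)
    return ln2_t * sum(V[i - 1] * F(i * ln2_t) for i in range(1, N + 1))

F = lambda s: 1.0 / (s + 1.0)
for t in (0.5, 1.0, 2.0):
    print(t, stehfest_invert(F, t), math.exp(-t))   # inverted vs. exact value
```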
NASA Astrophysics Data System (ADS)
Sedghi, Mohammad Mahdi; Samani, Nozar; Sleep, Brent
2009-06-01
The Laplace domain solutions have been obtained for three-dimensional groundwater flow to a well in confined and unconfined wedge-shaped aquifers. The solutions take into account partial penetration effects, instantaneous drainage or delayed yield, vertical anisotropy and the water table boundary condition. As a basis, the Laplace domain solutions for drawdown created by a point source in uniform, anisotropic confined and unconfined wedge-shaped aquifers are first derived. Then, by the principle of superposition the point source solutions are extended to the cases of partially and fully penetrating wells. Unlike the previous solution for the confined aquifer that contains improper integrals arising from the Hankel transform [Yeh HD, Chang YC. New analytical solutions for groundwater flow in wedge-shaped aquifers with various topographic boundary conditions. Adv Water Resour 2006;26:471-80], numerical evaluation of our solution is relatively easy using well known numerical Laplace inversion methods. The effects of wedge angle, pumping well location and observation point location on drawdown and the effects of partial penetration, screen location and delay index on the wedge boundary hydraulic gradient in unconfined aquifers have also been investigated. The results are presented in the form of dimensionless drawdown-time and boundary gradient-time type curves. The curves are useful for parameter identification, calculation of stream depletion rates and the assessment of water budgets in river basins.
Wesolowski, Edwin A.
1996-01-01
Two separate studies to simulate the effects of discharging treated wastewater to the Red River of the North at Fargo, North Dakota, and Moorhead, Minnesota, have been completed. In the first study, the Red River at Fargo Water-Quality Model was calibrated and verified for ice-free conditions. In the second study, the Red River at Fargo Ice-Cover Water-Quality Model was verified for ice-cover conditions. To better understand and apply the Red River at Fargo Water-Quality Model and the Red River at Fargo Ice-Cover Water-Quality Model, the uncertainty associated with simulated constituent concentrations and property values was analyzed and quantified using the Enhanced Stream Water Quality Model-Uncertainty Analysis. The Monte Carlo simulation and first-order error analysis methods were used to analyze the uncertainty in simulated values for six constituents and properties at sites 5, 10, and 14 (upstream to downstream order). The constituents and properties analyzed for uncertainty are specific conductance, total organic nitrogen (reported as nitrogen), total ammonia (reported as nitrogen), total nitrite plus nitrate (reported as nitrogen), 5-day carbonaceous biochemical oxygen demand for ice-cover conditions and ultimate carbonaceous biochemical oxygen demand for ice-free conditions, and dissolved oxygen. Results are given in detail for both the ice-cover and ice-free conditions for specific conductance, total ammonia, and dissolved oxygen. The sensitivity and uncertainty of the simulated constituent concentrations and property values to input variables differ substantially between ice-cover and ice-free conditions. During ice-cover conditions, simulated specific-conductance values are most sensitive to the headwater-source specific-conductance values upstream of site 10 and the point-source specific-conductance values downstream of site 10. These headwater-source and point-source specific-conductance values also are the key sources of uncertainty. Simulated total ammonia concentrations are most sensitive to the point-source total ammonia concentrations at all three sites. Other input variables that contribute substantially to the variability of simulated total ammonia concentrations are the headwater-source total ammonia and the instream reaction coefficient for biological decay of total ammonia to total nitrite. Simulated dissolved-oxygen concentrations at all three sites are most sensitive to headwater-source dissolved-oxygen concentration. This input variable is the key source of variability for simulated dissolved-oxygen concentrations at sites 5 and 10. Headwater-source and point-source dissolved-oxygen concentrations are the key sources of variability for simulated dissolved-oxygen concentrations at site 14. During ice-free conditions, simulated specific-conductance values at all three sites are most sensitive to the headwater-source specific-conductance values. Headwater-source specific-conductance values also are the key source of uncertainty. The input variables to which total ammonia and dissolved oxygen are most sensitive vary from site to site and may or may not correspond to the input variables that contribute the most to the variability. The input variables that contribute the most to the variability of simulated total ammonia concentrations are point-source total ammonia, instream reaction coefficient for biological decay of total ammonia to total nitrite, and Manning's roughness coefficient.
The input variables that contribute the most to the variability of simulated dissolved-oxygen concentrations are reaeration rate, sediment oxygen demand rate, and headwater-source algae as chlorophyll a.
Zhang, Lei; Lu, Wenxi; An, Yonglei; Li, Di; Gong, Lei
2012-01-01
The impacts of climate change on streamflow and non-point source pollutant loads in the Shitoukoumen reservoir catchment are predicted by combining a general circulation model (HadCM3) with the Soil and Water Assessment Tool (SWAT) hydrological model. A statistical downscaling model was used to generate future local scenarios of meteorological variables such as temperature and precipitation. Then, the downscaled meteorological variables were used as input to the SWAT hydrological model calibrated and validated with observations, and the corresponding changes of future streamflow and non-point source pollutant loads in Shitoukoumen reservoir catchment were simulated and analyzed. Results show that daily temperature increases in three future periods (2010-2039, 2040-2069, and 2070-2099) relative to a baseline of 1961-1990, and the rate of increase is 0.63°C per decade. Annual precipitation also shows an apparent increase of 11 mm per decade. The calibration and validation results showed that the SWAT model was able to simulate well the streamflow and non-point source pollutant loads, with a coefficient of determination of 0.7 and a Nash-Sutcliffe efficiency of about 0.7 for both the calibration and validation periods. The future climate change has a significant impact on streamflow and non-point source pollutant loads. The annual streamflow shows a fluctuating upward trend from 2010 to 2099, with an increase rate of 1.1 m³ s⁻¹ per decade, and a significant upward trend in summer, with an increase rate of 1.32 m³ s⁻¹ per decade. The increase in summer contributes the most to the increase of annual load compared with other seasons. The annual NH₄⁺-N load into Shitoukoumen reservoir shows a significant downward trend with a decrease rate of 40.6 t per decade. The annual TP load shows an insignificant increasing trend, and its change rate is 3.77 t per decade. The results of this analysis provide a scientific basis for effective support of decision makers and strategies of adaptation to climate change.
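The two calibration statistics quoted above (coefficient of determination and Nash-Sutcliffe efficiency) are computed as in the sketch below; the flow values are placeholders.

```python
import numpy as np

# The two calibration statistics quoted above, computed on placeholder flows.
obs = np.array([12.0, 18.5, 25.1, 40.2, 33.0, 22.4, 15.8])   # observed streamflow
sim = np.array([13.1, 17.0, 27.3, 37.5, 34.8, 20.9, 14.2])   # simulated flow

nse = 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)   # Nash-Sutcliffe
r2 = np.corrcoef(obs, sim)[0, 1] ** 2                                    # coefficient of determination
print(f"NSE = {nse:.2f}, R^2 = {r2:.2f}")
```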
The effect of barriers on wave propagation phenomena: With application for aircraft noise shielding
NASA Technical Reports Server (NTRS)
Mgana, C. V. M.; Chang, I. D.
1982-01-01
The frequency spectrum was divided into high and low frequency regimes, and two separate methods were developed and applied to account for physical factors associated with flight conditions. For long-wave propagation, the acoustic field due to a point source near a solid obstacle was treated in terms of an inner region, where the fluid motion is essentially incompressible, and an outer region, which is a linear acoustic field generated by hydrodynamic disturbances in the inner region. This method was applied to the case of a finite slotted plate modelled to represent a wing with an extended flap, for both stationary and moving media. Ray acoustics, the Kirchhoff integral formulation, and the stationary phase approximation were combined to study short-wavelength propagation in many limiting cases as well as in the case of a semi-infinite plate in a uniform flow velocity with a point source above the plate and embedded in a different flow velocity to simulate an engine exhaust jet stream surrounding the source.
NASA Astrophysics Data System (ADS)
Wang, Xu-yang; Zhdanov, Dmitry D.; Potemin, Igor S.; Wang, Ying; Cheng, Han
2016-10-01
One of the challenges of augmented reality is the seamless combination of objects of the real and virtual worlds, for example light sources. We suggest measurement and computation models for reconstruction of the light source position. The model is based on the dependence of the luminance of a small diffuse surface directly illuminated by a point-like source placed at a short distance from the observer or camera. The advantage of the computational model is the ability to eliminate the effects of indirect illumination. The paper presents a number of examples to illustrate the efficiency and accuracy of the proposed method.
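One way to realize the stated luminance dependence is an inverse-square least-squares fit for the source position and intensity; the sketch below uses synthetic patch positions, normals, and measurements, and is not the authors' measurement model.

```python
import numpy as np
from scipy.optimize import least_squares

# Illustrative inverse-square fit of a point-like source position from luminance
# samples of small diffuse patches (positions, normals and values are synthetic).

patches = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [1, 1, 0], [0.5, 0.5, 0]], float)
normals = np.tile([0.0, 0.0, 1.0], (len(patches), 1))        # patches face +z

def luminance(source, intensity, pts, nrm):
    d = source - pts
    r = np.linalg.norm(d, axis=1)
    cos = np.clip(np.einsum('ij,ij->i', nrm, d) / r, 0.0, None)
    return intensity * cos / r**2                             # Lambertian point-source model

true_source, true_intensity = np.array([0.8, 0.3, 1.5]), 50.0
noise = 1 + 0.02 * np.random.default_rng(2).normal(size=len(patches))
meas = luminance(true_source, true_intensity, patches, normals) * noise

def residuals(params):
    return luminance(params[:3], params[3], patches, normals) - meas

fit = least_squares(residuals, x0=[0.0, 0.0, 1.0, 10.0])
print("estimated source position:", fit.x[:3], "intensity:", fit.x[3])
```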
MacBurn's cylinder test problem
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shestakov, Aleksei I.
2016-02-29
This note describes a test problem for MacBurn which illustrates its performance. The source is centered inside a cylinder with an axial-extent-to-radius ratio such that each end receives 1/4 of the thermal energy. The source (fireball) is modeled either as a point or as a disk of finite radius, as described by Marrs et al. For the latter, the disk is divided into 13 equal-area segments, each approximated as a point source, and models a partially occluded fireball. If the source is modeled as a single point, one obtains very nearly the expected deposition, e.g., 1/4 of the flux on each end, and energy is conserved. If the source is modeled as a disk, both conservation and the energy fraction degrade. However, errors decrease if the ratio of source radius to domain size decreases. Modeling the source as a disk increases run times.
New approach to calculate the true-coincidence effect of HpGe detector
DOE Office of Scientific and Technical Information (OSTI.GOV)
Alnour, I. A., E-mail: aaibrahim3@live.utm.my, E-mail: ibrahim.elnour@yahoo.com; Wagiran, H.; Ibrahim, N.
The corrections for true-coincidence effects in HPGe detectors are important, especially at low source-to-detector distances. This work established an approach to calculate the true-coincidence effects experimentally for HPGe detectors of type Canberra GC3018 and Ortec GEM25-76-XLB-C, which are in operation at the neutron activation analysis lab at the Malaysian Nuclear Agency (NM). The correction for true-coincidence effects was performed close to the detector, at distances of 2 and 5 cm, using 57Co, 60Co, 133Ba and 137Cs as standard point sources. The correction factors ranged between 0.93-1.10 at 2 cm and 0.97-1.00 at 5 cm for the Canberra HPGe detector, whereas for the Ortec HPGe detector they ranged between 0.92-1.13 and 0.95-1.00 at 2 and 5 cm, respectively. The change in the efficiency calibration curve of the detector at 2 and 5 cm after correction was found to be less than 1%. Moreover, polynomial parameter functions were computed with MATLAB in order to find an accurate fit to the experimental data points.
HerMES: point source catalogues from Herschel-SPIRE observations II
NASA Astrophysics Data System (ADS)
Wang, L.; Viero, M.; Clarke, C.; Bock, J.; Buat, V.; Conley, A.; Farrah, D.; Guo, K.; Heinis, S.; Magdis, G.; Marchetti, L.; Marsden, G.; Norberg, P.; Oliver, S. J.; Page, M. J.; Roehlly, Y.; Roseboom, I. G.; Schulz, B.; Smith, A. J.; Vaccari, M.; Zemcov, M.
2014-11-01
The Herschel Multi-tiered Extragalactic Survey (HerMES) is the largest Guaranteed Time Key Programme on the Herschel Space Observatory. With a wedding cake survey strategy, it consists of nested fields with varying depth and area totalling ~380 deg². In this paper, we present deep point source catalogues extracted from Herschel-Spectral and Photometric Imaging Receiver (SPIRE) observations of all HerMES fields, except for the later addition of the 270 deg² HerMES Large-Mode Survey (HeLMS) field. These catalogues constitute the second Data Release (DR2) made in 2013 October. A subset of these catalogues, which consists of bright sources extracted from Herschel-SPIRE observations completed by 2010 May 1 (covering ~74 deg²), was released earlier in the first extensive data release in 2012 March. Two different methods are used to generate the point source catalogues, the SUSSEXTRACTOR point source extractor used in two earlier data releases (EDR and EDR2) and a new source detection and photometry method. The latter combines an iterative source detection algorithm, STARFINDER, and a De-blended SPIRE Photometry algorithm. We use end-to-end Herschel-SPIRE simulations with realistic number counts and clustering properties to characterize basic properties of the point source catalogues, such as the completeness, reliability, photometric and positional accuracy. Over 500 000 catalogue entries in HerMES fields (except HeLMS) are released to the public through the HeDAM (Herschel Database in Marseille) website (http://hedam.lam.fr/HerMES).
Warrant, Eric J; Locket, N Adam
2004-08-01
The deep sea is the largest habitat on earth. Its three great faunal environments--the twilight mesopelagic zone, the dark bathypelagic zone and the vast flat expanses of the benthic habitat--are home to a rich fauna of vertebrates and invertebrates. In the mesopelagic zone (150-1000 m), the down-welling daylight creates an extended scene that becomes increasingly dimmer and bluer with depth. The available daylight also originates increasingly from vertically above, and bioluminescent point-source flashes, well contrasted against the dim background daylight, become increasingly visible. In the bathypelagic zone below 1000 m no daylight remains, and the scene becomes entirely dominated by point-like bioluminescence. This changing nature of visual scenes with depth--from extended source to point source--has had a profound effect on the designs of deep-sea eyes, both optically and neurally, a fact that until recently was not fully appreciated. Recent measurements of the sensitivity and spatial resolution of deep-sea eyes--particularly from the camera eyes of fishes and cephalopods and the compound eyes of crustaceans--reveal that ocular designs are well matched to the nature of the visual scene at any given depth. This match between eye design and visual scene is the subject of this review. The greatest variation in eye design is found in the mesopelagic zone, where dim down-welling daylight and bio-luminescent point sources may be visible simultaneously. Some mesopelagic eyes rely on spatial and temporal summation to increase sensitivity to a dim extended scene, while others sacrifice this sensitivity to localise pinpoints of bright bioluminescence. Yet other eyes have retinal regions separately specialised for each type of light. In the bathypelagic zone, eyes generally get smaller and therefore less sensitive to point sources with increasing depth. In fishes, this insensitivity, combined with surprisingly high spatial resolution, is very well adapted to the detection and localisation of point-source bioluminescence at ecologically meaningful distances. At all depths, the eyes of animals active on and over the nutrient-rich sea floor are generally larger than the eyes of pelagic species. In fishes, the retinal ganglion cells are also frequently arranged in a horizontal visual streak, an adaptation for viewing the wide flat horizon of the sea floor, and all animals living there. These and many other aspects of light and vision in the deep sea are reviewed in support of the following conclusion: it is not only the intensity of light at different depths, but also its distribution in space, which has been a major force in the evolution of deep-sea vision.
Independent evaluation of point source fossil fuel CO2 emissions to better than 10%
Turnbull, Jocelyn Christine; Keller, Elizabeth D.; Norris, Margaret W.; Wiltshire, Rachael M.
2016-01-01
Independent estimates of fossil fuel CO2 (CO2ff) emissions are key to ensuring that emission reductions and regulations are effective and provide needed transparency and trust. Point source emissions are a key target because a small number of power plants represent a large portion of total global emissions. Currently, emission rates are known only from self-reported data. Atmospheric observations have the potential to meet the need for independent evaluation, but useful results from this method have been elusive, due to challenges in distinguishing CO2ff emissions from the large and varying CO2 background and in relating atmospheric observations to emission flux rates with high accuracy. Here we use time-integrated observations of the radiocarbon content of CO2 (14CO2) to quantify the recently added CO2ff mole fraction at surface sites surrounding a point source. We demonstrate that both fast-growing plant material (grass) and CO2 collected by absorption into sodium hydroxide solution provide excellent time-integrated records of atmospheric 14CO2. These time-integrated samples allow us to evaluate emissions over a period of days to weeks with only a modest number of measurements. Applying the same time integration in an atmospheric transport model eliminates the need to resolve highly variable short-term turbulence. Together these techniques allow us to independently evaluate point source CO2ff emission rates from atmospheric observations with uncertainties of better than 10%. This uncertainty represents an improvement by a factor of 2 over current bottom-up inventory estimates and previous atmospheric observation estimates and allows reliable independent evaluation of emissions. PMID:27573818
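The recently added fossil fuel CO2 mole fraction is commonly derived from the Δ14C of observed and background air roughly as below; this simplified form drops the small correction terms the authors account for, and all numbers are invented.

```python
# Simplified 14CO2-based estimate of recently added fossil-fuel CO2 (CO2ff).
# Small correction terms (e.g. nuclear-industry and heterotrophic-respiration
# contributions) are ignored; all numbers are invented for illustration.

DELTA_FF = -1000.0   # per mil; fossil carbon is 14C-free

def co2ff(co2_obs_ppm, delta_obs, delta_bg):
    """CO2ff ~ CO2_obs * (delta_bg - delta_obs) / (delta_bg - DELTA_FF)."""
    return co2_obs_ppm * (delta_bg - delta_obs) / (delta_bg - DELTA_FF)

# e.g. a downwind site depleted by ~15 per mil relative to background air
print(f"CO2ff ~ {co2ff(co2_obs_ppm=412.0, delta_obs=5.0, delta_bg=20.0):.2f} ppm")
```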
Efficiency transfer using the GEANT4 code of CERN for HPGe gamma spectrometry.
Chagren, S; Tekaya, M Ben; Reguigui, N; Gharbi, F
2016-01-01
In this work we apply the GEANT4 code of CERN to calculate the peak efficiency in High-Purity Germanium (HPGe) gamma spectrometry using three different procedures. The first is a direct calculation. The second corresponds to the usual case of efficiency transfer between two different configurations at constant emission energy, assuming a reference point detection configuration. The third, a new procedure, consists of transferring the peak efficiency between two detection configurations emitting the gamma ray at different energies, assuming a "virtual" reference point detection configuration. No pre-optimization of the detector geometrical characteristics was performed before the transfer, in order to test the ability of the efficiency transfer to reduce the effect of ignorance of their real magnitude on the quality of the transferred efficiency. The calculated and measured efficiencies were found to be in good agreement for the two investigated methods of efficiency transfer. This agreement shows that the Monte Carlo method, and especially the GEANT4 code, constitutes an efficient tool for obtaining accurate detection efficiency values. The second investigated efficiency transfer procedure is useful for calibrating an HPGe gamma detector at any emission energy for a voluminous source, using the detection efficiency of one point source emitting at a different energy as the reference efficiency. The calculations performed in this work were applied to the measurement exercise of the EUROMET428 project, in which the full-energy peak efficiencies in the energy range 60-2000 keV were evaluated for a typical coaxial p-type HPGe detector and several types of source configuration: point sources located at various distances from the detector and a cylindrical box containing three matrices. Copyright © 2015 Elsevier Ltd. All rights reserved.
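The transfer itself reduces to scaling a measured reference efficiency by the ratio of simulated efficiencies for the target and reference configurations; the GEANT4 efficiency values below are placeholders, not actual simulation output.

```python
# Efficiency transfer in its simplest form: scale a measured reference-point
# efficiency by the ratio of Monte Carlo (e.g. GEANT4) efficiencies computed for
# the target and reference configurations. All numbers are placeholders.

def transfer_efficiency(eff_ref_measured, eff_ref_mc, eff_target_mc):
    return eff_ref_measured * (eff_target_mc / eff_ref_mc)

# Reference: point source at some distance, measured efficiency 1.25e-2.
# Target: cylindrical box matrix at a different energy ("virtual" reference
# procedure: the two configurations may differ in energy as well as geometry).
eff_point_meas = 1.25e-2
eff_point_mc = 1.31e-2      # simulated, reference geometry/energy (placeholder)
eff_box_mc = 7.8e-3         # simulated, target geometry/energy (placeholder)

print(f"transferred efficiency ~ {transfer_efficiency(eff_point_meas, eff_point_mc, eff_box_mc):.3e}")
```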
Sampayan, Stephen E.
2016-11-22
Apparatus, systems, and methods that provide an X-ray interrogation system having a plurality of stationary X-ray point sources arranged to substantially encircle an area or space to be interrogated. A plurality of stationary detectors are arranged to substantially encircle the area or space to be interrogated. A controller is adapted to control the stationary X-ray point sources to emit X-rays one at a time, and to control the stationary detectors to detect the X-rays emitted by the stationary X-ray point sources.
Winkelmann, Tim; Cee, Rainer; Haberer, Thomas; Naas, Bernd; Peters, Andreas; Schreiner, Jochen
2014-02-01
The clinical operation at the Heidelberg Ion Beam Therapy Center (HIT) started in November 2009; since then more than 1600 patients have been treated. In a 24/7 operation scheme, two 14.5 GHz electron cyclotron resonance ion sources are routinely used to produce protons and carbon ions. The modification of the low energy beam transport line and the integration of a third ion source into the therapy facility will be shown. In the last year we implemented a new extraction system at all three sources to enhance the lifetime of extraction parts and reduce preventive and corrective maintenance. The new four-electrode design provides electron suppression as well as lower beam emittance. Unwanted beam sputtering effects, which typically lead to contamination of the insulator ceramics and subsequent high-voltage breakdowns, are minimized by the beam guidance of the new extraction system. By this measure the service interval can be increased significantly. As a side effect, the beam emittance can be reduced, allowing a less challenging working point for the ion sources without reducing the effective beam performance. This paper also gives an outlook on further enhancements at the HIT ion source testbench.
NASA Technical Reports Server (NTRS)
Brown, G. S.; Curry, W. J.
1977-01-01
The statistical error of the pointing angle estimation technique is determined as a function of the effective receiver signal-to-noise ratio. Other sources of error are addressed and evaluated, with inadequate calibration being of major concern. The impact of pointing error on the computation of the normalized surface scattering cross section (sigma) from radar data and on the attitude-induced altitude bias in the waveform is considered, and quantitative results are presented. Pointing angle and sigma processing algorithms are presented along with some initial data. The intensive mode clean vs. clutter AGC calibration problem is analytically resolved. The use of clutter AGC data in the intensive mode is confirmed as the correct calibration set for the sigma computations.
We investigated the efficacy of metabolomics for field-monitoring of fish exposed to waste water treatment plant (WWTP) effluents and non-point sources of chemical contamination. Lab-reared male fathead minnows (Pimephales promelas, FHM) were held in mobile monitoring units and e...
NASA Astrophysics Data System (ADS)
Edwards, Brian E.; Nitkowski, Arthur; Lawrence, Ryan; Horton, Kasey; Higgs, Charles
2004-10-01
Atmospheric turbulence and laser-induced thermal blooming effects can degrade the beam quality of a high-energy laser (HEL) weapon, and ultimately limit the amount of energy deliverable to a target. Lincoln Laboratory has built a thermal blooming laboratory capable of emulating atmospheric thermal blooming and turbulence effects for tactical HEL systems. The HEL weapon emulation hardware includes an adaptive optics beam delivery system, which utilizes a Shack-Hartmann wavefront sensor and a 349-actuator deformable mirror. For this experiment, the laboratory was configured to emulate an engagement scenario consisting of a sea-skimming target approaching directly toward the HEL weapon at a range of 10 km. The weapon utilizes a 1.5 m aperture and radiates at a 1.62 micron wavelength. An adaptive optics reference beam was provided as either a point source located at the target (cooperative) or a projected point source reflected from the target (uncooperative). Performance of the adaptive optics system was then compared between reference sources. Results show that, for operating conditions with a thermal blooming distortion number of 75 and weak turbulence (Rytov variance of 0.02 and D/r0 of 3), cooperative beacon AO correction experiences Phase Compensation Instability, resulting in lower performance than a simple, open-loop condition. The uncooperative beacon resulted in slightly better performance than the open-loop condition.
Propulsion Airframe Aeroacoustic Integration Effects for a Hybrid Wing Body Aircraft Configuration
NASA Technical Reports Server (NTRS)
Czech, Michael J.; Thomas, Russell H.; Elkoby, Ronen
2010-01-01
An extensive experimental investigation was performed to study the propulsion airframe aeroacoustic effects of a high bypass ratio engine for a hybrid wing body aircraft configuration where the engine is installed above the wing. The objective was to provide an understanding of the jet noise shielding effectiveness as a function of engine gas condition and location as well as nozzle configuration. A 4.7% scale nozzle of a bypass ratio seven engine was run at characteristic cycle points under static and forward flight conditions. The effect of the pylon and its orientation on jet noise was also studied as a function of bypass ratio and cycle condition. The addition of a pylon yielded significant spectral changes, lowering jet noise by up to 4 dB at high polar angles and increasing it by 2 to 3 dB at forward angles. In order to assess jet noise shielding, a planform representation of the airframe model, also at 4.7% scale, was traversed relative to the jet nozzle from downstream to several diameters upstream of the wing trailing edge. Installations at two fan diameters upstream of the wing trailing edge provided only limited shielding in the forward arc at high frequencies for both the axisymmetric and a conventional round nozzle with pylon. This was consistent with phased array measurements suggesting that the high frequency sources are predominantly located near the nozzle exit and, consequently, are amenable to shielding. The mid- to low-frequency sources were observed further downstream and shielding was insignificant. Chevrons were designed and used to impact the distribution of sources, with the more aggressive design showing a significant upstream migration of the sources in the mid frequency range. Furthermore, the chevrons reduced the low frequency source levels, and the typical high frequency increase due to the application of chevron nozzles was successfully shielded. The pylon was further modified with a technology that injects air through the shelf of the pylon, which was effective in reducing low frequency noise and moving jet noise sources closer to the nozzle exit. In general, shielding effectiveness varied as a function of cycle condition, with the cutback condition producing higher shielding compared to sideline power. The configuration with a more strongly immersed chevron and a pylon oriented opposite to the microphones produced the largest reduction in jet noise. In addition to the jet noise source, the shielding of a broadband point noise source was documented, with up to 20 dB of noise reduction at directivity angles directly under the shielding surface.
Characterisation of a resolution enhancing image inversion interferometer.
Wicker, Kai; Sindbert, Simon; Heintzmann, Rainer
2009-08-31
Image inversion interferometers have the potential to significantly enhance the lateral resolution and light efficiency of scanning fluorescence microscopes. Self-interference of a point source's coherent point spread function with its inverted copy leads to a reduction in the integrated signal for off-axis sources compared to sources on the inversion axis. This can be used to enhance the resolution in a confocal laser scanning microscope. We present a simple image inversion interferometer relying solely on reflections off planar surfaces. Measurements of the detection point spread function for several types of light sources confirm the predicted performance and suggest its usability for scanning confocal fluorescence microscopy.
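The signal reduction for off-axis sources mentioned above can be reproduced with a one-dimensional toy model: interfere a Gaussian amplitude point spread function with its spatially inverted copy and integrate the detected intensity as the source moves off the inversion axis. This is a generic illustration under that Gaussian-PSF assumption, not a model of the specific interferometer built in the paper:

```python
import numpy as np

def integrated_signal(d, sigma=1.0, n=4001, half_width=20.0):
    """Integrated detector signal of an image inversion interferometer for a
    point source displaced by d from the inversion axis, using a Gaussian
    amplitude PSF h(x). The two arms contribute h(x - d) and its inverted
    copy h(-x - d) = h(x + d); they are summed at the constructive port."""
    x = np.linspace(-half_width, half_width, n)
    h = lambda u: np.exp(-u**2 / (2.0 * sigma**2))
    field = 0.5 * (h(x - d) + h(x + d))
    return np.trapz(np.abs(field) ** 2, x)

on_axis = integrated_signal(0.0)
for d in (0.5, 1.0, 2.0):
    # Ratio drops from 1 toward 0.5 as the source moves off the inversion axis.
    print(d, integrated_signal(d) / on_axis)
```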
Time-frequency approach to underdetermined blind source separation.
Xie, Shengli; Yang, Liu; Yang, Jun-Mei; Zhou, Guoxu; Xiang, Yong
2012-02-01
This paper presents a new time-frequency (TF) underdetermined blind source separation approach based on the Wigner-Ville distribution (WVD) and the Khatri-Rao product to separate N non-stationary sources from M (M < N) mixtures. First, an improved method is proposed for estimating the mixing matrix, where the negative values of the auto WVD of the sources are fully considered. Then, after extracting all the auto-term TF points, the auto WVD value of the sources at every auto-term TF point can be found exactly with the proposed approach, no matter how many active sources there are, as long as N ≤ 2M-1. Further discussion about the extraction of auto-term TF points is made, and finally numerical simulation results are presented to show the superiority of the proposed algorithm by comparing it with existing ones.
NASA Astrophysics Data System (ADS)
Schäfer, M.; Groos, L.; Forbriger, T.; Bohlen, T.
2014-09-01
Full-waveform inversion (FWI) of shallow-seismic surface waves is able to reconstruct lateral variations of subsurface elastic properties. Line-source simulation of point-source data is required when applying algorithms of 2-D adjoint FWI to recorded shallow-seismic field data. The equivalent line-source response for point-source data can be obtained by convolving the waveforms with 1/√t (t: traveltime), which produces a phase shift of π/4; subsequently an amplitude correction must be applied. In this work we recommend scaling the seismograms with √(2 r v_ph) at small receiver offsets r, where v_ph is the phase velocity, and gradually shifting to applying a 1/√t time-domain taper and scaling the waveforms with r√2 for larger receiver offsets r. We call this the hybrid transformation; it is adapted to direct body waves and Rayleigh waves, and we demonstrate its outstanding performance on a 2-D heterogeneous structure. The fit of the phases as well as the amplitudes for all shot locations and components (vertical and radial) is excellent with respect to the reference line-source data. An approach for 1-D media based on the Fourier-Bessel integral transformation generates strong artefacts for waves produced by 2-D structures. The theoretical background for both approaches is presented in a companion contribution. In the current contribution we study their performance when applied to waves propagating in a significantly 2-D-heterogeneous structure. We calculate synthetic seismograms for the 2-D structure for line sources as well as point sources. Line-source simulations obtained from the point-source seismograms through the different approaches are then compared to the corresponding line-source reference waveforms. Although derived by approximation, the hybrid transformation performs excellently except for explicitly back-scattered waves. In reconstruction tests we further invert point-source synthetic seismograms by a 2-D FWI for subsurface structure and evaluate its ability to reproduce the original structural model in comparison to the inversion of line-source synthetic data. Even when applying no explicit correction to the point-source waveforms prior to inversion, only moderate artefacts appear in the results. However, the overall performance is best, in terms of model reproduction and the ability to reproduce the original data in a 3-D simulation, if the inverted waveforms are obtained by the hybrid transformation.
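The recipe stated in the abstract (convolution with 1/√t, then an offset-dependent amplitude correction) can be sketched in a few lines. The linear blending between the near-offset and far-offset corrections, and the bounds r_near/r_far, are assumptions of this sketch; the published hybrid transformation defines the transition differently in detail:

```python
import numpy as np

def point_to_line_source(u, dt, r, v_ph, r_near=5.0, r_far=30.0):
    """Approximate line-source simulation of a point-source seismogram u(t):
    convolve with t^(-1/2) (pi/4 phase shift), then blend the small-offset
    sqrt(2*r*v_ph) scaling into a 1/sqrt(t) taper with r*sqrt(2) scaling at
    large offsets. Blending weights here are illustrative assumptions."""
    t = np.arange(1, len(u) + 1) * dt
    kernel = 1.0 / np.sqrt(t)
    v = np.convolve(u, kernel)[:len(u)] * dt      # convolution with t^(-1/2)
    near = np.sqrt(2.0 * r * v_ph) * v            # small-offset amplitude correction
    far = r * np.sqrt(2.0) * v / np.sqrt(t)       # large-offset taper and scaling
    w = np.clip((r - r_near) / (r_far - r_near), 0.0, 1.0)
    return (1.0 - w) * near + w * far

# Example: a synthetic point-source trace at 20 m offset, 1 ms sampling.
dt = 0.001
tt = np.arange(0, 0.5, dt)
u_point = np.sin(2 * np.pi * 20.0 * tt) * np.exp(-5.0 * tt)
u_line = point_to_line_source(u_point, dt, r=20.0, v_ph=300.0)
```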
First Near-infrared Imaging Polarimetry of Young Stellar Objects in the Circinus Molecular Cloud
NASA Astrophysics Data System (ADS)
Kwon, Jungmi; Nakagawa, Takao; Tamura, Motohide; Hough, James H.; Choi, Minho; Kandori, Ryo; Nagata, Tetsuya; Kang, Miju
2018-02-01
We present the results of near-infrared (NIR) linear imaging polarimetry in the J, H, and Ks bands of the low-mass star cluster-forming region in the Circinus Molecular Cloud Complex. Using aperture polarimetry of point-like sources, positive detections of 314, 421, and 164 sources in the J, H, and Ks bands, respectively, were obtained from among 749 sources whose photometric magnitudes were measured. For the classification of the 133 point-like sources whose polarization could be measured in all three bands, a color-color diagram was used. While most of the NIR polarizations of point-like sources are well aligned and can be explained by dichroic polarization produced by aligned interstellar dust grains in the cloud, 123 highly polarized sources have also been identified using several criteria. The projected direction on the sky of the magnetic field in the Cir-MMS region is indicated by the mean polarization position angle (70°) of the point-like sources in the observed region, corresponding to approximately 1.6 × 1.6 pc². In addition, the magnetic field direction is compared with the outflow orientations associated with Infrared Astronomical Satellite sources, of which two were found to be aligned with the field and one was not. We also show prominent polarization nebulosities over the Cir-MMS region for the first time. Our polarization data have revealed one clear infrared reflection nebula (IRN) and several candidate IRNe in the Cir-MMS field. In addition, the illuminating sources of the IRNe are identified with near- and mid-infrared sources.
Zhang, Mingyuan; Fiol, Guilherme Del; Grout, Randall W.; Jonnalagadda, Siddhartha; Medlin, Richard; Mishra, Rashmi; Weir, Charlene; Liu, Hongfang; Mostafa, Javed; Fiszman, Marcelo
2014-01-01
Online knowledge resources such as Medline can address most clinicians' patient care information needs. Yet, significant barriers, notably lack of time, limit the use of these sources at the point of care. The most common information needs raised by clinicians are treatment-related. Comparative effectiveness studies allow clinicians to consider multiple treatment alternatives for a particular problem. Still, solutions are needed to enable efficient and effective consumption of comparative effectiveness research at the point of care. Objective: Design and assess an algorithm for automatically identifying comparative effectiveness studies and extracting the interventions investigated in these studies. Methods: The algorithm combines semantic natural language processing, Medline citation metadata, and machine learning techniques. We assessed the algorithm in a case study of treatment alternatives for depression. Results: Both precision and recall for identifying comparative studies were 0.83. A total of 86% of the interventions extracted perfectly or partially matched the gold standard. Conclusion: Overall, the algorithm achieved reasonable performance. The method provides building blocks for the automatic summarization of comparative effectiveness research to inform point-of-care decision-making. PMID:23920677
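As a rough illustration of the kind of pipeline described (not the authors' implementation, which also uses semantic predications and citation metadata), a text classifier over citation titles might look like the sketch below; the example citations and the is_comparative labels are hypothetical:

```python
# Minimal sketch of a comparative-study classifier over citation text;
# features and labels are hypothetical stand-ins for a labelled Medline set.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

citations = [
    "Sertraline versus placebo for major depression: a randomized trial.",
    "Prevalence of depression in primary care settings.",
    "Comparative efficacy of CBT and fluoxetine in adolescent depression.",
    "A review of depression screening instruments.",
]
is_comparative = [1, 0, 1, 0]  # hypothetical gold-standard labels

model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),   # lexical features only in this sketch
    LogisticRegression(max_iter=1000),
)
print(cross_val_score(model, citations, is_comparative, cv=2))
```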
Binary Sources and Binary Lenses in Microlensing Surveys of MACHOs
NASA Astrophysics Data System (ADS)
Petrovic, N.; Di Stefano, R.; Perna, R.
2003-12-01
Microlensing is an intriguing phenomenon which may yield information about the nature of dark matter. Early observational searches identified hundreds of microlensing light curves. The data set consisted mainly of point-lens light curves and binary-lens events in which the light curves exhibit caustic crossings. Very few mildly perturbed light curves were observed, although this latter type should constitute the majority of binary-lens light curves. Di Stefano (2001) has suggested that the failure to take binary effects into account may have influenced the estimates of optical depth derived from microlensing surveys. The work we report on here is the first step in a systematic analysis of binary lenses and binary sources and their impact on the results of statistical microlensing surveys. In order to assess the problem, we ran Monte Carlo simulations of various microlensing events involving binary stars (both as the source and as the lens). For each event with peak magnification > 1.34, we sampled the characteristic light curve and recorded the chi-squared value when fitting the curve with a point-lens model; we used this to assess the perturbation rate. We also recorded the parameters of each system, the maximum magnification, the times at which each light curve started and ended, and the number of caustic crossings. We found that both the binarity of sources and the binarity of lenses increased the lensing rate. While the binarity of sources had a negligible effect on the perturbation rates of the light curves, the binarity of lenses had a notable effect. The combination of binary sources and binary lenses produces an observable rate of interesting events exhibiting multiple "repeats", in which the magnification rises above and dips below 1.34 several times. Finally, the binarity of lenses impacted both the durations of the events and the maximum magnifications. This work was supported in part by the SAO intern program (NSF grant AST-9731923) and NASA contracts NAS8-39073 and NAS8-38248 (CXC).
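For reference, the magnification threshold of 1.34 quoted above is the standard point-lens magnification at an impact parameter of one Einstein radius. A minimal sketch of that relation and of the generic point-lens (Paczynski) light curve that the simulated binary events were fitted against; the code is a textbook illustration, not the authors' simulation pipeline:

```python
import numpy as np

def point_lens_magnification(u):
    """Point-source/point-lens magnification vs impact parameter u (in Einstein radii)."""
    return (u**2 + 2) / (u * np.sqrt(u**2 + 4))

# The selection threshold A > 1.34 corresponds to u < 1:
print(point_lens_magnification(1.0))   # ~1.342

def paczynski_curve(t, t0, tE, u0):
    """Single-lens light curve: closest approach u0 at time t0, Einstein time tE."""
    u = np.sqrt(u0**2 + ((t - t0) / tE)**2)
    return point_lens_magnification(u)

print(paczynski_curve(np.array([0.0, 10.0, 40.0]), t0=0.0, tE=20.0, u0=0.3))
```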
Active noise control using a steerable parametric array loudspeaker.
Tanaka, Nobuo; Tanaka, Motoki
2010-06-01
Active noise control arguably enables sound suppression at the designated control points, while the sound pressure away from the targeted locations is likely to increase. The reason is clear: a control source normally radiates sound omnidirectionally. To cope with this problem, this paper introduces a parametric array loudspeaker (PAL), which produces a spatially focused sound beam owing to the ultrasound used as the carrier wave, thereby allowing one to suppress the sound pressure at a designated point without causing spillover in the whole sound field. First the fundamental characteristics of the PAL are overviewed. The scattered pressure in the near field contributed by the source strength of the PAL is then described, which is needed for the design of an active noise control system. Furthermore, the optimal control law for minimizing the sound pressure at the control points is derived, and the control effect is investigated analytically and experimentally. With a view to tracking a moving target point, a steerable PAL based upon a phased array scheme is presented, with the result that the generation of a moving zone of quiet becomes possible without mechanically rotating the PAL. An experiment is finally conducted, demonstrating the validity of the proposed method.
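In the usual least-squares formulation, the optimal control law for minimizing sound pressure at a set of control points chooses secondary source strengths that best cancel the primary pressure through the secondary-path transfer matrix. The frequency-domain sketch below illustrates that generic formulation; it is offered under that assumption, not as the paper's specific derivation for the PAL source strength:

```python
import numpy as np

def optimal_secondary_strengths(Z, p_primary):
    """Least-squares optimal complex source strengths q minimizing
    || p_primary + Z q ||^2 at the control points, at one frequency.

    Z         : (n_points x n_sources) secondary-path transfer matrix.
    p_primary : (n_points,) primary (noise) pressure at the control points.
    """
    q, *_ = np.linalg.lstsq(Z, -p_primary, rcond=None)
    return q

# Toy example: two control points, one steerable PAL channel.
rng = np.random.default_rng(0)
Z = rng.standard_normal((2, 1)) + 1j * rng.standard_normal((2, 1))
p = rng.standard_normal(2) + 1j * rng.standard_normal(2)
q = optimal_secondary_strengths(Z, p)
print(np.linalg.norm(p + Z @ q), "<", np.linalg.norm(p))  # residual is reduced
```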
EFFECT OF SOOT, TEMPERATURE AND RESIDENCE TIME ON PCDD/F FORMATION
Polychlorinated dibenzo-p-dioxins (PCDDs) and polychlorinated dibenzofurans (PCDFs) are formed during thermal processes in the presence of fly ash that acts as an oxidation catalyst as well as a carbon and chloride source. Recent laboratory studies pointed out the importance of s...
Design and Evaluation of Large-Aperture Gallium Fixed-Point Blackbody
NASA Astrophysics Data System (ADS)
Khromchenko, V. B.; Mekhontsev, S. N.; Hanssen, L. M.
2009-02-01
To complement existing water bath blackbodies that now serve as NIST primary standard sources in the temperature range from 15 °C to 75 °C, a gallium fixed-point blackbody has been recently built. The main objectives of the project included creating an extended-area radiation source with a target emissivity of 0.9999 capable of operating either inside a cryo-vacuum chamber or in a standard laboratory environment. A minimum aperture diameter of 45 mm is necessary for the calibration of radiometers with a collimated input geometry or large spot size. This article describes the design and performance evaluation of the gallium fixed-point blackbody, including the calculation and measurements of directional effective emissivity, estimates of uncertainty due to the temperature drop across the interface between the pure metal and radiating surfaces, as well as the radiometrically obtained spatial uniformity of the radiance temperature and the melting plateau stability. Another important test is the measurement of the cavity reflectance, which was achieved by using total integrated scatter measurements at a laser wavelength of 10.6 μm. The result allows one to predict the performance under the low-background conditions of a cryo-chamber. Finally, results of the spectral radiance comparison with the NIST water-bath blackbody are provided. The experimental results are in good agreement with predicted values and demonstrate the potential of our approach. It is anticipated that, after completion of the characterization, a similar source operating at the water triple point will be constructed.
Auger, E.; D'Auria, L.; Martini, M.; Chouet, B.; Dawson, P.
2006-01-01
We present a comprehensive processing tool for the real-time analysis of the source mechanism of very long period (VLP) seismic data based on waveform inversions performed in the frequency domain for a point source. A search for the source providing the best-fitting solution is conducted over a three-dimensional grid of assumed source locations, in which the Green's functions associated with each point source are calculated by finite differences using the reciprocal relation between source and receiver. Tests performed on 62 nodes of a Linux cluster indicate that the waveform inversion and search for the best-fitting signal over 100,000 point sources require roughly 30 s of processing time for a 2-min-long record. The procedure is applied to post-processing of a data archive and to continuous automatic inversion of real-time data at Stromboli, providing insights into different modes of degassing at this volcano. Copyright 2006 by the American Geophysical Union.
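In the frequency domain, the inversion at each trial location reduces to a linear least-squares problem for the source mechanism components, followed by a misfit comparison across the grid of assumed locations. The schematic sketch below shows that loop; array shapes and names are hypothetical, and the real system uses reciprocal Green's functions computed by finite differences:

```python
import numpy as np

def misfit_at_source(G, d):
    """Frequency-domain waveform misfit for one trial point source.

    G : (n_freq, n_traces, n_mech) Green's functions for the trial location.
    d : (n_freq, n_traces) observed VLP spectra.
    Solves d = G m at each frequency and sums the squared residuals.
    """
    misfit = 0.0
    for k in range(G.shape[0]):
        m, *_ = np.linalg.lstsq(G[k], d[k], rcond=None)
        r = d[k] - G[k] @ m
        misfit += np.vdot(r, r).real
    return misfit

def best_source(green_functions, d):
    """Grid search: index of the trial source with the smallest misfit."""
    return int(np.argmin([misfit_at_source(G, d) for G in green_functions]))

# Toy check: 3 trial locations, 16 frequencies, 5 traces, 6 mechanism components.
rng = np.random.default_rng(4)
trial_G = [rng.standard_normal((16, 5, 6)) + 1j * rng.standard_normal((16, 5, 6))
           for _ in range(3)]
d = trial_G[1] @ (rng.standard_normal(6) + 1j * rng.standard_normal(6))
print(best_source(trial_G, d))   # recovers index 1
```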
Birmingham, Wendy C; Holt-Lunstad, Julianne
2018-04-05
There is a rich literature on social support and physical health, but research has focused primarily on the protective effects of social relationships. The stress buffering model asserts that relationships may be protective by being a source of support when coping with stress, thereby blunting health-relevant physiological responses. Research also indicates relationships can be a source of stress, likewise influencing health. In other words, the social buffering influence may have a counterpart: a social aggravating influence with an opposing effect. Drawing upon existing conceptual models, we expand these to delineate how social relationships may influence stress processes and ultimately health. This review summarizes the existing literature that points to the potential deleterious physiological effects of our relationships when they are sources of stress or exacerbate stress. Copyright © 2018 Elsevier B.V. All rights reserved.
Pregger, Thomas; Friedrich, Rainer
2009-02-01
Emission data needed as input for the operation of atmospheric models should be not only spatially and temporally resolved. Another important feature is the effective emission height, which significantly influences modelled concentration values. Unfortunately this information, which is especially relevant for large point sources, is usually not available, and simple assumptions are often used in atmospheric models. As a contribution to improving knowledge of emission heights, this paper provides typical default values for the driving parameters stack height and flue gas temperature, velocity and flow rate for different industrial sources. The results were derived from an analysis of probably the most comprehensive database of real-world stack information existing in Europe, based on German industrial data. A bottom-up calculation of effective emission heights applying equations used for Gaussian dispersion models shows significant differences depending on source and air pollutant, and compared to approaches currently used for atmospheric transport modelling.
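One widely used way to turn the stack parameters listed above into an effective emission height is the Briggs final-rise parameterization embedded in many Gaussian plume models; the paper does not state which equations it applied, so the formulas below are an assumption offered only as an illustration of the bottom-up calculation:

```python
def briggs_effective_height(h_stack, d_stack, v_exit, T_stack, T_air, u_wind, g=9.81):
    """Effective emission height = stack height + buoyant plume rise (Briggs,
    neutral/unstable final rise). Inputs: stack height (m), stack diameter (m),
    flue gas exit velocity (m/s), flue gas and ambient temperatures (K), and
    wind speed at stack height (m/s)."""
    # Buoyancy flux parameter, m^4/s^3
    F = g * v_exit * d_stack**2 * (T_stack - T_air) / (4.0 * T_stack)
    if F < 55.0:
        dh = 21.425 * F**0.75 / u_wind
    else:
        dh = 38.71 * F**0.6 / u_wind
    return h_stack + dh

# Typical large point-source values (illustrative only):
print(briggs_effective_height(h_stack=150.0, d_stack=6.0, v_exit=15.0,
                              T_stack=413.0, T_air=288.0, u_wind=5.0))
```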
NASA Astrophysics Data System (ADS)
Grigg, R. W.
1995-11-01
The effects of natural and anthropogenic stress need to be separated before coral reef ecosystems can be effectively managed. In this paper, a 25 year case history of coral reefs in an urban embayment (Mamala Bay) off Honolulu, Hawaii is described and differences between natural and man-induced stress are distinguished. Mamala Bay is a 30 km long shallow coastal bay bordering the southern (leeward) shore of Oahu and the city of Honolulu in the Hawaiian Islands. During the last 25 years, this area has been hit by two magnitude 5 hurricane events (winds > 240 km/h) generating waves in excess of 7.5 m. Also during this period, two large sewer outfalls have discharged up to 90 million gallons per day (mgd), or 360 × 10^6 L/day, of point source pollution into the bay. Initially the discharge was raw sewage, but since 1977 it has received advanced primary treatment. Non-point source run-off from the Honolulu watershed also enters the bay on a daily basis. The results of the study show that discharge of raw sewage had a serious but highly localized impact on shallow (~10 m) reef corals in the bay prior to 1977. After 1977, when treatment was upgraded to the advanced primary level and the outfalls were extended to deep water (> 65 m), impacts to reef corals were no longer significant. No measurable effects of either point or non-point source pollution on coral calcification, growth, species composition, diversity or community structure can now be detected. Conversely, the effects of hurricane waves in 1982 and 1992 together caused major physical destruction to the reefs. In 1982, average coral cover of well-developed offshore reefs dropped from 60-75% to 5-15%. Only massive species in high-relief areas survived. Today, recovery is occurring, and notwithstanding major future disturbance events, long-term biological processes should eventually return the coral ecosystems to a more mature successional stage. This case history illustrates the complex nature of the cumulative effects of natural and anthropogenic stress on coral reefs and the need for a long-term data base before the status of a coral reef can be properly interpreted.
Interventions to improve water quality for preventing diarrhoea.
Clasen, Thomas F; Alexander, Kelly T; Sinclair, David; Boisson, Sophie; Peletz, Rachel; Chang, Howard H; Majorin, Fiona; Cairncross, Sandy
2015-10-20
Diarrhoea is a major cause of death and disease, especially among young children in low-income countries. In these settings, many infectious agents associated with diarrhoea are spread through water contaminated with faeces. In remote and low-income settings, source-based water quality improvement includes providing protected groundwater (springs, wells, and bore holes), or harvested rainwater as an alternative to surface sources (rivers and lakes). Point-of-use water quality improvement interventions include boiling, chlorination, flocculation, filtration, or solar disinfection, mainly conducted at home. To assess the effectiveness of interventions to improve water quality for preventing diarrhoea. We searched the Cochrane Infectious Diseases Group Specialized Register (11 November 2014), CENTRAL (the Cochrane Library, 7 November 2014), MEDLINE (1966 to 10 November 2014), EMBASE (1974 to 10 November 2014), and LILACS (1982 to 7 November 2014). We also handsearched relevant conference proceedings, contacted researchers and organizations working in the field, and checked references from identified studies through 11 November 2014. Randomized controlled trials (RCTs), quasi-RCTs, and controlled before-and-after studies (CBA) comparing interventions aimed at improving the microbiological quality of drinking water with no intervention in children and adults. Two review authors independently assessed trial quality and extracted data. We used meta-analyses to estimate pooled measures of effect, where appropriate, and investigated potential sources of heterogeneity using subgroup analyses. We assessed the quality of evidence using the GRADE approach. Forty-five cluster-RCTs, two quasi-RCTs, and eight CBA studies, including over 84,000 participants, met the inclusion criteria. Most included studies were conducted in low- or middle-income countries (LMICs) (50 studies) with unimproved water sources (30 studies) and unimproved or unclear sanitation (34 studies). The primary outcome in most studies was self-reported diarrhoea, which is at high risk of bias due to the lack of blinding in over 80% of the included studies. Source-based water quality improvements: There is currently insufficient evidence to know if source-based improvements such as protected wells, communal tap stands, or chlorination/filtration of community sources consistently reduce diarrhoea (one cluster-RCT, five CBA studies, very low quality evidence). We found no studies evaluating reliable piped-in water supplies delivered to households. Point-of-use water quality interventions: On average, distributing water disinfection products for use at the household level may reduce diarrhoea by around one quarter (home chlorination products: RR 0.77, 95% CI 0.65 to 0.91; 14 trials, 30,746 participants, low quality evidence; flocculation and disinfection sachets: RR 0.69, 95% CI 0.58 to 0.82, four trials, 11,788 participants, moderate quality evidence). However, there was substantial heterogeneity in the size of the effect estimates between individual studies. Point-of-use filtration systems probably reduce diarrhoea by around a half (RR 0.48, 95% CI 0.38 to 0.59, 18 trials, 15,582 participants, moderate quality evidence).
Important reductions in diarrhoea episodes were shown with ceramic filters, biosand systems and LifeStraw® filters (ceramic: RR 0.39, 95% CI 0.28 to 0.53; eight trials, 5763 participants, moderate quality evidence; biosand: RR 0.47, 95% CI 0.39 to 0.57; four trials, 5504 participants, moderate quality evidence; LifeStraw®: RR 0.69, 95% CI 0.51 to 0.93; three trials, 3259 participants, low quality evidence). Plumbed-in filters have only been evaluated in high-income settings (RR 0.81, 95% CI 0.71 to 0.94, three trials, 1056 participants, fixed effects model). In low-income settings, solar water disinfection (SODIS) by distribution of plastic bottles with instructions to leave filled bottles in direct sunlight for at least six hours before drinking probably reduces diarrhoea by around a third (RR 0.62, 95% CI 0.42 to 0.94; four trials, 3460 participants, moderate quality evidence). In subgroup analyses, larger effects were seen in trials with higher adherence, and in trials that provided a safe storage container. In most cases, the reduction in diarrhoea shown in the studies was evident in settings with improved and unimproved water sources and sanitation. Interventions that address the microbial contamination of water at the point of use may be important interim measures to improve drinking water quality until homes can be reached with safe, reliable, piped-in water connections. The average estimates of effect for each individual point-of-use intervention generally show important effects. Comparisons between these estimates do not provide evidence of superiority of one intervention over another, as such comparisons are confounded by the study setting, design, and population. Further studies assessing the effects of household connections and chlorination at the point of delivery will help improve our knowledge base. As evidence suggests that effectiveness improves with adherence, studies assessing programmatic approaches to optimising coverage and long-term utilization of these interventions among vulnerable populations could also help to improve health outcomes.
NASA Astrophysics Data System (ADS)
Farries, Mark; Ward, Jon; Valle, Stefano; Stephens, Gary; Moselund, Peter; van der Zanden, Koen; Napier, Bruce
2015-06-01
Mid-IR imaging spectroscopy has the potential to offer an effective tool for early cancer diagnosis. Recent developments in bright supercontinuum sources, narrow-band acousto-optic tunable filters and fast cameras have made feasible a system that can be used for rapid diagnosis of cancer in vivo at the point of care. The performance of a prototype system developed under the Minerva project is described.
Finite-Length Line Source Superposition Model (FLLSSM)
NASA Astrophysics Data System (ADS)
1980-03-01
A linearized thermal conduction model was developed to economically determine media temperatures in geologic repositories for nuclear wastes. Individual canisters containing either high level waste or spent fuel assemblies were represented as finite-length line sources in a continuous medium. The combined effects of multiple canisters in a representative storage pattern were established at selected points of interest by superposition of the temperature rises calculated for each canister. The methodology is outlined and the computer code FLLSSM, which performs the required numerical integrations and superposition operations, is described.
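The superposition idea lends itself to a compact numerical sketch: the temperature rise from one canister is obtained by integrating the continuous point-source conduction solution along its finite length, and contributions from all canisters in the storage pattern are summed at the point of interest. The sketch below (constant source strength, infinite homogeneous medium, hypothetical property values) is a generic illustration of that construction, not the FLLSSM code itself:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import erfc

def finite_line_source_dT(x, y, z, canister, t, k, alpha):
    """Temperature rise at (x, y, z) at time t from one finite-length line
    source of constant strength, by integrating the continuous point-source
    solution dT = q' dz / (4 pi k R) * erfc(R / (2 sqrt(alpha t))).

    canister : dict with keys x0, y0, z_top, z_bot (m) and q_per_m (W/m).
    k        : thermal conductivity (W/m/K); alpha : diffusivity (m^2/s).
    """
    x0, y0 = canister["x0"], canister["y0"]
    def integrand(zs):
        R = np.sqrt((x - x0)**2 + (y - y0)**2 + (z - zs)**2)
        return erfc(R / (2.0 * np.sqrt(alpha * t))) / (4.0 * np.pi * k * R)
    val, _ = quad(integrand, canister["z_bot"], canister["z_top"])
    return canister["q_per_m"] * val

def superposed_dT(point, canisters, t, k=2.5, alpha=1.1e-6):
    """Superpose the temperature rises of all canisters at one point of interest."""
    return sum(finite_line_source_dT(*point, c, t, k, alpha) for c in canisters)

# Two canisters on a 10 m pitch, evaluated midway between them after ~10 years.
canisters = [dict(x0=0.0, y0=0.0, z_top=-400.0, z_bot=-404.0, q_per_m=100.0),
             dict(x0=10.0, y0=0.0, z_top=-400.0, z_bot=-404.0, q_per_m=100.0)]
print(superposed_dT((5.0, 0.0, -402.0), canisters, t=10 * 3.15e7))
```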
Sampling Singular and Aggregate Point Sources of Carbon Dioxide from Space Using OCO-2
NASA Astrophysics Data System (ADS)
Schwandner, F. M.; Gunson, M. R.; Eldering, A.; Miller, C. E.; Nguyen, H.; Osterman, G. B.; Taylor, T.; O'Dell, C.; Carn, S. A.; Kahn, B. H.; Verhulst, K. R.; Crisp, D.; Pieri, D. C.; Linick, J.; Yuen, K.; Sanchez, R. M.; Ashok, M.
2016-12-01
Anthropogenic carbon dioxide (CO2) sources increasingly tip the balance between natural carbon sources and sinks. Space-borne measurements offer opportunities to detect and analyze point source emission signals anywhere on Earth. Singular continuous point source plumes from power plants or volcanoes turbulently mix into their proximal background fields. In contrast, plumes of aggregate point sources such as cities, and transportation or fossil fuel distribution networks, mix into each other and may therefore result in broader and more persistent excess signals of total column averaged CO2 (XCO2). NASA's first satellite dedicated to atmospheric CO2 observation, the Orbiting Carbon Observatory-2 (OCO-2), launched in July 2014 and now leads the afternoon constellation of satellites (A-Train). While continuously collecting measurements in eight footprints across a narrow (< 10 km wide) swath, it occasionally cross-cuts coincident emission plumes. For singular point sources like volcanoes and coal-fired power plants, we have developed OCO-2 data discovery tools and a proxy detection method for plumes using SO2-sensitive TIR imaging data (ASTER). This approach offers a path toward automating plume detections with subsequent matching and mining of OCO-2 data. We found several distinct singular source CO2 signals. For aggregate point sources, we investigated whether OCO-2's multi-sounding swath observing geometry can reveal intra-urban spatial emission structures in the observed variability of XCO2 data. OCO-2 data demonstrate that we can detect localized excess XCO2 signals of 2 to 6 ppm against suburban and rural backgrounds. Compared to single-shot GOSAT soundings, which detected urban/rural XCO2 differences in megacities (Kort et al., 2012), the OCO-2 swath geometry opens up the path to future capabilities enabling urban characterization of greenhouse gases using hundreds of soundings over a city at each satellite overpass. California Institute of Technology
Wang, Zhu-lou; Zhang, Wan-jie; Li, Chen-xi; Chen, Wen-liang; Xu, Ke-xin
2015-02-01
There are several challenges in near-infrared non-invasive blood glucose measurement, such as the low signal-to-noise ratio of the instrument, unstable measurement conditions, and the unpredictable, irregular changes of the measured object. It is therefore difficult to extract information on blood glucose concentration accurately from such complicated signals. A reference measurement is usually considered as a way to eliminate the effect of background changes, but there is no reference substance that changes synchronously with the analyte. After many years of research, our group has proposed the floating reference method, which succeeded in eliminating the spectral effects induced by instrument drift and by variations in the measured object's background. Our studies indicate, however, that the reference point changes with measurement location and wavelength, so the effectiveness of the floating reference method should be verified comprehensively. In this paper, for simplicity, Monte Carlo simulations employing Intralipid solutions with concentrations of 5% and 10% are performed to verify the ability of the floating reference method to eliminate the consequences of light source drift, which is introduced by varying the number of incident photons. The effectiveness of the floating reference method, with the corresponding reference points at different wavelengths, in eliminating variations due to light source drift is estimated. A comparison of the prediction abilities of calibration models with and without the method shows that the RMSEPs are decreased by about 98.57% (5% Intralipid) and 99.36% (10% Intralipid). The results indicate that the floating reference method is clearly effective in eliminating background changes.
NASA Astrophysics Data System (ADS)
Martin, E. R.; Dou, S.; Lindsey, N.; Chang, J. P.; Biondi, B. C.; Ajo Franklin, J. B.; Wagner, A. M.; Bjella, K.; Daley, T. M.; Freifeld, B. M.; Robertson, M.; Ulrich, C.; Williams, E. F.
2016-12-01
Localized strong sources of noise in an array have been shown to cause artifacts in Green's function estimates obtained via cross-correlation. Their effect is often reduced through the use of cross-coherence. Beyond independent localized sources, temporally or spatially correlated sources of noise frequently occur in practice but violate basic assumptions of much of the theory behind ambient noise Green's function retrieval. These correlated noise sources can occur in urban environments due to transportation infrastructure, or in areas around industrial operations like pumps running at CO2 sequestration sites or oil and gas drilling sites. Better understanding of these artifacts should help us develop and justify methods for their automatic removal from Green's function estimates. We derive expected artifacts in cross-correlations from several distributions of correlated noise sources including point sources that are exact time-lagged repeats of each other and Gaussian-distributed in space and time with covariance that exponentially decays. Assuming the noise distribution stays stationary over time, the artifacts become more coherent as more ambient noise is included in the Green's function estimates. We support our results with simple computational models. We observed these artifacts in Green's function estimates from a 2015 ambient noise study in Fairbanks, AK where a trenched distributed acoustic sensing (DAS) array was deployed to collect ambient noise alongside a road with the goal of developing a permafrost thaw monitoring system. We found that joints in the road repeatedly being hit by cars travelling at roughly the speed limit led to artifacts similar to those expected when several points are time-lagged copies of each other. We also show test results of attenuating the effects of these sources during time-lapse monitoring of an active thaw test in the same location with noise detected by a 2D trenched DAS array.
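The distinction drawn above between cross-correlation and cross-coherence is easy to state in the frequency domain: coherence divides out the amplitude spectra of the two records, which suppresses the imprint of strong localized or mutually correlated noise sources. A minimal sketch of the two estimators for a single noise window, with a toy "repeating road-joint" source; this is a generic illustration, not the study's processing chain:

```python
import numpy as np

def crosscorrelation(u_a, u_b):
    """Frequency-domain cross-correlation of two equal-length noise records."""
    Ua, Ub = np.fft.rfft(u_a), np.fft.rfft(u_b)
    return np.fft.irfft(Ua * np.conj(Ub), n=len(u_a))

def crosscoherence(u_a, u_b, eps=1e-8):
    """Cross-coherence: spectrally normalized cross-correlation, which reduces
    artifacts caused by strong or correlated noise sources."""
    Ua, Ub = np.fft.rfft(u_a), np.fft.rfft(u_b)
    C = Ua * np.conj(Ub) / (np.abs(Ua) * np.abs(Ub) + eps)
    return np.fft.irfft(C, n=len(u_a))

# Toy example: a common random wavefield plus a strong repeating impulsive
# source (cars hitting a road joint) seen by both channels with fixed lags.
rng = np.random.default_rng(1)
n = 4096
ambient = rng.standard_normal(n)
spike = np.zeros(n)
spike[::512] = 50.0
u_a = ambient + spike
u_b = np.roll(ambient, 20) + np.roll(spike, 35)
cc, coh = crosscorrelation(u_a, u_b), crosscoherence(u_a, u_b)
```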
Triangulation in aetiological epidemiology.
Lawlor, Debbie A; Tilling, Kate; Davey Smith, George
2016-12-01
Triangulation is the practice of obtaining more reliable answers to research questions through integrating results from several different approaches, where each approach has different key sources of potential bias that are unrelated to each other. With respect to causal questions in aetiological epidemiology, if the results of different approaches all point to the same conclusion, this strengthens confidence in the finding. This is particularly the case when the key sources of bias of some of the approaches would predict that findings would point in opposite directions if they were due to such biases. Where there are inconsistencies, understanding the key sources of bias of each approach can help to identify what further research is required to address the causal question. The aim of this paper is to illustrate how triangulation might be used to improve causal inference in aetiological epidemiology. We propose a minimum set of criteria for use in triangulation in aetiological epidemiology, summarize the key sources of bias of several approaches and describe how these might be integrated within a triangulation framework. We emphasize the importance of being explicit about the expected direction of bias within each approach, whenever this is possible, and seeking to identify approaches that would be expected to bias the true causal effect in different directions. We also note the importance, when comparing results, of taking account of differences in the duration and timing of exposures. We provide three examples to illustrate these points. © The Author 2017. Published by Oxford University Press on behalf of the International Epidemiological Association.
Point to point multispectral light projection applied to cultural heritage
NASA Astrophysics Data System (ADS)
Vázquez, D.; Alvarez, A.; Canabal, H.; Garcia, A.; Mayorga, S.; Muro, C.; Galan, T.
2017-09-01
The use of new light sources based on LED technology should allow the development of systems that combine conservation and exhibition requirements and make these works of art available to future generations in accordance with sustainability principles. The goal of this work is to develop lighting systems and sources with a spectral distribution optimized for each specific point of the art piece. This optimization implies maximizing colour fidelity while minimizing photochemical damage. The colour perceived under these sources will be similar (metameric) to the technical requirements given by the restoration team in charge of the conservation and exhibition of the works of art. Depending on the fragility of the exposed art objects (i.e. the spectral responsivity of the material), the irradiance must be kept below a critical level. It is therefore necessary to develop a mathematical model that simulates with sufficient accuracy both the visual effect of the illumination and the photochemical impact of the radiation. (Figure: spectral reflectance of a reference painting.) The mathematical model is based on a merit function that optimizes the individual intensities of the LED light sources, taking into account the damage function of the material and the colour-space coordinates; moreover, the algorithm uses weights for damage and colour fidelity in order to adapt the model to a specific museum application. In this work we show an example of this technology applied to a picture by Sorolla (1863-1923), an important Spanish painter, titled "woman walking at the beach".
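A merit-function optimization of the kind sketched in the abstract can be illustrated as a small weighted problem: choose non-negative LED channel intensities so that the resulting spectrum approaches a colour-matched target while penalizing power in spectral bands where the material's damage function is high. Everything below (channel spectra, target, damage weights, weighting scheme) is hypothetical, not the authors' model:

```python
import numpy as np
from scipy.optimize import minimize

wl = np.arange(380, 781, 5, dtype=float)               # wavelength grid, nm
n_led = 8
# Hypothetical LED channel spectra: Gaussian bands across the visible range.
peaks = np.linspace(400, 680, n_led)
S = np.exp(-0.5 * ((wl[:, None] - peaks[None, :]) / 15.0) ** 2)
target = np.exp(-0.5 * ((wl - 560.0) / 80.0) ** 2)      # spectrum to be matched
damage = np.exp(-(wl - 380.0) / 80.0)                   # higher weight at short wavelengths

def merit(w, w_color=1.0, w_damage=0.5):
    spectrum = S @ w
    color_term = np.sum((spectrum - target) ** 2)       # stand-in for a colour-fidelity metric
    damage_term = np.sum(damage * spectrum)             # stand-in for photochemical damage
    return w_color * color_term + w_damage * damage_term

res = minimize(merit, x0=np.full(n_led, 0.1),
               bounds=[(0.0, None)] * n_led, method="L-BFGS-B")
print(np.round(res.x, 3))   # optimized channel intensities
```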
Point kernel calculations of skyshine exposure rates
DOE Office of Scientific and Technical Information (OSTI.GOV)
Roseberry, M.L.; Shultis, J.K.
1982-02-01
A simple point kernel model is presented for the calculation of skyshine exposure rates arising from the atmospheric reflection of gamma radiation produced by a vertically collimated or a shielded point source. This model is shown to be in good agreement with benchmark experimental data from a 60Co source for distances out to 700 m.
A deeper look at the X-ray point source population of NGC 4472
NASA Astrophysics Data System (ADS)
Joseph, T. D.; Maccarone, T. J.; Kraft, R. P.; Sivakoff, G. R.
2017-10-01
In this paper we discuss the X-ray point source population of NGC 4472, an elliptical galaxy in the Virgo cluster. We used recent deep Chandra data combined with archival Chandra data to obtain a 380 ks exposure time. We find 238 X-ray point sources within 3.7 arcmin of the galaxy centre, with a completeness flux F_X,0.5-2 keV = 6.3 × 10^-16 erg s^-1 cm^-2. Most of these sources are expected to be low-mass X-ray binaries. We find that, using data from a single galaxy which is both complete and has a large number of objects (~100) below 10^38 erg s^-1, the X-ray luminosity function is well fitted with a single power-law model. By cross-matching our X-ray data with both space-based and ground-based optical data for NGC 4472, we find that 80 of the 238 sources are in globular clusters. We compare the red and blue globular cluster subpopulations and find that red clusters are nearly six times more likely to host an X-ray source than blue clusters. We show that there is evidence that these two subpopulations have significantly different X-ray luminosity distributions. Source catalogues for all X-ray point sources, as well as any corresponding optical data for globular cluster sources, are also presented here.
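Fitting a single power law to a luminosity function above a completeness limit has a compact closed form under the usual Pareto assumption; the sketch below shows the standard maximum-likelihood slope estimator as a generic illustration, not the fitting procedure used in the paper:

```python
import numpy as np

def powerlaw_slope_mle(L, L_min):
    """Maximum-likelihood estimate of the differential power-law slope alpha
    (dN/dL ~ L^-alpha) for luminosities above a completeness limit L_min."""
    L = np.asarray(L, dtype=float)
    L = L[L >= L_min]
    return 1.0 + L.size / np.sum(np.log(L / L_min))

# Toy example: draw from a known power law and recover the slope.
rng = np.random.default_rng(2)
alpha_true, L_min = 1.8, 1e37            # erg/s, illustrative values only
L = L_min * (1.0 - rng.random(238)) ** (-1.0 / (alpha_true - 1.0))
print(powerlaw_slope_mle(L, L_min))       # should come out close to 1.8
```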
Impacts of drought on the quality of surface water of the basin
NASA Astrophysics Data System (ADS)
Huang, B. B.; Yan, D. H.; Wang, H.; Cheng, B. F.; Cui, X. H.
2013-11-01
Against the background of climate change and human activities, both the frequency of droughts and the range of their impacts have been increasing. Droughts may give rise to a series of resource, environmental and ecological effects, such as water shortage, water quality deterioration and a decrease in the diversity of aquatic organisms. This paper first identifies the mechanisms by which drought affects the surface water quality of a basin, and then systematically studies the generation, transfer, transformation and degradation of pollutants during drought, finding that the alternation between drought and flood is the critical period during which surface water quality is affected. Secondly, through indoor orthogonal experiments taking drought degree, rainfall intensity and rainfall duration as the main factors, and designing various scenarios, the study examines the effects of these factors on nitrogen loss in soil as well as on non-point source pollution loss and the leaching rate of nitrogen under different drought-flood alternation scenarios. It concludes that these factors are positively correlated with non-point source pollution loss, and that under drought-flood alternation the loss of ammonium and nitrate nitrogen in soil is exacerbated, which clarifies the transfer and transformation mechanisms of non-point source pollution at a micro level. Finally, using data from the Nenjiang River basin, the paper assesses the impacts of drought on surface water quality at a macro level.
Real-time determination of the worst tsunami scenario based on Earthquake Early Warning
NASA Astrophysics Data System (ADS)
Furuya, Takashi; Koshimura, Shunichi; Hino, Ryota; Ohta, Yusaku; Inoue, Takuya
2016-04-01
In recent years, real-time tsunami inundation forecasting has been developed, building on advances in dense seismic monitoring, GPS Earth observation, offshore tsunami observation networks, and high-performance computing infrastructure (Koshimura et al., 2014). Several uncertainties are involved in tsunami inundation modeling, and the tsunami generation model is believed to be one of the largest sources of uncertainty. An uncertain tsunami source model risks underestimating tsunami height, the extent of the inundation zone, and damage. Tsunami source inversion using observed seismic, geodetic and tsunami data is the most effective way to avoid underestimation, but acquiring the observations takes time, which makes it difficult to complete real-time tsunami inundation forecasting early enough. Rather than waiting for precise tsunami observations, we aim, from a disaster management point of view, to determine the worst tsunami source scenario for use in real-time tsunami inundation forecasting and mapping, using the seismic information of the Earthquake Early Warning (EEW) that is available immediately after the event is triggered. After an earthquake occurs, JMA's EEW estimates its magnitude and hypocenter. With the constraints of earthquake magnitude, hypocenter and a scaling law, we determine possible tsunami source scenarios and search for the worst one by superposition of pre-computed tsunami Green's functions, i.e. time series of tsunami height at offshore points corresponding to 2-dimensional Gaussian unit sources (e.g. Tsushima et al., 2014). The scenario analysis of our method consists of two steps. (1) Narrowing the range of the worst scenario by calculating 90 scenarios with various strike and fault positions; from the maximum tsunami heights of these 90 scenarios, we determine a narrower strike range that causes high tsunami heights in the area of concern. (2) Calculating 900 scenarios with different strike, dip, length, width, depth and fault position, with strike limited to the range obtained from the 90-scenario calculation. From these 900 scenarios, we determine the worst tsunami scenarios from a disaster management point of view, such as the one with the shortest travel time and the one with the highest water level. The method was applied to a hypothetical earthquake and verified as to whether it can effectively find the worst tsunami source scenario in real time, to be used as an input for real-time tsunami inundation forecasting.
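Because the superposition of pre-computed Green's functions is linear, each candidate scenario's offshore waveform is just a weighted sum of unit-source responses, and the worst case can be selected by a simple score. The sketch below is schematic: all array names and the peak-water-level scoring rule are hypothetical stand-ins for the actual scenario ranking:

```python
import numpy as np

def scenario_waveforms(G, weights):
    """Offshore tsunami waveforms for one scenario by linear superposition.

    G       : (n_unit_sources, n_stations, n_time) pre-computed Green's
              functions of 2-D Gaussian unit sources.
    weights : (n_unit_sources,) initial sea-surface displacement that the
              candidate fault scenario assigns to each unit source.
    """
    return np.tensordot(weights, G, axes=1)          # -> (n_stations, n_time)

def worst_scenario(G, scenario_weights):
    """Index of the scenario with the highest peak offshore water level."""
    peaks = [np.max(scenario_waveforms(G, w)) for w in scenario_weights]
    return int(np.argmax(peaks))

# Toy numbers: 5 unit sources, 3 offshore stations, 120 time samples, 900 scenarios.
rng = np.random.default_rng(3)
G = rng.standard_normal((5, 3, 120))
scenarios = rng.random((900, 5))
print(worst_scenario(G, scenarios))
```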
Qiao, Mu; Liu, Honglin; Pang, Guanghui; Han, Shensheng
2017-08-29
Manipulating light non-invasively through inhomogeneous media is an attractive goal in many disciplines. Wavefront shaping and optical phase conjugation can focus light to a point, and the transmission matrix method can control light on multiple output modes simultaneously. Here we report a non-invasive approach which enables three-dimensional (3D) light control between two turbid layers. A digital optical phase conjugation mirror measured and conjugated the diffused wavefront that originated from a quasi-point source on the front turbid layer and passed through the back turbid layer. Then, because of the memory effect, the phase-conjugated wavefront could be used as a carrier wave to transport a pre-calculated wavefront through the back turbid layer. The pre-calculated wavefront could project a desired 3D light field inside the sample, which, in our experiments, consisted of two 220-grit ground glass plates spaced 20 mm apart. The controllable range of light, according to the memory effect, was calculated to be 80 mrad in solid angle and 16 mm on the z-axis. Owing to this 3D light control ability, our approach may find applications in photodynamic therapy and optogenetics. It can also be combined with ghost imaging or compressed sensing to achieve 3D imaging between turbid layers.
Water quality at points-of-use in the Galapagos Islands.
Gerhard, William A; Choi, Wan Suk; Houck, Kelly M; Stewart, Jill R
2017-04-01
Piped drinking water is often considered a gold standard for protecting public health but research is needed to explicitly evaluate the effect of centralized treatment systems on water quality in developing world settings. This study examined the effect of a new drinking water treatment plant (DWTP) on microbial drinking water quality at the point-of-use on San Cristobal Island, Galapagos using fecal indicator bacteria total coliforms and Escherichia coli. Samples were collected during six collection periods before and after operation of the DWTP began from the freshwater sources (n=4), the finished water (n=6), and 50 sites throughout the distribution system (n=287). This study found that there was a significant decrease in contamination by total coliforms (two orders of magnitude) and E. coli (one order of magnitude) after DWTP operation began (p<0.001). However, during at least one post-construction collection cycle, total coliforms and E. coli were still found at 66% and 28% of points-of-use (n=50), respectively. During the final collection period, conventional methods were augmented with human-specific Bacteroides assays - validated herein - with the goal of elucidating possible microbial contamination sources. Results show that E. coli contamination was not predictive of contamination by human wastes and suggests that observed indicator bacteria contamination may have environmental origins. Together these findings highlight the necessity of a holistic approach to drinking water infrastructure improvements in order to deliver high quality water through to the point-of-use. Copyright © 2017 Elsevier GmbH. All rights reserved.
Selbig, William R.; Bannerman, Roger T.
2011-01-01
The U.S. Geological Survey, in cooperation with the Wisconsin Department of Natural Resources (WDNR) and in collaboration with the Root River Municipal Stormwater Permit Group, monitored eight urban source areas representing six types of source areas in or near Madison, Wis., in an effort to improve characterization of particle-size distributions in urban stormwater by use of fixed-point sample collection methods. The types of source areas were parking lot, feeder street, collector street, arterial street, rooftop, and mixed use. This information can then be used by environmental managers and engineers when selecting the most appropriate control devices for the removal of solids from urban stormwater. Mixed-use and parking-lot study areas had the lowest median particle sizes (42 and 54 µm, respectively), followed by the collector street study area (70 µm). Both the arterial street and institutional roof study areas had similar median particle sizes of approximately 95 µm. Finally, the feeder street study area showed the largest median particle size of nearly 200 µm. Median particle sizes measured as part of this study were somewhat comparable to those reported in previous studies from similar source areas. The majority of particle mass in four out of six source areas was silt and clay particles that are less than 32 µm in size. Distributions of particles ranging up to 500 µm were highly variable both within and between source areas. Results of this study suggest substantial variability in data can inhibit the development of a single particle-size distribution that is representative of stormwater runoff generated from a single source area or land use. Continued development of improved sample collection methods, such as the depth-integrated sample arm, may reduce variability in particle-size distributions by mitigating the effect of sediment bias inherent with a fixed-point sampler.
Experimental and Analytical Studies of Shielding Concepts for Point Sources and Jet Noises.
NASA Astrophysics Data System (ADS)
Wong, Raymond Lee Man
This analytical and experimental study explores concepts for jet noise shielding. Model experiments centre on solid planar shields, simulating engine-over-wing installations, and 'sugar scoop' shields. The tradeoff on effective shielding length is set by interference 'edge noise' as the shield trailing edge approaches the spreading jet. Edge noise is minimized by (i) hyperbolic cutouts, which trim off the portions of most intense interference between the jet flow and the barrier, and (ii) hybrid shields, a thermal refractive extension (a flame); for (ii) the tradeoff is combustion noise. In general, shielding attenuation increases steadily with frequency, following low-frequency enhancement by edge noise. Although broadband attenuation is typically only several dB, the reduction of the subjectively weighted perceived noise levels is higher. In addition, calculated ground contours of peak PNdB show a substantial contraction due to shielding: this reaches 66% for one of the 'sugar scoop' shields for the 90 PNdB contour. The experiments are complemented by analytical predictions, divided into an engineering scheme for jet noise shielding and a more rigorous analysis for point source shielding. The former approach combines point source shielding with a suitable jet source distribution. The results are synthesized into a predictive algorithm for jet noise shielding: the jet is modelled as a line distribution of incoherent sources whose narrow-band frequency varies inversely with axial distance. The predictive version agrees well with experiment (1 to 1.5 dB) up to moderate frequencies. The insertion loss deduced from the point source measurements for semi-infinite as well as finite rectangular shields agrees rather well with theoretical calculations based on the exact half-plane solution and the superposition of asymptotic closed-form solutions. An approximate theory, the Maggi-Rubinowicz line integral, is found to yield reasonable predictions for thin barriers, including cutouts, if a certain correction is applied. The more exact integral equation approach (solved numerically) is applied to a more demanding geometry: a half-round sugar scoop shield. The solutions of the integral equation derived from the Helmholtz formula in normal-derivative form show satisfactory agreement with measurements.
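For a feel of the point-source barrier insertion loss discussed above, a widely used shortcut is Maekawa's empirical curve driven by the Fresnel number of the diffracted path. The sketch below uses that standard approximation as a stand-in; it is not the thesis's Maggi-Rubinowicz or integral-equation method, and the geometry is hypothetical:

```python
import numpy as np

def fresnel_number(src, edge, rcv, wavelength):
    """Fresnel number N = 2*(path over the edge - direct path)/wavelength for a
    point source diffracted over a thin barrier edge."""
    src, edge, rcv = map(np.asarray, (src, edge, rcv))
    detour = np.linalg.norm(edge - src) + np.linalg.norm(rcv - edge)
    direct = np.linalg.norm(rcv - src)
    return 2.0 * (detour - direct) / wavelength

def maekawa_insertion_loss(N):
    """Maekawa's empirical thin-barrier insertion loss in dB (shadow zone, N > 0)."""
    return 10.0 * np.log10(3.0 + 20.0 * N)

# Source 1 m behind a 2 m high edge, receiver 5 m beyond it, 1 kHz tone.
N = fresnel_number(src=(-1.0, 0.0, 0.5), edge=(0.0, 0.0, 2.0),
                   rcv=(5.0, 0.0, 0.5), wavelength=0.34)
print(maekawa_insertion_loss(N))   # roughly 20 dB for this geometry
```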
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, Y; Cai, J; Meltsner, S
2016-06-15
Purpose: The Varian tandem and ring applicators are used to deliver HDR Ir-192 brachytherapy for cervical cancer. The source path within the ring is hard to predict due to the larger interior ring lumen. Some studies showed the source could be several millimeters from the planned positions, while other studies demonstrated minimal dosimetric impact. A global shift can be applied to limit the effect of positioning offsets. The purpose of this study was to assess the necessity of implementing a global source shift using Monte Carlo (MC) simulations. Methods: The MCNP5 radiation transport code was used for all MC simulations. To accommodate TG-186 guidelines and eliminate inter-source attenuation, a BrachyVision plan with 10 dwell positions (0.5 cm step size) was simulated as the summation of 10 individual sources with equal dwell times for simplification. To simplify the study, the tandem was also excluded from the MC model. Global shifts of ±0.1, ±0.3, ±0.5 cm were then simulated as distal and proximal from the reference positions. Dose was scored in water for all MC simulations and was normalized to 100% at the normalization point 0.5 cm from the cap in the ring plane. For dose comparison, Point A was 2 cm caudal from the buildup cap and 2 cm lateral on either side of the ring axis. With seventy simulations, 10⁸ photon histories gave statistical uncertainties (k=1) of <2% for (0.1 cm)³ voxels. Results: Compared to no global shift, average Point A doses were 0.0%, 0.4%, and 2.2% higher for distal global shifts, and 0.4%, 2.8%, and 5.1% higher for proximal global shifts, respectively. The MC Point A doses differed by <1% when compared to BrachyVision. Conclusion: Dose variations were not substantial for ±0.3 cm global shifts, which are common in clinical practice.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chofor, N; Poppe, B; Nebah, F
Purpose: In a brachytherapy photon field in water, the fluence-averaged mean photon energy Em at the point of measurement correlates with the radiation quality correction factor kQ of a non-water-equivalent detector. To support the experimental assessment of Em, we show that the normalized signal ratio NSR of a pair of radiation detectors, an unshielded silicon diode and a diamond detector, can serve to measure the quantity Em in a water phantom at an Ir-192 unit. Methods: Photon fluence spectra were computed in EGSnrc based on a detailed model of the GammaMed source. Factor kQ was calculated as the ratio of the detector's spectrum-weighted responses under calibration conditions at a 60Co unit and under brachytherapy conditions at various radial distances from the source. The NSR was investigated for a pair consisting of a p-type unshielded silicon diode (type 60012) and a synthetic single-crystal diamond detector (type 60019), both PTW Freiburg. Each detector was positioned according to its effective point of measurement, with its axis facing the source. Lateral signal profiles were scanned under complete scatter conditions, and the NSR was determined as the quotient of the signal ratio at the application position x and that at the reference position r_ref = 1 cm. Results: The radiation quality correction factor kQ shows a close correlation with the mean photon energy Em. The NSR of the diode/diamond pair changes by a factor of two over 0–18 cm from the source, while Em drops from 350 to 150 keV. Theoretical and measured NSR profiles agree to within ±2% for points within 5 cm from the source. Conclusion: Given the close correlation between the radiation quality correction factor kQ and the mean photon energy Em, the NSR provides a practical means of assessing Em under clinical conditions. Precise detector positioning is the major challenge.
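The normalized signal ratio defined above is just the diode-to-diamond signal ratio at the measurement position divided by the same ratio at the reference position r_ref = 1 cm. A minimal sketch with made-up signal values:

```python
def normalized_signal_ratio(s_diode_x, s_diamond_x, s_diode_ref, s_diamond_ref):
    """NSR = (diode/diamond signal ratio at position x) /
             (diode/diamond signal ratio at the reference position r_ref)."""
    return (s_diode_x / s_diamond_x) / (s_diode_ref / s_diamond_ref)

# Illustrative numbers only, not measured detector signals
print(normalized_signal_ratio(2.4, 1.1, 3.0, 1.5))   # ~1.09
```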
Mapping the spatio-temporal risk of lead exposure in apex species for more effective mitigation
Mateo-Tomás, Patricia; Olea, Pedro P.; Jiménez-Moreno, María; Camarero, Pablo R.; Sánchez-Barbudo, Inés S.; Rodríguez Martín-Doimeadios, Rosa C.; Mateo, Rafael
2016-01-01
Effective mitigation of the risks posed by environmental contaminants for ecosystem integrity and human health requires knowing their sources and spatio-temporal distribution. We analysed the exposure to lead (Pb) in the griffon vulture Gyps fulvus, an apex species valuable as a biomonitoring sentinel. We determined vultures' lead exposure and its main sources by combining isotope signatures and modelling analyses of 691 bird blood samples collected over 5 years. We made yearlong spatially explicit predictions of the species' risk of lead exposure. Our results highlight elevated lead exposure of griffon vultures (i.e. 44.9% of the studied population, approximately 15% of the European population, showed blood lead levels greater than 200 ng ml⁻¹) partly owing to environmental lead (e.g. geological sources). These exposures to environmental lead from geological sources increased in those vultures exposed to point sources (e.g. lead-based ammunition). These spatial models and pollutant risk maps are powerful tools that identify areas of wildlife exposure to potentially harmful sources of lead that could affect ecosystem and human health. PMID:27466455
Rio Grande Valley, Colorado, New Mexico, and Texas
Ellis, Sherman R.; Levings, Gary W.; Carter, Lisa F.; Richey, Steven F.; Radell, Mary Jo
1993-01-01
Two structural settings are found in the study unit: alluvial basins and bedrock basins. The alluvial basins can have through-flowing surface water or be closed basins. The discussion of streamflow and water quality for the surface-water system is based on four river reaches for the 750 miles of the main stem. The quality of the ground water is affected by both natural processes and human activities and by nonpoint and point sources. Nonpoint sources for surface water include agriculture, hydromodification, and mining operations; point sources are mainly discharge from wastewater treatment plants. Nonpoint sources for ground water include agriculture and septic tanks and cesspools; point sources include leaking underground storage tanks, unlined or manure-lined holding ponds used for disposal of dairy wastes, landfills, and mining operations.
Spherical-Earth Gravity and Magnetic Anomaly Modeling by Gauss-Legendre Quadrature Integration
NASA Technical Reports Server (NTRS)
Vonfrese, R. R. B.; Hinze, W. J.; Braile, L. W.; Luca, A. J. (Principal Investigator)
1981-01-01
The anomalous potential of gravity and magnetic fields and their spatial derivatives on a spherical Earth for an arbitrary body represented by an equivalent point source distribution of gravity poles or magnetic dipoles were calculated. The distribution of equivalent point sources was determined directly from the coordinate limits of the source volume. Variable integration limits for an arbitrarily shaped body are derived from interpolation of points which approximate the body's surface envelope. The versatility of the method is enhanced by the ability to treat physical property variations within the source volume and to consider variable magnetic fields over the source and observation surface. A number of examples verify and illustrate the capabilities of the technique, including preliminary modeling of potential field signatures for Mississippi embayment crustal structure at satellite elevations.
Roger Ryder; Pamela Edwards; Pamela Edwards
2006-01-01
Forestry operations do not have permitting requirements under the Clean Water Act because there is a "silvicultural exemption" given in that law, as long as best management practices (BMPs) are used to help control non-point source pollution. However, states' monitoring of BMP effectiveness often has been sporadic and anecdotal, and the procedures used have...
The sound field of a rotating dipole in a plug flow.
Wang, Zhao-Huan; Belyaev, Ivan V; Zhang, Xiao-Zheng; Bi, Chuan-Xing; Faranosov, Georgy A; Dowell, Earl H
2018-04-01
An analytical far field solution for a rotating point dipole source in a plug flow is derived. The shear layer of the jet is modelled as an infinitely thin cylindrical vortex sheet and the far field integral is calculated by the stationary phase method. Four numerical tests are performed to validate the derived solution as well as to assess the effects of sound refraction from the shear layer. First, the calculated results using the derived formulations are compared with the known solution for a rotating dipole in a uniform flow to validate the present model in this fundamental test case. After that, the effects of sound refraction for different rotating dipole sources in the plug flow are assessed. Then the refraction effects on different frequency components of the signal at the observer position, as well as the effects of the motion of the source and of the type of source are considered. Finally, the effect of different sound speeds and densities outside and inside the plug flow is investigated. The solution obtained may be of particular interest for propeller and rotor noise measurements in open jet anechoic wind tunnels.
Reducing Phosphorus Runoff from Biosolids with Water Treatment Residuals
USDA-ARS?s Scientific Manuscript database
A large fraction of the biosolids produced in the U.S. are placed in landfills or incinerated to avoid potential water quality problems associated with non-point source phosphorus (P) runoff. The objective of this study was to determine the effect of various chemical amendments on P runoff from bi...
Estuarine and marine coastlines are receiving waters for many anthropogenic substances. Concentrations of many of these contaminants have been diminished by regulatory control of effluents, but there is concern that continuing inputs (non-point sources) and contaminants contained...
Non-point source (NPS) pollution is one of the leading causes of water quality impairment within the United States. Conservation, restoration and altered management (CRAM) practices may effectively reduce NPS pollutant discharge into receiving water bodies and enhance local and ...
SSL Pricing and Efficacy Trend Analysis for Utility Program Planning
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tuenge, J. R.
2013-10-01
Report to help utilities and energy efficiency organizations forecast the order in which important SSL applications will become cost-effective and estimate when each "tipping point" will be reached. Includes performance trend analysis from DOE's LED Lighting Facts® and CALiPER programs plus cost analysis from various sources.
Proceedings of the symposium: The Forested Wetlands of the Southern United States
Donald D. Hook; Russ Lea [Editors]
1989-01-01
Twenty-five papers are presented in five categories: Non point Sources of Pollution and the Functions and Values; Best Management Practices for Forested Wetlands; Streamside Management Strategies; Sensitive Areas Management; and Balancing Best Management Practices and Water Quality Standards for Feasibility, Economic, and Functional Effectiveness.
Best management practices (BMPs) are perceived as being effective in reducing nutrient loads transported from non-point sources (NPS) to receiving water bodies. The objective of this study was to develop a modeling-optimization framework that can be used by watershed management p...
The volatilization of SPLAT® for use in pheromone mating disruption of cranberry pests
USDA-ARS?s Scientific Manuscript database
The paraffin-based pheromone carrier, SPLAT, was tested for its volatilization rate. A multi-factorial study was initiated to examine the interactive effects of point-source size, shape, and duration. The study showed that tremendous amounts of the pheromone were released early, followed by a gradua...
SUBPIXEL-SCALE RAINFALL VARIABILITY AND THE EFFECTS ON SEPARATION OF RADAR AND GAUGE RAINFALL ERRORS
One of the primary sources of the discrepancies between radar-based rainfall estimates and rain gauge measurements is the point-area difference, i.e., the intrinsic difference in the spatial dimensions of the rainfall fields that the respective data sets are meant to represent. ...
FARSITE: Fire Area Simulator-model development and evaluation
Mark A. Finney
1998-01-01
A computer simulation model, FARSITE, includes existing fire behavior models for surface, crown, spotting, point-source fire acceleration, and fuel moisture. The model's components and assumptions are documented. Simulations were run for simple conditions that illustrate the effect of individual fire behavior models on two-dimensional fire growth.
Excess TDS, major ionic stress, and elevated conductivities appear to be increasing in streams in Central and Eastern Appalachia. Direct discharges from permitted point sources and regional interest in setting eco-based effluent guidelines/aquatic life criteria, as well as potential differ...
Nutrient overenrichment from agricultural and urban point and nonpoint sources, including urban stormwater, is a leading cause of impairment to our nation's rivers, lakes, and coastal waters. For waters that do not currently meet existing water quality standards, the USEPA's TMDL ...
Momentum and energy transport by waves in the solar atmosphere and solar wind
NASA Technical Reports Server (NTRS)
Jacques, S. A.
1977-01-01
The fluid equations for the solar wind are presented in a form which includes the momentum and energy flux of waves in a general and consistent way. The concept of conservation of wave action is introduced and is used to derive expressions for the wave energy density as a function of heliocentric distance. The explicit forms of the terms due to waves in both the momentum and energy equations are given for radially propagating acoustic, Alfven, and fast mode waves. The effect of waves as a source of momentum is explored by examining the critical points of the momentum equation for isothermal spherically symmetric flow. We find that the principal effect of waves on the solutions is to bring the critical point closer to the Sun's surface and to increase the Mach number at the critical point. When a simple model of dissipation is included for acoustic waves, in some cases there are multiple critical points.
A moving medium formulation for prediction of propeller noise at incidence
NASA Astrophysics Data System (ADS)
Ghorbaniasl, Ghader; Lacor, Chris
2012-01-01
This paper presents a time domain formulation for the sound field radiated by moving bodies in a uniform steady flow with arbitrary orientation. The aim is to provide a formulation for prediction of noise from a body so that effects of crossflow on a propeller can be modeled in the time domain. An established theory of noise generation by a moving source is combined with the moving medium Green's function for derivation of the formulation. A formula with a Doppler factor is developed because it is more easily interpreted and is more helpful in examining the physics of the system. Based on the technique presented, the source of asymmetry of the sound field can be explained in terms of the physics of a moving source. It is shown that the derived formulation can be interpreted as an extension of formulations 1 and 1A of Farassat based on the Ffowcs Williams and Hawkings (FW-H) equation for moving medium problems. Computational results for a stationary monopole and dipole point source in a moving medium, a rotating point force in crossflow, a model of a helicopter blade at incidence and a propeller case with subsonic tips at incidence verify the formulation.
Wu, Yiping; Chen, Ji
2013-01-01
Understanding the physical processes of point source (PS) and nonpoint source (NPS) pollution is critical to evaluate river water quality and identify major pollutant sources in a watershed. In this study, we used the physically-based hydrological/water quality model, Soil and Water Assessment Tool, to investigate the influence of PS and NPS pollution on the water quality of the East River (Dongjiang in Chinese) in southern China. Our results indicate that NPS pollution was the dominant contribution (>94%) to nutrient loads except for mineral phosphorus (50%). A comprehensive Water Quality Index (WQI) computed using eight key water quality variables demonstrates that water quality is better upstream than downstream despite the higher level of ammonium nitrogen found in upstream waters. Also, the temporal (seasonal) and spatial distributions of nutrient loads clearly indicate the critical time period (from late dry season to early wet season) and pollution source areas within the basin (middle and downstream agricultural lands), which resource managers can use to accomplish substantial reduction of NPS pollutant loadings. Overall, this study helps our understanding of the relationship between human activities and pollutant loads and further contributes to decision support for local watershed managers to protect water quality in this region. In particular, the methods presented such as integrating WQI with watershed modeling and identifying the critical time period and pollutions source areas can be valuable for other researchers worldwide.
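The study aggregates eight water quality variables into a comprehensive Water Quality Index. The exact aggregation used in the paper is not reproduced here; the sketch below shows a generic weighted-average form of a WQI, with assumed sub-index scores and weights, purely to illustrate the idea.

```python
def water_quality_index(sub_scores, weights):
    """Weighted-average WQI: each variable is first rated on a 0-100 sub-index
    scale, then combined with weights that sum to 1. This is a generic form,
    not the specific aggregation used in the East River study."""
    assert abs(sum(weights) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(s * w for s, w in zip(sub_scores, weights))

# Hypothetical sub-index scores for eight variables (e.g. DO, NH4-N, TP, ...)
scores = [80, 65, 70, 90, 55, 75, 85, 60]
weights = [0.15, 0.15, 0.15, 0.10, 0.15, 0.10, 0.10, 0.10]
print(f"WQI = {water_quality_index(scores, weights):.1f}")
```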
NASA Astrophysics Data System (ADS)
Steenhuisen, Frits; Wilson, Simon J.
2015-07-01
Mercury is a global pollutant that poses threats to ecosystem and human health. Due to its global transport, mercury contamination is found in regions of the Earth that are remote from major emissions areas, including the Polar regions. Global anthropogenic emission inventories identify important sectors and industries responsible for emissions at a national level; however, to be useful for air transport modelling, more precise information on the locations of emission is required. This paper describes the methodology applied, and the results of work that was conducted to assign anthropogenic mercury emissions to point sources as part of geospatial mapping of the 2010 global anthropogenic mercury emissions inventory prepared by AMAP/UNEP. Major point-source emission sectors addressed in this work account for about 850 tonnes of the emissions included in the 2010 inventory. This work allocated more than 90% of these emissions to some 4600 identified point source locations, including significantly more point source locations in Africa, Asia, Australia and South America than had been identified during previous work to geospatially-distribute the 2005 global inventory. The results demonstrate the utility and the limitations of using existing, mainly public domain resources to accomplish this work. Assumptions necessary to make use of selected online resources are discussed, as are artefacts that can arise when these assumptions are applied to assign (national-sector) emissions estimates to point sources in various countries and regions. Notwithstanding the limitations of the available information, the value of this procedure over alternative methods commonly used to geo-spatially distribute emissions, such as use of 'proxy' datasets to represent emissions patterns, is illustrated. Improvements in information that would facilitate greater use of these methods in future work to assign emissions to point-sources are discussed. These include improvements to both national (geo-referenced) emission inventories and also to other resources that can be employed when such national inventories are lacking.
Time-dependent clustering analysis of the second BATSE gamma-ray burst catalog
NASA Technical Reports Server (NTRS)
Brainerd, J. J.; Meegan, C. A.; Briggs, Michael S.; Pendleton, G. N.; Brock, M. N.
1995-01-01
A time-dependent two-point correlation-function analysis of the Burst and Transient Source Experiment (BATSE) 2B catalog finds no evidence of burst repetition. As part of this analysis, we discuss the effects of sky exposure on the observability of burst repetition and present the equation describing the signature of burst repetition in the data. For a model of all burst repetition from a source occurring in less than five days we derive upper limits on the number of bursts in the catalog from repeaters and model-dependent upper limits on the fraction of burst sources that produce multiple outbursts.
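A time-dependent two-point analysis of this kind asks whether bursts that lie close together on the sky also occur close together in time more often than chance would allow. The sketch below counts such pairs under a 'repetition within five days' window; the catalog arrays are synthetic placeholders, and the significance test against time-shuffled catalogs is only indicated in a comment.

```python
import numpy as np

def close_pair_count(ra_deg, dec_deg, t_days, max_sep_deg=5.0, max_dt_days=5.0):
    """Count burst pairs separated by less than max_sep_deg on the sky AND by
    less than max_dt_days in time -- the signature a repeating source would
    leave in a time-dependent two-point analysis."""
    ra, dec = np.radians(ra_deg), np.radians(dec_deg)
    n, count = len(ra), 0
    for i in range(n):
        for j in range(i + 1, n):
            if abs(t_days[i] - t_days[j]) > max_dt_days:
                continue
            # angular separation via the spherical law of cosines
            cos_sep = (np.sin(dec[i]) * np.sin(dec[j]) +
                       np.cos(dec[i]) * np.cos(dec[j]) * np.cos(ra[i] - ra[j]))
            if np.degrees(np.arccos(np.clip(cos_sep, -1.0, 1.0))) < max_sep_deg:
                count += 1
    return count

# Placeholder catalog; a real test would compare this count with the
# distribution obtained after randomly shuffling the burst times.
rng = np.random.default_rng(0)
ra = rng.uniform(0.0, 360.0, 200)
dec = np.degrees(np.arcsin(rng.uniform(-1.0, 1.0, 200)))   # isotropic sky
t = rng.uniform(0.0, 365.0, 200)
print(close_pair_count(ra, dec, t))
```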
Atmospheric scattering of middle uv radiation from an internal source.
Meier, R R; Lee, J S; Anderson, D E
1978-10-15
A Monte Carlo model has been developed which simulates the multiple-scattering of middle-uv radiation in the lower atmosphere. The source of radiation is assumed to be monochromatic and located at a point. The physical effects taken into account in the model are Rayleigh and Mie scattering, pure absorption by particulates and trace atmospheric gases, and ground albedo. The model output consists of the multiply scattered radiance as a function of look-angle of a detector located within the atmosphere. Several examples are discussed, and comparisons are made with direct-source and single-scattered contributions to the signal received by the detector.
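A heavily simplified sketch of this kind of Monte Carlo photon transport is given below: photons take exponentially distributed free paths in optical depth, either scatter or are absorbed at each collision, and escaping directions are tallied. The Rayleigh and Mie phase functions, trace-gas absorption, ground albedo, and detector geometry of the actual model are omitted; scattering is taken as isotropic purely for brevity.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_photons(n_photons=100_000, albedo=0.8, tau_max=2.0):
    """Toy multiple-scattering model: a point source at mid-slab in a
    homogeneous layer of optical depth tau_max, single-scattering albedo
    `albedo`. Returns the direction cosines of escaping photons."""
    exit_mu = []
    for _ in range(n_photons):
        tau = tau_max / 2.0                      # source position (optical depth)
        mu = rng.uniform(-1.0, 1.0)              # isotropic emission
        while True:
            tau += mu * (-np.log(1.0 - rng.random()))   # exponential free path
            if tau <= 0.0 or tau >= tau_max:            # escaped top or bottom
                exit_mu.append(mu)
                break
            if rng.random() > albedo:                   # absorbed
                break
            mu = rng.uniform(-1.0, 1.0)                 # isotropic re-scatter
    return np.array(exit_mu)

mu_out = simulate_photons()
print(f"escape fraction: {len(mu_out) / 100_000:.2f}")
```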
NASA Astrophysics Data System (ADS)
Shi, Chang-Sheng; Zhang, Shuang-Nan; Li, Xiang-Dong
2018-05-01
We recalculate the modes of the magnetohydrodynamic (MHD) waves in the MHD model (Shi, Zhang & Li 2014) of the kilohertz quasi-periodic oscillations (kHz QPOs) in neutron star low mass X-ray binaries (NS-LMXBs), in which the compressed magnetosphere is considered. A method of point-by-point scanning over every parameter of a normal LMXB is proposed to determine the wave number in an NS-LMXB. The dependence of the twin kHz QPO frequencies on the accretion rate (Ṁ) is then obtained with the wave number and magnetic field (B*) determined by our method. Based on the MHD model, a new explanation of the parallel tracks, i.e. that the slowly varying effective magnetic field leads to the shift of parallel tracks in a source, is presented. In this study, we obtain a simple power-law relation between the kHz QPO frequencies and Ṁ/B*² in those sources. Finally, we study the dependence of the kHz quasi-periodic oscillation frequencies on the spin, mass and radius of a neutron star. We find that the effective magnetic field, the spin, mass and radius of a neutron star lead to the parallel tracks in different sources.
Rousta, Kamran; Bolton, Kim; Lundin, Magnus; Dahlén, Lisa
2015-06-01
The present study measures the participation of households in a source separation scheme and, in particular, whether the households' application of the scheme improved after two interventions: (a) shorter distance to the drop-off point and (b) easy access to correct sorting information. The effect of these interventions was quantified and, as far as possible, isolated from other factors that can influence the recycling behaviour. The study was based on households located in an urban residential area in Sweden, where waste composition studies were performed before and after the interventions by manual sorting (pick analysis). Statistical analyses of the results indicated a significant decrease (28%) of packaging and newsprint in the residual waste after establishing a property-close collection system (intervention (a)), as well as a significant decrease (70%) of the mis-sorted fraction in bags intended for food waste after new information stickers were introduced (intervention (b)). Providing a property-close collection system to collect more waste fractions as well as finding new communication channels for information about sorting can be used as tools to increase the source separation ratio. This contribution also highlights the need to evaluate the effects of different types of information and communication concerning sorting instructions in a property-close collection system.
NASA Astrophysics Data System (ADS)
Wada, Yuji; Yuge, Kohei; Tanaka, Hiroki; Nakamura, Kentaro
2016-07-01
Numerical analysis of the rotation of an ultrasonically levitated droplet with a free surface boundary is discussed. The ultrasonically levitated droplet is often reported to rotate owing to the surface tangential component of acoustic radiation force. To observe the torque from an acoustic wave and clarify the mechanism underlying the phenomena, it is effective to take advantage of numerical simulation using the distributed point source method (DPSM) and moving particle semi-implicit (MPS) method, both of which do not require a calculation grid or mesh. In this paper, the numerical treatment of the viscoacoustic torque, which emerges from the viscous boundary layer and governs the acoustical droplet rotation, is discussed. The Reynolds stress traction force is calculated from the DPSM result using the idea of effective normal particle velocity through the boundary layer and input to the MPS surface particles. A droplet levitated in an acoustic chamber is simulated using the proposed calculation method. The droplet is vertically supported by a plane standing wave from an ultrasonic driver and subjected to a rotating sound field excited by two acoustic sources on the side wall with different phases. The rotation of the droplet is successfully reproduced numerically and its acceleration is discussed and compared with those in the literature.
Hütter, Markus; Brader, Joseph M
2009-06-07
We examine the origins of nonlocality in a nonisothermal hydrodynamic formulation of a one-component fluid of particles that exhibit long-range correlations, e.g., due to a spherically symmetric, long-range interaction potential. In order to furnish the continuum modeling with physical understanding of the microscopic interactions and dynamics, we make use of systematic coarse graining from the microscopic to the continuum level. We thus arrive at a thermodynamically admissible and closed set of evolution equations for the densities of momentum, mass, and internal energy. From the consideration of an illustrative special case, the following main conclusions emerge. There are two different source terms in the momentum balance. The first is a body force, which in special circumstances can be related to the functional derivative of a nonlocal Helmholtz free energy density with respect to the mass density. The second source term is proportional to the temperature gradient, multiplied by the nonlocal entropy density. These two source terms combine into a pressure gradient only in the absence of long-range effects. In the irreversible contributions to the time evolution, the nonlocal contributions arise since the self-correlations of the stress tensor and heat flux, respectively, are nonlocal as a result of the microscopic nonlocal correlations. Finally, we point out specific points that warrant further discussions.
NASA Astrophysics Data System (ADS)
Wu, Xiao Dong; Chen, Feng; Wu, Xiang Hua; Guo, Ying
2017-02-01
Continuous-variable quantum key distribution (CVQKD) can provide higher detection efficiency than discrete-variable quantum key distribution (DVQKD). In this paper, we demonstrate a controllable CVQKD with the entangled source in the middle, in contrast to the traditional point-to-point CVQKD, where the entanglement source is usually created by one honest party and the Gaussian noise added on the reference partner of the reconciliation is uncontrollable. In order to harmonize the additive noise that originates in the middle to resist the effect of a malicious eavesdropper, we propose a controllable CVQKD protocol by performing a tunable linear optics cloning machine (LOCM) at one participant's side, say Alice. Simulation results show that we can achieve the optimal secret key rates by selecting the parameters of the tuned LOCM in the derived regions.
Numerical modeling of subsurface communication, revision 1
NASA Astrophysics Data System (ADS)
Burke, G. J.; Dease, C. G.; Didwall, E. M.; Lytle, R. J.
1985-08-01
Techniques are described for numerical modeling of through-the-Earth communication. The basic problem considered is evaluation of the field at a surface or airborne station due to an antenna buried in the earth. Equations are given for the field of a point source in a homogeneous or stratified Earth. These expressions involve infinite integrals over wave number, sometimes known as Sommerfeld integrals. Numerical techniques used for evaluating these integrals are outlined. The problem of determining the current on a real antenna in the Earth, including the effect of insulation, is considered. Results are included for the fields of a point source in homogeneous and stratified earths and the field of a finite insulated dipole. The results are for electromagnetic propagation in the ELF-VLF range, but the codes also can address propagation problems at higher frequencies.
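The field expressions mentioned above reduce to infinite integrals over wavenumber (Sommerfeld integrals). As a minimal static illustration of evaluating such an integral numerically, the sketch below computes the integral of J0(k*rho)*exp(-k*|z|) over k from 0 to infinity, whose closed form 1/sqrt(rho^2 + z^2) provides a check; the oscillatory, lossy-earth integrands handled by the actual codes are far more demanding and require specialized quadrature.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import j0

def sommerfeld_static(rho, z):
    """Numerically evaluate integral_0^inf J0(k*rho) * exp(-k*|z|) dk,
    which has the closed form 1 / sqrt(rho**2 + z**2)."""
    integrand = lambda k: j0(k * rho) * np.exp(-k * abs(z))
    value, _err = quad(integrand, 0.0, np.inf, limit=200)
    return value

rho, z = 30.0, 50.0                       # metres, illustrative geometry
numeric = sommerfeld_static(rho, z)
exact = 1.0 / np.hypot(rho, z)
print(f"numeric = {numeric:.6e}, exact = {exact:.6e}")
```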
Liu, Feng; Zhang, Shunan; Luo, Pei; Zhuang, Xuliang; Chen, Xiang; Wu, Jinshui
2018-01-01
In this review, the applications of Myriophyllum-based integrative biotechnology to remove common non-point source (NPS) pollutants, such as nitrogen, phosphorus, heavy metals, and organic pollutants (e.g., pesticides and antibiotics), are summarized. The removal of these pollutants via various mechanisms, including uptake by plant and microbial communities in macrophyte-based treatment systems, is discussed. This review highlights the potential use of Myriophyllum biomass to produce animal feed, fertilizer, and other valuable by-products, which can yield cost-effective returns and attract more attention to the regulation and recycling of NPS pollutants. In addition, it demonstrates that utilization of Myriophyllum species is a promising and reliable strategy for wastewater treatment. The future development of sustainable Myriophyllum-based treatment systems is discussed from various perspectives.
Suo, An-ning; Wang, Tian-ming; Wang, Hui; Yu, Bo; Ge, Jian-ping
2006-12-01
Non-point source pollution is one of the main modes of pollution affecting the Earth's surface environment. Focusing on soil and water loss (a typical non-point source pollution problem) on the Loess Plateau in China, the paper applied a landscape pattern evaluation method to twelve watersheds of the Jinghe River Basin by means of a location-weighted landscape contrast index (LWLCI) and a landscape slope index (LSI). The results showed that the LSI of farmland, low-density grassland, and forest land, together with the LWLCI, responded significantly to the soil erosion modulus and responded to the depth of runoff, while the relationships between these landscape indices and the runoff variation index and erosion variation index were not statistically significant. This indicates that the LSI and LWLCI are good indicators of soil and water loss and thus have considerable potential in non-point source pollution risk evaluation.
Spherical-earth gravity and magnetic anomaly modeling by Gauss-Legendre quadrature integration
NASA Technical Reports Server (NTRS)
Von Frese, R. R. B.; Hinze, W. J.; Braile, L. W.; Luca, A. J.
1981-01-01
Gauss-Legendre quadrature integration is used to calculate the anomalous potential of gravity and magnetic fields and their spatial derivatives on a spherical earth. The procedure involves representation of the anomalous source as a distribution of equivalent point gravity poles or point magnetic dipoles. The distribution of equivalent point sources is determined directly from the volume limits of the anomalous body. The variable limits of integration for an arbitrarily shaped body are obtained from interpolations performed on a set of body points which approximate the body's surface envelope. The versatility of the method is shown by its ability to treat physical property variations within the source volume as well as variable magnetic fields over the source and observation surface. Examples are provided which illustrate the capabilities of the technique, including a preliminary modeling of potential field signatures for the Mississippi embayment crustal structure at 450 km.
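The core of the method is replacing the volume integral for the anomalous field with a weighted sum over equivalent point sources placed at Gauss-Legendre nodes. The flat-earth sketch below illustrates the idea for the vertical gravity of a rectangular prism; the spherical-earth coordinates, variable integration limits, and magnetic-dipole case of the actual technique are not reproduced.

```python
import numpy as np

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def gz_prism_gauss_legendre(obs, bounds, density, n=8):
    """Vertical gravity (m/s^2, z positive down) at `obs` due to a rectangular
    prism, computed by 3-D Gauss-Legendre quadrature: each quadrature node acts
    as an equivalent point mass weighted by the quadrature weights.

    obs     -- (x, y, z) of the observation point, metres
    bounds  -- ((x1, x2), (y1, y2), (z1, z2)) prism limits, metres
    density -- constant density contrast, kg/m^3
    """
    nodes, weights = np.polynomial.legendre.leggauss(n)
    (x1, x2), (y1, y2), (z1, z2) = bounds
    # map the [-1, 1] quadrature nodes onto each coordinate interval
    xs = 0.5 * (x2 - x1) * nodes + 0.5 * (x2 + x1)
    ys = 0.5 * (y2 - y1) * nodes + 0.5 * (y2 + y1)
    zs = 0.5 * (z2 - z1) * nodes + 0.5 * (z2 + z1)
    jac = 0.125 * (x2 - x1) * (y2 - y1) * (z2 - z1)   # volume Jacobian
    gz = 0.0
    for wi, xi in zip(weights, xs):
        for wj, yj in zip(weights, ys):
            for wk, zk in zip(weights, zs):
                dx, dy, dz = xi - obs[0], yj - obs[1], zk - obs[2]
                r3 = (dx * dx + dy * dy + dz * dz) ** 1.5
                gz += wi * wj * wk * density * dz / r3   # point-mass kernel
    return G * jac * gz

# Illustrative: 1 km cube with a 300 kg/m^3 density contrast, observed 500 m above
print(gz_prism_gauss_legendre((0.0, 0.0, -500.0),
                              ((-500.0, 500.0), (-500.0, 500.0), (0.0, 1000.0)),
                              300.0))
```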
A Method for Harmonic Sources Detection based on Harmonic Distortion Power Rate
NASA Astrophysics Data System (ADS)
Lin, Ruixing; Xu, Lin; Zheng, Xian
2018-03-01
Harmonic source detection at the point of common coupling is an essential step for harmonic contribution determination and harmonic mitigation. In this paper, a harmonic distortion power rate index based on IEEE Std 1459-2010 is proposed for harmonic source location. A method based only on harmonic distortion power is not suitable when the background harmonic is large. To solve this problem, a threshold is determined from prior information: when the harmonic distortion power is larger than the threshold, the customer side is considered the main harmonic source; otherwise, the utility side is. A simple model of a public power system was built in MATLAB/Simulink, and field test results for typical harmonic loads verified the effectiveness of the proposed method.
Wise, Daniel R.; Rinella, Frank A.; Rinella, Joseph F.; Fuhrer, Greg J.; Embrey, Sandra S.; Clark, Gregory M.; Schwarz, Gregory E.; Sobieszczyk, Steven
2007-01-01
This study focused on three areas that might be of interest to water-quality managers in the Pacific Northwest: (1) annual loads of total nitrogen (TN), total phosphorus (TP) and suspended sediment (SS) transported through the Columbia River and Puget Sound Basins, (2) annual yields of TN, TP, and SS relative to differences in landscape and climatic conditions between subbasin catchments (drainage basins), and (3) trends in TN, TP, and SS concentrations and loads in comparison to changes in landscape and climatic conditions in the catchments. During water year 2000, an average streamflow year in the Pacific Northwest, the Columbia River discharged about 570,000 pounds per day of TN, about 55,000 pounds per day of TP, and about 14,000 tons per day of SS to the Pacific Ocean. The Snake, Yakima, Deschutes, and Willamette Rivers contributed most of the load discharged to the Columbia River. Point-source nutrient loads to the catchments (almost exclusively from municipal wastewater treatment plants) generally were a small percentage of the total in-stream nutrient loads; however, in some reaches of the Spokane, Boise, Walla Walla, and Willamette River Basins, point sources were responsible for much of the annual in-stream nutrient load. Point-source nutrient loads generally were a small percentage of the total catchment nutrient loads compared to nonpoint sources, except for a few catchments where point-source loads comprised as much as 30 percent of the TN load and as much as 80 percent of the TP load. The annual TN and TP loads from point sources discharging directly to the Puget Sound were about equal to the annual loads from eight major tributaries. Yields of TN, TP, and SS generally were greater in catchments west of the Cascade Range. A multiple linear regression analysis showed that TN yields were significantly (p < 0.05) and positively related to precipitation, atmospheric nitrogen load, fertilizer and manure load, and point-source load, and were negatively related to average slope. TP yields were significantly related positively to precipitation, and point-source load and SS yields were significantly related positively to precipitation. Forty-eight percent of the available monitoring sites for TN had significant trends in concentration (2 increasing, 19 decreasing), 32 percent of the available sites for TP had significant trends in concentration (7 increasing, 9 decreasing), and 40 percent of the available sites for SS had significant trends in concentration (4 increasing, 15 decreasing). The trends in load followed a similar pattern, but with fewer sites showing significant trends. The results from this study indicate that inputs from nonpoint sources of nutrients probably have decreased over time in many of the catchments. Despite the generally small contribution of point-source nutrient loads, they still may have been partially responsible for the significant decreasing trends for nutrients at sites where the total point-source nutrient loads to the catchments equaled a substantial proportion of the in-stream load.
Effects of Environmental Toxicants on Metabolic Activity of Natural Microbial Communities
Barnhart, Carole L. H.; Vestal, J. Robie
1983-01-01
Two methods of measuring microbial activity were used to study the effects of toxicants on natural microbial communities. The methods were compared for suitability for toxicity testing, sensitivity, and adaptability to field applications. This study included measurements of the incorporation of 14C-labeled acetate into microbial lipids and microbial glucosidase activity. Activities were measured per unit biomass, determined as lipid phosphate. The effects of various organic and inorganic toxicants on various natural microbial communities were studied. Both methods were useful in detecting toxicity, and their comparative sensitivities varied with the system studied. In one system, the methods showed approximately the same sensitivities in testing the effects of metals, but the acetate incorporation method was more sensitive in detecting the toxicity of organic compounds. The incorporation method was used to study the effects of a point source of pollution on the microbiota of a receiving stream. Toxic doses were found to be two orders of magnitude higher in sediments than in water taken from the same site, indicating chelation or adsorption of the toxicant by the sediment. The microbiota taken from below a point source outfall was 2 to 100 times more resistant to the toxicants tested than was that taken from above the outfall. Downstream filtrates in most cases had an inhibitory effect on the natural microbiota taken from above the pollution source. The microbial methods were compared with commonly used bioassay methods, using higher organisms, and were found to be similar in ability to detect comparative toxicities of compounds, but were less sensitive than methods which use standard media because of the influences of environmental factors. PMID:16346432
NASA Astrophysics Data System (ADS)
Cenedese, C.
2014-12-01
Idealized laboratory experiments investigate the glacier-ocean boundary dynamics near a vertical 'glacier' (i.e. no floating ice tongue) in a two-layer stratified fluid, similar to Sermilik Fjord where Helheim Glacier terminates. In summer, the discharge of surface runoff at the base of the glacier (subglacial discharge) intensifies the circulation near the glacier and increases the melt rate with respect to that in winter. In the laboratory, the effect of subglacial discharge is simulated by introducing fresh water at melting temperatures from either point or line sources at the base of an ice block representing the glacier. The circulation pattern observed both with and without subglacial discharge resembles those observed in previous studies. The buoyant plume of cold meltwater and subglacial discharge water entrains ambient water and rises vertically until it finds either the interface between the two layers or the free surface. The results suggest that the meltwater deposits within the interior of the water column and not entirely at the free surface, as confirmed by field observations. The submarine melt rate increases with the subglacial discharge rate. Furthermore, the same subglacial discharge causes greater submarine melting if it exits from a point source rather than from a line source. When the subglacial discharge exits from two point sources, two buoyant plumes are formed which rise vertically and interact. The results suggest that the distance between the two subglacial discharges influences the entrainment in the plumes and consequently the amount of submarine melting and the final location of the meltwater within the water column. Hence, the distribution and number of sources of subglacial discharge may play an important role in glacial melt rates and fjord stratification and circulation. Support was given by NSF project OCE-113008.
Scattering and the Point Spread Function of the New Generation Space Telescope
NASA Technical Reports Server (NTRS)
Schreur, Julian J.
1996-01-01
Preliminary design work on the New Generation Space Telescope (NGST) is currently under way. This telescope is envisioned as a lightweight, deployable Cassegrain reflector with an aperture of 8 meters, and an effective focal length of 80 meters. It is to be folded into a small-diameter package for launch by an Atlas booster, and unfolded in orbit. The primary is to consist of an octagon with a hole at the center, and with eight segments arranged in a flower petal configuration about the octagon. The corners of the petal-shaped segments are to be trimmed so that the package will fit atop the Atlas booster. This mirror, along with its secondary, will focus the light from a point source into an image which is spread from a point by diffraction effects, figure errors, and scattering of light from the surface. The distribution of light in the image of a point source is called a point spread function (PSF). The obstruction of the incident light by the secondary mirror and its support structure, the trimmed corners of the petals, and the grooves between the segments all cause the diffraction pattern characterizing an ideal point spread function to be changed, with the trimmed corners causing the rings of the Airy pattern to become broken up, and the linear grooves causing diffraction spikes running radially away from the central spot, or Airy disk. Any figure errors the mirror segments may have, or any errors in aligning the petals with the central octagon, will also spread the light out from the ideal point spread function. A point spread function for a mirror the size of the NGST and having an incident wavelength of 900 nm is considered. Most of the light is confined in a circle with a diameter of 0.05 arc seconds. The ring pattern ranges in intensity from 10⁻² near the center to 10⁻⁶ near the edge of the plotted field, and can be clearly discerned in a log plot of the intensity. The total fraction of the light scattered from this point spread function is called the total integrated scattering (TIS), and the fraction remaining is called the Strehl ratio. The angular distribution of the scattered light is called the angle resolved scattering (ARS), and it shows a strong spike centered on a scattering angle of zero, and a broad, less intense distribution at larger angles. It is this scattered light, and its effect on the point spread function, which is the focus of this study.
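For an unobstructed circular aperture the ideal point spread function is the Airy pattern, I = [2 J1(x)/x]^2 with x = pi*D*sin(theta)/lambda; the central obstruction, trimmed petals, and segment gaps of the NGST modify this pattern, and scattering redistributes additional light. The sketch below evaluates only the ideal baseline pattern for the 8 m aperture and 900 nm wavelength quoted above; it does not model the segmented geometry or the scattering.

```python
import numpy as np
from scipy.special import j1

def airy_psf(theta_rad, aperture_m=8.0, wavelength_m=900e-9):
    """Ideal point spread function of an unobstructed circular aperture,
    normalized to 1 on axis: I = [2 J1(x) / x]^2, x = pi*D*sin(theta)/lambda."""
    x = np.pi * aperture_m * np.sin(np.asarray(theta_rad, dtype=float)) / wavelength_m
    out = np.ones_like(x)
    nz = x != 0
    out[nz] = (2.0 * j1(x[nz]) / x[nz]) ** 2
    return out

arcsec = np.radians(1.0 / 3600.0)
theta = np.array([0.01, 0.02, 0.03]) * arcsec          # sample angles
print(airy_psf(theta))
# first dark ring of the ideal pattern: sin(theta) = 1.22 * lambda / D
print(np.degrees(np.arcsin(1.22 * 900e-9 / 8.0)) * 3600.0, "arcsec")
```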
Lines, Gregory C.
1985-01-01
The ground-water system was studied in the Trail Mountain area in order to provide hydrologic information needed to assess the hydrologic effects of underground coal mining. Well testing and spring data indicate that water occurs in several aquifers. The coal-bearing Blackhawk-Star Point aquifer is regional in nature and is the source of most water in underground mines in the region. One or more perched aquifers overlie the Blackhawk-Star Point aquifer in most areas of Trail Mountain. Aquifer tests indicate that the transmissivity of the Blackhawk-Star Point aquifer, which consists mainly of sandstone, siltstone, and shale, ranges from about 20 to 200 feet squared per day in most areas of Trail Mountain. The specific yield of the aquifer was estimated at 0.05, and the storage coefficient is about 1×10⁻⁶ per foot of aquifer where confined. The main sources of recharge to the multiaquifer system are snowmelt and rain, and water is discharged mainly by springs and by leakage along streams. Springs that issue from perched aquifers are sources of water for livestock and wildlife on Trail Mountain. Water in all aquifers is suitable for most uses. Dissolved solids concentrations range from about 250 to 700 milligrams per liter, and the predominant dissolved constituents generally are calcium, magnesium, and bicarbonate. Future underground coal mines will require dewatering when they penetrate the Blackhawk-Star Point aquifer. A finite-difference, three-dimensional computer model was used to estimate the inflow of water to various lengths and widths of a hypothetical dewatered mine and to estimate drawdowns of potentiometric surfaces in the partly dewatered aquifer. The estimates were made for a range of aquifer properties and premining hydraulic gradients that were similar to those on Trail Mountain. The computer simulations indicate that mine inflows could be several hundred gallons per minute and that potentiometric surfaces of the partly dewatered aquifer could be drawn down by several hundred feet during a reasonable life span of a mine. Because the Blackhawk-Star Point aquifer is separated from overlying perched aquifers by an unsaturated zone, mine dewatering alone would not affect perched aquifers. Mine dewatering would not significantly change water quality in the Blackhawk-Star Point aquifer. Subsidence will occur above future underground mines, but the effects on the ground-water system cannot be quantified. Subsidence fractures possibly could extend from the roof of a mine into a perched aquifer several hundred feet above. Such fractures would increase downward percolation of water through the perching bed, and spring discharge from the perched aquifer could decrease. Flow through subsidence fractures also could increase recharge to the Blackhawk-Star Point aquifer and increase inflows to underground mines.
NASA Astrophysics Data System (ADS)
Messier, K. P.; Kane, E.; Bolich, R.; Serre, M. L.
2014-12-01
Nitrate (NO3-) is a widespread contaminant of groundwater and surface water across the United States that has deleterious effects on human and ecological health. Legacy contamination, or past releases of NO3-, is thought to be impacting current groundwater and surface water of North Carolina. This study develops a model for predicting point-level groundwater NO3- at a state scale for monitoring wells and private wells of North Carolina. A land use regression (LUR) model selection procedure known as constrained forward nonlinear regression and hyperparameter optimization (CFN-RHO) is developed for determining nonlinear model explanatory variables when they are known to be correlated. Bayesian Maximum Entropy (BME) is then used to integrate the LUR model to create a LUR-BME model of spatially and temporally varying groundwater NO3- concentrations. LUR-BME results in a leave-one-out cross-validation r² of 0.74 and 0.33 for monitoring and private wells, effectively predicting within spatial covariance ranges. The major finding regarding legacy sources of NO3- in this study is that the LUR-BME models show that the geographical extent of low-level contamination of the deeper drinking-water aquifers is beyond that of the shallower monitoring wells. Groundwater NO3- in monitoring wells is highly variable, with many areas predicted above the current Environmental Protection Agency standard of 10 mg/L. In contrast, the private well results depict widespread, low-level NO3- concentrations. This evidence supports that, in addition to downward transport, there is also a significant outward transport of groundwater NO3- in the drinking water aquifer to areas outside the range of sources. Results indicate that the deeper aquifers are potentially acting as a reservoir that is not only deeper, but also covers a larger geographical area, than the reservoir formed by the shallow aquifers. Results are of interest to agencies that regulate surface water and drinking water sources impacted by the effects of legacy NO3- sources. Additionally, the results can provide guidance on factors affecting the point-level variability of groundwater NO3- and areas where monitoring is needed to reduce uncertainty. Lastly, LUR-BME predictions can be integrated into surface water models for more accurate management of non-point sources of nitrogen.
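The cross-validation statistic quoted above (a leave-one-out r²) is obtained by refitting the model with each observation withheld in turn and correlating the withheld predictions with the observations. The generic sketch below uses an ordinary least-squares model as a stand-in for the LUR-BME model and synthetic data; it is illustrative only.

```python
import numpy as np

def loocv_r2(X, y):
    """Leave-one-out cross-validated r^2 for a linear model y ~ X.
    Each observation is withheld, the model refit, and the withheld value
    predicted; r^2 is the squared correlation of predictions vs. observations."""
    n = len(y)
    preds = np.empty(n)
    X1 = np.column_stack([np.ones(n), X])          # add intercept column
    for i in range(n):
        mask = np.arange(n) != i
        beta, *_ = np.linalg.lstsq(X1[mask], y[mask], rcond=None)
        preds[i] = X1[i] @ beta
    return np.corrcoef(preds, y)[0, 1] ** 2

# Synthetic example only -- not the nitrate data set
rng = np.random.default_rng(2)
X = rng.normal(size=(60, 3))
y = X @ np.array([1.5, -0.7, 0.3]) + rng.normal(scale=0.5, size=60)
print(f"LOOCV r^2 = {loocv_r2(X, y):.2f}")
```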
Young and Old X-ray Binary and IXO Populations in Spiral and Elliptical Galaxies
NASA Astrophysics Data System (ADS)
Colbert, E.; Heckman, T.; Ptak, A.; Strickland, D.; Weaver, K.
2003-03-01
We have analyzed Chandra ACIS observations of 32 nearby spiral and elliptical galaxies and present the results of 1441 X-ray point sources, which are presumed to be mostly X-ray binaries (XRBs) and Intermediate-luminosity X-ray Objects (IXOs, a.k.a. ULXs). The X-ray luminosity functions (XLFs) of the point sources show that the slopes of the elliptical galaxy XLFs are significantly steeper than those of the spiral galaxy XLFs, indicating grossly different types of point sources, or different stages in their evolution. Since the spiral galaxy XLF is so shallow, the most luminous point sources (usually the IXOs) dominate the total X-ray point source luminosity LXP. We show that the galaxy total B-band and K-band light (proxies for the stellar mass) are well correlated with LXP for both spirals and ellipticals, but the FIR and UV emission is only correlated for the spirals. We deconvolve LXP into two components, one that is proportional to the galaxy stellar mass (pop II), and another that is proportional to the galaxy SFR (pop I). We also note that IXOs (and nearly all of the other point sources) in both spirals and ellipticals have X-ray colors that are most consistent with power-law slopes of Γ ≈ 1.5-3.0, which is inconsistent with high-mass XRBs (HMXBs). Thus, HMXBs are not important contributors to LXP. We have also found that IXOs in spiral galaxies may have a slightly harder X-ray spectrum than those in elliptical galaxies. The implications of these findings will be discussed.
Nonpoint and Point Sources of Nitrogen in Major Watersheds of the United States
Puckett, Larry J.
1994-01-01
Estimates of nonpoint and point sources of nitrogen were made for 107 watersheds located in the U.S. Geological Survey's National Water-Quality Assessment Program study units throughout the conterminous United States. The proportions of nitrogen originating from fertilizer, manure, atmospheric deposition, sewage, and industrial sources were found to vary with climate, hydrologic conditions, land use, population, and physiography. Fertilizer sources of nitrogen are proportionally greater in agricultural areas of the West and the Midwest than in other parts of the Nation. Animal manure contributes large proportions of nitrogen in the South and parts of the Northeast. Atmospheric deposition of nitrogen is generally greatest in areas of greatest precipitation, such as the Northeast. Point sources (sewage and industrial) generally are predominant in watersheds near cities, where they may account for large proportions of the nitrogen in streams. The transport of nitrogen in streams increases as amounts of precipitation and runoff increase and is greatest in the Northeastern United States. Because no single nonpoint nitrogen source is dominant everywhere, approaches to control nitrogen must vary throughout the Nation. Watershed-based approaches to understanding nonpoint and point sources of contamination, as used by the National Water-Quality Assessment Program, will aid water-quality and environmental managers to devise methods to reduce nitrogen pollution.
NASA Astrophysics Data System (ADS)
Zhang, Shou-ping; Xin, Xiao-kang
2017-07-01
Identification of pollutant sources for river pollution incidents is an important and difficult task in emergency response, and an intelligent optimization method can effectively compensate for the weaknesses of traditional methods. An intelligent model for pollutant source identification has been established using the basic genetic algorithm (BGA) as an optimization search tool and applying an analytic solution formula of the one-dimensional unsteady water quality equation to construct the objective function. Experimental tests show that the identification model is effective and efficient: the model can accurately determine the pollutant amounts or positions whether there is a single pollution source or multiple sources. In particular, when the population size of the BGA is set to 10, the computed results agree well with the analytic results for single-source amount and position identification, with relative errors of no more than 5%. For cases with multiple point sources and multiple variables, there are some errors in the computed results because many possible combinations of the pollution sources exist. However, with the help of prior experience to narrow the search scope, the relative errors of the identification results are less than 5%, which shows that the established source identification model can be used to direct emergency responses.
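To make the construction concrete, the sketch below couples a very small real-coded genetic algorithm to the analytic solution of the one-dimensional advection-dispersion equation for an instantaneous point release, C(x,t) = M / (A*sqrt(4*pi*D*t)) * exp(-(x - x0 - u*t)^2 / (4*D*t)), and recovers the release mass and location from noisy synthetic observations. The river parameters, GA settings, and the omission of a decay term are assumptions for illustration; the paper's BGA implementation is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(3)

# Analytic 1-D unsteady water-quality solution for an instantaneous point release
def concentration(mass, x0, x, t, u=0.5, D=5.0, area=100.0):
    """C(x, t) in kg/m^3 for a release of `mass` kg at position x0 (m) in a river
    of cross-section `area` (m^2), velocity u (m/s), dispersion D (m^2/s)."""
    return (mass / (area * np.sqrt(4.0 * np.pi * D * t))
            * np.exp(-((x - x0 - u * t) ** 2) / (4.0 * D * t)))

# Synthetic observations at three stations and three times (truth: 200 kg at 1000 m)
x_obs = np.array([2000.0, 3000.0, 4000.0])[:, None]
t_obs = np.array([3600.0, 5400.0, 7200.0])[None, :]
obs = concentration(200.0, 1000.0, x_obs, t_obs) * (1 + 0.02 * rng.normal(size=(3, 3)))

def misfit(params):
    mass, x0 = params
    return np.sum((concentration(mass, x0, x_obs, t_obs) - obs) ** 2)

def genetic_search(pop_size=40, generations=200,
                   bounds=((1.0, 1000.0), (0.0, 2000.0))):
    """Minimal real-coded GA: tournament selection, blend crossover, Gaussian
    mutation, with elitism. Returns the best (mass, x0) found."""
    lo = np.array([b[0] for b in bounds])
    hi = np.array([b[1] for b in bounds])
    pop = rng.uniform(lo, hi, size=(pop_size, 2))
    for _ in range(generations):
        fit = np.array([misfit(p) for p in pop])
        new = [pop[np.argmin(fit)]]                       # keep the best (elitism)
        while len(new) < pop_size:
            parents = []
            for _ in range(2):                            # tournament of size 2
                i, j = rng.integers(pop_size, size=2)
                parents.append(pop[i] if fit[i] < fit[j] else pop[j])
            w = rng.random(2)
            child = w * parents[0] + (1.0 - w) * parents[1]   # blend crossover
            child += rng.normal(scale=0.02 * (hi - lo))       # Gaussian mutation
            new.append(np.clip(child, lo, hi))
        pop = np.array(new)
    fit = np.array([misfit(p) for p in pop])
    return pop[np.argmin(fit)]

mass_est, x0_est = genetic_search()
print(f"estimated release: ~{mass_est:.0f} kg at ~{x0_est:.0f} m")
```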
NASA Technical Reports Server (NTRS)
Ku, Jentung; Paiva, Kleber; Mantelli, Marcia
2011-01-01
The LHP operating temperature is governed by the saturation temperature of its reservoir. Controlling the reservoir saturation temperature is commonly done by cold biasing the reservoir and using electrical heaters to provide the required control power. With this method, the loop operating temperature can be controlled to within 0.5 K or better. However, because of the thermal resistance that exists between the heat source and the LHP evaporator, the heat source temperature will vary with its heat output even if the LHP operating temperature is kept constant. Since maintaining a constant heat source temperature is of most interest, a question often raised is whether the heat source temperature can be used for LHP set point temperature control. A test program with a miniature LHP was carried out to investigate the effects on the LHP operation when the control temperature sensor was placed on the heat source instead of the reservoir. In these tests, the LHP reservoir was cold-biased and was heated by a control heater. Test results show that it was feasible to use the heat source temperature for feedback control of the LHP operation. In particular, when a thermoelectric converter (TEC) was used as the reservoir control heater, the heat source temperature could be maintained within a tight range using a proportional-integral-derivative or on/off control algorithm. Moreover, because the TEC could provide both heating and cooling to the reservoir, temperature oscillations during fast transients such as loop startup could be eliminated or substantially reduced when compared to using an electrical heater as the control heater.
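The control scheme described above amounts to a feedback loop that drives the reservoir, through the TEC, so that the heat-source temperature tracks its set point. A generic discrete-time PID sketch is shown below; the gains, time step, and first-order plant model are placeholders, not properties of the tested loop heat pipe.

```python
class PID:
    """Minimal discrete-time PID controller; output is the control power sent
    to the reservoir TEC (positive = heating, negative = cooling)."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = None

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Toy plant: the heat-source temperature rises with the applied load, relaxes
# toward the sink, and is nudged by the TEC power acting through the reservoir.
# Gains and coefficients are placeholders, not properties of the test article.
dt, setpoint, T_source = 1.0, 30.0, 25.0             # s, deg C, deg C
controller = PID(kp=2.0, ki=0.5, kd=0.5, dt=dt)
for _ in range(600):                                  # 10 minutes of 1 s steps
    tec_power = controller.update(setpoint, T_source)
    T_source += dt * (1.0 - 0.05 * (T_source - 20.0) + 0.005 * tec_power)
print(f"heat-source temperature after 10 min: {T_source:.2f} C")
```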
Cortical reinstatement and the confidence and accuracy of source memory.
Thakral, Preston P; Wang, Tracy H; Rugg, Michael D
2015-04-01
Cortical reinstatement refers to the overlap between neural activity elicited during the encoding and the subsequent retrieval of an episode, and is held to reflect retrieved mnemonic content. Previous findings have demonstrated that reinstatement effects reflect the quality of retrieved episodic information as this is operationalized by the accuracy of source memory judgments. The present functional magnetic resonance imaging (fMRI) study investigated whether reinstatement-related activity also co-varies with the confidence of accurate source judgments. Participants studied pictures of objects along with their visual or spoken names. At test, they first discriminated between studied and unstudied pictures and then, for each picture judged as studied, they also judged whether it had been paired with a visual or auditory name, using a three-point confidence scale. Accuracy of source memory judgments, and hence the quality of the source-specifying information, was greater for high than for low confidence judgments. Modality-selective retrieval-related activity (reinstatement effects) also co-varied with the confidence of the corresponding source memory judgment. The findings indicate that the quality of the information supporting accurate judgments of source memory is indexed by the relative magnitude of content-selective, retrieval-related neural activity.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tian, Z; Folkerts, M; Jiang, S
Purpose: We have previously developed a GPU-OpenCL-based MC dose engine named goMC with a built-in analytical linac beam model. To move goMC towards routine clinical use, we have developed an automatic beam-commissioning method and an efficient source sampling strategy to facilitate dose calculations for real treatment plans. Methods: Our commissioning method automatically adjusts the relative weights among the sub-sources through an optimization process that minimizes the discrepancies between calculated dose and measurements. Six models built for Varian TrueBeam linac photon beams (6MV, 10MV, 15MV, 18MV, 6MVFFF, 10MVFFF) were commissioned using measurement data acquired at our institution. To facilitate dose calculations for real treatment plans, we employed an inverse sampling method to efficiently incorporate MLC leaf-sequencing into source sampling. Specifically, instead of sampling source particles control-point by control-point and rejecting the particles blocked by the MLC, we assigned a control-point index to each sampled source particle according to the MLC leaf-open duration of each control point at the pixel where the particle intersects the isocenter plane. Results: Our auto-commissioning method decreased the distance-to-agreement (DTA) of the depth dose in the build-up region by 36.2% on average, making it within 1 mm. Lateral profiles were better matched for all beams, with the biggest improvement found at 15MV, for which the root-mean-square difference was reduced from 1.44% to 0.50%. Maximum differences of output factors were reduced to less than 0.7% for all beams, with the largest decrease, from 1.70% to 0.37%, found at 10FFF. Our new sampling strategy was tested on a head-and-neck VMAT patient case. Achieving clinically acceptable accuracy, the new strategy could reduce the required history number by a factor of ~2.8 for a given statistical uncertainty level and hence achieve a similar speed-up factor. Conclusion: Our studies have demonstrated the feasibility and effectiveness of our auto-commissioning approach and new efficient source sampling strategy, implying the potential of our GPU-based MC dose engine goMC for routine clinical use.
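The sampling strategy described above can be read as inverse transform sampling from the discrete distribution formed by the MLC leaf-open durations of the control points at the pixel a particle crosses. A minimal sketch of that single step, with made-up durations, is shown below; the goMC beam model and MLC geometry are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(4)

def sample_control_points(open_durations, n_particles):
    """Assign a control-point index to each source particle by inverse transform
    sampling: the probability of control point i is its leaf-open duration at
    the pixel where the particle crosses the isocenter plane, normalized."""
    w = np.asarray(open_durations, dtype=float)
    cdf = np.cumsum(w) / np.sum(w)
    u = rng.random(n_particles)
    return np.searchsorted(cdf, u)        # index of the first cdf entry >= u

# Hypothetical leaf-open durations (arbitrary units) for 6 control points
durations = [0.0, 0.4, 1.0, 0.8, 0.2, 0.1]
idx = sample_control_points(durations, 100_000)
print(np.bincount(idx, minlength=len(durations)) / 100_000)
```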
32 CFR 806.20 - Records of non-U.S. government source.
Code of Federal Regulations, 2010 CFR
2010-07-01
... ADMINISTRATION AIR FORCE FREEDOM OF INFORMATION ACT PROGRAM § 806.20 Records of non-U.S. government source. (a) ... e-mail address of: their own FOIA office point of contact; the Air Force record OPR point of contact, the ... (Title 32, National Defense; 2010-07-01)
32 CFR 806.20 - Records of non-U.S. government source.
Code of Federal Regulations, 2011 CFR
2011-07-01
... ADMINISTRATION AIR FORCE FREEDOM OF INFORMATION ACT PROGRAM § 806.20 Records of non-U.S. government source. (a) ... e-mail address of: their own FOIA office point of contact; the Air Force record OPR point of contact, the ... (Title 32, National Defense; 2011-07-01)
32 CFR 806.20 - Records of non-U.S. government source.
Code of Federal Regulations, 2013 CFR
2013-07-01
... ADMINISTRATION AIR FORCE FREEDOM OF INFORMATION ACT PROGRAM § 806.20 Records of non-U.S. government source. (a) ... e-mail address of: their own FOIA office point of contact; the Air Force record OPR point of contact, the ... (Title 32, National Defense; 2013-07-01)
Code of Federal Regulations, 2010 CFR
2010-07-01
... Flexible Polyurethane Foam Production, Pt. 63, Subpt. III, Table 5 to Subpart III of Part 63—Compliance Requirements for Molded and Rebond Foam Production Affected Sources, listed by emission point ... (Protection of Environment)
Code of Federal Regulations, 2011 CFR
2011-07-01
... Flexible Polyurethane Foam Production, Pt. 63, Subpt. III, Table 5 to Subpart III of Part 63—Compliance Requirements for Molded and Rebond Foam Production Affected Sources, listed by emission point ... (Protection of Environment)
Code of Federal Regulations, 2012 CFR
2012-07-01
... Flexible Polyurethane Foam Production, Pt. 63, Subpt. III, Table 5 to Subpart III of Part 63—Compliance Requirements for Molded and Rebond Foam Production Affected Sources, listed by emission point ... (Protection of Environment)
Ramos, Guadalupe E; Lopez, Martin H; Flores, Antonio M; Figueroa, Guadalupe T; De Leon, Fernando G
2010-01-01
Xochimilco is an area of Mexico City fulfilling important ecological functions. However, the water of the canal network in the lacustrine zone of Xochimilco is supplied by the water treatment plants of the city, implying a risk of accumulated contaminants in the sediments. This study reports the effect of lixiviates obtained from sediments collected in the canals of Xochimilco on the growth of the alga Selenastrum capricornutum and the angiosperm Origanum vulgare. Three factors were tested: (a) water source, in terms of the effluent from the two water treatment plants (urban waste-water at Cerro de la Estrella (CE) and urban-rural waters at San Luis Tlaxialtemalco (SLT)); (b) sampling season (January, dry season; May and September, rainy season); and (c) distance from the water discharge point in Xochimilco's main canal (5200 and 1000 m for CE, and 0 and 200 m for SLT). The chemical water properties analyzed were pH, electrical conductivity, N-NO3, N-NH3, total N, P-PO4 and total P. The alga was more sensitive to the contaminants than O. vulgare, showing growth inhibition of 93-100%. The effect of sampling season on the inhibition of algal growth was ordered as follows: September > May > January. Lixiviates obtained from sediment samples 200 and 1000 m from the main point of water discharge caused a higher algal growth inhibition than the samples obtained at the source point. The lixiviates promoted the growth of O. vulgare seedlings.
Hu, Zhiyong; Liebens, Johan; Rao, K Ranga
2008-01-01
Background: Relatively few studies have examined the association between air pollution and stroke mortality. Inconsistent and inconclusive results from existing studies on air pollution and stroke justify the need to continue to investigate the linkage between stroke and air pollution. No studies have been done to investigate the association between stroke and greenness. The objective of this study was to examine whether stroke is associated with air pollution, income and greenness in northwest Florida. Results: Our study used an ecological geographical approach and a dasymetric mapping technique. We adopted a Bayesian hierarchical model with a convolution prior, considering five census-tract-specific covariates. A 95% credible set, which defines an interval having a 0.95 posterior probability of containing the parameter, was calculated for each covariate from Markov chain Monte Carlo simulations. The 95% credible sets are (-0.286, -0.097) for household income, (0.034, 0.144) for the traffic air pollution effect, (0.419, 1.495) for the emission density of monitored point source polluters, (0.413, 1.522) for the simple point density of point source polluters without emission data, and (-0.289, -0.031) for greenness. Household income and greenness show negative effects (the posterior densities primarily cover negative values). Air pollution covariates have positive effects (the 95% credible sets cover positive values). Conclusion: High risk of stroke mortality was found in areas with low income level, high air pollution level, and low level of exposure to green space. PMID:18452609
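For readers unfamiliar with credible sets, the interval reported for each covariate can be read directly off the posterior draws. The short sketch below uses hypothetical MCMC samples and the equal-tailed construction; the paper does not state which construction was used, so this is illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical posterior draws for one covariate coefficient (e.g. greenness),
# standing in for the output of the actual MCMC run.
posterior_draws = rng.normal(loc=-0.16, scale=0.066, size=20000)

lower, upper = np.percentile(posterior_draws, [2.5, 97.5])
print(f"95% equal-tailed credible set: ({lower:.3f}, {upper:.3f})")
# An interval entirely below zero corresponds to a protective effect,
# as reported for household income and greenness.
```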
Imhoff, Roland; Lamberty, Pia; Klein, Olivier
2018-04-01
Classical theories of attitude change point to the positive effect of source expertise on perceived source credibility and persuasion, but there is an ongoing societal debate on the increase in anti-elitist sentiments and conspiracy theories regarding the allegedly untrustworthy power elite. In one correlational (N = 275) and three experimental studies (N = 195, N = 464, N = 225), we tested the novel idea that people who endorse a conspiratorial mind-set (conspiracy mentality) indeed exhibit markedly different reactions to cues of epistemic authoritativeness than those who do not: whereas the perceived credibility of powerful sources decreased with the recipients' conspiracy mentality, that of powerless sources increased, independent of and incremental to other biases such as the need to see the ingroup in a particularly positive light. The discussion raises the question of whether a certain extent of source-based bias is necessary for the social fabric of a highly complex society.
Free Electron coherent sources: From microwave to X-rays
NASA Astrophysics Data System (ADS)
Dattoli, Giuseppe; Di Palma, Emanuele; Pagnutti, Simonetta; Sabia, Elio
2018-04-01
The term Free Electron Laser (FEL) will be used in this paper to indicate a wide collection of devices aimed at providing coherent electromagnetic radiation from a beam of "free" electrons, unbound from atomic or molecular states. This article reviews the similarities that link different sources of coherent radiation across the electromagnetic spectrum, from microwaves to X-rays, and examines the analogies with conventional laser sources. We develop a point of view that allows a unified analytical treatment of these devices through the introduction of appropriate global variables (e.g., gain, saturation intensity, inhomogeneous broadening parameters, longitudinal mode coupling strength), yielding an effective way to determine the relevant design parameters. The paper also looks at more speculative aspects of FEL physics, which may address the relevance of quantum effects in the lasing process.
NASA Technical Reports Server (NTRS)
Fowler, J. W.; Acquaviva, V.; Ade, P. A. R.; Aguirre, P.; Amiri, M.; Appel, J. W.; Barrientos, L. F.; Battistelli, E. S.; Bond, J. R.; Brown, B.;
2010-01-01
We present a measurement of the angular power spectrum of the cosmic microwave background (CMB) radiation observed at 148 GHz. The measurement uses maps with 1.4' angular resolution made with data from the Atacama Cosmology Telescope (ACT). The observations cover 228 deg^2 of the southern sky, in a 4.2-deg-wide strip centered on declination 53 deg south. The CMB at arcminute angular scales is particularly sensitive to the Silk damping scale, to the Sunyaev-Zel'dovich (SZ) effect from galaxy clusters, and to emission by radio sources and dusty galaxies. After masking the 108 brightest point sources in our maps, we estimate the power spectrum over 600 < l < 8000 using the adaptive multi-taper method to minimize spectral leakage and maximize use of the full data set. Our absolute calibration is based on observations of Uranus. To verify the calibration and test the fidelity of our map at large angular scales, we cross-correlate the ACT map with the WMAP map and recover the WMAP power spectrum over 250 < l < 1150. The power beyond the Silk damping tail of the CMB (l ≈ 5000) is consistent with models of the emission from point sources. We quantify the contribution of SZ clusters to the power spectrum by fitting to a model normalized to σ_8 = 0.8. We constrain the model's amplitude A_SZ < 1.63 (95% CL). If interpreted as a measurement of σ_8, this implies σ_8^SZ < 0.86 (95% CL) given our SZ model. A fit of ACT and WMAP five-year data jointly to a 6-parameter ΛCDM model plus point sources and the SZ effect is consistent with these results.
FIRST-ORDER COSMOLOGICAL PERTURBATIONS ENGENDERED BY POINT-LIKE MASSES
DOE Office of Scientific and Technical Information (OSTI.GOV)
Eingorn, Maxim, E-mail: maxim.eingorn@gmail.com
2016-07-10
In the framework of the concordance cosmological model, the first-order scalar and vector perturbations of the homogeneous background are derived in the weak gravitational field limit without any supplementary approximations. The sources of these perturbations (inhomogeneities) are presented in the discrete form of a system of separate point-like gravitating masses. The expressions found for the metric corrections are valid at all (sub-horizon and super-horizon) scales and converge at all points except at the locations of the sources. The average values of these metric corrections are zero (thus, first-order backreaction effects are absent). Both the Minkowski background limit and the Newtonian cosmological approximation are reached under certain well-defined conditions. An important feature of the velocity-independent part of the scalar perturbation is revealed: up to an additive constant, this part represents a sum of Yukawa potentials produced by inhomogeneities with the same finite time-dependent Yukawa interaction range. The suggested connection between this range and the homogeneity scale is briefly discussed along with other possible physical implications.
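The velocity-independent scalar perturbation described above is a superposition of Yukawa potentials with a single, time-dependent screening length. As a purely illustrative sketch (the paper's normalization and cosmological factors are omitted, and all names and values below are our own), such a superposition can be evaluated for a set of point masses like this:

```python
import numpy as np

def yukawa_potential_sum(field_points, masses, positions, screening_length, G=1.0):
    """Sum of Yukawa potentials phi(r) = -G*m_i*exp(-|r - r_i|/lambda)/|r - r_i|
    over point-like sources, evaluated at each field point.

    field_points: (N, 3) array; positions: (M, 3) array; masses: length-M sequence.
    The potential diverges at the source locations themselves, matching the
    abstract's remark that the expressions converge everywhere else.
    """
    phi = np.zeros(len(field_points))
    for m, r_src in zip(masses, positions):
        d = np.linalg.norm(field_points - r_src, axis=1)
        phi += -G * m * np.exp(-d / screening_length) / d
    return phi

# Example: two unit masses, evaluated along a line (arbitrary units).
pts = np.column_stack([np.linspace(0.5, 10.0, 5), np.zeros(5), np.zeros(5)])
print(yukawa_potential_sum(pts, masses=[1.0, 1.0],
                           positions=np.array([[0.0, 0.0, 0.0], [3.0, 0.0, 0.0]]),
                           screening_length=2.0))
```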
Boluda-Ruiz, Rubén; García-Zambrana, Antonio; Castillo-Vázquez, Carmen; Castillo-Vázquez, Beatriz
2014-06-30
In this paper, a novel adaptive cooperative protocol with multiple relays using detect-and-forward (DF) over atmospheric turbulence channels with pointing errors is proposed. The adaptive DF cooperative protocol analyzed here is based on the selection of the optical path, source-destination or different source-relay links, with the greater value of fading gain or irradiance, maintaining a high diversity order. Closed-form asymptotic bit error-rate (BER) expressions are obtained for a cooperative free-space optical (FSO) communication system with N_r relays, when the irradiance of the transmitted optical beam is susceptible to either a wide range of turbulence conditions, following a gamma-gamma distribution of parameters α and β, or pointing errors, following a misalignment fading model in which the effects of beam width, detector size and jitter variance are considered. The obtained results corroborate a greater robustness to different link distances and pointing errors compared with similar cooperative schemes or equivalent multiple-input multiple-output (MIMO) systems. Simulation results further confirm the accuracy and usefulness of the derived results.
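The gamma-gamma turbulence model referenced above treats the normalized irradiance as the product of two independent gamma-distributed factors (large- and small-scale scintillation) with parameters α and β. A minimal Monte Carlo sketch of that fading model, with illustrative parameter values of our own choosing and without the pointing-error component, is:

```python
import numpy as np

def sample_gamma_gamma_irradiance(alpha, beta, n_samples, rng=None):
    """Draw normalized irradiance samples I = X*Y, where
    X ~ Gamma(shape=alpha, scale=1/alpha) and Y ~ Gamma(shape=beta, scale=1/beta),
    so that E[I] = 1. This is the standard gamma-gamma scintillation model."""
    rng = np.random.default_rng() if rng is None else rng
    x = rng.gamma(shape=alpha, scale=1.0 / alpha, size=n_samples)
    y = rng.gamma(shape=beta, scale=1.0 / beta, size=n_samples)
    return x * y

# Moderate-turbulence example (illustrative values only).
I = sample_gamma_gamma_irradiance(alpha=4.0, beta=1.9, n_samples=100000)
print(I.mean(), I.var())  # mean ~1; the variance grows as turbulence strengthens
```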
Aryal, P; Molloy, J
2012-06-01
To show the effect of gold backing on dose rates for the USC #9 radioactive eye plaque. An I-125 source (IsoAid model IAI-125A) and gold backing were modeled using the MCNP5 Monte Carlo code. A single iodine seed was simulated with and without gold backing. Dose rates were calculated at points structured in two orthogonal planes that bisect the center of the source. A 2×2 cm matrix of spherical points of radius 0.2 mm was created in a water phantom of 10 cm radius. A total of 0.2 billion particle histories was tracked. Dose differences with and without the gold backing were analyzed using Matlab. The gold backing produced a 3% increase in the dose rate near the source surface (<1 mm) relative to that without the backing. This was presumably caused by fluorescent photons from the gold. At distances between 1 and 2 cm, the gold backing reduced the dose rate by up to 12%, which we attribute to a lack of scatter resulting from attenuation by the gold. Dose differences were most pronounced in the radial direction near the source center but off axis. The dose decreased by 25%, 65% and 81% at 1, 2, and 3 mm off axis at a distance of 1 mm from the source surface. These effects were less pronounced in the perpendicular dimension near the source tip, where maximum dose decreases of 2% were noted. I-125 sources embedded directly into gold troughs display dose differences of 2% to 90% relative to doses without the gold backing. This is relevant for certain types of plaques used in the treatment of ocular melanoma. Large dose reductions can be observed and may have implications for scleral dose reduction. © 2012 American Association of Physicists in Medicine.
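The with/without-backing comparison reduces to a point-by-point relative difference between two dose matrices. A trivial sketch of that analysis follows (the original study used Matlab; the array names and the 2×2 example values here are hypothetical):

```python
import numpy as np

def percent_dose_difference(dose_with_backing, dose_without_backing):
    """Relative dose difference (%) at each calculation point,
    taking the no-backing dose as the reference."""
    ref = np.asarray(dose_without_backing, dtype=float)
    return 100.0 * (np.asarray(dose_with_backing, dtype=float) - ref) / ref

# Example on made-up 2x2 dose grids (Gy): a near-surface boost and a distal reduction.
with_au = np.array([[1.03, 0.95], [0.88, 0.75]])
without_au = np.array([[1.00, 1.00], [1.00, 1.00]])
print(percent_dose_difference(with_au, without_au))
```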
Strategies for satellite-based monitoring of CO2 from distributed area and point sources
NASA Astrophysics Data System (ADS)
Schwandner, Florian M.; Miller, Charles E.; Duren, Riley M.; Natraj, Vijay; Eldering, Annmarie; Gunson, Michael R.; Crisp, David
2014-05-01
Atmospheric CO2 budgets are controlled by the strengths, as well as the spatial and temporal variabilities of CO2 sources and sinks. Natural CO2 sources and sinks are dominated by the vast areas of the oceans and the terrestrial biosphere. In contrast, anthropogenic and geogenic CO2 sources are dominated by distributed area and point sources, which may constitute as much as 70% of anthropogenic (e.g., Duren & Miller, 2012), and over 80% of geogenic emissions (Burton et al., 2013). Comprehensive assessments of CO2 budgets necessitate robust and highly accurate satellite remote sensing strategies that address the competing and often conflicting requirements for sampling over disparate space and time scales. Spatial variability: The spatial distribution of anthropogenic sources is dominated by patterns of production, storage, transport and use. In contrast, geogenic variability is almost entirely controlled by endogenic geological processes, except where surface gas permeability is modulated by soil moisture. Satellite remote sensing solutions will thus have to vary greatly in spatial coverage and resolution to address distributed area sources and point sources alike. Temporal variability: While biogenic sources are dominated by diurnal and seasonal patterns, anthropogenic sources fluctuate over a greater variety of time scales from diurnal, weekly and seasonal cycles, driven by both economic and climatic factors. Geogenic sources typically vary in time scales of days to months (geogenic sources sensu stricto are not fossil fuels but volcanoes, hydrothermal and metamorphic sources). Current ground-based monitoring networks for anthropogenic and geogenic sources record data on minute- to weekly temporal scales. Satellite remote sensing solutions would have to capture temporal variability through revisit frequency or point-and-stare strategies. Space-based remote sensing offers the potential of global coverage by a single sensor. However, no single combination of orbit and sensor provides the full range of temporal sampling needed to characterize distributed area and point source emissions. For instance, point source emission patterns will vary with source strength, wind speed and direction. Because wind speed, direction and other environmental factors change rapidly, short term variabilities should be sampled. For detailed target selection and pointing verification, important lessons have already been learned and strategies devised during JAXA's GOSAT mission (Schwandner et al, 2013). The fact that competing spatial and temporal requirements drive satellite remote sensing sampling strategies dictates a systematic, multi-factor consideration of potential solutions. Factors to consider include vista, revisit frequency, integration times, spatial resolution, and spatial coverage. No single satellite-based remote sensing solution can address this problem for all scales. It is therefore of paramount importance for the international community to develop and maintain a constellation of atmospheric CO2 monitoring satellites that complement each other in their temporal and spatial observation capabilities: Polar sun-synchronous orbits (fixed local solar time, no diurnal information) with agile pointing allow global sampling of known distributed area and point sources like megacities, power plants and volcanoes with daily to weekly temporal revisits and moderate to high spatial resolution. 
Extensive targeting of distributed area and point sources comes at the expense of reduced mapping or spatial coverage, and the important contextual information that comes with large-scale contiguous spatial sampling. Polar sun-synchronous orbits with push-broom swath-mapping but limited pointing agility may allow mapping of individual source plumes and their spatial variability, but will depend on fortuitous environmental conditions during the observing period. These solutions typically have longer times between revisits, limiting their ability to resolve temporal variations. Geostationary and non-sun-synchronous low-Earth orbits (precessing local solar time, diurnal information possible) with agile pointing have the potential to provide comprehensive mapping of distributed area sources such as megacities with longer stare times and multiple revisits per day, at the expense of global access and spatial coverage. An ad hoc CO2 remote sensing constellation is emerging. NASA's OCO-2 satellite (launch July 2014) joins JAXA's GOSAT satellite in orbit. These will be followed by GOSAT-2 and NASA's OCO-3 on the International Space Station as early as 2017. Additional polar orbiting satellites (e.g., CarbonSat, under consideration at ESA) and geostationary platforms may also become available. However, the individual assets have been designed with independent science goals and requirements, and limited consideration of coordinated observing strategies. Every effort must be made to maximize the science return from this constellation. We discuss the opportunities to exploit the complementary spatial and temporal coverage provided by these assets as well as the crucial gaps in the capabilities of this constellation. References: Burton, M.R., Sawyer, G.M., and Granieri, D. (2013). Deep carbon emissions from volcanoes. Rev. Mineral. Geochem. 75: 323-354. Duren, R.M., Miller, C.E. (2012). Measuring the carbon emissions of megacities. Nature Climate Change 2, 560-562. Schwandner, F.M., Oda, T., Duren, R., Carn, S.A., Maksyutov, S., Crisp, D., Miller, C.E. (2013). Scientific Opportunities from Target-Mode Capabilities of GOSAT-2. NASA Jet Propulsion Laboratory, California Institute of Technology, Pasadena CA, White Paper, 6p., March 2013.
Photon absorption potential coefficient as a tool for materials engineering
NASA Astrophysics Data System (ADS)
Akande, Raphael Oluwole; Oyewande, Emmanuel Oluwole
2016-09-01
Different atoms achieve ionization at different energies. Atoms are therefore characterized in this study by their different responses to photon absorption; that is, each atom has a coefficient describing its potential for photon absorption from a photon source. In this study, we consider the manner in which molecular constituents (atoms) absorb photons from a photon source. We observe that there seems to be a common pattern of variation in photon absorption among the electrons of all atoms in the periodic table. We assume that the electrons closest to the nucleus (En) and the electrons closest to the outside of the atom (Eo) do not have as much potential for photon absorption as the electrons in the middle of the atom (Em). The explanation we give for this effect is that the En electrons are embedded within the nuclear influence and, similarly, the Eo electrons are embedded within the influence of energies outside the atom, so that both have a low potential for photon absorption. Unlike En and Eo, Em electrons are conditioned such that there is a quest for balance between being influenced by the nuclear force and by forces external to the atom. Therefore, Em electrons have a higher potential for photon absorption than En and Eo electrons. The results of our derivations and analysis always produce a bell-shaped curve, instead of an increasing curve as for the ionization energies, for all elements in the periodic table. We obtained a large set of PAPC values for each of the several materials considered. The point at which two or more PAPC values cross one another is termed a region of conflicting ionization order, where all the atoms absorb an equal portion of the photon source at the same time. At this point, a greater fraction of the photon source is pumped into the material, which could lead to an explosive response from the material. In fact, a so-far-unreported phenomenon could occur when two or more PAPCs cross and the material is able to absorb more than the photon source can provide at this point. These resulting effects might have significant materials engineering applications.
Nutrient Mass Balance for the Mobile River Basin in Alabama, Georgia, and Mississippi
NASA Astrophysics Data System (ADS)
Harned, D. A.; Harvill, J. S.; McMahon, G.
2001-12-01
The source and fate of nutrients in the Mobile River drainage basin are important water-quality concerns in Alabama, Georgia, and Mississippi. Land cover in the basin is 74 percent forested, 16 percent agricultural, 2.5 percent developed, and 4 percent wetland. A nutrient mass balance calculated for 18 watersheds in the Mobile River Basin indicates that agricultural non-point nitrogen and phosphorus sources and urban non-point nitrogen sources are the most important factors associated with nutrients in the streams. Nitrogen and phosphorus inputs from atmospheric deposition, crop fertilizer, biological nitrogen fixation, animal waste, and point sources were estimated for each of the 18 drainage basins. Total basin nitrogen inputs ranged from 27 to 93 percent from atmospheric deposition (56 percent mean), 4 to 45 percent from crop fertilizer (25 percent mean), <0.01 to 31 percent from biological nitrogen fixation (8 percent mean), 2 to 14 percent from animal waste (8 percent mean), and 0.2 to 11 percent from point sources (3 percent mean). Total basin phosphorus inputs ranged from 10 to 39 percent from atmospheric deposition (26 percent mean), 7 to 51 percent from crop fertilizer (28 percent mean), 20 to 64 percent from animal waste (41 percent mean), and 0.2 to 11 percent from point sources (3 percent mean). Nutrient outputs for the watersheds were estimated by calculating instream loads and estimating nutrient uptake, or withdrawal, by crops. The difference between the total basin inputs and outputs represents nutrients that are retained or processed within the basin while moving from the point of use to the stream, or in the stream. Nitrogen output, as a percentage of the total basin nitrogen inputs, ranged from 19 to 79 percent for instream loads (35 percent mean) and from 0.01 to 32 percent for crop harvest (10 percent mean). From 53 to 87 percent (75 percent mean) of nitrogen inputs were retained within the 18 basins. Phosphorus output ranged from 9 to 29 percent for instream loads (18 percent mean) and from 0.01 to 23 percent for crop harvest (7 percent mean). The basins retained from 60 to 87 percent (74 percent mean) of phosphorus inputs. Correlation of basin nutrient output loads and concentrations with the basin inputs and correlation of output loads and concentrations with basin land use were tested using the Spearman rank test. The correlation analysis indicated that higher nitrogen concentrations in the streams are associated with urban areas and higher loads are associated with agriculture; high phosphorus output loads and concentrations are associated with agriculture. Higher nutrient loads in agricultural basins are partly an effect of basin size: larger basins generate larger nutrient loads. Nutrient loads and concentrations showed no significant correlation to point-source inputs. Nitrogen loads were significantly (p<0.05, correlation coefficient >0.5) higher in basins with greater cropland areas. Nitrogen concentrations also increased as residential, commercial, and total urban areas increased. Phosphorus loads were positively correlated with animal-waste inputs, pasture, and total agricultural land. Phosphorus concentrations were highest in basins with the greatest amounts of row-crop agriculture.
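The retention figures above follow from a simple per-watershed input-output balance, and the land-use associations from Spearman rank correlations. The compact sketch below performs both calculations on hypothetical per-basin values (not the study's data; the real analysis used 18 watersheds and several input categories):

```python
import numpy as np
from scipy.stats import spearmanr

# Hypothetical nitrogen budgets (kg/yr) for a handful of basins.
inputs = np.array([1.0e6, 2.4e6, 0.8e6, 3.1e6, 1.6e6])   # deposition + fertilizer + fixation + waste + point sources
instream_load = np.array([0.30e6, 0.70e6, 0.18e6, 1.10e6, 0.45e6])
crop_harvest = np.array([0.05e6, 0.30e6, 0.02e6, 0.40e6, 0.10e6])

retained_fraction = 1.0 - (instream_load + crop_harvest) / inputs
print("percent retained per basin:", np.round(100 * retained_fraction, 1))

# Association between instream load and cropland area, as in the Spearman rank test.
cropland_km2 = np.array([120.0, 510.0, 60.0, 800.0, 300.0])
rho, p_value = spearmanr(instream_load, cropland_km2)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")
```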
Impacts of the Detection of Cassiopeia A Point Source.
Umeda; Nomoto; Tsuruta; Mineshige
2000-05-10
Very recently the Chandra first-light observation discovered a point-like source in the Cassiopeia A supernova remnant. This detection was subsequently confirmed by analyses of the archival data from both ROSAT and Einstein observations. Here we compare the results from these observations with scenarios involving both black holes (BHs) and neutron stars (NSs). If this point source is a BH, we offer as a promising candidate a disk-corona type model with a low accretion rate, in which a soft photon source at approximately 0.1 keV is Comptonized by higher energy electrons in the corona. If it is an NS, the dominant radiation observed by Chandra most likely originates from smaller, hotter regions of the stellar surface, but we argue that it is still worthwhile to compare the cooler component from the rest of the surface with cooling theories. We emphasize that the detection of this point source should have an enormous impact on theories of supernova explosion, progenitor scenarios, compact remnant formation, accretion onto compact objects, and NS thermal evolution.
Overview of on-farm bioremediation systems to reduce the occurrence of point source contamination.
De Wilde, Tineke; Spanoghe, Pieter; Debaer, Christof; Ryckeboer, Jaak; Springael, Dirk; Jaeken, Peter
2007-02-01
Contamination of ground and surface water puts pressure on the use of pesticides. Pesticide contamination of water can often be linked to point sources rather than to diffuse sources. Examples of such point sources are areas on farms where pesticides are handled and filled into sprayers, and where sprayers are cleaned. To reduce contamination from these point sources, different kinds of bioremediation system are being researched in various member states of the EU. Bioremediation is the use of living organisms, primarily microorganisms, to degrade environmental contaminants into less toxic forms. The systems available for biocleaning of pesticides vary in shape and design. Up to now, three systems have been extensively described and reported: the biobed, the Phytobac and the biofilter. Most of these constructions are excavations or containers of various sizes filled with biological material. Typical overall clean-up efficiency exceeds 95%, and more than 99% is realised in many cases. This paper provides an overview of the state of the art of these bioremediation systems and discusses their construction, efficiency and drawbacks.
Code of Federal Regulations, 2012 CFR
2012-07-01
... POINT SOURCE CATEGORY Gum Rosin and Turpentine Subcategory § 454.22 Effluent limitations and guidelines... turpentine by a point source subject to the provisions of this paragraph after application of the best...
Code of Federal Regulations, 2013 CFR
2013-07-01
... POINT SOURCE CATEGORY Gum Rosin and Turpentine Subcategory § 454.22 Effluent limitations and guidelines... turpentine by a point source subject to the provisions of this paragraph after application of the best...
Code of Federal Regulations, 2014 CFR
2014-07-01
... POINT SOURCE CATEGORY Gum Rosin and Turpentine Subcategory § 454.22 Effluent limitations and guidelines... turpentine by a point source subject to the provisions of this paragraph after application of the best...
NASA Astrophysics Data System (ADS)
Abbasi, R. U.; Abu-Zayyad, T.; Amann, J. F.; Archbold, G.; Atkins, R.; Bellido, J. A.; Belov, K.; Belz, J. W.; Ben-Zvi, S. Y.; Bergman, D. R.; Boyer, J. H.; Burt, G. W.; Cao, Z.; Clay, R. W.; Connolly, B. M.; Dawson, B. R.; Deng, W.; Farrar, G. R.; Fedorova, Y.; Findlay, J.; Finley, C. B.; Hanlon, W. F.; Hoffman, C. M.; Holzscheiter, M. H.; Hughes, G. A.; Hüntemeyer, P.; Jui, C. C. H.; Kim, K.; Kirn, M. A.; Knapp, B. C.; Loh, E. C.; Maestas, M. M.; Manago, N.; Mannel, E. J.; Marek, L. J.; Martens, K.; Matthews, J. A. J.; Matthews, J. N.; O'Neill, A.; Painter, C. A.; Perera, L.; Reil, K.; Riehle, R.; Roberts, M. D.; Sasaki, M.; Schnetzer, S. R.; Seman, M.; Simpson, K. M.; Sinnis, G.; Smith, J. D.; Snow, R.; Sokolsky, P.; Song, C.; Springer, R. W.; Stokes, B. T.; Thomas, J. R.; Thomas, S. B.; Thomson, G. B.; Tupa, D.; Westerhoff, S.; Wiencke, L. R.; Zech, A.
2005-04-01
We present the results of a search for cosmic-ray point sources at energies in excess of 4.0×10^19 eV in the combined data sets recorded by the Akeno Giant Air Shower Array and High Resolution Fly's Eye stereo experiments. The analysis is based on a maximum likelihood ratio test using the probability density function for each event rather than requiring an a priori choice of a fixed angular bin size. No statistically significant clustering of events consistent with a point source is found.
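The search statistic described above is an unbinned likelihood ratio in which each event contributes its own probability density under a point-source-plus-background hypothesis versus background alone. A heavily simplified sketch follows (flat background, Gaussian point-spread function, small-angle approximation; all parameter values are illustrative and are not those of the experiments):

```python
import numpy as np
from scipy.optimize import minimize_scalar

def ts_point_source(event_ra, event_dec, sigma, src_ra, src_dec, solid_angle):
    """Test statistic 2*ln[L(ns_hat)/L(ns=0)] for a candidate source direction.

    Signal PDF: circular Gaussian of width sigma (rad) around the source
    (small-angle approximation). Background PDF: uniform over the exposed
    solid angle (sr). All angles in radians.
    """
    event_ra, event_dec = np.asarray(event_ra), np.asarray(event_dec)
    n = len(event_ra)
    dx = (event_ra - src_ra) * np.cos(src_dec)
    dy = event_dec - src_dec
    r2 = dx**2 + dy**2
    signal = np.exp(-r2 / (2.0 * sigma**2)) / (2.0 * np.pi * sigma**2)
    background = 1.0 / solid_angle

    def neg_log_l(ns):
        mix = (ns / n) * signal + (1.0 - ns / n) * background
        return -np.sum(np.log(np.clip(mix, 1e-300, None)))

    best = minimize_scalar(neg_log_l, bounds=(0.0, float(n)), method="bounded")
    return 2.0 * (neg_log_l(0.0) - best.fun)

# Toy example: 200 isotropic events over a ~1 sr patch, 1-degree angular resolution.
rng = np.random.default_rng(1)
ra, dec = rng.uniform(0.0, 1.0, 200), rng.uniform(0.0, 1.0, 200)
print(ts_point_source(ra, dec, sigma=np.radians(1.0),
                      src_ra=0.5, src_dec=0.5, solid_angle=1.0))
```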
NASA Technical Reports Server (NTRS)
Schlegel, E.; Norris, Jay P. (Technical Monitor)
2002-01-01
This project was awarded funding from the CGRO program to support ROSAT and ground-based observations of unidentified sources from data obtained by the EGRET instrument on the Compton Gamma-Ray Observatory. The critical items in the project are the individual ROSAT observations that are used to cover the 99% error circle of each unidentified EGRET source. Each error circle is a degree or larger in diameter. Each ROSAT field is about 30 arcmin in diameter. Hence, a number (>4) of ROSAT pointings must be obtained for each EGRET source to cover the field. The scheduling of ROSAT observations is carried out to maximize the efficiency of the total schedule. As a result, each pointing is broken into one or more sub-pointings of various exposure times. This project was awarded ROSAT observing time for four unidentified EGRET sources, summarized in the table. The column headings are defined as follows: 'Coverings' = number of observations to cover the error circle; 'SubPtg' = total number of sub-pointings to observe all of the coverings; 'Rec'd' = number of individual sub-pointings received to date; 'CompFlds' = number of individual coverings for which the requested complete exposure has been received. Processing of the data cannot occur until a complete exposure has been accumulated for each covering.
An infrared sky model based on the IRAS point source data
NASA Technical Reports Server (NTRS)
Cohen, Martin; Walker, Russell; Wainscoat, Richard; Volk, Kevin; Walker, Helen; Schwartz, Deborah
1990-01-01
A detailed model for the infrared point source sky is presented that comprises geometrically and physically realistic representations of the galactic disk, bulge, spheroid, spiral arms, molecular ring, and absolute magnitudes. The model was guided by a parallel Monte Carlo simulation of the Galaxy. The content of the galactic source table constitutes an excellent match to the 12 micrometer luminosity function in the simulation, as well as the luminosity functions at V and K. Models are given for predicting the density of asteroids to be observed, and the diffuse background radiance of the Zodiacal cloud. The model can be used to predict the character of the point source sky expected for observations from future infrared space experiments.
ERIC Educational Resources Information Center
Lindahl, Mikael
2005-01-01
A new approach is presented to analyze whether there is a causal effect or relationship between income and measures of good health and life expectancy. One of the findings is that winning monetary lotteries could improve general health by 3 percent and decrease the probability of death within five years by 2-3 percentage points. Higher income by 10…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lang, P.
The effect of Governor Jerry Brown on the solar industry in California is reviewed. It is pointed out that currently there are 7000 solar businesses; before Gov. Brown's administration there were virtually none. The effect of Gov. Brown's administration on the use of solar and renewable energy sources, as well as energy conservation are reviewed. Specific topics include: (1) political action; (2) business sense; (3) increased competition; (4) marketing; and (5) consumer protection. (MJJ)
Holtedahl, Robin; Brox, Jens Ivar; Tjomsland, Ole
2015-01-01
Objectives To analyse the impact of placebo effects on outcome in trials of selected minimally invasive procedures and to assess reported adverse events in both trial arms. Design A systematic review and meta-analysis. Data sources and study selection We searched MEDLINE and Cochrane library to identify systematic reviews of musculoskeletal, neurological and cardiac conditions published between January 2009 and January 2014 comparing selected minimally invasive with placebo (sham) procedures. We searched MEDLINE for additional randomised controlled trials published between January 2000 and January 2014. Data synthesis Effect sizes (ES) in the active and placebo arms in the trials’ primary and pooled secondary end points were calculated. Linear regression was used to analyse the association between end points in the active and sham groups. Reported adverse events in both trial arms were registered. Results We included 21 trials involving 2519 adult participants. For primary end points, there was a large clinical effect (ES≥0.8) after active treatment in 12 trials and after sham procedures in 11 trials. For secondary end points, 7 and 5 trials showed a large clinical effect. Three trials showed a moderate difference in ES between active treatment and sham on primary end points (ES ≥0.5) but no trials reported a large difference. No trials showed large or moderate differences in ES on pooled secondary end points. Regression analysis of end points in active treatment and sham arms estimated an R2 of 0.78 for primary and 0.84 for secondary end points. Adverse events after sham were in most cases minor and of short duration. Conclusions The generally small differences in ES between active treatment and sham suggest that non-specific mechanisms, including placebo, are major predictors of the observed effects. Adverse events related to sham procedures were mainly minor and short-lived. Ethical arguments frequently raised against sham-controlled trials were generally not substantiated. PMID:25636794
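The analysis above boils down to computing a standardized effect size per trial arm and then regressing the sham-arm values on the active-arm values across trials. A toy sketch with invented trial-level numbers (not data from the review) illustrates the calculation:

```python
import numpy as np
from scipy import stats

# Invented within-arm effect sizes (standardized mean change) for hypothetical trials.
es_active = np.array([0.9, 1.1, 0.7, 1.4, 0.8, 1.0])
es_sham = np.array([0.8, 1.0, 0.6, 1.2, 0.9, 0.9])

# Difference between arms (what sham-controlled trials actually test).
print("mean ES difference:", np.mean(es_active - es_sham))

# Association between active and sham responses across trials.
slope, intercept, r, p, se = stats.linregress(es_active, es_sham)
print(f"R^2 = {r**2:.2f}")  # the review reported R^2 of about 0.78-0.84
```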
KM3NeT/ARCA sensitivity to point-like neutrino sources
NASA Astrophysics Data System (ADS)
Trovato, A.;
2017-09-01
KM3NeT is a network of deep-sea neutrino telescopes in the Mediterranean Sea aiming at the discovery of cosmic neutrino sources (ARCA) and the determination of the neutrino mass hierarchy (ORCA). The geographical location of KM3NeT in the Northern hemisphere allows observation of most of the Galactic Plane, including the Galactic Centre. Thanks to its good angular resolution, the prime targets of KM3NeT/ARCA are point-like neutrino sources, and in particular galactic sources.
An efficient method to compute microlensed light curves for point sources
NASA Technical Reports Server (NTRS)
Witt, Hans J.
1993-01-01
We present a method to compute microlensed light curves for point sources. This method has the general advantage that all microimages contributing to the light curve are found. While a source moves along a straight line, all microimages are located either on the primary image track or on the secondary image tracks (loops). The primary image track extends from -infinity to +infinity and is made of many segments which are continuously connected. All the secondary image tracks (loops) begin and end on the lensing point masses. The method can be applied to any microlensing situation with point masses in the deflector plane, even for the overcritical case and for surface densities close to the critical value. Furthermore, we present general rules to evaluate the light curve for a straight track arbitrarily placed in the caustic network of a sample of many point masses.
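For orientation, the simplest microlensed light curve, a single point-mass lens and a point source on a straight track, has the closed-form magnification A(u) = (u^2 + 2) / (u * sqrt(u^2 + 4)). The sketch below evaluates only this textbook single-lens limit; it is not the multi-mass image-track method of the paper.

```python
import numpy as np

def single_lens_magnification(u):
    """Point-source, point-lens magnification versus source-lens separation u
    in units of the Einstein radius."""
    return (u**2 + 2.0) / (u * np.sqrt(u**2 + 4.0))

def light_curve(t, t0, u0, t_einstein):
    """Magnification along a straight source track with impact parameter u0,
    closest approach at time t0, and Einstein-radius crossing time t_einstein."""
    u = np.sqrt(u0**2 + ((t - t0) / t_einstein) ** 2)
    return single_lens_magnification(u)

t = np.linspace(-30.0, 30.0, 7)   # days, illustrative
print(light_curve(t, t0=0.0, u0=0.3, t_einstein=10.0))
```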
Occurrence of THM and NDMA precursors in a watershed: Effect of seasons and anthropogenic pollution.
Aydin, Egemen; Yaman, Fatma Busra; Ates Genceli, Esra; Topuz, Emel; Erdim, Esra; Gurel, Melike; Ipek, Murat; Pehlivanoglu-Mantas, Elif
2012-06-30
In pristine watersheds, natural organic matter is the main source of disinfection by-product (DBP) precursors. However, the presence of point or non-point pollution sources in watersheds may lead to increased levels of DBP precursors, which in turn form DBPs in the drinking water treatment plant upon chlorination or chloramination. In this study, water samples were collected from a lake used to obtain drinking water for Istanbul, as well as its tributaries, to investigate the presence of the precursors of two disinfection by-products, trihalomethanes (THM) and N-nitrosodimethylamine (NDMA). In addition, the effect of seasons and the possible relationships between these precursors and water quality parameters were evaluated. The concentrations of THM and NDMA precursors, measured as total THM formation potential (TTHMFP) and NDMA formation potential (NDMAFP), ranged between 126 and 1523 μg/L THM and <2 and 1648 ng/L NDMA, respectively. Such wide ranges imply that some of the tributaries are affected by anthropogenic pollution sources, which is also supported by high DOC, Cl- and NH3 concentrations. No significant correlation was found between the water quality parameters and DBP formation potential, except for a weak correlation between NDMAFP and DOC concentrations. The effect of the sampling location was more pronounced than the seasonal variation due to anthropogenic pollution in some tributaries, and no significant correlation was obtained between the seasons and water quality parameters. Copyright © 2012 Elsevier B.V. All rights reserved.
Chandra Deep X-ray Observation of a Typical Galactic Plane Region and Near-Infrared Identification
NASA Technical Reports Server (NTRS)
Ebisawa, K.; Tsujimoto, M.; Paizis, A.; Hamaguichi, K.; Bamba, A.; Cutri, R.; Kaneda, H.; Maeda, Y.; Sato, G.; Senda, A.
2004-01-01
Using the Chandra Advanced CCD Imaging Spectrometer imaging array (ACIS-I), we have carried out a deep hard X-ray observation of the Galactic plane region at (l,b) approx. (28.5 deg, 0.0 deg), where no discrete X-ray source has been reported previously. We have detected 274 new point X-ray sources (4 sigma confidence) as well as strong Galactic diffuse emission within two partially overlapping ACIS-I fields (approx. 250 sq arcmin in total). The point source sensitivity was approx. 3 × 10^-15 ergs/s/sq cm in the hard X-ray band (2-10 keV) and approx. 2 × 10^-16 ergs/s/sq cm in the soft band (0.5-2 keV). The sum of all the detected point source fluxes accounts for only approx. 10% of the total X-ray flux in the field of view. In order to explain the total X-ray flux by a superposition of fainter point sources, an extremely rapid increase of the source population is required below our sensitivity limit, which is hardly reconciled with any source distribution in the Galactic plane. Therefore, we conclude that the X-ray emission from the Galactic plane has a truly diffuse origin. Only 26 point sources were detected in both the soft and hard bands, indicating that there are two distinct classes of X-ray sources distinguished by the spectral hardness ratio. The surface number density of the hard sources is only slightly higher than that observed at high Galactic latitudes, strongly suggesting that the majority of the hard X-ray sources are active galaxies seen through the Galactic plane. Following the Chandra observation, we performed a near-infrared (NIR) survey with SOFI at ESO/NTT to identify these new X-ray sources. Since the Galactic plane is opaque in the NIR, we did not see the background extragalactic sources in the NIR. In fact, only 22% of the hard sources had NIR counterparts, which are most likely of Galactic origin. The composite X-ray energy spectrum of those hard X-ray sources having NIR counterparts exhibits a narrow approx. 6.7 keV iron emission line, which is a signature of Galactic quiescent cataclysmic variables (CVs).
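The soft/hard classification above is conventionally summarized by a hardness ratio built from band counts. A minimal sketch of one common convention, HR = (H - S)/(H + S), follows; the paper may define its ratio differently, so treat this only as an illustration.

```python
import numpy as np

def hardness_ratio(soft_counts, hard_counts):
    """Hardness ratio HR = (H - S) / (H + S) from background-subtracted
    counts in the soft (0.5-2 keV) and hard (2-10 keV) bands."""
    s = np.asarray(soft_counts, dtype=float)
    h = np.asarray(hard_counts, dtype=float)
    return (h - s) / (h + s)

# Toy examples: a soft (star-like) source and a hard (absorbed, AGN-like) source.
print(hardness_ratio([40, 3], [5, 35]))   # -> approximately [-0.78, +0.84]
```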
Effects of volcano topography on seismic broad-band waveforms
NASA Astrophysics Data System (ADS)
Neuberg, Jürgen; Pointer, Tim
2000-10-01
Volcano seismology often deals with rather shallow seismic sources and seismic stations deployed in their near field. The complex stratigraphy on volcanoes and near-field source effects have a strong impact on the seismic wavefield, complicating the interpretation techniques that are usually employed in earthquake seismology. In addition, as most volcanoes have a pronounced topography, the interference of the seismic wavefield with the stress-free surface results in severe waveform perturbations that affect seismic interpretation methods. In this study we deal predominantly with the surface effects, but take into account the impact of a typical volcano stratigraphy as well as near-field source effects. We derive a correction term for plane seismic waves and a plane free surface such that for smooth topographies the effect of the free surface can be totally removed. Seismo-volcanic sources radiate energy in a broad frequency range with a correspondingly wide range of Fresnel zones. A 2-D boundary element method is employed to study how the size of the Fresnel zone depends on source depth, dominant wavelength and topography, in order to estimate the limits of the plane wave approximation. This approximation remains valid if the dominant wavelength does not exceed twice the source depth. Further aspects of this study concern particle motion analysis to locate point sources and the influence of the stratigraphy on particle motions. Furthermore, the deployment strategy of seismic instruments on volcanoes, as well as the direct interpretation of the broad-band waveforms in terms of pressure fluctuations in the volcanic plumbing system, are discussed.
Grell, Kathrine; Diggle, Peter J; Frederiksen, Kirsten; Schüz, Joachim; Cardis, Elisabeth; Andersen, Per K
2015-10-15
We study how to include the spatial distribution of tumours when investigating the relation between brain tumours and the exposure from radio frequency electromagnetic fields caused by mobile phone use. Our suggested point process model is adapted from studies investigating spatial aggregation of a disease around a source of potential hazard in environmental epidemiology, where now the source is the preferred ear of each phone user. In this context, the spatial distribution is a distribution over a sample of patients rather than over multiple disease cases within one geographical area. We show how the distance relation between tumour and phone can be modelled nonparametrically and, with various parametric functions, how covariates can be included in the model and how to test for the effect of distance. To illustrate the models, we apply them to a subset of the data from the Interphone Study, a large multinational case-control study on the association between brain tumours and mobile phone use. Copyright © 2015 John Wiley & Sons, Ltd.
Land use change, and the implementation of best management practices to remedy the adverse effects of land use change, alter hydrologic patterns, contaminant loading and water quality in freshwater ecosystems. These changes are not constant over time, but vary in response to di...
College Students' Misconceptions of Environmental Issues Related to Global Warming.
ERIC Educational Resources Information Center
Groves, Fred H.; Pugh, Ava F.
Students are currently exposed to world environmental problems--including global warming and the greenhouse effect--in science classes at various points during their K-12 and college experience. However, the amount and depth of exposure to these issues can be quite variable. Students are also exposed to sources of misinformation leading to…
USDA-ARS?s Scientific Manuscript database
Continued public support for U.S. tax-payer funded programs aimed at reducing agricultural non-point source pollutants depends on clear demonstrations of water quality improvements. Effectiveness of structural BMPs, as well as watershed monitoring networks is an important information need to make f...
Potential water yield reduction due to forestation across China
Ge Sun; Guoyi Zhou; Zhiqiang Zhang; Xiaohua Wei; Steven G. McNulty; James M. Vose
2006-01-01
It is widely recognized that vegetation restoration will have positive effects on watershed health by reducing soil erosion and non-point source pollution, enhancing terrestrial and aquatic habitat, and increasing ecosystem carbon sequestration. However, the hydrologic consequences of forestation on degraded lands are not well studied in the forest hydrology community...
DEVELOPMENT OF LANDSCAPE INDICATORS FOR USE IN REGIONAL ECOLOGICAL RISK ASSESSMENTS
There is a growing need for cost effective ways to assess conditions of and risks to ecological resources at a variety of scales over broad regions. Indicators, models and assessment tools are needed to evaluate water bodies at risk to non-point source pollution and to be able t...
Wind Power: A Turning Point. Worldwatch Paper 45.
ERIC Educational Resources Information Center
Flavin, Christopher
Recent studies have shown wind power to be an eminently practical and potentially substantial source of electricity and direct mechanical power. Wind machines range from simple water-pumping devices made of wood and cloth to large electricity producing turbines with fiberglass blades nearly 300 feet long. Wind is in effect a form of solar…
A technique for marking first-stage larvae of the gypsy moth for dispersal studies
Thomas M. Odell; Ian H. von Lindern
1976-01-01
Zinc cadmium sulfide fluorescent particles can be used to mark first stage larvae of the gypsy moth, Porthetria dispar (L.), without effecting changes in their development and behavior. Marked larvae dispersed readily; so the technique could be used to correlate dispersed larvae with any particular source point.
One Source Training: Iowa Community Colleges Leverage Resources through Statewide Collaboration
ERIC Educational Resources Information Center
Saylor, Collette
2006-01-01
Locally governed Iowa Community Colleges are very effective at meeting the needs of local constituencies. However, this focus on local needs can hinder collaborative efforts. The Iowa Associations of Community College Trustees and Presidents determined there was a need for a single point of contact for the development and purchase of training…
Water quality and shellfish sanitation. [Patuxent and Choptank River watersheds
NASA Technical Reports Server (NTRS)
Eisenberg, M.
1978-01-01
The use of remote sensing techniques for collecting bacteriological, physical, and chemical water quality data, locating point and nonpoint sources of pollution, and developing hydrological data was found to be valuable to the Maryland program if it could be produced effectively and rapidly with a minimum amount of ground corroboration.
NASA Astrophysics Data System (ADS)
Zetterlind, Virgil E., III; Magee, Eric P.
2002-06-01
This study extends branch point tolerant phase reconstructor research to examine the effect of finite time delays and measurement error on system performance. Branch point tolerant phase reconstruction is particularly applicable to atmospheric laser weapon and communication systems, which operate in extended turbulence. We examine the relative performance of a least squares reconstructor, a least squares plus hidden phase reconstructor, and a Goldstein branch point reconstructor for various correction time delays and measurement noise scenarios. Performance is evaluated using a wave-optics simulation that models a 100 km atmospheric propagation of a point source beacon to a transmit/receive aperture. Phase-only corrections are then calculated using the various reconstructor algorithms and applied to an outgoing uniform field. Point Strehl is used as the performance metric. Results indicate that while time delays and measurement noise reduce the performance of branch point tolerant reconstructors, these reconstructors can still outperform least squares implementations in many cases. We also show that branch point detection becomes the limiting factor in measurement-noise-corrupted scenarios.
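As context for the least-squares baseline mentioned above, a phase map can be reconstructed from measured x/y phase differences by solving a sparse linear system in the least-squares sense. The sketch below does this for a small grid using a Hudgin-style geometry of our own choosing; it deliberately ignores the branch-point/hidden-phase component that the specialized reconstructors recover.

```python
import numpy as np

def least_squares_phase(dx, dy):
    """Reconstruct a phase map (up to a constant) from forward differences
    dx[i, j] = phi[i, j+1] - phi[i, j] and dy[i, j] = phi[i+1, j] - phi[i, j].

    dx has shape (n, n-1), dy has shape (n-1, n). Returns an (n, n) phase map."""
    n = dy.shape[1]
    rows, cols, vals, b = [], [], [], []
    eq = 0
    for i in range(n):
        for j in range(n - 1):          # x-difference equations
            rows += [eq, eq]; cols += [i * n + j + 1, i * n + j]
            vals += [1.0, -1.0]; b.append(dx[i, j]); eq += 1
    for i in range(n - 1):
        for j in range(n):              # y-difference equations
            rows += [eq, eq]; cols += [(i + 1) * n + j, i * n + j]
            vals += [1.0, -1.0]; b.append(dy[i, j]); eq += 1
    A = np.zeros((eq, n * n))
    A[rows, cols] = vals
    phi, *_ = np.linalg.lstsq(A, np.array(b), rcond=None)
    return phi.reshape(n, n) - phi[0]   # pin the unconstrained piston term

# Round-trip check on a smooth (branch-point-free) phase screen.
n = 8
yy, xx = np.mgrid[0:n, 0:n]
truth = 0.3 * xx + 0.1 * yy**2 / n
dx, dy = np.diff(truth, axis=1), np.diff(truth, axis=0)
print(np.allclose(least_squares_phase(dx, dy) + truth[0, 0], truth, atol=1e-6))
```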
Transient resonances in the inspirals of point particles into black holes.
Flanagan, Eanna E; Hinderer, Tanja
2012-08-17
We show that transient resonances occur in the two-body problem in general relativity for spinning black holes in close proximity to one another when one black hole is much more massive than the other. These resonances occur when the ratio of polar and radial orbital frequencies, which is slowly evolving under the influence of gravitational radiation reaction, passes through a low-order rational number. At such points, the adiabatic approximation to the orbital evolution breaks down, and there is a brief but order-unity correction to the inspiral rate. The resonances cause a perturbation to the orbital phase of order a few tens of cycles for mass ratios ∼10^-6, make orbits more sensitive to changes in initial data (though not quite chaotic), and are genuine nonperturbative effects that are not seen at any order in a standard post-Newtonian expansion. Our results apply to an important potential source of gravitational waves, the gradual inspiral of white dwarfs, neutron stars, or black holes into much more massive black holes. The effects of these resonances will increase the computational challenge of accurately modeling these sources.
NASA Astrophysics Data System (ADS)
Van Grouw, B.
2016-12-01
The Jordan River is a 51-mile-long freshwater stream in Utah that provides drinking water to more than 50% of Utah's population. Various point and nonpoint sources introduce an excess of nutrients into the river. This excess induces eutrophication that results in an uninhabitable environment for aquatic life and is expected to be exacerbated by climate change. Adaptive measures must be evaluated based on predictions of climate variation impacts on eutrophication and ecosystem processes in the Jordan River. A Water Quality Analysis Simulation Program (WASP) model was created to analyze the data acquired from a Total Maximum Daily Load (TMDL) study conducted on the Jordan River. Eutrophication is modeled based on levels of phosphates and nitrates from point and nonpoint sources, temperature, and solar radiation. The model simulates the growth of phytoplankton and periphyton in the river. It will be applied to assess how water quality in the Jordan River is affected by variations in the timing and intensity of spring snowmelt and runoff during drought in the valley, and the resulting effects on eutrophication in the river.
NASA Astrophysics Data System (ADS)
Jonell, T. N.; Li, Y.; Blusztajn, J.; Giosan, L.; Clift, P. D.
2017-12-01
Rare earth element (REE) radiogenic isotope systems, such as neodymium (Nd), have traditionally been used as powerful tracers of source provenance, chemical weathering intensity, and sedimentary processes over geologic timescales. More recently, the effects of physical fractionation (hydraulic sorting) of sediments during transport have called into question the utility of Nd isotopes as a provenance tool. Is source terrane Nd provenance resolvable if sediment transport introduces strong noise? Can grain-size sorting effects be quantified? This study works to address such questions by utilizing grain size analysis, trace element geochemistry, and Nd isotope geochemistry of bulk and grain-size fractions (<63 μm, 63-125 μm, 125-250 μm) from the Indus delta of Pakistan. Here we evaluate how grain size effects drive Nd isotope variability and further resolve the total uncertainties associated with Nd isotope compositions of bulk sediments. Results from the Indus delta indicate bulk sediment ɛNd compositions are most similar to the <63 μm fraction as a result of strong mineralogical control on bulk compositions by silt- to clay-sized monazite and/or allanite. Replicate analyses show that the best reproducibility (±0.15 ɛNd points) is observed in the 125-250 μm fraction. The bulk and finest fractions display the worst reproducibility (±0.3 ɛNd points). Standard deviations (2σ) indicate that bulk sediment uncertainties are no more than ±1.0 ɛNd points. This argues that excursions of ≥1.0 ɛNd points in any bulk Indus delta sediments must in part reflect an external shift in provenance, irrespective of sample composition, grain size, and grain size distribution. Sample standard deviations (2s) estimate that any terrigenous bulk sediment composition should vary by no more than ±1.1 ɛNd points if provenance remains constant. Findings from this study indicate that although there are grain-size-dependent Nd isotope effects, they are minimal in the Indus delta, such that resolvable provenance-driven trends can be identified in bulk sediment ɛNd compositions over the last 20 k.y., and the overall provenance trends remain consistent with previous findings.