Ouwehand, Kim; van Gog, Tamara; Paas, Fred
2016-10-01
Research showed that source memory functioning declines with ageing. Evidence suggests that encoding visual stimuli with manual pointing in addition to visual observation can have a positive effect on spatial memory compared with visual observation only. The present study investigated whether pointing at picture locations during encoding would lead to better spatial source memory than naming (Experiment 1) and visual observation only (Experiment 2) in young and older adults. Experiment 3 investigated whether response modality during the test phase would influence spatial source memory performance. Experiments 1 and 2 supported the hypothesis that pointing during encoding led to better source memory for picture locations than naming or observation only. Young adults outperformed older adults on the source memory but not the item memory task in both Experiments 1 and 2. In Experiments 1 and 2, participants manually responded in the test phase. Experiment 3 showed that if participants had to verbally respond in the test phase, the positive effect of pointing compared with naming during encoding disappeared. The results suggest that pointing at picture locations during encoding can enhance spatial source memory in both young and older adults, but only if the response modality is congruent in the test phase.
NASA Astrophysics Data System (ADS)
Zhang, Tianhe C.; Grill, Warren M.
2010-12-01
Deep brain stimulation (DBS) has emerged as an effective treatment for movement disorders; however, the fundamental mechanisms by which DBS works are not well understood. Computational models of DBS can provide insights into these fundamental mechanisms and typically require two steps: calculation of the electrical potentials generated by DBS and, subsequently, determination of the effects of the extracellular potentials on neurons. The objective of this study was to assess the validity of using a point source electrode to approximate the DBS electrode when calculating the thresholds and spatial distribution of activation of a surrounding population of model neurons in response to monopolar DBS. Extracellular potentials in a homogenous isotropic volume conductor were calculated using either a point current source or a geometrically accurate finite element model of the Medtronic DBS 3389 lead. These extracellular potentials were coupled to populations of model axons, and thresholds and spatial distributions were determined for different electrode geometries and axon orientations. Median threshold differences between DBS and point source electrodes for individual axons varied between -20.5% and 9.5% across all orientations, monopolar polarities and electrode geometries utilizing the DBS 3389 electrode. Differences in the percentage of axons activated at a given amplitude by the point source electrode and the DBS electrode were between -9.0% and 12.6% across all monopolar configurations tested. The differences in activation between the DBS and point source electrodes occurred primarily in regions close to conductor-insulator interfaces and around the insulating tip of the DBS electrode. The robustness of the point source approximation in modeling several special cases—tissue anisotropy, a long active electrode and bipolar stimulation—was also examined. Under the conditions considered, the point source was shown to be a valid approximation for predicting excitation of populations of neurons in response to DBS.
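The point-source approximation evaluated in this study has a simple closed form in a homogeneous isotropic volume conductor, V(r) = I / (4πσr). The sketch below is not the authors' code; it is a minimal illustration of how such extracellular potentials could be tabulated at model-axon compartment locations before coupling them to a neuron model (the conductivity value and geometry are assumed for illustration).

```python
import numpy as np

def point_source_potential(points, electrode_xyz, current_amp, sigma=0.2):
    """Extracellular potential (V) of a monopolar point current source in an
    infinite homogeneous isotropic medium: V = I / (4*pi*sigma*r).

    points        : (N, 3) array of compartment coordinates [m]
    electrode_xyz : (3,) electrode position [m]
    current_amp   : stimulus current [A]
    sigma         : tissue conductivity [S/m] (assumed value)
    """
    r = np.linalg.norm(points - electrode_xyz, axis=1)
    r = np.maximum(r, 1e-6)           # avoid the singularity at r = 0
    return current_amp / (4.0 * np.pi * sigma * r)

# Example: potentials along a straight axon 1 mm lateral to the electrode
z = np.linspace(-5e-3, 5e-3, 101)                 # compartment positions [m]
axon = np.column_stack([np.full_like(z, 1e-3),    # x = 1 mm offset
                        np.zeros_like(z), z])
v_ext = point_source_potential(axon, np.zeros(3), current_amp=-1e-3)
print(v_ext.min(), v_ext.max())
```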
- Many of the nation's rivers, lakes, and estuaries are impaired with fecal indicator bacteria.
- Fecal contamination from point and non-point sources is responsible for the presence of fecal pathogens in source and recreational waters.
- Effective compliance with TMDL regulatio...
Modeling of Pixelated Detector in SPECT Pinhole Reconstruction.
Feng, Bing; Zeng, Gengsheng L
2014-04-10
A challenge for the pixelated detector is that the detector response of a gamma-ray photon varies with the incident angle and the incident location within a crystal. The normalization map obtained by measuring the flood of a point-source at a large distance can lead to artifacts in reconstructed images. In this work, we investigated a method of generating normalization maps by ray-tracing through the pixelated detector based on the imaging geometry and the photo-peak energy for the specific isotope. The normalization is defined for each pinhole as the normalized detector response for a point-source placed at the focal point of the pinhole. Ray-tracing is used to generate the ideal flood image for a point-source. Each crystal pitch area on the back of the detector is divided into 60 × 60 sub-pixels. Lines are obtained by connecting a point-source and the centers of sub-pixels inside each crystal pitch area. For each line, ray-tracing starts from the entrance point at the detector face and ends at the center of a sub-pixel on the back of the detector. Only the attenuation by NaI(Tl) crystals along each ray is assumed to contribute directly to the flood image. The attenuation by the silica (SiO2) reflector is also included in the ray-tracing. To calculate the normalization for a pinhole, we need to calculate the ideal flood for a point-source at 360 mm distance (where the point-source was placed for the regular flood measurement) and the ideal flood image for the point-source at the pinhole focal point, together with the flood measurement at 360 mm distance. The normalizations are incorporated in the iterative OSEM reconstruction as a component of the projection matrix. Applications to single-pinhole and multi-pinhole imaging showed that this method greatly reduced the reconstruction artifacts.
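As a rough illustration of the ray-tracing idea described above (not the authors' implementation), the sketch below accumulates Beer-Lambert attenuation of NaI(Tl) along sampled rays from a point source to sub-pixel centers on the back of the detector; the attenuation coefficient, the inverse-square weighting, and the geometry are placeholder assumptions.

```python
import numpy as np

MU_NAI = 0.34   # assumed linear attenuation coefficient of NaI(Tl) [1/cm] at the photopeak

def ideal_flood(source, entry_pts, exit_pts):
    """Ideal flood image for a point source: for each ray (entrance point on the
    detector face -> sub-pixel center on the detector back), only the NaI path
    length attenuates the beam, and the detected fraction 1 - exp(-mu*L) is scored.

    source    : (3,) source position [cm]
    entry_pts : (N, 3) ray entrance points on the detector face [cm]
    exit_pts  : (N, 3) sub-pixel centers on the detector back [cm]
    """
    path_len = np.linalg.norm(exit_pts - entry_pts, axis=1)   # crystal path [cm]
    detected = 1.0 - np.exp(-MU_NAI * path_len)
    d = np.linalg.norm(entry_pts - source, axis=1)            # source-to-entrance distance
    return detected / d**2                                    # assumed 1/d^2 weighting

def normalization(meas_flood_360, ideal_360, ideal_focal):
    """Per-pinhole normalization from the measured 360 mm flood and the two
    ray-traced ideal floods (360 mm distance and pinhole focal point)."""
    return meas_flood_360 * ideal_focal / np.maximum(ideal_360, 1e-12)
```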
Modal Analysis Using the Singular Value Decomposition and Rational Fraction Polynomials
2017-04-06
The programs are designed for experimental datasets with multiple drive and response points and have proven effective even for systems with numerous closely-spaced...
Choudhuri, Samir; Bharadwaj, Somnath; Roy, Nirupam; Ghosh, Abhik; Ali, Sk Saiyad
2016-06-11
It is important to correctly subtract point sources from radio-interferometric data in order to measure the power spectrum of diffuse radiation like the Galactic synchrotron or the Epoch of Reionization 21-cm signal. It is computationally very expensive and challenging to image a very large area and accurately subtract all the point sources from the image. The problem is particularly severe at the sidelobes and the outer parts of the main lobe where the antenna response is highly frequency dependent and the calibration also differs from that of the phase centre. Here, we show that it is possible to overcome this problem by tapering the sky response. Using simulated 150 MHz observations, we demonstrate that it is possible to suppress the contribution due to point sources from the outer parts by using the Tapered Gridded Estimator to measure the angular power spectrum C_ℓ of the sky signal. We also show from the simulation that this method can self-consistently compute the noise bias and accurately subtract it to provide an unbiased estimation of C_ℓ.
A program to calculate pulse transmission responses through transversely isotropic media
NASA Astrophysics Data System (ADS)
Li, Wei; Schmitt, Douglas R.; Zou, Changchun; Chen, Xiwei
2018-05-01
We provide a program (AOTI2D) to model responses of ultrasonic pulse transmission measurements through arbitrarily oriented transversely isotropic rocks. The program is built with the distributed point source method that treats the transducers as a series of point sources. The response of each point source is calculated according to the ray-tracing theory of elastic plane waves. The program could offer basic wave parameters including phase and group velocities, polarization, anisotropic reflection coefficients and directivity patterns, and model the wave fields, static wave beam, and the observed signals for pulse transmission measurements considering the material's elastic stiffnesses and orientations, sample dimensions, and the size and positions of the transmitters and the receivers. The program could be applied to exhibit the ultrasonic beam behaviors in anisotropic media, such as the skew and diffraction of ultrasonic beams, and analyze its effect on pulse transmission measurements. The program would be a useful tool to help design the experimental configuration and interpret the results of ultrasonic pulse transmission measurements through either isotropic or transversely isotropic rock samples.
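The distributed point source idea (treating the transmitting transducer as many point sources whose contributions superpose at the receiver) can be sketched as below; this is an illustrative monochromatic free-space sum, not AOTI2D itself, and the frequency, element spacing, and sound speed are assumed.

```python
import numpy as np

def dpsm_field(receivers, src_points, freq=1.0e6, c=3000.0):
    """Superpose spherical waves from a set of point sources (distributed point
    source sketch): p(r) = sum_k exp(i*k*R_k) / R_k.

    receivers  : (M, 3) field/receiver points [m]
    src_points : (N, 3) point sources discretizing the transmitter face [m]
    freq       : excitation frequency [Hz] (assumed)
    c          : phase velocity in the sample [m/s] (assumed isotropic here)
    """
    k = 2.0 * np.pi * freq / c
    R = np.linalg.norm(receivers[:, None, :] - src_points[None, :, :], axis=2)
    return np.sum(np.exp(1j * k * R) / R, axis=1)

# discretize a 10 mm circular transmitter into point sources on a grid
xs, ys = np.meshgrid(np.linspace(-5e-3, 5e-3, 21), np.linspace(-5e-3, 5e-3, 21))
mask = xs**2 + ys**2 <= (5e-3)**2
src = np.column_stack([xs[mask], ys[mask], np.zeros(mask.sum())])
rec = np.array([[0.0, 0.0, 0.05]])          # a receiver 50 mm away on-axis
print(abs(dpsm_field(rec, src)))
```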
THE POPULATION OF COMPACT RADIO SOURCES IN THE ORION NEBULA CLUSTER
DOE Office of Scientific and Technical Information (OSTI.GOV)
Forbrich, J.; Meingast, S.; Rivilla, V. M.
We present a deep centimeter-wavelength catalog of the Orion Nebula Cluster (ONC), based on a 30 hr single-pointing observation with the Karl G. Jansky Very Large Array in its high-resolution A-configuration using two 1 GHz bands centered at 4.7 and 7.3 GHz. A total of 556 compact sources were detected in a map with a nominal rms noise of 3 μJy beam^-1, limited by complex source structure and the primary beam response. Compared to previous catalogs, our detections increase the sample of known compact radio sources in the ONC by more than a factor of seven. The new data show complex emission on a wide range of spatial scales. Following a preliminary correction for the wideband primary-beam response, we determine radio spectral indices for 170 sources whose index uncertainties are less than ±0.5. We compare the radio to the X-ray and near-infrared point-source populations, noting similarities and differences.
NASA Technical Reports Server (NTRS)
Rimskiy-Korsakov, A. V.; Belousov, Y. I.
1973-01-01
A program was compiled for calculating acoustical pressure levels, which might be created by vibrations of complex structures (an assembly of shells and rods), under the influence of a given force, for cases when these fields cannot be measured directly. The acoustical field is determined according to transition frequency and pulse characteristics of the structure in the projection mode. Projection characteristics are equal to the reception characteristics, for vibrating systems in which the reciprocity principle holds true. Characteristics in the receiving mode are calculated on the basis of experimental data on a point pulse space velocity source (input signal) and vibration response of the structure (output signal). The space velocity of a pulse source, set at a point in space r, where it is necessary to calculate the sound field of the structure p(r,t), is determined by measurements of acoustic pressure, created by a point source at a distance R. The vibration response is measured at the point where the forces F and f exciting the system should act.
CADDIS Volume 2. Sources, Stressors and Responses: Metals - Point Sources from Industry
Introduction to the metals module, when to list metals as a candidate cause, ways to measure metals, simple and detailed conceptual diagrams for metals, metals module references and literature reviews.
NASA Astrophysics Data System (ADS)
Borisov, A. A.; Deryabina, N. A.; Markovskij, D. V.
2017-12-01
Instant power is a key parameter of the ITER. Its monitoring with an accuracy of a few percent is an urgent and challenging aspect of neutron diagnostics. In a series of works published in Problems of Atomic Science and Technology, Series: Thermonuclear Fusion under a common title, the step-by-step neutronics analysis was given to substantiate a calibration technique for the DT and DD modes of the ITER. A Gauss quadrature scheme, optimal for processing "expensive" experiments, is used for numerical integration of 235U and 238U detector responses to the point sources of 14-MeV neutrons. This approach allows controlling the integration accuracy in relation to the number of coordinate mesh points and thus minimizing the number of irradiations at the given uncertainty of the full monitor response. In the previous works, responses of the divertor and blanket monitors to the isotropic point sources of DT and DD neutrons in the plasma profile and to the models of real sources were calculated within the ITER model using the MCNP code. The neutronics analyses have allowed formulating the basic principles of calibration that are optimal for having the maximum accuracy at the minimum duration of in situ experiments at the reactor. In this work, scenarios of the preliminary and basic experimental ITER runs are suggested on the basis of those principles. It is proposed to calibrate the monitors only with DT neutrons and use correction factors to the DT mode calibration for the DD mode. It is reasonable to perform full calibration only with 235U chambers and calibrate 238U chambers by responses of the 235U chambers during reactor operation (cross-calibration). The divertor monitor can be calibrated using both direct measurement of responses at the Gauss positions of a point source and simplified techniques based on the concepts of equivalent ring sources and inverse response distributions, which will considerably reduce the amount of measurements. It is shown that the monitor based on the average responses of the horizontal and vertical neutron chambers remains spatially stable as the source moves and can be used in addition to the staff monitor at neutron fluxes in the detectors four orders of magnitude lower than on the first wall, where staff detectors are located. Owing to low background, detectors of neutron chambers do not need calibration in the reactor because it is actually determination of the absolute detector efficiency for 14-MeV neutrons, which is a routine out-of-reactor procedure.
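The role of Gauss quadrature here is to replace a fine scan of the point source along the plasma profile with a few optimally placed source positions. A minimal sketch of that idea follows, with a made-up response function standing in for the calculated 235U detector response.

```python
import numpy as np

def full_monitor_response(response_fn, a, b, n_points=4):
    """Integrate a detector response to a point source moving along a plasma
    coordinate s in [a, b] with an n-point Gauss-Legendre rule, so that only
    n_points source positions (irradiations) are needed."""
    nodes, weights = np.polynomial.legendre.leggauss(n_points)
    s = 0.5 * (b - a) * nodes + 0.5 * (b + a)     # map from [-1, 1] to [a, b]
    return 0.5 * (b - a) * np.sum(weights * response_fn(s))

# placeholder response of a fission-chamber monitor vs. source position (assumed shape)
resp = lambda s: 1.0 + 0.3 * np.cos(np.pi * s)

for n in (2, 4, 8):
    # convergence with the number of coordinate mesh points
    print(n, full_monitor_response(resp, -1.0, 1.0, n))
```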
Kreutzwiser; Gabriel
2000-01-01
This paper assesses the extent to which key geomorphic components, processes, and stresses have been reflected in the management of a coastal sandy barrier environment. The management policies and practices of selected agencies responsible for Long Point, a World Biosphere Reserve along Lake Erie, Canada, were evaluated for consistency with these principles of environmental management for sandy barriers: maintaining natural stresses essential to sandy barrier development and maintenance; protecting sediment sources, transfers, and storage; recognizing spatial variability and cyclicity of natural stresses, such as barrier overwash events; and accepting and planning for long-term evolutionary changes in the sandy barrier environment. Generally, management policies and practices have not respected the dynamic and sensitive environment of Long Point because of limited mandates of the agencies involved, inconsistent policies, and failure to apply or enforce existing policies. This is particularly evident with local municipalities and less so for the Canadian Wildlife Service, the federal agency responsible for managing National Wildlife Areas at the point. In the developed areas of Long Point, landward sediment transfers and sediment storage in dunes have been impacted by cottage development, shore protection, and maintenance of roads and parking lots. Additionally, agencies responsible for managing Long Point have no jurisdiction over sediment sources as far as 95 km away. Evolutionary change of sandy barriers poses the greatest challenge to environmental managers.
Passive lighting responsive three-dimensional integral imaging
NASA Astrophysics Data System (ADS)
Lou, Yimin; Hu, Juanmei
2017-11-01
A three dimensional (3D) integral imaging (II) technique with a real-time passive lighting responsive ability and vivid 3D performance has been proposed and demonstrated. Some novel lighting responsive phenomena, including light-activated 3D imaging, and light-controlled 3D image scaling and translation, have been realized optically without updating images. By switching the on/off state of a point light source illuminated on the proposed II system, the 3D images can show/hide independent of the diffused illumination background. By changing the position or illumination direction of the point light source, the position and magnification of the 3D image can be modulated in real time. The lighting responsive mechanism of the 3D II system is deduced analytically and verified experimentally. A flexible thin film lighting responsive II system with a 0.4 mm thickness was fabricated. This technique gives some additional degrees of freedom in order to design the II system and enable the virtual 3D image to interact with the real illumination environment in real time.
ERIC Educational Resources Information Center
Heric, Matthew; Carter, Jenn
2011-01-01
Cognitive readiness (CR) and performance for operational time-critical environments are continuing points of focus for military and academic communities. In response to this need, we designed an open source interactive CR assessment application as a highly adaptive and efficient open source testing administration and analysis tool. It is capable…
Wise, Daniel R.; Rinella, Frank A.; Rinella, Joseph F.; Fuhrer, Greg J.; Embrey, Sandra S.; Clark, Gregory M.; Schwarz, Gregory E.; Sobieszczyk, Steven
2007-01-01
This study focused on three areas that might be of interest to water-quality managers in the Pacific Northwest: (1) annual loads of total nitrogen (TN), total phosphorus (TP) and suspended sediment (SS) transported through the Columbia River and Puget Sound Basins, (2) annual yields of TN, TP, and SS relative to differences in landscape and climatic conditions between subbasin catchments (drainage basins), and (3) trends in TN, TP, and SS concentrations and loads in comparison to changes in landscape and climatic conditions in the catchments. During water year 2000, an average streamflow year in the Pacific Northwest, the Columbia River discharged about 570,000 pounds per day of TN, about 55,000 pounds per day of TP, and about 14,000 tons per day of SS to the Pacific Ocean. The Snake, Yakima, Deschutes, and Willamette Rivers contributed most of the load discharged to the Columbia River. Point-source nutrient loads to the catchments (almost exclusively from municipal wastewater treatment plants) generally were a small percentage of the total in-stream nutrient loads; however, in some reaches of the Spokane, Boise, Walla Walla, and Willamette River Basins, point sources were responsible for much of the annual in-stream nutrient load. Point-source nutrient loads generally were a small percentage of the total catchment nutrient loads compared to nonpoint sources, except for a few catchments where point-source loads comprised as much as 30 percent of the TN load and as much as 80 percent of the TP load. The annual TN and TP loads from point sources discharging directly to the Puget Sound were about equal to the annual loads from eight major tributaries. Yields of TN, TP, and SS generally were greater in catchments west of the Cascade Range. A multiple linear regression analysis showed that TN yields were significantly (p < 0.05) and positively related to precipitation, atmospheric nitrogen load, fertilizer and manure load, and point-source load, and were negatively related to average slope. TP yields were significantly related positively to precipitation, and point-source load and SS yields were significantly related positively to precipitation. Forty-eight percent of the available monitoring sites for TN had significant trends in concentration (2 increasing, 19 decreasing), 32 percent of the available sites for TP had significant trends in concentration (7 increasing, 9 decreasing), and 40 percent of the available sites for SS had significant trends in concentration (4 increasing, 15 decreasing). The trends in load followed a similar pattern, but with fewer sites showing significant trends. The results from this study indicate that inputs from nonpoint sources of nutrients probably have decreased over time in many of the catchments. Despite the generally small contribution of point-source nutrient loads, they still may have been partially responsible for the significant decreasing trends for nutrients at sites where the total point-source nutrient loads to the catchments equaled a substantial proportion of the in-stream load.
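A regression of the kind reported here (catchment TN yield against precipitation, atmospheric nitrogen load, fertilizer/manure load, point-source load, and slope) can be sketched with ordinary least squares; the array contents below are placeholders, not the study's data.

```python
import numpy as np

# columns: precipitation, atmospheric N load, fertilizer+manure load,
# point-source load, average slope (placeholder values, one row per catchment)
X = np.array([
    [120.0, 4.1, 30.0, 1.2,  5.0],
    [200.0, 5.0, 22.0, 0.4, 12.0],
    [ 80.0, 3.2, 40.0, 2.5,  3.0],
    [150.0, 4.6, 35.0, 1.8,  7.5],
    [ 95.0, 3.8, 28.0, 0.9,  9.0],
    [170.0, 4.9, 26.0, 0.6, 11.0],
    [110.0, 4.0, 33.0, 2.0,  6.0],
])
y = np.array([5.2, 4.1, 6.8, 6.0, 4.4, 4.6, 5.9])   # TN yield per catchment (placeholder)

# add an intercept column and solve the least-squares problem
A = np.column_stack([np.ones(len(y)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
print(dict(zip(["intercept", "precip", "atm_N", "fert_manure",
                "point_source", "slope"], coef.round(3))))
```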
Computational techniques in gamma-ray skyshine analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
George, D.L.
1988-12-01
Two computer codes were developed to analyze gamma-ray skyshine, the scattering of gamma photons by air molecules. A review of previous gamma-ray skyshine studies discusses several Monte Carlo codes, programs using a single-scatter model, and the MicroSkyshine program for microcomputers. A benchmark gamma-ray skyshine experiment performed at Kansas State University is also described. A single-scatter numerical model was presented which traces photons from the source to their first scatter, then applies a buildup factor along a direct path from the scattering point to a detector. The FORTRAN code SKY, developed with this model before the present study, was modified to use Gauss quadrature, recent photon attenuation data, and a more accurate buildup approximation. The resulting code, SILOGP, computes response from a point photon source on the axis of a silo, with and without concrete shielding over the opening. Another program, WALLGP, was developed using the same model to compute response from a point gamma source behind a perfectly absorbing wall, with and without shielding overhead. 29 refs., 48 figs., 13 tabs.
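The single-scatter model described above couples a direct-path point-kernel with a buildup factor. A generic point-kernel sketch of that final leg (scattering point to detector) is shown below; the attenuation coefficient and the Berger-type buildup parameters are illustrative assumptions, not the constants used in SILOGP or WALLGP.

```python
import numpy as np

def buildup_berger(mu_r, a=1.0, b=0.05):
    """Berger-form buildup factor B = 1 + a*mu_r*exp(b*mu_r) (assumed coefficients)."""
    return 1.0 + a * mu_r * np.exp(b * mu_r)

def point_kernel_flux(source_strength, mu, r):
    """Photon flux at distance r from an isotropic point source in an attenuating
    medium, with buildup applied along the direct path:
        phi = S * B(mu*r) * exp(-mu*r) / (4*pi*r^2)."""
    mu_r = mu * r
    return source_strength * buildup_berger(mu_r) * np.exp(-mu_r) / (4.0 * np.pi * r**2)

# example: 1e9 photons/s source, mu = 0.01 /m in air (assumed), detector 100 m away
print(point_kernel_flux(1.0e9, 0.01, 100.0))
```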
Diffractive micro-optical element with nonpoint response
NASA Astrophysics Data System (ADS)
Soifer, Victor A.; Golub, Michael A.
1993-01-01
Common-use diffractive lenses have microrelief zones in the form of simple rings that provide only an optical power but do not contain any image information. They have a point-image response under point-source illumination. A more complicated non-point response must be used to focus a light beam into different light marks and letter-type images, as well as for optical pattern recognition. The current presentation describes computer generation of diffractive micro-optical elements with complicated curvilinear zones of a regular piecewise-smooth structure and grey-level or staircase phase microrelief. The manufacture of non-point response elements uses the steps of phase-transfer calculation and orthogonal-scan mask generation or lithographic glass etching. The ray-tracing method is shown to be applicable to this task. Several working samples of focusing optical elements generated by computer and photolithography are presented. Using the experimental results, we discuss applications such as laser branding.
Response functions for neutron skyshine analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gui, A.A.; Shultis, J.K.; Faw, R.E.
1997-02-01
Neutron and associated secondary photon line-beam response functions (LBRFs) for point monodirectional neutron sources are generated using the MCNP Monte Carlo code for use in neutron skyshine analysis employing the integral line-beam method. The LBRFs are evaluated at 14 neutron source energies ranging from 0.01 to 14 MeV and at 18 emission angles from 1 to 170 deg, as measured from the source-to-detector axis. The neutron and associated secondary photon conical-beam response functions (CBRFs) for azimuthally symmetric neutron sources are also evaluated at 13 neutron source energies in the same energy range and at 13 polar angles of source collimation from 1 to 89 deg. The response functions are approximated by an empirical three-parameter function of the source-to-detector distance. These response function approximations are available for a source-to-detector distance up to 2,500 m and, for the first time, give dose equivalent responses that are required for modern radiological assessments. For the CBRFs, ground correction factors for neutrons and secondary photons are calculated and also approximated by empirical formulas for use in air-over-ground neutron skyshine problems with azimuthal symmetry. In addition, simple procedures are proposed for humidity and atmospheric density corrections.
Improved response functions for gamma-ray skyshine analyses
NASA Astrophysics Data System (ADS)
Shultis, J. K.; Faw, R. E.; Deng, X.
1992-09-01
A computationally simple method, based on line-beam response functions, is refined for estimating gamma skyshine dose rates. Critical to this method is the availability of an accurate approximation for the line-beam response function (LBRF). In this study, the LBRF is evaluated accurately with the point-kernel technique using recent photon interaction data. Various approximations to the LBRF are considered, and a three parameter formula is selected as the most practical approximation. By fitting the approximating formula to point-kernel results, a set of parameters is obtained that allows the LBRF to be quickly and accurately evaluated for energies between 0.01 and 15 MeV, for source-to-detector distances from 1 to 3000 m, and for beam angles from 0 to 180 degrees. This re-evaluation of the approximate LBRF gives better accuracy, especially at low energies, over a greater source-to-detector range than do previous LBRF approximations. A conical beam response function is also introduced for application to skyshine sources that are azimuthally symmetric about a vertical axis. The new response functions are then applied to three simple skyshine geometries (an open silo geometry, an infinite wall, and a rectangular four-wall building) and the results are compared to previous calculations and benchmark data.
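The abstract does not reproduce the three-parameter formula itself, so the sketch below fits a generic three-parameter form R(x) = k * x^a * exp(-b*x) to point-kernel-style values as a stand-in; the functional form and the synthetic data are assumptions for illustration only.

```python
import numpy as np
from scipy.optimize import curve_fit

def lbrf_approx(x, k, a, b):
    """Generic three-parameter approximation of a line-beam response function
    versus source-to-detector distance x (assumed form, not the published one)."""
    return k * x**a * np.exp(-b * x)

# synthetic "point-kernel" response values to fit (placeholder data)
x = np.linspace(1.0, 3000.0, 60)                      # distance [m]
r_true = 2.0e-15 * x**-1.7 * np.exp(-x / 800.0)
rng = np.random.default_rng(0)
r_obs = r_true * (1.0 + 0.02 * rng.standard_normal(x.size))

popt, _ = curve_fit(lbrf_approx, x, r_obs, p0=[1e-15, -1.5, 1e-3])
print("k, a, b =", popt)
```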
Improved response functions for gamma-ray skyshine analyses
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shultis, J.K.; Faw, R.E.; Deng, X.
1992-09-01
A computationally simple method, based on line-beam response functions, is refined for estimating gamma skyshine dose rates. Critical to this method is the availability of an accurate approximation for the line-beam response function (LBRF). In this study the LBRF is evaluated accurately with the point-kernel technique using recent photon interaction data. Various approximations to the LBRF are considered, and a three parameter formula is selected as the most practical approximation. By fitting the approximating formula to point-kernel results, a set of parameters is obtained that allows the LBRF to be quickly and accurately evaluated for energies between 0.01 and 15 MeV, for source-to-detector distances from 1 to 3000 m, and for beam angles from 0 to 180 degrees. This reevaluation of the approximate LBRF gives better accuracy, especially at low energies, over a greater source-to-detector range than do previous LBRF approximations. A conical beam response function is also introduced for application to skyshine sources that are azimuthally symmetric about a vertical axis. The new response functions are then applied to three simple skyshine geometries (an open silo geometry, an infinite wall, and a rectangular four-wall building) and the results compared to previous calculations and benchmark data.
Horowitz, A.J.; Elrick, K.A.; Smith, J.J.
2005-01-01
In cooperation with the City of Atlanta, Georgia, the US Geological Survey has designed and implemented a water-quantity and quality monitoring network that measures a variety of biological and chemical constituents in water and suspended sediment. The network consists of 20 long-term monitoring sites and is intended to assess water-quality trends in response to planned infrastructural improvements. Initial results from the network indicate that nonpoint-source contributions may be more significant than point-source contributions for selected sediment associated trace elements and nutrients. There also are indications of short-term discontinuous point-source contributions of these same constituents during baseflow.
Modeling the contribution of point sources and non-point sources to Thachin River water pollution.
Schaffner, Monika; Bader, Hans-Peter; Scheidegger, Ruth
2009-08-15
Major rivers in developing and emerging countries increasingly suffer from severe degradation of water quality. The current study uses a mathematical Material Flow Analysis (MMFA) as a complementary approach to address the degradation of river water quality due to nutrient pollution in the Thachin River Basin in Central Thailand. This paper gives an overview of the origins and flow paths of the various point- and non-point pollution sources in the Thachin River Basin (in terms of nitrogen and phosphorus) and quantifies their relative importance within the system. The key parameters influencing the main nutrient flows are determined and possible mitigation measures discussed. The results show that aquaculture (as a point source) and rice farming (as a non-point source) are the key nutrient sources in the Thachin River Basin. Other point sources such as pig farms, households and industries, which were previously cited as the most relevant pollution sources in terms of organic pollution, play less significant roles in comparison. This order of importance shifts when considering the model results at the provincial level. Crosschecks with secondary data and field studies confirm the plausibility of our simulations. Specific nutrient loads for the pollution sources are derived; these can be used for a first broad quantification of nutrient pollution in comparable river basins. Based on an identification of the sensitive model parameters, possible mitigation scenarios are determined and their potential to reduce the nutrient load evaluated. A comparison of simulated nutrient loads with measured nutrient concentrations shows that nutrient retention in the river system may be significant. Sedimentation in the slow-flowing surface water network as well as nitrogen emission to the air from the warm, oxygen-deficient waters are certainly partly responsible, but wetlands along the river banks could also play an important role as nutrient sinks.
Zhou, Yongqiang; Jeppesen, Erik; Zhang, Yunlin; Shi, Kun; Liu, Xiaohan; Zhu, Guangwei
2016-02-01
Surface drinking water sources have been threatened globally and there have been few attempts to detect point-source contamination in these waters using chromophoric dissolved organic matter (CDOM) fluorescence. To determine the optimal wavelength derived from CDOM fluorescence as an indicator of point-source contamination in drinking waters, a combination of field campaigns in Lake Qiandao and a laboratory wastewater addition experiment was used. Parallel factor (PARAFAC) analysis identified six components, including three humic-like, two tryptophan-like, and one tyrosine-like component. All metrics showed strong correlation with wastewater addition (r(2) > 0.90, p < 0.0001). Both the field campaigns and the laboratory contamination experiment revealed that CDOM fluorescence at 275/342 nm was the most responsive wavelength to the point-source contamination in the lake. Our results suggest that pollutants in Lake Qiandao had the highest concentrations in the river mouths of upstream inflow tributaries and the single wavelength at 275/342 nm may be adapted for online or in situ fluorescence measurements as an early warning of contamination events. This study demonstrates the potential utility of CDOM fluorescence to monitor water quality in surface drinking water sources. Copyright © 2015 Elsevier Ltd. All rights reserved.
Accelerator test of the coded aperture mask technique for gamma-ray astronomy
NASA Technical Reports Server (NTRS)
Jenkins, T. L.; Frye, G. M., Jr.; Owens, A.; Carter, J. N.; Ramsden, D.
1982-01-01
A prototype gamma-ray telescope employing the coded aperture mask technique has been constructed and its response to a point source of 20 MeV gamma-rays has been measured. The point spread function is approximately a Gaussian with a standard deviation of 12 arc minutes. This resolution is consistent with the cell size of the mask used and the spatial resolution of the detector. In the context of the present experiment, the error radius of the source position (90 percent confidence level) is 6.1 arc minutes.
Oxidative potential and inflammatory impacts of source apportioned ambient air pollution in Beijing.
Liu, Qingyang; Baumgartner, Jill; Zhang, Yuanxun; Liu, Yanju; Sun, Yongjun; Zhang, Meigen
2014-11-04
Air pollution exposure is associated with a range of adverse health impacts. Knowledge of the chemical components and sources of air pollution most responsible for these health effects could lead to an improved understanding of the mechanisms of such effects and more targeted risk reduction strategies. We measured daily ambient fine particulate matter (<2.5 μm in aerodynamic diameter; PM2.5) for 2 months in peri-urban and central Beijing, and assessed the contribution of its chemical components to the oxidative potential of ambient air pollution using the dithiothreitol (DTT) assay. The composition data were applied to a multivariate source apportionment model to determine the PM contributions of six sources or factors: a zinc factor, an aluminum factor, a lead point factor, a secondary source (e.g., SO4(2-), NO3(2-)), an iron source, and a soil dust source. Finally, we assessed the relationship between reactive oxygen species (ROS) activity-related PM sources and inflammatory responses in human bronchial epithelial cells. In peri-urban Beijing, the soil dust source accounted for the largest fraction (47%) of measured ROS variability. In central Beijing, a secondary source explained the greatest fraction (29%) of measured ROS variability. The ROS activities of PM collected in central Beijing were exponentially associated with in vivo inflammatory responses in epithelial cells (R2=0.65-0.89). We also observed a high correlation between three ROS-related PM sources (a lead point factor, a zinc factor, and a secondary source) and expression of an inflammatory marker (r=0.45-0.80). Our results suggest large differences in the contribution of different PM sources to ROS variability at the central versus peri-urban study sites in Beijing and that secondary sources may play an important role in PM2.5-related oxidative potential and inflammatory health impacts.
NASA Astrophysics Data System (ADS)
Dickerson, R. R.; Ren, X.; Shepson, P. B.; Salmon, O. E.; Brown, S. S.; Thornton, J. A.; Whetstone, J. R.; Salawitch, R. J.; Sahu, S.; Hall, D.; Grimes, C.; Wong, T. M.
2015-12-01
Urban areas are responsible for a major component of the anthropogenic greenhouse gas (GHG) emissions. Quantification of urban GHG fluxes is important for establishing scientifically sound and cost-effective policies for mitigating GHGs. Discrepancies between observations and model simulations of GHGs suggest uncharacterized sources in urban environments. In this work, we analyze and quantify fluxes of CO2, CH4, CO (and other trace species) from the Baltimore-Washington area based on the mass balance approach using the two-aircraft observations conducted in February-March 2015. Estimated fluxes from this area were 110,000±20,000 moles s-1 for CO2, 700±330 moles s-1 for CH4, and 535±188 moles s-1 for CO. This implies that methane is responsible for ~20% of the climate forcing from these cities. Point sources of CO2 from four regional power plants and one point source of CH4 from a landfill were identified and the emissions from these point sources were quantified based on the aircraft observation and compared to the emission inventory data. Methane fluxes from the Washington area were larger than from the Baltimore area, indicating a larger leakage rate in the Washington area. The ethane-to-methane ratios, with a mean of 3.3%, in the limited canister samples collected during the flights indicate that natural gas leaks and the upwind oil and natural gas operations are responsible for a substantial fraction of the CH4 flux. These observations will be compared to models using Ensemble Kalman Filter Assimilation techniques.
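A mass-balance emission estimate of the kind used here integrates the enhancement above background times the wind component perpendicular to the downwind flight transect over the plume cross-section. The sketch below shows that arithmetic for a single gas with uniformly spaced transect samples; all numbers and the simple single-transect treatment are illustrative assumptions.

```python
import numpy as np

R = 8.314  # gas constant [J mol^-1 K^-1]

def mass_balance_flux(x_ppm, x_bg_ppm, u_perp, dy, dz, pressure=9.0e4, temp=278.0):
    """Mass-balance flux estimate (mol/s) from a downwind crosswind transect:
        flux = sum( (X - X_bg) * n_air * U_perp * dy * dz ),
    where X is the dry-air mole fraction in ppm and n_air = P/(R*T) is the molar
    density of air. Pressure and temperature values are assumed."""
    n_air = pressure / (R * temp)                         # mol m^-3
    enhancement = (np.asarray(x_ppm) - x_bg_ppm) * 1e-6   # mol/mol
    return np.sum(enhancement * n_air * u_perp * dy * dz)

# toy transect: CO2 enhancements (ppm) sampled every 500 m across the plume,
# one 200 m vertical layer, 8 m/s wind perpendicular to the flight track
co2 = [400.1, 401.5, 404.0, 406.2, 403.1, 400.8]
print(mass_balance_flux(co2, x_bg_ppm=400.0, u_perp=8.0, dy=500.0, dz=200.0))
```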
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tweardy, Matthew C.; McConchie, Seth; Hayward, Jason P.
An extension of the point kinetics model is developed in this paper to describe the neutron multiplicity response of a bare uranium object under interrogation by an associated particle imaging deuterium-tritium (D-T) measurement system. This extended model is used to estimate the total neutron multiplication of the uranium. Both MCNPX-PoliMi simulations and data from active interrogation measurements of highly enriched and depleted uranium geometries are used to evaluate the potential of this method and to identify the sources of systematic error. The detection efficiency correction for measured coincidence response is identified as a large source of systematic error. If the detection process is not considered, results suggest that the method can estimate total multiplication to within 13% of the simulated value. Values for multiplicity constants in the point kinetics equations are sensitive to enrichment due to (n, xn) interactions by D-T neutrons and can introduce another significant source of systematic bias. This can theoretically be corrected if isotopic composition is known a priori. Finally, the spatial dependence of multiplication is also suspected of introducing further systematic bias for high multiplication uranium objects.
Tweardy, Matthew C.; McConchie, Seth; Hayward, Jason P.
2017-06-13
An extension of the point kinetics model is developed in this paper to describe the neutron multiplicity response of a bare uranium object under interrogation by an associated particle imaging deuterium-tritium (D-T) measurement system. This extended model is used to estimate the total neutron multiplication of the uranium. Both MCNPX-PoliMi simulations and data from active interrogation measurements of highly enriched and depleted uranium geometries are used to evaluate the potential of this method and to identify the sources of systematic error. The detection efficiency correction for measured coincidence response is identified as a large source of systematic error. If the detection process is not considered, results suggest that the method can estimate total multiplication to within 13% of the simulated value. Values for multiplicity constants in the point kinetics equations are sensitive to enrichment due to (n, xn) interactions by D-T neutrons and can introduce another significant source of systematic bias. This can theoretically be corrected if isotopic composition is known a priori. Finally, the spatial dependence of multiplication is also suspected of introducing further systematic bias for high multiplication uranium objects.
Speciated atmospheric mercury and its potential source in Guiyang, China
NASA Astrophysics Data System (ADS)
Fu, Xuewu; Feng, Xinbin; Qiu, Guangle; Shang, Lihai; Zhang, Hui
2011-08-01
Speciated atmospheric mercury (Hg) including gaseous elemental mercury (GEM), particulate Hg (PHg), and reactive gaseous Hg (RGM) were continuously measured at an urban site in Guiyang city, southwest China, from August to December 2009. The averaged concentrations for GEM, PHg, and RGM were 9.72 ± 10.2 ng m^-3, 368 ± 676 pg m^-3, and 35.7 ± 43.9 pg m^-3, respectively, which were all highly elevated compared to observations at urban sites in Europe and North America. GEM and PHg were characterized by similar monthly and diurnal patterns, with elevated levels in cold months and at nighttime, respectively. In contrast, RGM did not exhibit clear monthly and diurnal variations. The variations of GEM, PHg, and RGM indicate the sampling site was significantly impacted by sources in the city municipal area. Source identification implied that both residential coal burning and large point sources were responsible for the elevated GEM and PHg concentrations, whereas point sources were the major contributors to elevated RGM concentrations. Point sources played a different role in regulating GEM, PHg, and RGM concentrations. Aside from residential emissions, PHg levels were mostly affected by small-scale coal combustion boilers situated to the east of the sampling site, which were poorly equipped with or lacked particulate control devices, whereas point sources situated to the east, southeast, and southwest of the sampling site played an important role in the distribution of atmospheric GEM and RGM.
Infrared point sensors for homeland defense applications
NASA Astrophysics Data System (ADS)
Thomas, Ross C.; Carter, Michael T.; Homrighausen, Craig L.
2004-03-01
We report recent progress toward the development of infrared point sensors for the detection of chemical warfare agents and explosive-related chemicals, which pose a significant threat to both health and environment. Technical objectives have focused on the development of polymer sorbents to enhance the infrared response of these hazardous organic compounds. For example, infrared point sensors with part-per-billion detection limits have been developed that rapidly partition chemical warfare agents and explosive-related chemicals into polymer thin films with desirable chemical and physical properties. These chemical sensors demonstrate novel routes to reversible sensing of hazardous organic compounds. The development of small, low-power, sensitive, and selective instruments employing these chemical sensors would enhance the capabilities of federal, state, and local emergency response to incidents involving chemical terrorism. Specific applications include chemical defense systems for military personnel and homeland defense, environmental monitors for remediation and demilitarization, and point source detectors for emergency and maintenance response teams.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nitao, J J
The goal of the Event Reconstruction Project is to find the location and strength of atmospheric release points, both stationary and moving. Source inversion relies on observational data as input. The methodology is sufficiently general to allow various forms of data. In this report, the authors focus primarily on concentration measurements obtained at point monitoring locations at various times. The algorithms being investigated in the Project are the MCMC (Markov Chain Monte Carlo) and SMC (Sequential Monte Carlo) methods, classical inversion methods, and hybrids of these. They refer the reader to the report by Johannesson et al. (2004) for explanations of these methods. These methods require computing the concentrations at all monitoring locations for a given "proposed" source characteristic (locations and strength history). It is anticipated that the largest portion of the CPU time will be spent performing this computation. MCMC and SMC will require this computation to be done at least tens of thousands of times. Therefore, an efficient means of computing forward model predictions is important to making the inversion practical. In this report they show how Green's functions and reciprocal Green's functions can significantly accelerate forward model computations. First, instead of computing a plume for each possible source strength history, they can compute plumes from unit impulse sources only. By using linear superposition, they can obtain the response for any strength history. This response is given by the forward Green's function. Second, they may use the law of reciprocity. Suppose that they require the concentration at a single monitoring point x_m due to a potential (unit impulse) source that is located at x_s. Instead of computing a plume with source location x_s, they compute a "reciprocal plume" whose (unit impulse) source is at the monitoring location x_m. The reciprocal plume is computed using a reversed-direction wind field. The wind field and transport coefficients must also be appropriately time-reversed. Reciprocity says that the concentration of the reciprocal plume at x_s is related to the desired concentration at x_m. Since there are many fewer monitoring points than potential source locations, the number of forward model computations is drastically reduced.
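The superposition step described above can be written as a discrete convolution of a candidate source strength history with the stored unit-impulse (Green's function) response at each monitor; reciprocity then lets one reuse a single reciprocal plume, computed from the monitor location with a time-reversed wind field, for every candidate source location. The sketch below only illustrates the superposition bookkeeping with placeholder arrays.

```python
import numpy as np

def monitor_concentration(green, q_history, dt):
    """Concentration time series at one monitor for a proposed source strength
    history q_history, given the unit-impulse response `green` (the forward
    Green's function sampled at the same time step dt):
        c(t) = sum_tau G(t - tau) * q(tau) * dt
    """
    return np.convolve(green, q_history)[: len(green)] * dt

# placeholder unit-impulse response at monitor x_m for a source at x_s
# (in practice this comes from one dispersion-model run, or from a single
# reciprocal run started at x_m with the wind field reversed in time)
t = np.arange(0, 3600.0, 60.0)
green = np.exp(-((t - 900.0) / 300.0) ** 2)          # assumed plume arrival shape

q_proposed = np.zeros_like(t)
q_proposed[5:15] = 2.0                                # kg/s release for 10 minutes

c = monitor_concentration(green, q_proposed, dt=60.0)
print(c.max())
```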
NASA Astrophysics Data System (ADS)
Steenhuisen, Frits; Wilson, Simon J.
2015-07-01
Mercury is a global pollutant that poses threats to ecosystem and human health. Due to its global transport, mercury contamination is found in regions of the Earth that are remote from major emissions areas, including the Polar regions. Global anthropogenic emission inventories identify important sectors and industries responsible for emissions at a national level; however, to be useful for air transport modelling, more precise information on the locations of emission is required. This paper describes the methodology applied, and the results of work that was conducted to assign anthropogenic mercury emissions to point sources as part of geospatial mapping of the 2010 global anthropogenic mercury emissions inventory prepared by AMAP/UNEP. Major point-source emission sectors addressed in this work account for about 850 tonnes of the emissions included in the 2010 inventory. This work allocated more than 90% of these emissions to some 4600 identified point source locations, including significantly more point source locations in Africa, Asia, Australia and South America than had been identified during previous work to geospatially-distribute the 2005 global inventory. The results demonstrate the utility and the limitations of using existing, mainly public domain resources to accomplish this work. Assumptions necessary to make use of selected online resources are discussed, as are artefacts that can arise when these assumptions are applied to assign (national-sector) emissions estimates to point sources in various countries and regions. Notwithstanding the limitations of the available information, the value of this procedure over alternative methods commonly used to geo-spatially distribute emissions, such as use of 'proxy' datasets to represent emissions patterns, is illustrated. Improvements in information that would facilitate greater use of these methods in future work to assign emissions to point-sources are discussed. These include improvements to both national (geo-referenced) emission inventories and also to other resources that can be employed when such national inventories are lacking.
NASA Astrophysics Data System (ADS)
Harmon, T. C.; Rat'ko, A.; Dietrich, H.; Park, Y.; Wijsboom, Y. H.; Bendikov, M.
2008-12-01
Inorganic nitrogen (nitrate (NO3-) and ammonium (NH4+)) from chemical fertilizer and livestock waste is a major source of pollution in groundwater, surface water and the air. While some sources of these chemicals, such as waste lagoons, are well-defined, their application as fertilizer has the potential to create distributed or non-point source pollution problems. Scalable nitrate sensors (small and inexpensive) would enable us to better assess non-point source pollution processes in agronomic soils, groundwater and rivers subject to non-point source inputs. This work describes the fabrication and testing of inexpensive PVC-membrane-based ion selective electrodes (ISEs) for monitoring nitrate levels in soil water environments. ISE-based sensors have the advantages of being easy to fabricate and use, but suffer several shortcomings, including limited sensitivity, poor precision, and calibration drift. However, modern materials have begun to yield more robust ISE types in laboratory settings. This work emphasizes the in situ behavior of commercial and fabricated sensors in soils subject to irrigation with dairy manure water. Results are presented in the context of deployment techniques (in situ versus soil lysimeters), temperature compensation, and uncertainty analysis. Observed temporal responses of the nitrate sensors exhibited diurnal cycling with elevated nitrate levels at night and depressed levels during the day. Conventional samples collected via lysimeters validated this response. It is concluded that while modern ISEs are not yet ready for long-term, unattended deployment, short-term installations (on the order of 2 to 4 days) are viable and may provide valuable insights into nitrogen dynamics in complex soil systems.
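Converting an ISE potential to a nitrate concentration uses the Nernst relation with a temperature-dependent slope, E = E0 − S(T)·log10(a_NO3) with S(T) = 2.303·R·T/(z·F); the sketch below shows that calibration arithmetic with made-up calibration constants (it is not the deployed sensors' processing code).

```python
R, F = 8.314, 96485.0          # gas constant [J/(mol K)], Faraday constant [C/mol]

def nernst_slope(temp_c, z=1):
    """Theoretical Nernst slope in mV/decade at temperature temp_c (deg C)."""
    return 2.303 * R * (temp_c + 273.15) / (z * F) * 1000.0

def nitrate_from_potential(e_mv, e0_mv, temp_c):
    """Invert E = E0 - S(T)*log10(a) for a nitrate-selective electrode (an anion,
    so the response slope is negative). Returns activity in mol/L.
    e0_mv is the calibration intercept (assumed known from standard solutions)."""
    s = nernst_slope(temp_c)
    return 10.0 ** ((e0_mv - e_mv) / s)

# example: electrode reads 210 mV at 18 C with a calibrated E0 of 90 mV
print(nitrate_from_potential(210.0, e0_mv=90.0, temp_c=18.0))
```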
Response of an oscillating superleak transducer to a pointlike heat source
NASA Astrophysics Data System (ADS)
Quadt, A.; Schröder, B.; Uhrmacher, M.; Weingarten, J.; Willenberg, B.; Vennekate, H.
2012-03-01
A new technique of superconducting cavity diagnostics has been introduced by D. L. Hartill at Cornell University, Ithaca, New York. It uses oscillating superleak transducers (OST) which detect the heat transferred from a cavity’s quench point via Second Sound through the superfluid He bath, needed to cool the superconducting cavity. The localization of the quench point is done by triangulation. The observed response of an OST is a nontrivial, but reproducible pattern of oscillations. A small helium evaporation cryostat was built which allows the investigation of the response of an OST in greater detail. The distance between a pointlike electrical heater and the OST can be varied. The OST can be mounted either parallel or perpendicular to the plate that houses the heat source. If the artificial quench point releases an amount of energy compatible to a real quench spot on a cavity’s surface, the OST signal starts with a negative pulse, which is usually strong enough to allow automatic detection. Furthermore, the reflection of the Second Sound on the wall is observed. A reflection coefficient R=0.39±0.05 of the glass wall is measured. This excludes a strong influence of multiple reflections in the complex OST response. Fourier analyses show three main frequencies, found in all OST spectra. They can be interpreted as modes of an oscillating circular membrane.
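Quench localization by triangulation, as mentioned above, amounts to solving for a point whose second-sound travel times to the OSTs match the measured arrival delays. A least-squares sketch of that step is given below; the second-sound velocity, sensor layout, and arrival times are invented for illustration.

```python
import numpy as np
from scipy.optimize import least_squares

C2 = 20.0   # assumed second-sound velocity in He II [m/s] (temperature dependent)

def locate_quench(ost_xyz, arrival_times, x0):
    """Find the quench point from second-sound arrival times at several OSTs.
    Unknowns: quench position (x, y, z) and emission time t0.
    Residuals: t_i - (t0 + |x - x_i| / C2)."""
    def residuals(p):
        pos, t0 = p[:3], p[3]
        d = np.linalg.norm(ost_xyz - pos, axis=1)
        return arrival_times - (t0 + d / C2)
    sol = least_squares(residuals, x0)
    return sol.x[:3], sol.x[3]

# four OSTs on a support frame (positions in metres, toy values)
osts = np.array([[0.0, 0.0, 0.0], [0.3, 0.0, 0.1], [0.0, 0.3, 0.2], [0.3, 0.3, 0.0]])
true_pt, t0_true = np.array([0.12, 0.18, 0.05]), 1.0e-3
t_meas = t0_true + np.linalg.norm(osts - true_pt, axis=1) / C2

pos, t0 = locate_quench(osts, t_meas, x0=np.array([0.1, 0.1, 0.1, 0.0]))
print(pos, t0)
```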
NASA Astrophysics Data System (ADS)
Schäfer, M.; Groos, L.; Forbriger, T.; Bohlen, T.
2014-09-01
Full-waveform inversion (FWI) of shallow-seismic surface waves is able to reconstruct lateral variations of subsurface elastic properties. Line-source simulation for point-source data is required when applying algorithms of 2-D adjoint FWI to recorded shallow-seismic field data. The equivalent line-source response for point-source data can be obtained by convolving the waveforms with 1/√t (t: traveltime), which produces a phase shift of π/4. Subsequently an amplitude correction must be applied. In this work we recommend to scale the seismograms with √(2 r v_ph) at small receiver offsets r, where v_ph is the phase velocity, and gradually shift to applying a 1/√t time-domain taper and scaling the waveforms with r√2 for larger receiver offsets r. We call this the hybrid transformation, which is adapted for direct body and Rayleigh waves, and demonstrate its outstanding performance on a 2-D heterogeneous structure. The fit of the phases as well as the amplitudes for all shot locations and components (vertical and radial) is excellent with respect to the reference line-source data. An approach for 1-D media based on Fourier-Bessel integral transformation generates strong artefacts for waves produced by 2-D structures. The theoretical background for both approaches is presented in a companion contribution. In the current contribution we study their performance when applied to waves propagating in a significantly 2-D-heterogeneous structure. We calculate synthetic seismograms for 2-D structure for line sources as well as point sources. Line-source simulations obtained from the point-source seismograms through different approaches are then compared to the corresponding line-source reference waveforms. Although being derived by approximation, the hybrid transformation performs excellently except for explicitly back-scattered waves. In reconstruction tests we further invert point-source synthetic seismograms by a 2-D FWI to subsurface structure and evaluate its ability to reproduce the original structural model in comparison to the inversion of line-source synthetic data. Even when applying no explicit correction to the point-source waveforms prior to inversion, only moderate artefacts appear in the results. However, the overall performance is best in terms of model reproduction and ability to reproduce the original data in a 3-D simulation if inverted waveforms are obtained by the hybrid transformation.
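A minimal numerical sketch of the transformation described above follows. It assumes uniform sampling and a simple smooth blend between the near-offset √(2 r v_ph) scaling and the far-offset 1/√t taper with r√2 scaling; the blend weight is an assumption of this sketch, and the code is an illustration of the recipe in the abstract, not the authors' implementation.

```python
import numpy as np

def point_to_line(u, dt, r, v_ph, r_blend=20.0):
    """Approximate single-trace point-source -> line-source conversion:
    convolve with 1/sqrt(t) (pi/4 phase shift), then blend a near-offset
    amplitude factor sqrt(2*r*v_ph) with a far-offset 1/sqrt(t) time-domain
    taper times r*sqrt(2). The smooth blend around r_blend metres is assumed."""
    n = len(u)
    t = (np.arange(n) + 0.5) * dt              # avoid t = 0 in 1/sqrt(t)
    kern = 1.0 / np.sqrt(t)
    v = np.convolve(u, kern)[:n] * dt          # phase-shifted waveform

    near = np.sqrt(2.0 * r * v_ph) * v         # near-offset amplitude correction
    far = (r * np.sqrt(2.0) / np.sqrt(t)) * v  # far-offset taper + scaling
    w = np.exp(-(r / r_blend) ** 2)            # assumed offset-dependent weight
    return w * near + (1.0 - w) * far

# toy example: a Ricker-like pulse recorded at 15 m offset, v_ph = 300 m/s
dt, n = 1e-3, 1000
t = np.arange(n) * dt
u = (1 - 2 * (np.pi * 30 * (t - 0.1)) ** 2) * np.exp(-(np.pi * 30 * (t - 0.1)) ** 2)
u_line = point_to_line(u, dt, r=15.0, v_ph=300.0)
print(u_line[:5])
```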
Radicalization: An Overview and Annotated Bibliography of Open-Source Literature
2006-12-15
...particularly with liberal, democratic, and humanistic Muslims. Phares points to Jihadism as the main root cause of terrorism and suggests that defending... of ambiguity), epistemic and existential needs theory (need for closure)... This book presents Terror Management Theory, which addresses behavioral and psychological responses to terrorist events. An existential...
Ahlfors, Seppo P.; Jones, Stephanie R.; Ahveninen, Jyrki; Hämäläinen, Matti S.; Belliveau, John W.; Bar, Moshe
2014-01-01
Identifying inter-area communication in terms of the hierarchical organization of functional brain areas is of considerable interest in human neuroimaging. Previous studies have suggested that the direction of magneto- and electroencephalography (MEG, EEG) source currents depends on the layer-specific input patterns into a cortical area. We examined the direction in MEG source currents in a visual object recognition experiment in which there were specific expectations of activation in the fusiform region being driven by either feedforward or feedback inputs. The source for the early non-specific visual evoked response, presumably corresponding to feedforward driven activity, pointed outward, i.e., away from the white matter. In contrast, the source for the later, object-recognition related signals, expected to be driven by feedback inputs, pointed inward, toward the white matter. Associating specific features of the MEG/EEG source waveforms to feedforward and feedback inputs could provide unique information about the activation patterns within hierarchically organized cortical areas. PMID:25445356
2011 Radioactive Materials Usage Survey for Unmonitored Point Sources
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sturgeon, Richard W.
This report provides the results of the 2011 Radioactive Materials Usage Survey for Unmonitored Point Sources (RMUS), which was updated by the Environmental Protection (ENV) Division's Environmental Stewardship (ES) at Los Alamos National Laboratory (LANL). ES classifies LANL emission sources into one of four Tiers, based on the potential effective dose equivalent (PEDE) calculated for each point source. Detailed descriptions of these tiers are provided in Section 3. The usage survey is conducted annually; in odd-numbered years the survey addresses all monitored and unmonitored point sources and in even-numbered years it addresses all Tier III and various selected other sources. This graded approach was designed to ensure that the appropriate emphasis is placed on point sources that have higher potential emissions to the environment. For calendar year (CY) 2011, ES has divided the usage survey into two distinct reports, one covering the monitored point sources (to be completed later this year) and this report covering all unmonitored point sources. This usage survey includes the following release points: (1) all unmonitored sources identified in the 2010 usage survey, (2) any new release points identified through the new project review (NPR) process, and (3) other release points as designated by the Rad-NESHAP Team Leader. Data for all unmonitored point sources at LANL is stored in the survey files at ES. LANL uses this survey data to help demonstrate compliance with Clean Air Act radioactive air emissions regulations (40 CFR 61, Subpart H). The remainder of this introduction provides a brief description of the information contained in each section. Section 2 of this report describes the methods that were employed for gathering usage survey data and for calculating usage, emissions, and dose for these point sources. It also references the appropriate ES procedures for further information. Section 3 describes the RMUS and explains how the survey results are organized. The RMUS Interview Form with the attached RMUS Process Form(s) provides the radioactive materials survey data by technical area (TA) and building number. The survey data for each release point includes information such as: exhaust stack identification number, room number, radioactive material source type (i.e., potential source or future potential source of air emissions), radionuclide, usage (in curies) and usage basis, physical state (gas, liquid, particulate, solid, or custom), release fraction (from Appendix D to 40 CFR 61, Subpart H), and process descriptions. In addition, the interview form also calculates emissions (in curies), lists mrem/Ci factors, calculates PEDEs, and states the location of the critical receptor for that release point. [The critical receptor is the maximum exposed off-site member of the public, specific to each individual facility.] Each of these data fields is described in this section. The Tier classification of release points, which was first introduced with the 1999 usage survey, is also described in detail in this section. Section 4 includes a brief discussion of the dose estimate methodology, and includes a discussion of several release points of particular interest in the CY 2011 usage survey report. It also includes a table of the calculated PEDEs for each release point at its critical receptor. Section 5 describes ES's approach to Quality Assurance (QA) for the usage survey.
Satisfactory completion of the survey requires that team members responsible for Rad-NESHAP (National Emissions Standard for Hazardous Air Pollutants) compliance accurately collect and process several types of information, including radioactive materials usage data, process information, and supporting information. They must also perform and document the QA reviews outlined in Section 5.2.6 (Process Verification and Peer Review) of ES-RN, 'Quality Assurance Project Plan for the Rad-NESHAP Compliance Project' to verify that all information is complete and correct.
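The dose bookkeeping described above reduces to simple per-radionuclide arithmetic: emissions are usage scaled by the Appendix D release fraction, and the PEDE is emissions scaled by the mrem/Ci factor for the release point's critical receptor. A minimal sketch of that calculation, using hypothetical numbers and field names rather than actual survey data:

```python
# Minimal sketch of the per-release-point arithmetic described above.
# All numbers and field names are illustrative, not actual LANL survey data.

def pede_for_release_point(processes, mrem_per_ci):
    """Sum emissions over processes and convert to a potential effective
    dose equivalent (PEDE) at the critical receptor.

    processes   : list of dicts with 'usage_ci' (curies used) and
                  'release_fraction' (from Appendix D, 40 CFR 61 Subpart H)
    mrem_per_ci : dose factor (mrem per curie released) for this release
                  point's critical receptor
    """
    emissions_ci = sum(p["usage_ci"] * p["release_fraction"] for p in processes)
    return emissions_ci, emissions_ci * mrem_per_ci

# Hypothetical example: two processes venting through one unmonitored stack.
processes = [
    {"usage_ci": 2.0e-3, "release_fraction": 1e-3},  # particulate/solid material
    {"usage_ci": 5.0e-4, "release_fraction": 1.0},   # gaseous material
]
emissions, pede = pede_for_release_point(processes, mrem_per_ci=0.15)
print(f"emissions = {emissions:.3e} Ci, PEDE = {pede:.3e} mrem")
```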
1980-07-21
body which has undergone nonuniform plastic deformation can retain a system of stresses within the body after the external forces have been removed...produced by virtually all forming operations, welding, nonuniform cooling of ingots and electroplating. The important point to be made is that the response...will be considered: (1) the losses induced in a length of fiber following nonuniform irradiation from a point source and (2) the losses induced
Point-source stochastic-method simulations of ground motions for the PEER NGA-East Project
Boore, David
2015-01-01
Ground motions for the PEER NGA-East project were simulated using a point-source stochastic method. The simulated motions are provided for distances between 0 and 1200 km, M from 4 to 8, and 25 ground-motion intensity measures: peak ground velocity (PGV), peak ground acceleration (PGA), and 5%-damped pseudo-absolute response spectral acceleration (PSA) for 23 periods ranging from 0.01 s to 10.0 s. Tables of motions are provided for each of six attenuation models. The attenuation-model-dependent stress parameters used in the stochastic-method simulations were derived from inversion of PSA data from eight earthquakes in eastern North America.
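For readers unfamiliar with the technique, the core of a point-source stochastic-method simulation is to shape windowed random noise to a target Fourier amplitude spectrum whose corner frequency is set by the stress parameter. The sketch below illustrates only that step, with illustrative constants rather than the NGA-East parameter sets or the full processing chain:

```python
# Core of a point-source stochastic-method simulation: shape windowed Gaussian
# noise to a target Fourier amplitude spectrum (here an omega-square source
# shape). Constants are illustrative; absolute amplitude scaling, path and
# site terms, and the NGA-East parameter sets are omitted.
import numpy as np

def stochastic_acceleration(duration_s=20.0, dt=0.01, fc_hz=1.0, seed=0):
    rng = np.random.default_rng(seed)
    n = int(duration_s / dt)
    t = np.arange(n) * dt
    noise = rng.standard_normal(n) * np.exp(-t / (0.3 * duration_s))  # tapered noise window
    f = np.fft.rfftfreq(n, dt)
    shape = (2 * np.pi * f) ** 2 / (1.0 + (f / fc_hz) ** 2)           # omega-square acceleration spectrum
    acc = np.fft.irfft(np.fft.rfft(noise) * shape, n)
    return t, acc

t, acc = stochastic_acceleration()
print("relative PGA:", np.abs(acc).max())
```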
NASA Technical Reports Server (NTRS)
Daniels, Janet L.; Smith, G. Louis; Priestley, Kory J.; Thomas, Susan
2014-01-01
Validation of in-orbit instrument performance is a function of stability in both the instrument and the calibration source. This paper describes a method using lunar observations, scanning near full moon, by the Clouds and the Earth's Radiant Energy System (CERES) instruments. The Moon offers an external source whose signal variance is predictable and non-degrading. From 2006 to present, these in-orbit observations have been standardized and compiled for Flight Models 1 and 2 aboard the Terra satellite, for Flight Models 3 and 4 aboard the Aqua satellite, and, beginning in 2012, for Flight Model 5 aboard Suomi-NPP. The instrument performance measures studied are detector sensitivity stability, pointing accuracy, and the static detector point response function. This validation method shows trends per CERES data channel of 0.8% per decade or less for Flight Models 1-4. Using instrument gimbal data and the computed lunar position, the pointing error of each detector telescope and the accuracy and consistency of the alignment between the detectors can be determined. The maximum pointing error was 0.2 Deg. in azimuth and 0.17 Deg. in elevation, which corresponds to an error in geolocation near nadir of 2.09 km. With the exception of one detector, all instruments were found to have consistent detector alignment from 2006 to present. All alignment errors were within 0.1 Deg., with most detector telescopes showing a consistent alignment offset of less than 0.02 Deg.
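The quoted geolocation error is consistent with simple small-angle geometry if one assumes the nominal Terra/Aqua orbital altitude of about 705 km (an assumption; the altitude is not stated in the abstract):

```latex
\Delta x \;\approx\; h \tan\theta_{\mathrm{el}} \;\approx\; 705\ \mathrm{km} \times \tan(0.17^{\circ}) \;\approx\; 2.09\ \mathrm{km}
```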
elevatr: Access Elevation Data from Various APIs
Several web services are available that provide access to elevation data. This package provides access to several of those services and returns elevation data either as a SpatialPointsDataFrame from point elevation services or as a raster object from raster elevation services. Currently, the package supports access to the Mapzen Elevation Service, the Mapzen Terrain Service, and the USGS Elevation Point Query Service. The R language for statistical computing is increasingly used for spatial data analysis. This R package, elevatr, responds to that trend and provides access to elevation data from various sources directly in R. The impact of `elevatr` is that it will 1) facilitate spatial analysis in R by providing access to a foundational dataset for many types of analyses (e.g. hydrology, limnology), 2) open up a new set of users and uses for APIs widely used outside of R, and 3) provide an excellent example of federal open source development as promoted by the Federal Source Code Policy (https://sourcecode.cio.gov/).
Comparative Study of Two InGaAs-Based Reference Radiation Thermometers
NASA Astrophysics Data System (ADS)
Nasibov, H.; Diril, A.; Pehlivan, O.; Kalemci, M.
2017-07-01
More than one decade ago, an InGaAs detector-based transfer standard infrared radiation thermometer working in the temperature range from 150 °C to 1100 °C was built at TUBITAK UME within the scope of a collaboration with IMGC (INRIM since 2006). Since then, the radiation thermometer has been used for the dissemination of the radiation temperature scale below the silver fixed-point temperature. Recently, a new radiation thermometer with the same design but with a different spectral responsivity was constructed and employed in the laboratory. In this work, we present a comparative study of these thermometers. The paper describes the measurement results for the thermometers' main characteristics, such as the size-of-source effect, spectral responsivity, gain ratio, and linearity. In addition, both thermometers were calibrated at the freezing temperatures of indium, tin, zinc, aluminum, and copper reference fixed-point blackbodies. The main study focuses on the impact of the spectral responsivity of the thermometers on the interpolation parameters of the Sakuma-Hattori equation. The calibration results and the uncertainty sources are also discussed.
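For reference, the Sakuma-Hattori interpolation equation mentioned above is usually written in its Planck form, with A, B and C the fitted instrument parameters and c2 the second radiation constant:

```latex
S(T) \;=\; \frac{C}{\exp\!\left(\dfrac{c_2}{A\,T + B}\right) - 1}
```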
NASA Astrophysics Data System (ADS)
Eppeldauer, G. P.; Podobedov, V. B.; Cooksey, C. C.
2017-05-01
Calibration of the emitted radiation from UV sources peaking at 365 nm is necessary to verify the ASTM-required 1 mW/cm2 minimum irradiance in certain military material (ships, airplanes, etc.) tests. These UV "black lights" are used for crack recognition in fluorescent liquid penetrant inspection. At present, these nondestructive tests are performed using Hg lamps. The lack of a proper standard and the different spectral responsivities of the available UV meters cause significant measurement errors even when the same UV-365 source is measured. A pyroelectric radiometer standard with a spectrally flat (constant) response in the UV-VIS range has been developed to solve the problem. The response curve of this standard, determined from spectral reflectance measurements, is converted into spectral irradiance responsivity with <0.5% (k=2) uncertainty as a result of using an absolute tie point from a Si-trap detector traceable to the primary standard cryogenic radiometer. The flat pyroelectric radiometer standard can be used to perform uniform integrated irradiance measurements of all kinds of UV sources (with different peaks and distributions) without using any source standard. With this broadband calibration method, yearly spectral calibrations of the reference UV (LED) sources and irradiance meters are not needed. Field UV sources and meters can be calibrated against the pyroelectric radiometer standard for broadband (integrated) irradiance and integrated responsivity. Using the broadband measurement procedure, the UV measurements give uniform results with significantly decreased uncertainties.
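The reason a spectrally flat radiometer removes the dependence on the source distribution can be stated in one line: the measured signal is the responsivity-weighted integral of the source irradiance, so if the responsivity is constant over the source's spectral range, the integrated irradiance follows from the signal alone:

```latex
V \;=\; \int E(\lambda)\, s(\lambda)\, d\lambda \;\approx\; s_0 \int E(\lambda)\, d\lambda
\quad\Longrightarrow\quad
E_{\mathrm{int}} \;\approx\; \frac{V}{s_0}
```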
NASA Astrophysics Data System (ADS)
Huang, Jyun-Yan; Wen, Kuo-Liang; Lin, Che-Min; Kuo, Chun-Hsiang; Chen, Chun-Te; Chang, Shuen-Chiang
2017-05-01
In this study, an empirical transfer function (ETF), defined as the difference in Fourier amplitude spectra between observed strong ground motion and synthetic motion obtained with a stochastic point-source simulation technique, is constructed for the Taipei Basin, Taiwan. The underlying stochastic point-source simulations can be treated as representing reference rock site conditions, so that site effects can be taken into account. The parameters of the stochastic point-source approach related to source and path effects are taken from previous, well-verified studies. A database of shallow, small-magnitude earthquakes is selected to construct the ETFs so that the point-source approach for synthetic motions might be more widely applicable. The high-frequency synthetic motion obtained from the ETF procedure is site-corrected in the strong site-response area of the Taipei Basin. The site-response characteristics of the ETF are similar to those found in previous studies, which indicates that the base synthetic model is suitable for the reference rock conditions in the Taipei Basin. The dominant frequency contour corresponds to the shape of the bottom of the geological basement (the top of the Tertiary period), which is the Sungshan formation. Two clear high-amplification areas are identified in the deepest region of the Sungshan formation, as shown by the amplification contour at 0.5 Hz, whereas the high-amplification area shifts toward the basin's edge at 2.0 Hz. Three target earthquakes with different source conditions relative to the ETF database (shallow small-magnitude, shallow and relatively large-magnitude, and deep small-magnitude events) are tested to verify the site correction. The results indicate that ETF-based site correction is effective for shallow earthquakes, even those with higher magnitudes, but is not suitable for deep earthquakes. Finally, one of the most significant shallow large-magnitude earthquakes (the 1999 Chi-Chi earthquake in Taiwan) is examined. A finite-fault stochastic simulation technique is applied, owing to the complexity of the fault rupture process of the Chi-Chi earthquake, and the ETF-based site-correction function is applied to obtain a precise simulation of high-frequency (up to 10 Hz) strong motions. The high-frequency prediction shows good agreement in both the time and frequency domains, and the prediction level is the same as that of the site-corrected ground motion prediction equation.
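A minimal sketch of how such an empirical transfer function can be built and applied, treating the ETF as a ratio of smoothed Fourier amplitude spectra (equivalently, a difference of log-amplitude spectra) between observed and reference-rock synthetic motion; the smoothing, event averaging, and records below are placeholders rather than the study's actual processing:

```python
# Generic sketch of an empirical transfer function (ETF) for site correction:
# ratio of smoothed Fourier amplitude spectra, observed / synthetic, applied
# back to the synthetic spectrum. Illustrative only.
import numpy as np

def smooth(x, w=9):
    return np.convolve(x, np.ones(w) / w, mode="same")

def build_etf(obs, syn, dt):
    n = len(obs)
    f = np.fft.rfftfreq(n, dt)
    A_obs = smooth(np.abs(np.fft.rfft(obs)))
    A_syn = smooth(np.abs(np.fft.rfft(syn)))
    return f, A_obs / np.maximum(A_syn, 1e-12)

def apply_etf(syn, etf):
    return np.fft.irfft(np.fft.rfft(syn) * etf, len(syn))  # site-corrected synthetic motion

# Hypothetical use with two equal-length records sampled every dt seconds:
dt = 0.01
rng = np.random.default_rng(1)
obs, syn = rng.standard_normal(2048), rng.standard_normal(2048)
f, etf = build_etf(obs, syn, dt)
corrected = apply_etf(syn, etf)
```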
Numerical convergence and validation of the DIMP inverse particle transport model
Nelson, Noel; Azmy, Yousry
2017-09-01
The data integration with modeled predictions (DIMP) model is a promising inverse radiation transport method for solving the special nuclear material (SNM) holdup problem. Unlike previous methods, DIMP is a completely passive nondestructive assay technique that requires no initial assumptions regarding the source distribution or active measurement time. DIMP predicts the most probable source location and distribution through Bayesian inference and quasi-Newtonian optimization that matches predicted detector responses (computed using the adjoint transport solution) to measured responses. DIMP performs well with forward hemispherical collimation and unshielded measurements, but several considerations are required when using narrow-view collimated detectors. DIMP converged well to the correct source distribution as the number of synthetic responses increased. DIMP also performed well in the first experimental validation exercise after a collimation factor was applied and the extent of the source search volume was sufficiently reduced to prevent the optimizer from getting stuck in local minima. DIMP's simple point detector response function (DRF) is being improved to address coplanar false positive/negative responses, and an angular DRF is being considered for integration with the next version of DIMP to account for highly collimated responses. Overall, DIMP shows promise for solving the SNM holdup inverse problem, especially once an improved optimization algorithm is implemented.
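The inversion step can be illustrated by a much simpler surrogate: given a matrix of per-voxel detector responses (which DIMP obtains from the adjoint transport solution) and a vector of measured responses, estimate a nonnegative source distribution by quasi-Newton minimization of the misfit. This sketch is a generic stand-in, not the DIMP Bayesian formulation:

```python
# Generic surrogate for the holdup inverse problem: recover nonnegative
# source strengths s from measured detector responses m ~= R @ s, where
# R[i, j] is the response of detector i to a unit source in voxel j
# (obtained from adjoint transport in DIMP). Illustrative only.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n_det, n_vox = 12, 30
R = rng.random((n_det, n_vox))
s_true = np.zeros(n_vox)
s_true[7], s_true[8] = 3.0, 1.5                      # localized source
m = R @ s_true + 0.01 * rng.standard_normal(n_det)   # noisy measurements

def misfit(s):
    r = R @ s - m
    return 0.5 * r @ r

def grad(s):
    return R.T @ (R @ s - m)

res = minimize(misfit, x0=np.ones(n_vox), jac=grad, method="L-BFGS-B",
               bounds=[(0, None)] * n_vox)
print("largest recovered voxels:", np.argsort(res.x)[-2:])
```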
Response Functions for Neutron Skyshine Analyses
NASA Astrophysics Data System (ADS)
Gui, Ah Auu
Neutron and associated secondary photon line-beam response functions (LBRFs) for point monodirectional neutron sources and related conical line-beam response functions (CBRFs) for azimuthally symmetric neutron sources are generated using the MCNP Monte Carlo code for use in neutron skyshine analyses employing the integral line-beam and integral conical-beam methods. The LBRFs are evaluated at 14 neutron source energies ranging from 0.01 to 14 MeV and at 18 emission angles from 1 to 170 degrees. The CBRFs are evaluated at 13 neutron source energies in the same energy range and at 13 source polar angles (1 to 89 degrees). The response functions are approximated by a three-parameter formula that is continuous in source energy and angle using a double linear interpolation scheme. These response function approximations are available for a source-to-detector range up to 2450 m and, for the first time, give dose-equivalent responses, which are required for modern radiological assessments. For the CBRF, ground correction factors for neutrons and photons are calculated and approximated by empirical formulas for use in air-over-ground neutron skyshine problems with azimuthal symmetry. In addition, a simple correction procedure for humidity effects on the neutron skyshine dose is also proposed. The approximate LBRFs are used with the integral line-beam method to analyze four neutron skyshine problems with simple geometries: (1) an open silo, (2) an infinite wall, (3) a roofless rectangular building, and (4) an infinite air medium. In addition, two simple neutron skyshine problems involving an open source silo are analyzed using the integral conical-beam method. The results obtained using the LBRFs and the CBRFs are then compared with MCNP results and results of previous studies.
Nimmo, J.R.
2010-01-01
Germann's (2010) comment helpfully presents supporting evidence that I have missed, notes items that need clarification or correction, and stimulates discussion of what is needed for improved theory of unsaturated flow. Several points from this comment relate not only to specific features of the content of my paper (Nimmo, 2010), but also to the broader question of what methodology is appropriate for developing an applied earth science. Accordingly, before addressing specific points that Germann identified, I present here some considerations of purpose and background relevant to evaluation of the unsaturated flow model of Nimmo (2010).
Prediction of sound fields in acoustical cavities using the boundary element method. M.S. Thesis
NASA Technical Reports Server (NTRS)
Kipp, C. R.; Bernhard, R. J.
1985-01-01
A method was developed to predict sound fields in acoustical cavities. The method is based on the indirect boundary element method. An isoparametric quadratic boundary element is incorporated. Pressure, velocity and/or impedance boundary conditions may be applied to a cavity by using this method. The capability to include acoustic point sources within the cavity is implemented. The method is applied to the prediction of sound fields in spherical and rectangular cavities. All three boundary condition types are verified. Cases with a point source within the cavity domain are also studied. Numerically determined cavity pressure distributions and responses are presented. The numerical results correlate well with available analytical results.
Swift Burst Alert Telescope (BAT) Instrument Response
NASA Technical Reports Server (NTRS)
Parsons, A.; Hullinger, D.; Markwardt, C.; Barthelmy, S.; Cummings, J.; Gehrels, N.; Krimm, H.; Tueller, J.; Fenimore, E.; Palmer, D.
2004-01-01
The Burst Alert Telescope (BAT), a large coded aperture instrument with a wide field-of-view (FOV), provides the gamma-ray burst triggers and locations for the Swift Gamma-Ray Burst Explorer. In addition to providing this imaging information, BAT will perform a 15 keV - 150 keV all-sky hard x-ray survey based on the serendipitous pointings resulting from the study of gamma-ray bursts and will also monitor the sky for transient hard x-ray sources. For BAT to provide spectral and photometric information for the gamma-ray bursts, the transient sources and the all-sky survey, the BAT instrument response must be determined to an increasingly greater accuracy. In this talk, we describe the BAT instrument response as determined to an accuracy suitable for gamma-ray burst studies. We will also discuss the public data analysis tools developed to calculate the BAT response to sources at different energies and locations in the FOV. The level of accuracy required for the BAT instrument response used for the hard x-ray survey is significantly higher because this response must be used in the iterative clean algorithm for finding fainter sources. Because the bright sources add a lot of coding noise to the BAT sky image, fainter sources can be seen only after the counts due to the bright sources are removed. The better we know the BAT response, the lower the noise in the cleaned spectrum and thus the more sensitive the survey. Since the BAT detector plane consists of 32768 individual, 4 mm square CZT gamma-ray detectors, the most accurate BAT response would include 32768 individual detector response functions to separate mask modulation effects from differences in detector efficiencies! We describe our continuing work to improve the accuracy of the BAT instrument response and will present the current results of Monte Carlo simulations as well as BAT ground calibration data.
Dust Storm over the Middle East: Retrieval Approach, Source Identification, and Trend Analysis
NASA Astrophysics Data System (ADS)
Moridnejad, A.; Karimi, N.; Ariya, P. A.
2014-12-01
The Middle East region is considered to be responsible for approximately 25% of the Earth's global emissions of dust particles. By developing the Middle East Dust Index (MEDI) and applying it to 70 dust storms identified on MODIS images during the period between 2001 and 2012, we herein present a new high-resolution mapping of the major atmospheric dust source points in this region. To assist environmental managers and decision makers in taking proper and prioritized measures, we then categorize the identified sources in terms of intensity, based on indices extracted with the Deep Blue algorithm, and also use a frequency-of-occurrence approach to find the most sensitive sources. In the next step, by implementing spectral mixture analysis on Landsat TM images (1984 and 2012), we present a novel desertification map. The aim is to understand how human perturbations and land-use change have influenced the dust storm points in the region. Preliminary results of this study indicate for the first time that ca. 39% of all detected source points are located in this newly anthropogenically desertified area. A large number of low-frequency sources are located within or close to the newly desertified areas. These severely desertified regions require immediate concern at a global scale. During the next 6 months, further research will be performed to confirm these preliminary results.
Human disturbance alters key attributes of aquatic ecosystems such as water quality, habitat structure, hydrological regime, energy flow, and biological interactions. In great rivers, this is particularly evident because they are disproportionately degraded by habitat alteration...
Giant Dirac point shift of graphene phototransistors by doped silicon substrate current
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shimatani, Masaaki; Ogawa, Shinpei, E-mail: Ogawa.Shimpei@eb.MitsubishiElectric.co.jp; Fujisawa, Daisuke
2016-03-15
Graphene is a promising new material for photodetectors due to its excellent optical properties and high-speed response. However, graphene-based phototransistors have low responsivity due to the weak light absorption of graphene. We have observed a giant Dirac point shift upon white light illumination in graphene-based phototransistors with n-doped Si substrates, but not those with p-doped substrates. The source-drain current and substrate current were investigated with and without illumination for both p-type and n-type Si substrates. The decay time of the drain-source current indicates that the Si substrate, SiO2 layer, and metal electrode comprise a metal-oxide-semiconductor (MOS) capacitor due to the presence of defects at the interface between the Si substrate and SiO2 layer. The difference in the diffusion time of the intrinsic major carriers (electrons) and the photogenerated electron-hole pairs to the depletion layer delays the application of the gate voltage to the graphene channel. Therefore, the giant Dirac point shift is attributed to the n-type Si substrate current. This phenomenon can be exploited to realize high-performance graphene-based phototransistors.
NASA Astrophysics Data System (ADS)
Wapenaar, C. P. A.; Van der Neut, J.; Thorbecke, J.; Broggini, F.; Slob, E. C.; Snieder, R.
2015-12-01
Imagine one could place seismic sources and receivers at any desired position inside the earth. Since the receivers would record the full wave field (direct waves, up- and downward reflections, multiples, etc.), this would give a wealth of information about the local structures, material properties and processes in the earth's interior. Although in reality one cannot place sources and receivers anywhere inside the earth, it appears to be possible to create virtual sources and receivers at any desired position, which accurately mimics the desired situation. The underlying method involves some major steps beyond standard seismic interferometry. With seismic interferometry, virtual sources can be created at the positions of physical receivers, assuming these receivers are illuminated isotropically. Our proposed method does not need physical receivers at the positions of the virtual sources; moreover, it does not require isotropic illumination. To create virtual sources and receivers anywhere inside the earth, it suffices to record the reflection response with physical sources and receivers at the earth's surface. We do not need detailed information about the medium parameters; it suffices to have an estimate of the direct waves between the virtual-source positions and the acquisition surface. With these prerequisites, our method can create virtual sources and receivers, anywhere inside the earth, which record the full wave field. The up- and downward reflections, multiples, etc. in the virtual responses are extracted directly from the reflection response at the surface. The retrieved virtual responses form an ideal starting point for accurate seismic imaging, characterization and monitoring.
NASA Astrophysics Data System (ADS)
Pastuszak, Marianna; Stålnacke, Per; Pawlikowski, Krzysztof; Witek, Zbigniew
2012-06-01
The Vistula and Oder Rivers, two out of the seven largest rivers in the Baltic drainage basin, were responsible for 25% of total riverine nitrogen (TN) and 37% of total riverine phosphorus (TP) input to the Baltic Sea in 2000. The aim of this paper is to evaluate the response of these two rivers to changes that took place in Polish economy during the transition period (1988-2008). The economic changes encompassed: construction of nearly 900 waste water treatment plants in 1999-2008, modernization or closure of obsolete factories, economizing in water consumption, closure or change of ownership of State-owned farms, a drop in fertilizer application, and a decline in livestock stocking. More intensive agriculture and higher point source emissions in the Oder than in the Vistula basin resulted in higher concentrations of TN, nitrate (NO3-N), and TP in the Oder waters in the entire period of our studies. In both rivers, nutrient concentrations and loads showed significant declining trends in the period 1988-2008. TN loads decreased by ca. 20% and 25% in the Vistula and Oder; TP loads dropped by ca. 15% and 65% in the Vistula and Oder. The reduction in phosphorus loads was particularly pronounced in the Oder basin, which was characterized by efficient management systems aiming at mitigation of nutrient emission from the point sources and greater extent of structural changes in agricultural sector during the transition period. The trends in riverine loads are discussed in the paper in relation to socio-economical changes during the transition period, and with respect to physiographic features.
Measurement of Phased Array Point Spread Functions for Use with Beamforming
NASA Technical Reports Server (NTRS)
Bahr, Chris; Zawodny, Nikolas S.; Bertolucci, Brandon; Woolwine, Kyle; Liu, Fei; Li, Juan; Sheplak, Mark; Cattafesta, Louis
2011-01-01
Microphone arrays can be used to localize and estimate the strengths of acoustic sources present in a region of interest. However, the array measurement of a region, or beam map, is not an accurate representation of the acoustic field in that region. The true acoustic field is convolved with the array's sampling response, or point spread function (PSF). Many techniques exist to remove the PSF's effect on the beam map via deconvolution. Currently these methods use a theoretical estimate of the array point spread function and perhaps account for installation offsets via determination of the microphone locations. This methodology fails to account for any reflections or scattering in the measurement setup and still requires both microphone magnitude and phase calibration, as well as a separate shear layer correction in an open-jet facility. The research presented seeks to investigate direct measurement of the array's PSF using a non-intrusive acoustic point source generated by a pulsed laser system. Experimental PSFs of the array are computed for different conditions to evaluate features such as shift-invariance, shear layers and model presence. Results show that experimental measurements trend with theory with regard to source offset. The source shows expected behavior due to shear layer refraction when observed in a flow, and application of a measured PSF to NACA 0012 aeroacoustic trailing-edge noise data shows a promising alternative to a classic shear layer correction method.
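For context, the point spread function of a delay-and-sum array is simply the beam map the array produces for an ideal unit point source. A minimal free-field, single-frequency sketch with a hypothetical line-array geometry (no shear-layer correction or microphone calibration) is:

```python
# Minimal frequency-domain delay-and-sum beamforming sketch: the beam map of
# an ideal unit-amplitude point source is the array's point spread function
# (PSF) for that source location. Free field, single frequency, hypothetical
# geometry; no shear-layer correction or microphone calibration included.
import numpy as np

c, f = 343.0, 4000.0                        # sound speed (m/s), frequency (Hz)
k = 2 * np.pi * f / c
mics = np.column_stack([np.linspace(-0.5, 0.5, 16), np.zeros(16), np.zeros(16)])
src = np.array([0.0, 0.0, 1.0])             # true source location (m)

def steering(points):
    r = np.linalg.norm(points[:, None, :] - mics[None, :, :], axis=2)
    return np.exp(-1j * k * r) / r           # monopole propagation to each mic

p = steering(src[None, :])[0]                # pressures from the unit point source
C = np.outer(p, p.conj())                    # cross-spectral matrix

# Scan a grid of candidate source positions; the resulting map is the PSF.
xs = np.linspace(-0.5, 0.5, 41)
grid = np.column_stack([xs, np.zeros_like(xs), np.ones_like(xs)])
V = steering(grid)
w = V / np.linalg.norm(V, axis=1, keepdims=True)
psf = np.real(np.einsum("gm,mn,gn->g", w.conj(), C, w))
print("PSF peak at x =", xs[np.argmax(psf)])
```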
Skyshine line-beam response functions for 20- to 100-MeV photons
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brockhoff, R.C.; Shultis, J.K.; Faw, R.E.
1996-06-01
The line-beam response function, needed for skyshine analyses based on the integral line-beam method, was evaluated with the MCNP Monte Carlo code for photon energies from 20 to 100 MeV and for source-to-detector distances out to 1,000 m. These results are compared with point-kernel results, and the effects of bremsstrahlung and positron transport in the air are found to be important in this energy range. The three-parameter empirical formula used in the integral line-beam skyshine method was fit to the MCNP results, and values of these parameters are reported for various source energies and angles.
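The fitting step can be illustrated with a generic three-parameter response-versus-distance form of the kind used in integral line-beam skyshine analyses, R(d) = kappa * d^a * exp(-d/b); the exact parameterization and air-density scaling used in the paper are not reproduced here, and the tabulated data are replaced by synthetic values:

```python
# Curve-fitting sketch: fit a generic three-parameter response-versus-distance
# form to tabulated Monte Carlo results. The parameterization and air-density
# scaling of the actual line-beam response function are not reproduced; the
# "data" below are synthetic.
import numpy as np
from scipy.optimize import curve_fit

def lbrf(d, kappa, a, b):
    return kappa * d**a * np.exp(-d / b)

d = np.linspace(50.0, 1000.0, 20)            # source-to-detector distance (m)
rng = np.random.default_rng(3)
r_mc = lbrf(d, 5.0e-3, 0.7, 300.0) * (1 + 0.05 * rng.standard_normal(d.size))

popt, _ = curve_fit(lbrf, d, r_mc, p0=[1e-2, 1.0, 400.0])
print("fitted kappa, a, b:", popt)
```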
Network traffic behaviour near phase transition point
NASA Astrophysics Data System (ADS)
Lawniczak, A. T.; Tang, X.
2006-03-01
We explore packet traffic dynamics in a data network model near the phase transition point from free flow to congestion. The model of the data network is an abstraction of the Network Layer of the OSI (Open Systems Interconnect) Reference Model of packet switching networks. The Network Layer is responsible for routing packets across the network from their sources to their destinations and for control of congestion in data networks. Using the model, we investigate spatio-temporal packet traffic dynamics near the phase transition point for various network connection topologies and for static and adaptive routing algorithms. We present selected simulation results and analyze them.
Rounds, Stewart A.
2007-01-01
Water temperature is an important factor influencing the migration, rearing, and spawning of several important fish species in rivers of the Pacific Northwest. To protect these fish populations and to fulfill its responsibilities under the Federal Clean Water Act, the Oregon Department of Environmental Quality set a water temperature Total Maximum Daily Load (TMDL) in 2006 for the Willamette River and the lower reaches of its largest tributaries in northwestern Oregon. As a result, the thermal discharges of the largest point sources of heat to the Willamette River now are limited at certain times of the year, riparian vegetation has been targeted for restoration, and upstream dams are recognized as important influences on downstream temperatures. Many of the prescribed point-source heat-load allocations are sufficiently restrictive that management agencies may need to expend considerable resources to meet those allocations. Trading heat allocations among point-source dischargers may be a more economical and efficient means of meeting the cumulative point-source temperature limits set by the TMDL. The cumulative nature of these limits, however, precludes simple one-to-one trades of heat from one point source to another; a more detailed spatial analysis is needed. In this investigation, the flow and temperature models that formed the basis of the Willamette temperature TMDL were used to determine a spatially indexed 'heating signature' for each of the modeled point sources, and those signatures then were combined into a user-friendly, spreadsheet-based screening tool. The Willamette River Point-Source Heat-Trading Tool allows the user to increase or decrease the heating signature of each source and thereby evaluate the effects of a wide range of potential point-source heat trades. The predictions of the Trading Tool were verified by running the Willamette flow and temperature models under four different trading scenarios, and the predictions typically were accurate to within about 0.005 degrees Celsius (°C). In addition to assessing the effects of point-source heat trades, the models were used to evaluate the temperature effects of several shade-restoration scenarios. Restoration of riparian shade along the entire Long Tom River, from its mouth to Fern Ridge Dam, was calculated to have a small but significant effect on daily maximum temperatures in the main-stem Willamette River, on the order of 0.03°C where the Long Tom River enters the Willamette River, and diminishing downstream. Model scenarios also were run to assess the effects of restoring selected 5-mile reaches of riparian vegetation along the main-stem Willamette River from river mile (RM) 176.80, just upstream of the point where the McKenzie River joins the Willamette River, to RM 116.87 near Albany, which is one location where cumulative point-source heating effects are at a maximum. Restoration of riparian vegetation along the main-stem Willamette River was shown by model runs to have a significant local effect on daily maximum river temperatures (0.046 to 0.194°C) at the site of restoration. The magnitude of the cooling depends on many factors including river width, flow, time of year, and the difference in vegetation characteristics between current and restored conditions.
Downstream of the restored reach, the cooling effects are complex and have a nodal nature: at one-half day of travel time downstream, shade restoration has little effect on daily maximum temperature because water passes the restoration site at night; at 1 full day of travel time downstream, cooling effects increase to a second, diminished maximum. Such spatial complexities may complicate the trading of heat allocations between point and nonpoint sources. Upstream dams have an important effect on water temperature in the Willamette River system as a result of augmented flows as well as modified temperature releases over the course of the summer and autumn. The TMDL was formulated prior t
Sources of uncertainty as a basis to fill the information gap in a response to flood
NASA Astrophysics Data System (ADS)
Kekez, Toni; Knezic, Snjezana
2016-04-01
Taking into account uncertainties in flood risk management remains a challenge due to difficulties in choosing adequate structural and/or non-structural risk management options. Despite such measures, wrong decisions are often made when a flood occurs. Parameter and structural uncertainties, which include model and observation errors as well as a lack of knowledge about system characteristics, are the main considerations. Real-time flood risk assessment methods are predominantly based on measured water-level values and on the vulnerability and other relevant characteristics of the flood-affected area. The goal of this research is to identify sources of uncertainty and to minimize the information gap between the point where the water level is measured and the affected area, taking into consideration the main uncertainties that can affect the risk value at the observed point or section of the river. Sources of uncertainty are identified and determined using a system analysis approach, and the relevant uncertainties are included in the risk assessment model. With such a methodological approach it is possible to increase the available response time through more effective risk assessment that includes an uncertainty propagation model. The response phase could be better planned with adequate early warning systems, resulting in more time and lower costs to help affected areas and save human lives. Reliable and precise information is necessary to raise the emergency operability level in order to enhance the safety of citizens and reduce possible damage. The results of the EPISECC (EU-funded FP7) project are used to validate the potential benefits of this research in order to improve flood risk management and response methods. EPISECC aims at developing a concept of a common European Information Space for disaster response which, among other disasters, considers floods.
TREFEX: Trend Estimation and Change Detection in the Response of MOX Gas Sensors
Pashami, Sepideh; Lilienthal, Achim J.; Schaffernicht, Erik; Trincavelli, Marco
2013-01-01
Many applications of metal oxide gas sensors can benefit from reliable algorithms to detect significant changes in the sensor response. Significant changes indicate a change in the emission modality of a distant gas source and occur due to a sudden change of concentration or exposure to a different compound. As a consequence of turbulent gas transport and the relatively slow response and recovery times of metal oxide sensors, their response in open sampling configuration exhibits strong fluctuations that interfere with the changes of interest. In this paper we introduce TREFEX, a novel change point detection algorithm, especially designed for metal oxide gas sensors in an open sampling system. TREFEX models the response of MOX sensors as a piecewise exponential signal and considers the junctions between consecutive exponentials as change points. We formulate non-linear trend filtering and change point detection as a parameter-free convex optimization problem for single sensors and sensor arrays. We evaluate the performance of the TREFEX algorithm experimentally for different metal oxide sensors and several gas emission profiles. A comparison with the previously proposed GLR method shows a clearly superior performance of the TREFEX algorithm both in detection performance and in estimating the change time. PMID:23736853
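The trend-filtering idea can be sketched generically (this is not the authors' TREFEX code): a piecewise-exponential response is piecewise linear in the log domain, so an l1 penalty on second differences of the fitted log trend yields a sparse set of kinks that serve as candidate change points. The regularization weight and detection threshold below are arbitrary:

```python
# Generic l1 trend-filtering sketch in the spirit of the approach described
# above (not the authors' TREFEX implementation): a piecewise-exponential
# sensor response is piecewise linear in the log domain, so change points
# appear as nonzero second differences of the fitted trend.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
n = 300
true = np.concatenate([np.exp(-0.01 * np.arange(150)),
                       0.86 * np.exp(0.02 * np.arange(150))])   # two exponential segments
y = np.log(true) + 0.02 * rng.standard_normal(n)                # noisy log response

D = np.diff(np.eye(n), n=2, axis=0)                             # second-difference operator

x = cp.Variable(n)
lam = 50.0
cp.Problem(cp.Minimize(0.5 * cp.sum_squares(y - x) + lam * cp.norm1(D @ x))).solve()

kinks = np.where(np.abs(D @ x.value) > 1e-4)[0] + 1
print("candidate change points near sample(s):", kinks)
```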
de Lima Barros, Alessandra Maciel; do Carmo Sobral, Maria; Gunkel, Günter
2013-01-01
Emissions of pollutants and nutrients are causing several problems in aquatic ecosystems, and in general an excess of nutrients, specifically nitrogen and phosphorus, is responsible for the eutrophication process in water bodies. In most developed countries, more attention is given to diffuse pollution because problems with point pollution have already been solved. In many non-developed countries basic data for point and diffuse pollution are not available. The focus of the presented studies is to quantify nutrient emissions from point and diffuse sources in the Ipojuca river basin, Pernambuco State, Brazil, using the Moneris model (Modelling Nutrient Emissions in River Systems). This model has been developed in Germany and has already been implemented in more than 600 river basins. The model is mainly based on river flow, water quality and geographical information system data. According to the Moneris model results, untreated domestic sewage is the major source of nutrients in the Ipojuca river basin. The Moneris model has shown itself to be a useful tool that allows the identification and quantification of point and diffuse nutrient sources, thus enabling the adoption of measures to reduce them. The Moneris model, conducted for the first time in a tropical river basin with intermittent flow, can be used as a reference for implementation in other watersheds.
de Chastelaine, Marianne; Friedman, David; Cycowicz, Yael M
2007-08-01
Improvement in source memory performance throughout childhood is thought to be mediated by the development of executive control. As postretrieval control processes may be better time-locked to the recognition response rather than the retrieval cue, the development of processes underlying source memory was investigated with both stimulus- and response-locked event-related potentials (ERPs). These were recorded in children, adolescents, and adults during a recognition memory exclusion task. Green- and red-outlined pictures were studied, but were tested in black outline. The test requirement was to endorse old items shown in one study color ("targets") and to reject new items along with old items shown in the alternative study color ("nontargets"). Source memory improved with age. All age groups retrieved target and nontarget memories as reflected by reliable parietal episodic memory (EM) effects, a stimulus-locked ERP correlate of recollection. Response-locked ERPs to targets and nontargets diverged in all groups prior to the response, although this occurred at an increasingly earlier time point with age. We suggest these findings reflect the implementation of attentional control mechanisms to enhance target memories and facilitate response selection with the greatest and least success, respectively, in adults and children. In adults only, response-locked ERPs revealed an early-onsetting parietal negativity for nontargets, but not for targets. This was suggested to reflect adults' ability to consistently inhibit prepotent target responses for nontargets. The findings support the notion that the development of source memory relies on the maturation of control processes that serve to enhance accurate selection of task-relevant memories.
Nakahara, Hisashi; Haney, Matt
2015-01-01
Recently, various methods have been proposed and applied for earthquake source imaging, and theoretical relationships among the methods have been studied. In this study, we make a follow-up theoretical study to better understand the meanings of earthquake source imaging. For imaging problems, the point spread function (PSF) is used to describe the degree of blurring and degradation in an obtained image of a target object as a response of an imaging system. In this study, we formulate PSFs for earthquake source imaging. By calculating the PSFs, we find that waveform source inversion methods remove the effect of the PSF and are free from artifacts. However, the other source imaging methods are affected by the PSF and suffer from the effect of blurring and degradation due to the restricted distribution of receivers. Consequently, careful treatment of the effect is necessary when using the source imaging methods other than waveform inversions. Moreover, the PSF for source imaging is found to have a link with seismic interferometry with the help of the source-receiver reciprocity of Green’s functions. In particular, the PSF can be related to Green’s function for cases in which receivers are distributed so as to completely surround the sources. Furthermore, the PSF acts as a low-pass filter. Given these considerations, the PSF is quite useful for understanding the physical meaning of earthquake source imaging.
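In the imaging terminology used here, the retrieved image is, schematically, the true source distribution blurred by the point spread function:

```latex
I(\mathbf{x}) \;=\; \int \mathrm{PSF}(\mathbf{x}, \mathbf{x}')\, s(\mathbf{x}')\, d\mathbf{x}'
```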
Perturbations of the seismic reflectivity of a fluid-saturated depth-dependent poroelastic medium.
de Barros, Louis; Dietrich, Michel
2008-03-01
Analytical formulas are derived to compute the first-order effects produced by plane inhomogeneities on the point source seismic response of a fluid-filled stratified porous medium. The derivation is achieved by a perturbation analysis of the poroelastic wave equations in the plane-wave domain using the Born approximation. This approach yields the Frechet derivatives of the P-SV- and SH-wave responses in terms of the Green's functions of the unperturbed medium. The accuracy and stability of the derived operators are checked by comparing, in the time-distance domain, differential seismograms computed from these analytical expressions with complete solutions obtained by introducing discrete perturbations into the model properties. For vertical and horizontal point forces, it is found that the Frechet derivative approach is remarkably accurate for small and localized perturbations of the medium properties which are consistent with the Born approximation requirements. Furthermore, the first-order formulation appears to be stable at all source-receiver offsets. The porosity, consolidation parameter, solid density, and mineral shear modulus emerge as the most sensitive parameters in forward and inverse modeling problems. Finally, the amplitude-versus-angle response of a thin layer shows strong coupling effects between several model parameters.
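Schematically, the Born-approximation step used here writes the perturbed wavefield to first order in the medium perturbation, with G0 and u0 the Green's function and wavefield of the unperturbed stratified medium and δL the perturbation operator assembled from the model-parameter changes (generic notation, not the paper's):

```latex
u(\mathbf{r}, \omega) \;\approx\; u_0(\mathbf{r}, \omega) + \int G_0(\mathbf{r}, \mathbf{r}'; \omega)\, \delta L(\mathbf{r}')\, u_0(\mathbf{r}', \omega)\, d\mathbf{r}'
```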
Mannes, Trish; Gupta, Leena; Craig, Adam; Rosewell, Alexander; McGuinness, Clancy Aimers; Musto, Jennie; Shadbolt, Craig; Biffin, Brian
2010-03-01
This report describes the investigation and public health response to a large point-source outbreak of salmonellosis in Sydney, Australia. The case-series investigation involved telephone interviews with 283 cases or their guardians and active surveillance through hospitals, general practitioners, laboratories and the public health network. In this outbreak 319 cases of gastroenteritis were identified, of which 221 cases (69%) presented to a hospital emergency department and 136 (43%) required hospital admission. This outbreak was unique in its scale and severity and the surge capacity of hospital emergency departments was stretched. It highlights that foodborne illness outbreaks can cause substantial preventable morbidity and resultant health service burden, requiring close attention to regulatory and non-regulatory interventions.
NASA Astrophysics Data System (ADS)
Hu, Y.; Ji, Y.; Egbert, G. D.
2015-12-01
The fictitious time domain (FTD) method, based on the correspondence principle for wave and diffusion fields, has been developed and used over the past few years primarily for marine electromagnetic (EM) modeling. Here we present results of our efforts to apply the FTD approach to land and airborne TEM problems; it can reduce the computation time by several orders of magnitude while preserving high accuracy. In contrast to the marine case, where sources are in the conductive sea water, we must model the EM fields in the air; to allow for topography, air layers must be explicitly included in the computational domain. Furthermore, because sources for most TEM applications generally must be modeled as finite loops, it is useful to solve directly for the impulse response appropriate to the problem geometry, instead of the point-source Green functions typically used for marine problems. Our approach can be summarized as follows: (1) The EM diffusion equation is transformed to a fictitious wave equation. (2) The FTD wave equation is solved with an explicit finite-difference time-stepping scheme, with CPML (convolutional PML) boundary conditions for the whole computational domain including the air and earth, and with an FTD-domain source corresponding to the actual transmitter geometry. The resistivity of the air layers is kept as low as possible, as a compromise between efficiency (longer fictitious time step) and accuracy; we have generally found a host/air resistivity contrast of 10^-3 to be sufficient. (3) A "modified" Fourier transform (MFT) allows us to recover the system's impulse response from the fictitious time domain in the diffusion (frequency) domain. (4) The result is multiplied by the Fourier transform (FT) of the real source current, avoiding time-consuming convolutions in the time domain. (5) The inverse FT is employed to obtain the final full-waveform, full-time response of the system in the time domain. In general, this method can be used to efficiently solve most time-domain EM simulation problems for non-point sources.
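Steps (4) and (5) amount to ordinary FFT-domain convolution of the recovered impulse response with the transmitter current waveform; a minimal sketch with placeholder waveforms (not actual TEM system responses) is:

```python
# Sketch of steps (4)-(5) above: once the system's impulse response is known,
# the response to the actual transmitter current waveform is obtained by
# multiplication in the frequency domain and an inverse FFT, instead of a
# time-domain convolution. Waveforms below are illustrative placeholders.
import numpy as np

dt, n = 1e-5, 4096
t = np.arange(n) * dt
impulse_response = np.exp(-t / 2e-3)           # placeholder impulse response h(t)
source_current = (t < 1e-3).astype(float)      # placeholder transmitter waveform i(t)

H = np.fft.rfft(impulse_response)
S = np.fft.rfft(source_current)
full_response = np.fft.irfft(H * S, n) * dt    # circular convolution h * i, scaled by dt

print("peak response:", full_response.max())
```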
Shunt-Enhanced, Lead-Driven Bifurcation of Epilayer GaAs based EEC Sensor Responsivity
NASA Astrophysics Data System (ADS)
Solin, Stuart; Werner, Fletcher
2015-03-01
The results reported here examine the geometric optimization of room-temperature EEC sensor responsivity to applied bias by exploring contact geometry and location. The EEC sensor structure resembles that of a MESFET, but the measurement technique and operation distinguish the EEC sensor significantly; the EEC sensor employs a four-point resistance measurement as opposed to a two-point source-drain measurement and is operated under both forward and reverse bias. Under direct forward bias, the sensor distinguishes itself from a traditional FET by allowing current to be injected from the gate, referred to as a shunt, into the active layer. We show that the observed bifurcation in EEC sensor response to direct reverse bias depends critically on measurement lead location. A dramatic enhancement in responsivity is achieved via a modification of the shunt geometry. A maximum percent change of 130,856% of the four-point resistance was achieved under a direct reverse bias of -1V using an enhanced shunt design, a 325-fold increase over the conventional EEC square shunt design. This result was accompanied by an observed bifurcation in sensor response, driven by a rotation of the four-point measurement leads. S. A. S. is a co-founder of and has a financial interest in PixelEXX, a start-up company whose mission is to market imaging arrays.
NASA Astrophysics Data System (ADS)
Royston, Thomas J.; Yazicioglu, Yigit; Loth, Francis
2003-02-01
The response at the surface of an isotropic viscoelastic medium to buried fundamental acoustic sources is studied theoretically, computationally and experimentally. Finite and infinitesimal monopole and dipole sources within the low audible frequency range (40-400 Hz) are considered. Analytical and numerical integral solutions that account for compression, shear and surface wave response to the buried sources are formulated and compared with numerical finite element simulations and experimental studies on finite dimension phantom models. It is found that at low audible frequencies, compression and shear wave propagation from point sources can both be significant, with shear wave effects becoming less significant as frequency increases. Additionally, it is shown that simple closed-form analytical approximations based on an infinite medium model agree well with numerically obtained ``exact'' half-space solutions for the frequency range and material of interest in this study. The focus here is on developing a better understanding of how biological soft tissue affects the transmission of vibro-acoustic energy from biological acoustic sources below the skin surface, whose typical spectral content is in the low audible frequency range. Examples include sound radiated from pulmonary, gastro-intestinal and cardiovascular system functions, such as breath sounds, bowel sounds and vascular bruits, respectively.
MCNP-REN - A Monte Carlo Tool for Neutron Detector Design Without Using the Point Model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Abhold, M.E.; Baker, M.C.
1999-07-25
The development of neutron detectors makes extensive use of the predictions of detector response through the use of Monte Carlo techniques in conjunction with the point reactor model. Unfortunately, the point reactor model fails to accurately predict detector response in common applications. For this reason, the general Monte Carlo N-Particle code (MCNP) was modified to simulate the pulse streams that would be generated by a neutron detector and normally analyzed by a shift register. This modified code, MCNP - Random Exponentially Distributed Neutron Source (MCNP-REN), along with the Time Analysis Program (TAP) predict neutron detector response without using the point reactor model, making it unnecessary for the user to decide whether or not the assumptions of the point model are met for their application. MCNP-REN is capable of simulating standard neutron coincidence counting as well as neutron multiplicity counting. Measurements of MOX fresh fuel made using the Underwater Coincidence Counter (UWCC) as well as measurements of HEU reactor fuel using the active neutron Research Reactor Fuel Counter (RRFC) are compared with calculations. The method used in MCNP-REN is demonstrated to be fundamentally sound and shown to eliminate the need to use the point model for detector performance predictions.
The Influences of Lamination Angles on the Interior Noise Levels of an Aircraft
NASA Technical Reports Server (NTRS)
Fernholz, Christian M.; Robinson, Jay H.
1996-01-01
The feasibility of reducing the interior noise levels of an aircraft passenger cabin through optimization of the composite lay up of the fuselage is investigated. MSC/NASTRAN, a commercially available finite element code, is used to perform the dynamic analysis and subsequent optimization of the fuselage. The numerical calculation of sensitivity of acoustic pressure to lamination angle is verified using a simple thin, cylindrical shell with point force excitations as noise sources. The thin shell used represents a geometry similar to the fuselage and analytic solutions are available for the cylindrical thin shell equations of motion. Optimization of lamination angle for the reduction of interior noise is performed using a finite element model of an actual aircraft fuselage. The aircraft modeled for this study is the Beech Starship. Point forces simulate the structure borne noise produced by the engines and are applied to the fuselage at the wing mounting locations. These forces are the noise source for the optimization problem. The acoustic pressure response is reduced at a number of points in the fuselage and over a number of frequencies. The objective function is minimized with the constraint that it be larger than the maximum sound pressure level at the response points in the passenger cabin for all excitation frequencies in the range of interest. Results from the study of the fuselage model indicate that a reduction in interior noise levels is possible over a finite frequency range through optimal configuration of the lamination angles in the fuselage. Noise reductions of roughly 4 dB were attained. For frequencies outside the optimization range, the acoustic pressure response may increase after optimization. The effects of changing lamination angle on the overall structural integrity of the airframe are not considered in this study.
Exploring the Variability of the Fermi LAT Blazar Population
NASA Astrophysics Data System (ADS)
Macomb, Daryl J.; Shrader, C. R.
2014-01-01
The flux variability of the approximately 2000 point sources cataloged by the Fermi Gamma-Ray Space Telescope provides important clues to population characteristics. This is particularly true of the more than 1100 sources that are likely AGN. By characterizing the intrinsic flux variability and distinguishing this variability from flaring behavior, we can better address questions of flare amplitudes, durations, recurrence times, and temporal profiles. A better understanding of the responsible physical environments, such as the scale and location of jet structures responsible for the high-energy emission, may emerge from such studies. Assessing these characteristics as a function of blazar sub-class is a further goal in order to address questions about the fundamentals of blazar AGN physics. Here we report on progress made in categorizing blazar flare behavior, and correlate these behaviors with blazar sub-type and other source parameters.
NASA Astrophysics Data System (ADS)
Yu, Xiaojun; Liu, Xinyu; Chen, Si; Wang, Xianghong; Liu, Linbo
2016-03-01
High-resolution optical coherence tomography (OCT) is of critical importance to disease diagnosis because it is capable of providing detailed microstructural information about biological tissues. However, a compromise usually has to be made between its spatial resolutions and sensitivity due to the suboptimal spectral response of the system components, such as the linear camera, the dispersion grating, and the focusing lenses. In this study, we demonstrate an OCT system that achieves both high spatial resolutions and enhanced sensitivity through utilizing a spectrally encoded source. The system achieves a lateral resolution of 3.1 μm and an axial resolution of 2.3 μm in air; when a simple dispersive prism is placed in the infinity space of the sample arm optics, the illumination beam on the sample is transformed into a line source with a visual angle of 10.3 mrad. Such an extended source technique allows a ~4 times larger maximum permissible exposure (MPE) than its point source counterpart, which thus improves the system sensitivity by ~6 dB. In addition, the dispersive prism can be conveniently switched to a reflector. Such flexibility helps increase the penetration depth of the system without increasing the complexity of the current point source devices. We conducted experiments to characterize the system's imaging capability using the human fingertip in vivo and the swine eye optic nerve disc ex vivo. The higher penetration depth of such a system over the conventional point source OCT system is also demonstrated in these two tissues.
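The quoted sensitivity gain follows directly from the larger permissible illumination power: a roughly fourfold increase in MPE corresponds to

```latex
10 \log_{10}(4) \approx 6\ \mathrm{dB}.
```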
Research on the measurement of the ultraviolet irradiance in the xenon lamp aging test chamber
NASA Astrophysics Data System (ADS)
Ji, Muyao; Li, Tiecheng; Lin, Fangsheng; Yin, Dejin; Cheng, Weihai; Huang, Biyong; Lai, Lei; Xia, Ming
2018-01-01
This paper briefly introduces methods for calibrating the irradiance in the xenon lamp aging test chamber, focusing mainly on irradiance in the ultraviolet region. Three different detectors, whose response wavelength ranges are respectively UVA (320-400 nm), UVB (275-330 nm) and UVA+B (280-400 nm), are used in the experiment. By comparing the measurement results of the different detectors under the same xenon lamp source, we discuss the differences between UVA, UVB and UVA+B on the basis of the spectrum of the xenon lamp and the response curves of the detectors. We also point out possible error sources when using these detectors to calibrate the chamber.
The Galactic Center: A Petaelectronvolt Cosmic-ray Acceleration Factory
DOE Office of Scientific and Technical Information (OSTI.GOV)
Guo, Yi-Qing; Tian, Zhen; Wang, Zhen
2017-02-20
The multiteraelectronvolt γ-rays from the galactic center (GC) have a cutoff at tens of teraelectronvolts, whereas the diffuse emission has no such cutoff, which is regarded as an indication of petaelectronvolt proton acceleration by the HESS experiment. It is important to understand the inconsistency and study the possibility that petaelectronvolt cosmic-ray acceleration could account for the apparently contradictory point and diffuse γ-ray spectra. In this work, we propose that the cosmic rays are accelerated up to greater than petaelectronvolts in the GC. The interaction between cosmic rays and molecular clouds is responsible for the multiteraelectronvolt γ-ray emissions from both the point and diffuse sources today. Enhanced by the small volume filling factor (VFF) of the clumpy structure, the absorption of the γ-rays leads to a sharp cutoff spectrum at tens of teraelectronvolts produced in the GC. Away from the GC, the VFF grows, and the absorption enhancement becomes negligible. As a result, the spectra of γ-ray emissions for both point and diffuse sources can be successfully reproduced under such a self-consistent picture. In addition, a “surviving tail” at ∼100 TeV is expected from the point source, which can be observed by future projects CTA and LHAASO. Neutrinos are simultaneously produced during proton-proton (PP) collision. With 5–10 years of observations, the KM3Net experiment will be able to detect the petaelectronvolt source according to our calculation.
Hanjabam, Mandakini Devi; Kannaiyan, Sathish Kumar; Kamei, Gaihiamngam; Jakhar, Jitender Kumar; Chouksey, Mithlesh Kumar; Gudipati, Venkateshwarlu
2015-02-01
Physical properties of gelatin extracted from Unicorn leatherjacket (Aluterus monoceros) skin, which is generated as waste by fish processing industries, were optimised using Response Surface Methodology (RSM). A Box-Behnken design was used to study the combined effects of three independent variables, namely phosphoric acid (H3PO4) concentration (0.15-0.25 M), extraction temperature (40-50 °C) and extraction time (4-12 h), on responses such as yield, gel strength and melting point of gelatin. The optimum conditions derived by RSM for the yield (10.58%) were 0.2 M H3PO4, 9.01 h of extraction time and a hot-water extraction temperature of 45.83 °C. The maximum achieved gel strength and melting point were 138.54 g and 22.61 °C, respectively. Extraction time was found to be the most influential variable, with a positive coefficient on yield and negative coefficients on gel strength and melting point. The results indicated that Unicorn leatherjacket skins can be a source of gelatin having mild gel strength and melting point.
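The Box-Behnken design is fitted with the usual second-order response-surface polynomial in the three coded variables (acid concentration, extraction temperature, extraction time), from which the optimum conditions are located:

```latex
y \;=\; \beta_0 + \sum_{i=1}^{3} \beta_i x_i + \sum_{i=1}^{3} \beta_{ii} x_i^2 + \sum_{i<j} \beta_{ij} x_i x_j + \varepsilon
```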
NASA Technical Reports Server (NTRS)
Fishman, Jack; Gregory, Gerald L.; Sachse, Glen W.; Beck, Sherwin M.; Hill, Gerald F.
1987-01-01
A set of 14 pairs of vertical profiles of ozone and carbon monoxide, obtained with fast-response instrumentation, is presented. Most of these profiles, which were measured in the remote troposphere, also have supporting fast-response dew-point temperature profiles. The data suggest that the continental boundary layer is a source of tropospheric ozone, even in October and November, when photochemical activity should be rather small. In general, the small-scale vertical variations of CO and O3 are in phase. At low latitudes this relationship defines levels in the atmosphere where midlatitude air is being transported to lower latitudes, since lower dew-point temperatures accompany these higher CO and O3 concentrations. A set of profiles suggestive of interhemispheric transport is also presented. Independent meteorological analyses support these interpretations.
Strategies for lidar characterization of particulates from point and area sources
NASA Astrophysics Data System (ADS)
Wojcik, Michael D.; Moore, Kori D.; Martin, Randal S.; Hatfield, Jerry
2010-10-01
Use of ground-based remote sensing technologies such as scanning lidar systems (light detection and ranging) has gained traction in characterizing ambient aerosols due to some key advantages such as a wide area of regard (10 km²), fast response time, high spatial resolution (<10 m) and high sensitivity. Energy Dynamics Laboratory and Utah State University, in conjunction with the USDA-ARS, have developed a three-wavelength scanning lidar system called Aglite that has been successfully deployed to characterize particle motion, concentration, and size distribution at both point and diffuse area sources in agricultural and industrial settings. A suite of mass-based and size distribution point sensors is used to locally calibrate the lidar. Generating meaningful particle size distribution, mass concentration, and emission rate results based on lidar data is dependent on strategic onsite deployment of these point sensors together with successful local meteorological measurements. Deployment strategies learned from field use of this entire measurement system over five years include the characterization of local meteorology and its predictability prior to deployment, the placement of point sensors to prevent contamination and overloading, the positioning of the lidar and beam plane to avoid hard target interferences, and the usefulness of photographic and written observational data.
The local density of optical states of a metasurface
NASA Astrophysics Data System (ADS)
Lunnemann, Per; Koenderink, A. Femius
2016-02-01
While metamaterials are often desirable for near-field functions, such as perfect lensing or cloaking, they are often quantified by their response to plane waves from the far field. Here, we present a theoretical analysis of the local density of states near lattices of discrete magnetic scatterers, i.e., the response to near-field excitation by a point source. Based on a point-dipole theory using Ewald summation and an array scanning method, we can swiftly and semi-analytically evaluate the local density of states (LDOS) for magnetoelectric point sources in front of an infinite two-dimensional (2D) lattice composed of arbitrary magnetoelectric dipole scatterers. The method takes into account radiation damping as well as all retarded electrodynamic interactions in a self-consistent manner. We show that a lattice of magnetic scatterers evidences characteristic Drexhage oscillations. However, the oscillations are phase shifted relative to those of an electrically scattering lattice, consistent with the difference expected for reflection off homogeneous magnetic and electric mirrors, respectively. Furthermore, we identify in which source-surface separation regimes the metasurface may be treated as a homogeneous interface, and in which homogenization fails. A strong frequency and in-plane position dependence of the LDOS close to the lattice reveals coupling to guided modes supported by the lattice.
Proteome analysis of yeast response to various nutrient limitations
Kolkman, Annemieke; Daran-Lapujade, Pascale; Fullaondo, Asier; Olsthoorn, Maurien M A; Pronk, Jack T; Slijper, Monique; Heck, Albert J R
2006-01-01
We compared the response of Saccharomyces cerevisiae to carbon (glucose) and nitrogen (ammonia) limitation in chemostat cultivation at the proteome level. Protein levels were differentially quantified using unlabeled and 15N metabolically labeled yeast cultures. A total of 928 proteins covering a wide range of isoelectric points, molecular weights and subcellular localizations were identified. Stringent statistical analysis identified 51 proteins upregulated in response to glucose limitation and 51 upregulated in response to ammonia limitation. Under glucose limitation, typical glucose-repressed genes encoding proteins involved in alternative carbon source utilization, fatty acid β-oxidation and oxidative phosphorylation displayed an increased protein level. Proteins upregulated in response to nitrogen limitation were mostly involved in scavenging of alternative nitrogen sources and protein degradation. Comparison of transcript and protein levels clearly showed that upregulation in response to glucose limitation was mainly transcriptionally controlled, whereas upregulation in response to nitrogen limitation was essentially controlled at the post-transcriptional level by increased translational efficiency and/or decreased protein degradation. These observations underline the need for multilevel analysis in yeast systems biology. PMID:16738570
NASA Astrophysics Data System (ADS)
Sabra, K.
2006-12-01
The random nature of noise and scattered fields tends to suggest limited utility. Indeed, seismic or acoustic fields from random sources or scatterers are often considered to be incoherent, but there is some coherence between two sensors that receive signals from the same individual source or scatterer. An estimate of the Green's function (or impulse response) between two points can be obtained from the cross-correlation of random wavefields recorded at these two points. Recent theoretical and experimental studies in ultrasonics, underwater acoustics, structural monitoring and seismology have investigated this technique in various environments and frequency ranges. These results provide a means for passive imaging using only the random wavefields, without the use of active sources. The coherent wavefronts emerge from a correlation process that accumulates contributions over time from random sources whose propagation paths pass through both receivers. Results will be presented from experiments using ambient noise cross-correlations for the following applications: 1) passive surface-wave tomography from ocean microseisms and 2) structural health monitoring of marine and airborne structures embedded in turbulent flow.
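A minimal sketch of the cross-correlation step described above, using synthetic noise shared by two sensors; the sampling rate, segment lengths and the 0.5 s inter-sensor delay are illustrative assumptions, not experimental parameters from this work. Averaging the correlation over many segments lets the coherent arrival (the common-source delay) emerge above the incoherent background.

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 100.0                     # sampling rate (Hz)
nseg, seglen = 200, 4096       # number of noise segments and samples per segment

# Synthetic "common source" noise plus local incoherent noise at each sensor.
# The 0.5 s delay between sensors stands in for a propagation path (assumption).
delay = int(0.5 * fs)
acc = np.zeros(2 * seglen - 1)
for _ in range(nseg):
    src = rng.standard_normal(seglen + delay)
    a = src[:seglen] + 0.5 * rng.standard_normal(seglen)
    b = src[delay:delay + seglen] + 0.5 * rng.standard_normal(seglen)
    # Accumulate cross-correlations over many segments; coherent arrivals
    # (here, the common-source delay) build up above the incoherent background.
    acc += np.correlate(a, b, mode="full")

lags = (np.arange(len(acc)) - (seglen - 1)) / fs
print("lag of correlation peak: %.2f s" % lags[np.argmax(acc)])
```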
Land use change, and the implementation of best management practices to remedy the adverse effects of land use change, alter hydrologic patterns, contaminant loading and water quality in freshwater ecosystems. These changes are not constant over time, but vary in response to di...
Exact solutions for sound radiation from a moving monopole above an impedance plane.
Ochmann, Martin
2013-04-01
The acoustic field of a monopole source moving with constant velocity at constant height above an infinite locally reacting plane can be expressed in analytical form by combining the Lorentz transformation with the method of superimposing complex or real point sources. For a plane with mass-like response, the solution in Lorentz space consists of a superposition of monopoles only and therefore does not differ in principle from the solution of the corresponding stationary boundary value problem. However, for a frequency-independent surface impedance, e.g., with purely absorbing behavior, the half-space Green's function comprises not only a line of monopoles but also dipoles. For certain field points on a special line g, this solution can be written explicitly using an exponential integral. For arbitrary field points, the method of stationary phase leads to an asymptotic solution for the reflection coefficient which agrees with prior results from the literature.
[A landscape ecological approach for urban non-point source pollution control].
Guo, Qinghai; Ma, Keming; Zhao, Jingzhu; Yang, Liu; Yin, Chengqing
2005-05-01
Urban non-point source pollution is a new problem that has emerged with rapid urbanization. The particular characteristics of urban land use and the increase in impervious surface area make urban non-point source pollution different from agricultural non-point source pollution and more difficult to control. Best Management Practices (BMPs) are the practices most commonly applied to control urban non-point source pollution, mainly using local remediation measures to control pollutants in surface runoff. Because of the close relationship between urban land-use patterns and non-point source pollution, it is rational to combine landscape ecological planning with local BMPs to control urban non-point source pollution. This requires, first, analyzing and evaluating the influence of landscape structure on water bodies, pollution sources and pollutant removal processes, in order to define the relationships between landscape spatial pattern and non-point source pollution and to identify the key polluted areas; and second, adjusting the existing landscape structure and/or adding new landscape elements to form a new landscape pattern, and integrating landscape planning and management by incorporating BMPs into planning to improve urban landscape heterogeneity and control urban non-point source pollution.
Site correction of stochastic simulation in southwestern Taiwan
NASA Astrophysics Data System (ADS)
Lun Huang, Cong; Wen, Kuo Liang; Huang, Jyun Yan
2014-05-01
Peak ground acceleration (PGA) of a disastrous earthquake is of concern in both civil engineering and seismology. At present, ground motion prediction equations are widely used by engineers to estimate PGA. However, the local site effect is another important factor in strong-motion prediction. For example, in 1985 Mexico City, 400 km from the epicenter, suffered massive damage due to seismic wave amplification by local alluvial layers (Anderson et al., 1986). Past studies have shown that the stochastic method performs well in simulating ground motion at rock sites (Beresnev and Atkinson, 1998a; Roumelioti and Beresnev, 2003). In this study, site correction was conducted using empirical transfer functions relative to the rock-site response from the stochastic point-source (Boore, 2005) and finite-fault (Boore, 2009) methods. The errors between the simulated and observed Fourier spectra and PGA were calculated, and the estimated PGA was further compared to the result calculated from a ground motion prediction equation. The earthquake data used in this study were recorded by the Taiwan Strong Motion Instrumentation Program (TSMIP) from 1991 to 2012; the study area is located in southwestern Taiwan. The empirical transfer function was generated by calculating the spectral ratio between the alluvial site and a rock site (Borcherdt, 1970). Owing to the lack of a reference rock-site station in this area, the rock-site ground motion was instead generated with a stochastic point-source model. Several target events were then chosen for stochastic point-source simulation of the half-space response, and the empirical transfer function for each station was multiplied by the simulated half-space response. Finally, we focused on two target events: the 1999 Chi-Chi earthquake (Mw=7.6) and the 2010 Jiashian earthquake (Mw=6.4). Because a large event may involve a complex rupture mechanism, the asperity and delay time of each sub-fault must be considered; both the stochastic point-source and the finite-fault models were therefore used to check the results of our correction.
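A minimal sketch of the site-correction step described above, assuming Fourier amplitude spectra at an alluvial station and simulated rock-site (half-space) spectra on a common frequency axis; the function names, the log-averaging over events and the synthetic spectra are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def empirical_transfer_function(soil_spec, rock_spec, eps=1e-12):
    """Spectral ratio between alluvial-site and rock-site Fourier amplitude
    spectra, log-averaged over several events."""
    ratios = soil_spec / (rock_spec + eps)           # shape: (n_events, n_freq)
    return np.exp(np.mean(np.log(ratios), axis=0))

def apply_site_correction(halfspace_spec, transfer_fn):
    """Multiply a simulated half-space (rock) spectrum by the empirical
    transfer function to approximate the soil-site response."""
    return halfspace_spec * transfer_fn

# Illustrative example with synthetic spectra (placeholders, not TSMIP data).
freqs = np.linspace(0.1, 20.0, 200)
rock = np.exp(-freqs / 10.0)[None, :].repeat(5, axis=0)
soil = rock * (1.0 + 2.0 * np.exp(-((freqs - 2.0) ** 2)))   # amplified near 2 Hz
etf = empirical_transfer_function(soil, rock)
corrected = apply_site_correction(rock[0], etf)
```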
Costas, Rodrigo; Franssen, Thomas
2018-01-01
In a recent Letter to the Editor, Teixeira da Silva and Dobránszki (2018; hereafter TSD) present a discussion of the issues regarding the h-index as an indicator for the evaluation of individual scholars, particularly in the current landscape of the proliferation of online sources that provide individual-level bibliometric indicators. From our point of view, the issues surrounding the h-index go far beyond the problems mentioned by TSD. In this letter we provide an overview of these issues, mostly by expanding TSD's original argument and discussing more conceptual and global issues related to the indicator, particularly in the outlook of a strong proliferation of online sources providing individual researcher indicators. Our discussion focuses on the h-index and the profusion of sources providing it, but we emphasize that many of our points are of a more general nature and would be equally relevant for other indicators that reach the same level of popularity as the h-index.
Transcriptional response of Pasteurella multocida to defined iron sources.
Paustian, Michael L; May, Barbara J; Cao, Dongwei; Boley, Daniel; Kapur, Vivek
2002-12-01
Pasteurella multocida was grown in iron-free chemically defined medium supplemented with hemoglobin, transferrin, ferritin, and ferric citrate as iron sources. Whole-genome DNA microarrays were used to monitor global gene expression over seven time points after the addition of the defined iron source to the medium. This resulted in a set of data containing over 338,000 gene expression observations. On average, 12% of P. multocida genes were differentially expressed under any single condition. A majority of these genes encoded P. multocida proteins that were involved in either transport and binding or were annotated as hypothetical proteins. Several trends are evident when the data from different iron sources are compared. In general, only two genes (ptsN and sapD) were expressed at elevated levels under all of the conditions tested. The results also show that genes with increased expression in the presence of hemoglobin did not respond to transferrin or ferritin as an iron source. Correspondingly, genes with increased expression in the transferrin and ferritin experiments were expressed at reduced levels when hemoglobin was supplied as the sole iron source. Finally, the data show that genes that were most responsive to the presence of ferric citrate did not follow a trend similar to that of the other iron sources, suggesting that different pathways respond to inorganic or organic sources of iron in P. multocida. Taken together, our results demonstrate that unique subsets of P. multocida genes are expressed in response to different iron sources and that many of these genes have yet to be functionally characterized.
3D Seismic Imaging using Marchenko Methods
NASA Astrophysics Data System (ADS)
Lomas, A.; Curtis, A.
2017-12-01
Marchenko methods are novel, data-driven techniques that allow seismic wavefields from sources and receivers on the Earth's surface to be redatumed to construct wavefields with sources in the subsurface - including complex multiply-reflected waves, and without the need for a complex reference model. In turn, this allows subsurface images to be constructed at any such subsurface redatuming points (image or virtual receiver points). Such images are then free of artefacts from multiply-scattered waves that usually contaminate migrated seismic images. Marchenko algorithms require as input the same information as standard migration methods: the full reflection response from sources and receivers at the Earth's surface, and an estimate of the first arriving wave between the chosen image point and the surface. The latter can be calculated using a smooth velocity model estimated using standard methods. The algorithm iteratively calculates a signal that focuses at the image point to create a virtual source at that point, and this can be used to retrieve the signal between the virtual source and the surface. A feature of these methods is that the retrieved signals are naturally decomposed into up- and down-going components. That is, we obtain the signal that initially propagated upwards from the virtual source and arrived at the surface, separated from the signal that initially propagated downwards. Figure (a) shows a 3D subsurface model with a variable density but a constant velocity (3000 m/s). Along the surface of this model (z=0), co-located sources and receivers are placed at 20-meter intervals in both the x and y directions. The redatumed signal in figure (b) has been calculated using Marchenko methods from a virtual source at (1200 m, 500 m, 400 m) to the surface. For comparison the true solution is given in figure (c), and shows a good match to figure (b). While these 2D redatuming and imaging methods are still in their infancy, having first been developed in 2012, we have extended them to 3D media and wavefields. We show that while the wavefield effects may be more complex in 3D, Marchenko methods are still valid, and 3D images that are free of multiple-related artefacts are a realistic possibility.
Maslov, Mikhail Y.; Edelman, Elazer R.; Pezone, Matthew J.; Wei, Abraham E.; Wakim, Matthew G.; Murray, Michael R.; Tsukada, Hisashi; Gerogiannis, Iraklis S.; Groothuis, Adam; Lovich, Mark A.
2014-01-01
Prior studies in small mammals have shown that local epicardial application of inotropic compounds drives myocardial contractility without systemic side effects. Myocardial capillary blood flow, however, may be more significant in larger species than in small animals. We hypothesized that bulk perfusion in capillary beds of the large mammalian heart enhances drug distribution after local release, but also clears more drug from the tissue target than in small animals. Epicardial (EC) drug releasing systems were used to apply epinephrine to the anterior surface of the left heart of swine in either point-sourced or distributed configurations. Following local application or intravenous (IV) infusion at the same dose rates, hemodynamic responses, epinephrine levels in the coronary sinus and systemic circulation, and drug deposition across the ventricular wall, around the circumference and down the axis, were measured. EC delivery via point-source release generated transmural epinephrine gradients directly beneath the site of application extending into the middle third of the myocardial thickness. Gradients in drug deposition were also observed down the length of the heart and around the circumference toward the lateral wall, but not the interventricular septum. These gradients extended further than might be predicted from simple diffusion. The circumferential distribution following local epinephrine delivery from a distributed source to the entire anterior wall drove drug toward the inferior wall, further than with point-source release, but again, not to the septum. This augmented drug distribution away from the release source, down the axis of the left ventricle, and selectively towards the left heart follows the direction of capillary perfusion away from the anterior descending and circumflex arteries, suggesting a role for the coronary circulation in determining local drug deposition and clearance. The dominant role of the coronary vasculature is further suggested by the elevated drug levels in the coronary sinus effluent. Indeed, plasma levels, hemodynamic responses, and myocardial deposition remote from the point of release were similar following local EC or IV delivery. Therefore, the coronary vasculature shapes the pharmacokinetics of local myocardial delivery of small catecholamine drugs in large animal models. Optimal design of epicardial drug delivery systems must consider the underlying bulk capillary perfusion currents within the tissue to deliver drug to tissue targets and may favor therapeutic molecules with better potential retention in myocardial tissue. PMID:25234821
NASA Astrophysics Data System (ADS)
Karl, S.; Neuberg, J.
2011-12-01
Volcanoes exhibit a variety of seismic signals. One specific type, the so-called long-period (LP) or low-frequency event, has proven to be crucial for understanding the internal dynamics of the volcanic system. LP events have been observed at many volcanoes around the world, and are thought to be associated with resonating fluid-filled conduits or fluid movements (Chouet, 1996; Neuberg et al., 2006). While the seismic wavefield is well established, the actual trigger mechanism of these events is still poorly understood. Neuberg et al. (2006) proposed a conceptual model for the trigger of LP events at Montserrat involving the brittle failure of magma in the glass transition in response to the upwards movement of magma. In an attempt to gain a better quantitative understanding of the driving forces of LPs, inversions for the physical source mechanisms have become increasingly common. Previous studies have assumed a point source for waveform inversion. Applying a point source model to synthetic seismograms representing an extended source process does not yield the real source mechanism; it can, however, still lead to apparent moment tensor elements which can then be compared to previous results in the literature. Therefore, this study follows the concepts proposed by Neuberg et al. (2006), modelling the extended LP source as an octagonal arrangement of double couples approximating a circular ringfault bounding the circumference of the volcanic conduit. Synthetic seismograms were inverted for the physical source mechanisms of LPs using the moment tensor inversion code TDMTISO_INVC by Dreger (2003). Here, we will present the effects of changing the source parameters on the apparent moment tensor elements. First results show that, due to negative interference, the amplitude of the seismic signals of a ringfault structure is greatly reduced when compared to a single double-couple source. Furthermore, the best inversion results yield a solution comprised of positive isotropic and compensated linear vector dipole components. Thus, the physical source mechanisms of volcano seismic signals may be misinterpreted as opening shear or tensile cracks when a point source is wrongly assumed. In order to approach the real physical sources with our models, inversions based on higher-order tensors might have to be considered in the future. An inversion technique in which the point source is replaced by a so-called moment tensor density would allow inversions of volcano seismic signals for sources that can then be temporally and spatially extended.
Inferring Models of Bacterial Dynamics toward Point Sources
Jashnsaz, Hossein; Nguyen, Tyler; Petrache, Horia I.; Pressé, Steve
2015-01-01
Experiments have shown that bacteria can be sensitive to small variations in chemoattractant (CA) concentrations. Motivated by these findings, our focus here is on a regime rarely studied in experiments: bacteria tracking point CA sources (such as food patches or even prey). In tracking point sources, the CA detected by bacteria may show very large spatiotemporal fluctuations which vary with distance from the source. We present a general statistical model to describe how bacteria locate point sources of food on the basis of stochastic event detection, rather than CA gradient information. We show how all model parameters can be directly inferred from single cell tracking data even in the limit of high detection noise. Once parameterized, our model recapitulates bacterial behavior around point sources such as the “volcano effect”. In addition, while the search by bacteria for point sources such as prey may appear random, our model identifies key statistical signatures of a targeted search for a point source given any arbitrary source configuration. PMID:26466373
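The abstract above does not give the model's equations, so the following is only a toy sketch (all parameters, the 1/r detection-rate profile and the tumble-modulation rule are assumptions) of the two ingredients it combines: Poisson-distributed detection events whose rate depends on distance from a point source, and a run-and-tumble strategy that modulates the tumble rate from a temporal comparison of event counts rather than from a measured gradient.

```python
import numpy as np

rng = np.random.default_rng(1)

def detection_rate(pos, source=np.array([0.0, 0.0]), strength=50.0, floor=0.5):
    """Poisson detection rate of chemoattractant molecules; assumed to fall off
    with distance from the point source (placeholder 1/r profile)."""
    r = np.linalg.norm(pos - source) + 1e-6
    return floor + strength / r

def run_and_tumble(steps=5000, dt=0.01, speed=2.0, base_tumble=1.0):
    pos = np.array([20.0, 0.0])
    heading = rng.uniform(0, 2 * np.pi)
    prev_counts = 0
    traj = [pos.copy()]
    for _ in range(steps):
        counts = rng.poisson(detection_rate(pos) * dt)   # stochastic event detection
        # Tumble less often when detections are increasing (temporal comparison),
        # giving a biased random walk; whether the cell reaches the source
        # depends on the assumed parameters.
        tumble_rate = base_tumble * (0.2 if counts > prev_counts else 1.0)
        if rng.random() < tumble_rate * dt:
            heading = rng.uniform(0, 2 * np.pi)
        pos = pos + speed * dt * np.array([np.cos(heading), np.sin(heading)])
        prev_counts = counts
        traj.append(pos.copy())
    return np.array(traj)

traj = run_and_tumble()
print("start distance: %.1f  end distance: %.1f" % (20.0, np.linalg.norm(traj[-1])))
```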
Moranda, Arianna
2017-01-01
A procedure for assessing harbour pollution by heavy metals and PAH and the possible sources of contamination is proposed. The procedure is based on a ratio-matching method applied to the results of principal component analysis (PCA), and it allows discrimination between point and nonpoint sources. The approach can be adopted when many sources of pollution can contribute in a very narrow coastal ecosystem, both internal and outside but close to the harbour, and was used to identify the possible point sources of contamination in a Mediterranean Harbour (Port of Vado, Savona, Italy). 235 sediment samples were collected in 81 sampling points during four monitoring campaigns and 28 chemicals were searched for within the collected samples. PCA of total samples allowed the assessment of 8 main possible point sources, while the refining ratio-matching identified 1 sampling point as a possible PAH source, 2 sampling points as Cd point sources, and 3 sampling points as C > 12 point sources. By a map analysis it was possible to assess two internal sources of pollution directly related to terminals activity. The study is the prosecution of a previous work aimed at assessing Savona-Vado Harbour pollution levels and suggested strategies to regulate the harbour activities. PMID:29270328
Paladino, Ombretta; Moranda, Arianna; Seyedsalehi, Mahdi
2017-01-01
A procedure for assessing harbour pollution by heavy metals and PAH and the possible sources of contamination is proposed. The procedure is based on a ratio-matching method applied to the results of principal component analysis (PCA), and it allows discrimination between point and nonpoint sources. The approach can be adopted when many sources of pollution can contribute in a very narrow coastal ecosystem, both internal and outside but close to the harbour, and was used to identify the possible point sources of contamination in a Mediterranean Harbour (Port of Vado, Savona, Italy). 235 sediment samples were collected in 81 sampling points during four monitoring campaigns and 28 chemicals were searched for within the collected samples. PCA of total samples allowed the assessment of 8 main possible point sources, while the refining ratio-matching identified 1 sampling point as a possible PAH source, 2 sampling points as Cd point sources, and 3 sampling points as C > 12 point sources. By a map analysis it was possible to assess two internal sources of pollution directly related to terminals activity. The study is the prosecution of a previous work aimed at assessing Savona-Vado Harbour pollution levels and suggested strategies to regulate the harbour activities.
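The exact ratio-matching criterion is not spelled out in the abstract, so the following is only a schematic sketch under stated assumptions: PCA of a standardized samples-by-chemicals matrix via SVD, followed by a simple check of whether concentration ratios of selected chemical pairs in a sample agree with those of a candidate source profile within a tolerance. The pair list, tolerance and synthetic data are all illustrative.

```python
import numpy as np

def pca_scores_loadings(X):
    """PCA via SVD of the standardized concentration matrix (samples x chemicals)."""
    Z = (X - X.mean(axis=0)) / X.std(axis=0)
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)
    scores = U * s            # sample scores
    loadings = Vt.T           # chemical loadings per component
    return scores, loadings

def ratio_match(sample, reference, pairs, tol=0.5):
    """Compare concentration ratios of selected chemical pairs in a sample
    against a candidate source profile; the sample 'matches' if all ratios
    agree within a relative tolerance (pair list and tolerance are assumed)."""
    ok = []
    for i, j in pairs:
        r_s = sample[i] / sample[j]
        r_ref = reference[i] / reference[j]
        ok.append(abs(r_s - r_ref) / abs(r_ref) < tol)
    return all(ok)

# Synthetic example: 81 sampling points x 28 chemicals (placeholder values).
rng = np.random.default_rng(0)
X = rng.lognormal(mean=0.0, sigma=1.0, size=(81, 28))
scores, loadings = pca_scores_loadings(X)
candidate = X[10]                          # hypothetical candidate point source
print(ratio_match(X[11], candidate, pairs=[(0, 1), (2, 3)]))
```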
Non-point source pollution is a diffuse source that is difficult to measure and is highly variable due to different rain patterns and other climatic conditions. In many areas, however, non-point source pollution is the greatest source of water quality degradation. Presently, stat...
Zhang, Xiao-Jian; Chen, Chao; Lin, Peng-Fei; Hou, Ai-Xin; Niu, Zhang-Bin; Wang, Jun
2011-01-01
China has suffered frequent source-water contamination accidents in the past decade, which have resulted in severe consequences for the water supply of millions of residents. The origins of typical cases of contamination are discussed in this paper, as well as the emergency responses to these accidents. In general, excessive pursuit of rapid industrialization and the unreasonable siting of factories are responsible for the increasing frequency of accidental pollution events. Moreover, insufficient attention to environmental protection and rudimentary emergency response capability have exacerbated the consequences of such accidents. These environmental accidents triggered or accelerated the promulgation of stricter environmental protection policy and the shift of the economic development mode in a more sustainable direction, which should be regarded as a turning point for environmental protection in China. To guarantee water security, China is trying to establish a rapid and effective emergency response framework, build up the capability for early accident detection, and develop efficient technologies to remove contaminants from water.
Focusing and directional beaming effects of airborne sound through a planar lens with zigzag slits
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tang, Kun; Qiu, Chunyin, E-mail: cyqiu@whu.edu.cn; Lu, Jiuyang
2015-01-14
Based on the Huygens-Fresnel principle, we design a planar lens to efficiently realize the interconversion between a point-like sound source and a Gaussian beam in ambient air. The lens is constructed from a planar plate perforated elaborately with a nonuniform array of zigzag slits, where the slit exits act as subwavelength-sized secondary sources carrying the desired sound responses. Experiments operated in the audible regime agree well with the theoretical predictions. This compact device could be useful in daily-life applications, such as for medical and detection purposes.
Pfiesteria-like toxic blooms have been implicated as the causative agent responsible for numerous outbreaks of fish lesions and fish kills in the Mid-Atlantic and southeastern U.S. An increase in frequency, intensity, and severity of toxic blooms in recent years is though...
Nutrient budgets of two watersheds on the Fernow Experimental Forest
M. B. Adams; J. N. Kochenderfer; T. R. Angradi; P. J. Edwards
1995-01-01
Acidic deposition is an important non-point source pollutant in the Central Appalachian region that is responsible for elevated nitrogen (N) and sulfur (S) inputs to forest ecosystems. Nitrogen and calcium (Ca) budgets and plant tissue concentrations were compared for two watersheds, one that received three years of an artificial acidification treatment and an adjacent...
A Smart Power Electronic Multiconverter for the Residential Sector.
Guerrero-Martinez, Miguel Angel; Milanes-Montero, Maria Isabel; Barrero-Gonzalez, Fermin; Miñambres-Marcos, Victor Manuel; Romero-Cadaval, Enrique; Gonzalez-Romera, Eva
2017-05-26
The future of the grid includes distributed generation and smart grid technologies. Demand Side Management (DSM) systems will also be essential to achieve a high level of reliability and robustness in power systems. To do that, expanding the Advanced Metering Infrastructure (AMI) and Energy Management Systems (EMS) is necessary. The trend is towards the creation of energy resource hubs, such as the smart community concept. This paper presents a smart multiconverter system for the residential/housing sector with a Hybrid Energy Storage System (HESS) consisting of a supercapacitor and a battery, and with local photovoltaic (PV) energy source integration. The device works as a distributed energy unit located in each house of the community, receiving active power set-points provided by a smart community EMS. This central EMS is responsible for managing the active energy flows between the electricity grid, renewable energy sources, storage equipment and loads existing in the community. The proposed multiconverter is responsible for complying with the reference active power set-points with proper power quality; guaranteeing that the local PV modules operate with a Maximum Power Point Tracking (MPPT) algorithm; and extending the lifetime of the battery thanks to a cooperative operation of the HESS. A simulation model has been developed in order to show the detailed operation of the system. Finally, a prototype of the multiconverter platform has been implemented and some experimental tests have been carried out to validate it.
A Smart Power Electronic Multiconverter for the Residential Sector
Guerrero-Martinez, Miguel Angel; Milanes-Montero, Maria Isabel; Barrero-Gonzalez, Fermin; Miñambres-Marcos, Victor Manuel; Romero-Cadaval, Enrique; Gonzalez-Romera, Eva
2017-01-01
The future of the grid includes distributed generation and smart grid technologies. Demand Side Management (DSM) systems will also be essential to achieve a high level of reliability and robustness in power systems. To do that, expanding the Advanced Metering Infrastructure (AMI) and Energy Management Systems (EMS) is necessary. The trend is towards the creation of energy resource hubs, such as the smart community concept. This paper presents a smart multiconverter system for the residential/housing sector with a Hybrid Energy Storage System (HESS) consisting of a supercapacitor and a battery, and with local photovoltaic (PV) energy source integration. The device works as a distributed energy unit located in each house of the community, receiving active power set-points provided by a smart community EMS. This central EMS is responsible for managing the active energy flows between the electricity grid, renewable energy sources, storage equipment and loads existing in the community. The proposed multiconverter is responsible for complying with the reference active power set-points with proper power quality; guaranteeing that the local PV modules operate with a Maximum Power Point Tracking (MPPT) algorithm; and extending the lifetime of the battery thanks to a cooperative operation of the HESS. A simulation model has been developed in order to show the detailed operation of the system. Finally, a prototype of the multiconverter platform has been implemented and some experimental tests have been carried out to validate it. PMID:28587131
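The abstracts above state that the PV modules run under an MPPT algorithm but do not say which one; the sketch below assumes a perturb-and-observe scheme (a common choice) purely for illustration, with a toy PV power curve in place of measured panel data.

```python
def perturb_and_observe(v, p, prev_v, prev_p, step=0.5):
    """One step of a perturb-and-observe MPPT update (an assumed algorithm;
    the papers do not say which MPPT method the converter implements)."""
    if p > prev_p:
        dv = step if v >= prev_v else -step   # power increased: keep direction
    else:
        dv = -step if v >= prev_v else step   # power dropped: reverse direction
    return v + dv

# Toy PV power curve with a maximum near 30 V (placeholder, not measured data).
def pv_power(v):
    return max(0.0, 240.0 - 0.3 * (v - 30.0) ** 2)

v_prev, p_prev = 20.0, pv_power(20.0)
v = 20.5
for _ in range(200):
    p = pv_power(v)
    v_next = perturb_and_observe(v, p, v_prev, p_prev)
    v_prev, p_prev, v = v, p, v_next
print("operating point settles near %.1f V" % v)
```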
1984-12-01
The sum of squares due to pure error is the total sum of squares at the center points minus the correction factor for the mean at the center points (SSpe = Y'Y − n1Ȳ², where n1 is the number of center points), and the sum of squares due to lack of fit is SSlof = SSres − SSpe. The pure-error sum of squares estimates σ², while the lack-of-fit sum of squares estimates σ² plus a bias term if the fitted model is inadequate. The response surface methodology analysis-of-variance table partitions the sums of squares as: Regression (d.f. n, SS = b'X'Y, MS = b'X'Y/n), Residual (d.f. m − n, SS = Y'Y − b'X'Y, MS = (Y'Y − b'X'Y)/(m − n)), Pure Error (d.f. n1 − 1, SS = Y'Y − n1Ȳ² at the center points, MS = SSpe/(n1 − 1)), and Lack of Fit.
An improved DPSM technique for modelling ultrasonic fields in cracked solids
NASA Astrophysics Data System (ADS)
Banerjee, Sourav; Kundu, Tribikram; Placko, Dominique
2007-04-01
In recent years the Distributed Point Source Method (DPSM) has been used for modelling various ultrasonic, electrostatic and electromagnetic field problems. In conventional DPSM, several point sources are placed near the transducer face, interfaces and anomaly boundaries, and the ultrasonic or electromagnetic field at any point is computed by superimposing the contributions of the different layers of strategically placed point sources. The conventional DPSM modelling technique is modified in this paper so that the contributions of the point sources in the shadow region can be removed from the calculations. For this purpose the conventional point sources that radiate in all directions are replaced by Controlled Space Radiation (CSR) sources. CSR sources can mitigate the shadow-region problem to some extent; complete removal of the shadow-region problem can be achieved by introducing artificial interfaces. Numerically synthesized fields obtained with the conventional DPSM technique, which gives no special consideration to the point sources in the shadow region, are compared with the proposed modified technique, which nullifies their contributions. One application of this research can be found in the improved modelling of real-time ultrasonic non-destructive evaluation experiments.
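A minimal sketch of the superposition step described above, assuming time-harmonic spherical-wave point sources; the geometry, source strengths and the shadow mask (which stands in for the shadow-region correction, whose actual rule would come from the interface geometry) are illustrative assumptions.

```python
import numpy as np

def dpsm_field(obs_points, src_points, src_strengths, k, shadow_mask=None):
    """Superpose spherical-wave contributions of distributed point sources at
    the observation points. Setting shadow_mask[m, n] = 0 nullifies source n
    for observation point m, mimicking the shadow-region correction."""
    # pairwise distances between observation points and point sources
    r = np.linalg.norm(obs_points[:, None, :] - src_points[None, :, :], axis=-1)
    contrib = src_strengths[None, :] * np.exp(1j * k * r) / r
    if shadow_mask is not None:
        contrib = contrib * shadow_mask
    return contrib.sum(axis=1)

# Illustrative use: three sources near a "transducer face", two observation
# points, with the middle source masked out for the second observation point.
src = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [2.0, 0.0, 0.0]])
obs = np.array([[0.5, 0.0, 5.0], [1.5, 0.0, 5.0]])
mask = np.ones((2, 3))
mask[1, 1] = 0.0
field = dpsm_field(obs, src, np.ones(3, dtype=complex), 2 * np.pi, shadow_mask=mask)
print(np.abs(field))
```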
Household water insecurity and its cultural dimensions: preliminary results from Newtok, Alaska.
Eichelberger, Laura
2017-06-21
Using a relational approach, I examine several cultural dimensions involved in household water access and use in Newtok, Alaska. I describe the patterns that emerge around domestic water access and use, as well as the subjective lived experiences of water insecurity including risk perceptions, and the daily work and hydro-social relationships involved in accessing water from various sources. I found that Newtok residents haul water in limited amounts from a multitude of sources, both treated and untreated, throughout the year. Household water access is tied to hydro-social relationships predicated on sharing and reciprocity, particularly when the primary treated water access point is unavailable. Older boys and young men are primarily responsible for hauling water, and this role appears to be important to male Yupik identity. Many interviewees described preferring to drink untreated water, a practice that appears related to cultural constructions of natural water sources as pure and self-purifying, as well as concerns about the safety of treated water. Concerns related to the health consequences of low water access appear to differ by gender and age, with women and elders expressing greater concern than men. These preliminary results point to the importance of understanding the cultural dimensions involved in household water access and use. I argue that institutional responses to water insecurity need to incorporate such cultural dimensions into solutions aimed at increasing household access to and use of water.
On the assessment of spatial resolution of PET systems with iterative image reconstruction
NASA Astrophysics Data System (ADS)
Gong, Kuang; Cherry, Simon R.; Qi, Jinyi
2016-03-01
Spatial resolution is an important metric for performance characterization in PET systems. Measuring spatial resolution is straightforward with a linear reconstruction algorithm, such as filtered backprojection, and can be performed by reconstructing a point source scan and calculating the full-width-at-half-maximum (FWHM) along the principal directions. With the widespread adoption of iterative reconstruction methods, it is desirable to quantify the spatial resolution using an iterative reconstruction algorithm. However, the task can be difficult because the reconstruction algorithms are nonlinear and the non-negativity constraint can artificially enhance the apparent spatial resolution if a point source image is reconstructed without any background. Thus, it was recommended that a background should be added to the point source data before reconstruction for resolution measurement. However, there has been no detailed study on the effect of the point source contrast on the measured spatial resolution. Here we use point source scans from a preclinical PET scanner to investigate the relationship between measured spatial resolution and the point source contrast. We also evaluate whether the reconstruction of an isolated point source is predictive of the ability of the system to resolve two adjacent point sources. Our results indicate that when the point source contrast is below a certain threshold, the measured FWHM remains stable. Once the contrast is above the threshold, the measured FWHM monotonically decreases with increasing point source contrast. In addition, the measured FWHM also monotonically decreases with iteration number for maximum likelihood estimate. Therefore, when measuring system resolution with an iterative reconstruction algorithm, we recommend using a low-contrast point source and a fixed number of iterations.
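A minimal sketch of the FWHM measurement itself, applied to a 1D profile through a reconstructed point source; the Gaussian test profile and pixel size are placeholders, and background handling here is only the simple minimum-subtraction needed for the illustration.

```python
import numpy as np

def fwhm(profile, pixel_size_mm):
    """Full-width-at-half-maximum of a 1D point-source profile, with linear
    interpolation between the samples on either side of the half-maximum."""
    profile = np.asarray(profile, dtype=float)
    profile = profile - profile.min()          # remove a uniform background level
    half = profile.max() / 2.0
    above = np.where(profile >= half)[0]
    left, right = above[0], above[-1]

    def crossing(i_in, i_out):
        # interpolate between an inside (>= half) and adjacent outside (< half) sample
        y_in, y_out = profile[i_in], profile[i_out]
        return i_in + (y_in - half) / (y_in - y_out) * (i_out - i_in)

    x_left = crossing(left, left - 1) if left > 0 else float(left)
    x_right = crossing(right, right + 1) if right < len(profile) - 1 else float(right)
    return (x_right - x_left) * pixel_size_mm

# Illustrative Gaussian profile with sigma = 2 pixels and 0.5 mm pixels:
x = np.arange(41)
g = np.exp(-0.5 * ((x - 20.0) / 2.0) ** 2)
print("FWHM = %.2f mm (expected ~%.2f mm)" % (fwhm(g, 0.5), 2.355 * 2.0 * 0.5))
```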
Photon absorption potential coefficient as a tool for materials engineering
NASA Astrophysics Data System (ADS)
Akande, Raphael Oluwole; Oyewande, Emmanuel Oluwole
2016-09-01
Different atoms achieve ionization at different energies; therefore, atoms are characterized in this study by their different responses to photon absorption. That means there exists a coefficient describing their potential for photon absorption from a photon source. In this study, we consider the manner in which molecular constituents (atoms) absorb photons from a photon source. We observe that there seems to be a common pattern of variation in photon absorption among the electrons of all atoms in the periodic table. We assume that the electrons closest to the nucleus (En) and the electrons closest to the outside of the atom (Eo) do not have as much potential for photon absorption as the electrons in the middle of the atom (Em). The explanation we give for this effect is that the En electrons are embedded within the nuclear influence and, similarly, the Eo electrons are embedded within the influence of energies outside the atom, so that both have a low potential for photon absorption. Unlike En and Eo, Em electrons are conditioned such that there is a quest for balance between being influenced by the nuclear force and by forces external to the atom; therefore, Em electrons have a higher potential for photon absorption than En and Eo electrons. The results of our derivations and analysis always produce a bell-shaped curve, instead of an increasing curve as for the ionization energies, for all elements in the periodic table. We obtained a large data set of PAPC values for each of the several materials considered. The point at which two or more PAPC values cross one another is termed a region of conflicting order of ionization, where all the atoms absorb an equal portion of the photon source at the same time. At this point, a greater fraction of the photon source is pumped into the material, which could lead to an explosive response from the material. In fact, an unimaginable and unreported phenomenon (in physics) could occur when two or more PAPCs cross and the material is able to absorb more than the photon source could provide at this point. These resulting effects might have immense materials engineering applications.
Ockenden, M C; Quinton, J N; Favaretto, N; Deasy, C; Surridge, B
2014-07-01
Surface water quality in the UK and much of Western Europe has improved in recent decades, in response to better point source controls and the regulation of fertilizer, manure and slurry use. However, diffuse sources of pollution, such as leaching or runoff of nutrients from agricultural fields, and micro-point sources including farmyards, manure heaps and septic tank sewerage systems, particularly systems without soil adsorption beds, are now hypothesised to contribute a significant proportion of the nutrients delivered to surface watercourses. Tackling such sources in an integrated manner is vital, if improvements in freshwater quality are to continue. In this research, we consider the combined effect of constructing small field wetlands and improving a septic tank system on stream water quality within an agricultural catchment in Cumbria, UK. Water quality in the ditch-wetland system was monitored by manual sampling at fortnightly intervals (April-October 2011 and February-October 2012), with the septic tank improvement taking place in February 2012. Reductions in nutrient concentrations were observed through the catchment, by up to 60% when considering total phosphorus (TP) entering and leaving a wetland with a long residence time. Average fluxes of TP, soluble reactive phosphorus (SRP) and ammonium-N (NH4-N) at the head of the ditch system in 2011 (before septic tank improvement) compared to 2012 (after septic tank improvement) were reduced by 28%, 9% and 37% respectively. However, TP concentration data continue to show a clear dilution with increasing flow, indicating that the system remained point source dominated even after the septic tank improvement.
Linearization of Positional Response Curve of a Fiber-optic Displacement Sensor
NASA Astrophysics Data System (ADS)
Babaev, O. G.; Matyunin, S. A.; Paranin, V. D.
2018-01-01
Currently, the creation of optical measuring instruments and sensors for measuring linear displacement is one of the most relevant problems in instrumentation. Fiber-optic contactless sensors based on the magneto-optical effect are of special interest: they are essentially contactless, non-electrical, and have a closed optical channel not subject to contamination. The main problem of this type of sensor is the non-linearity of its positional response curve, due to the hyperbolic nature of the variation of the magnetic field intensity induced by moving the magnetic source, mounted on the controlled object, relative to the sensing element. This paper discusses an algorithmic method of linearizing the positional response curve of fiber-optic displacement sensors in any selected range of the displacements to be measured. The method is divided into two stages: (1) definition of the calibration function, and (2) measurement and linearization of the positional response curve (including its temperature stabilization). The algorithm under consideration significantly reduces the number of points of the calibration function, which is essential for the calibration of the temperature dependence, by using points that deviate randomly from a uniformly spaced grid. Subsequent interpolation of the deviating points and piecewise linear-plane approximation of the calibration function reduce the microcontroller storage capacity required for the calibration function and the time required to process the measurement results. The paper also presents experimental results of testing real samples of fiber-optic displacement sensors.
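A minimal sketch of the calibration-and-inversion idea described above (the paper's two-stage algorithm, temperature stabilization and linear-plane approximation are not reproduced here): a measured calibration curve is inverted by piecewise-linear interpolation so that the reported displacement varies linearly with the true displacement, even though the raw response is hyperbolic. The response model and numbers are placeholders.

```python
import numpy as np

def build_calibration(displacements, sensor_outputs):
    """Store the measured calibration curve: sensor output vs. known displacement.
    The curve is assumed monotonic over the working range (an assumption here)."""
    order = np.argsort(sensor_outputs)
    return np.asarray(sensor_outputs)[order], np.asarray(displacements)[order]

def linearized_displacement(raw_output, calib):
    """Invert the calibration curve by piecewise-linear interpolation, so the
    reported value follows the displacement even though the raw response does not."""
    outputs, displacements = calib
    return np.interp(raw_output, outputs, displacements)

# Illustrative hyperbolic-like response (placeholder model, not sensor data):
true_x = np.linspace(1.0, 10.0, 12)           # calibration displacements (mm)
raw = 1.0 / true_x**2 + 0.01                  # field-intensity-like response
calib = build_calibration(true_x, raw)
print(linearized_displacement(1.0 / 4.0**2 + 0.01, calib))   # ~4.0 mm
```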
Managing commercial and light-industrial discharges to POTWs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fink, R.G.
1993-02-01
Discharging commercial and light-industrial wastewater to a publicly owned treatment works (POTW) is risky business. Pretreating wastewater using traditional methods may leave a wastestream's originator vulnerable to fines, civil and criminal punishment, cleanup costs, and cease-and-desist orders. EPA has tightened regulations applying to discharges from POTWs, which, in turn, are looking to industrial and commercial discharge sources to determine responsibility for toxic contaminants. Although EPA in the past focused on large point sources of contamination, the Agency has shifted its emphasis to smaller and more diverse nonpoint sources. One result is that POTWs no longer act as buffers for light-industrial and commercial wastewater dischargers.
Doré, Marie-Claire; Caza, Nicole; Gingras, Nathalie; Rouleau, Nancie
2007-11-01
Findings from the literature consistently revealed episodic memory deficits in adolescents with psychosis. However, the nature of the dysfunction remains unclear. Based on a cognitive neuropsychological approach, a theoretically driven paradigm was used to generate valid interpretations about the underlying memory processes impaired in these patients. A total of 16 inpatient adolescents with psychosis and 19 individually matched controls were assessed using an experimental task designed to measure memory for source and temporal context of studied words. Retrospective confidence judgements for source and temporal context responses were also assessed. On word recognition, patients had more difficulty than controls discriminating target words from neutral distractors. In addition, patients identified both source and temporal context features of recognised items less often than controls. Confidence judgements analyses revealed that the difference between the proportions of correct and incorrect responses made with high confidence was lower in patients than in controls. In addition, the proportion of high-confident responses that were errors was higher in patients compared to controls. These findings suggest impaired relational binding processes in adolescents with psychosis, resulting in a difficulty to create unified memory representations. Our findings on retrospective confidence data point to impaired monitoring of retrieved information that may also impair memory performance in these individuals.
Point source emission reference materials from the Emissions Inventory Improvement Program (EIIP). Provides point source guidance on planning, emissions estimation, data collection, inventory documentation and reporting, and quality assurance/quality contr
General eigenstates of Maxwell's equations in a two-constituent composite medium
NASA Astrophysics Data System (ADS)
Bergman, David J.; Farhi, Asaf
2016-11-01
Eigenstates of Maxwell's equations in the quasistatic regime were used recently to calculate the response of a Veselago lens [1] to the field produced by a time-dependent point electric charge [2, 3]. More recently, this approach was extended to calculate the non-quasistatic response of such a lens. This necessitated a calculation of the eigenstates of the full Maxwell equations in a flat slab structure where the electric permittivity ɛ1 of the slab differs from the electric permittivity ɛ2 of its surroundings while the magnetic permeability is equal to 1 everywhere [4]. These eigenstates were used to calculate the response of a Veselago lens to an oscillating point electric dipole source of electromagnetic (EM) waves. A result of these calculations was that, although images with subwavelength resolution are achievable, as first predicted by John Pendry [5], those images appear not at the points predicted by geometric optics. They appear, instead, at points which lie upon the slab surfaces. This is strongly connected to the fact that when ɛ1/ɛ2 = -1 a strong singularity occurs in Maxwell's equations: this value of ɛ1/ɛ2 is a mathematical accumulation point for the EM eigenvalues [6]. Unfortunately, many physicists are unaware of this crucial mathematical property of Maxwell's equations. In this article we describe how the non-quasistatic eigenstates of Maxwell's equations in a composite microstructure can be calculated for general two-constituent microstructures, where both ɛ and μ have different values in the two constituents.
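As background on why ɛ1/ɛ2 = -1 is special (a standard result of the quasistatic spectral theory, recalled here rather than taken from the abstract), the quasistatic eigenstates can be labelled by the spectral parameter

\[
s = \frac{1}{1 - \varepsilon_1/\varepsilon_2}, \qquad 0 \le s_n \le 1 ,
\]

and for a flat slab the surface-mode eigenvalues \(s_n\) accumulate at \(s = 1/2\), which corresponds exactly to \(\varepsilon_1/\varepsilon_2 = -1\); operating the structure near this ratio therefore excites an infinite family of closely spaced resonances.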
Response of two semiarid grasslands to a second fire application
Carleton S. White; Rosemary L. Pendleton; Burton K. Pendleton
2006-01-01
Prescribed fire was used in two semiarid grasslands to reduce shrub cover, promote grass production, and reduce erosional loss that represents a potential non-point source of sediment to degrade water quality. This study measured transported soil sediment, dynamics in soil surface microtopography, cover of the woody shrub, grass, and bare ground cover classes, and soil...
NASA Astrophysics Data System (ADS)
Zhu, Lei; Song, JinXi; Liu, WanQing
2017-12-01
Huaxian Section is the last hydrological and water quality monitoring section of the Weihe River Watershed. The Weihe River Watershed above Huaxian Section is therefore taken as the research object in this paper, and COD is chosen as the water quality parameter. According to the discharge characteristics of point source and non-point source pollution, a new method to estimate pollution loads, the characteristic section load (CSLD) method, is proposed, and the point source and non-point source pollution loads of the Weihe River Watershed above Huaxian Section are calculated for the rainy, normal and dry seasons of 2007. The results show that the monthly point source pollution loads discharge stably, whereas the monthly non-point source pollution loads vary greatly, and that the non-point source proportions of the total COD pollution load decrease in the normal, rainy and wet periods in turn.
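The abstract does not give the CSLD formulation itself; as general background (an assumption, not the paper's method), a section load over a period is commonly computed from paired concentration and discharge measurements as

\[
W = \sum_{i=1}^{n} C_i\, Q_i\, \Delta t_i ,
\]

where \(C_i\) is the COD concentration, \(Q_i\) the river discharge and \(\Delta t_i\) the time interval represented by sample \(i\); one common way to separate the two contributions is to attribute dry-weather (baseflow) loads mainly to point sources and the wet-weather excess to non-point sources.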
Calculating NH3-N pollution load of wei river watershed above Huaxian section using CSLD method
NASA Astrophysics Data System (ADS)
Zhu, Lei; Song, JinXi; Liu, WanQing
2018-02-01
Huaxian Section is the last hydrological and water quality monitoring section of the Weihe River Watershed, so it is taken as the research object in this paper, and NH3-N is chosen as the water quality parameter. According to the discharge characteristics of point source and non-point source pollution, a new method to estimate pollution loads, the characteristic section load (CSLD) method, is proposed, and the point source and non-point source pollution loads of the Weihe River Watershed above Huaxian Section are calculated for the rainy, normal and dry seasons of 2007. The results show that the monthly point source pollution loads of the Weihe River Watershed above Huaxian Section discharge stably and the monthly non-point source pollution loads change greatly. The non-point source proportions of the total NH3-N pollution load decrease in the normal, rainy and wet periods in turn.
Ghannam, K; El-Fadel, M
2013-02-01
This paper examines the relative source contributions to ground-level concentrations of carbon monoxide (CO), nitrogen dioxide (NO2), and PM10 (particulate matter with an aerodynamic diameter < 10 μm) in a coastal urban area due to emissions from an industrial complex with multiple stacks, quarrying activities, and a nearby highway. For this purpose, an inventory of CO, oxides of nitrogen (NOx), and PM10 emissions was coupled with the non-steady-state Mesoscale Model 5/California Puff (CALPUFF) dispersion modeling system to simulate individual source contributions under several spatial and temporal scales. As the contribution of a particular source to ground-level concentrations can be evaluated by simulating either that source's emissions alone or the total emissions excluding that source, a set of emission sensitivity simulations was designed to examine whether CALPUFF maintains a linear relationship between emission rates and predicted concentrations in cases where emitted plumes overlap and chemical transformations are simulated. Source apportionment revealed that ground-level releases extended over large areas (i.e., the highway and quarries) dominated the contribution to exposure levels over elevated point sources, despite the fact that cumulative emissions from the point sources are higher. Sensitivity analysis indicated that chemical transformations of NOx are insignificant, possibly due to short-range plume transport, with CALPUFF exhibiting a linear response to changes in emission rate. The paper points to the significance of ground-level emissions in contributing to urban air pollution exposure and questions the viability of the prevailing paradigm of point-source emission reduction, especially since the incremental improvement in air quality associated with this common abatement strategy may not accomplish the desired benefit in terms of lower exposure, despite costly emission capping. The application of atmospheric dispersion models for source apportionment helps in identifying major contributors to regional air pollution. In industrial urban areas where multiple sources with different geometries contribute to emissions, ground-level releases extended over large areas, such as roads and quarries, often dominate the contribution to ground-level air pollution. Industrial emissions released at elevated stack heights may experience significant dilution, resulting in a minor contribution to exposure at ground level. In such contexts, emission reduction, which is invariably the abatement strategy targeting industries at a significant investment in control equipment or process changes, may result in minimal return on investment in terms of improvement in air quality at sensitive receptors.
Point to point multispectral light projection applied to cultural heritage
NASA Astrophysics Data System (ADS)
Vázquez, D.; Alvarez, A.; Canabal, H.; Garcia, A.; Mayorga, S.; Muro, C.; Galan, T.
2017-09-01
The use of new light sources based on LED technology should allow the development of systems that combine conservation and exhibition requirements and allow these goods of art to be made available to the next generations according to sustainability principles. The goal of this work is to develop light systems and sources with an optimized spectral distribution for each specific point of the art piece. This optimization process implies maximizing the color fidelity of the reproduction and at the same time minimizing the photochemical damage. Perceived color under these sources will be similar (metameric) to the technical requirements given by the restoration team in charge of the conservation and exhibition of the goods of art. Depending on the fragility of the exposed art objects (i.e. the spectral responsivity of the material), the irradiance must be kept under a critical level. Therefore, it is necessary to develop a mathematical model that simulates with enough accuracy both the visual effect of the illumination and the photochemical impact of the radiation. (Figure: spectral reflectance of a reference painting.) The mathematical model is based on a merit function that optimizes the individual intensities of the LED light sources, taking into account the damage function of the material and color space coordinates. Moreover, the algorithm uses weights for damage and color fidelity in order to adapt the model to a specific museum application. In this work we show a sample of this technology applied to a painting by Sorolla (1863-1923), an important Spanish painter, titled "Woman walking at the beach".
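The exact form of the paper's merit function is not given in the abstract, so the sketch below only illustrates the trade-off it describes: a weighted objective penalizing spectral (color) mismatch against a target spectrum and photochemically weighted exposure, minimized over the LED channel intensities. The spectra, damage function and weights are placeholders.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical spectral data on a common wavelength grid (placeholders, not the
# paper's measurements): LED channel spectra, a target illuminant, and a
# relative spectral damage function for the pigments.
wl = np.linspace(380, 780, 81)
led_spectra = np.stack([np.exp(-0.5 * ((wl - c) / 20.0) ** 2)
                        for c in (450, 520, 590, 630)])        # 4 LED channels
target = np.exp(-0.5 * ((wl - 560) / 120.0) ** 2)              # desired spectrum
damage = np.exp(-(wl - 380) / 80.0)                            # higher at short wavelengths

def merit(intensities, w_color=1.0, w_damage=0.2):
    """Weighted merit function: spectral (color) mismatch plus photochemically
    weighted exposure, mirroring the trade-off described in the abstract."""
    mix = intensities @ led_spectra
    color_term = np.sum((mix - target) ** 2)
    damage_term = np.sum(mix * damage)
    return w_color * color_term + w_damage * damage_term

res = minimize(merit, x0=np.ones(4), bounds=[(0, None)] * 4)
print("optimized channel intensities:", np.round(res.x, 3))
```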
Control method for peak power delivery with limited DC-bus voltage
Edwards, John; Xu, Longya; Bhargava, Brij B.
2006-09-05
A method for driving a neutral-point-clamped multi-level voltage source inverter supplying a synchronous motor is provided. A DC current is received at a neutral-point-clamped multi-level voltage source inverter. The inverter has first, second, and third output nodes. The inverter also has a plurality of switches. A desired speed of a synchronous motor connected to the inverter by the first, second, and third output nodes is received by the inverter. The synchronous motor has a rotor, and the speed of the motor is defined by the rotational rate of the rotor. A position of the rotor is sensed, current flowing to the motor out of at least two of the first, second, and third output nodes is sensed, and predetermined switches are automatically activated by the inverter responsive to the sensed rotor position, the sensed current, and the desired speed.
Benítez-Páez, Alfonso; Gómez del Pulgar, Eva M.; Sanz, Yolanda
2017-01-01
Bacteroides spp. are dominant components of the phylum Bacteroidetes in the gut microbiota and prosper in glycan-enriched environments. However, knowledge of the machinery by which specific species isolated from humans (like Bacteroides uniformis) contribute to the utilization of dietary and endogenous sources of glycans and their byproducts is limited. We have used cutting-edge nanopore-based technology to sequence the genome of B. uniformis CECT 7771, a human symbiont with proven pre-clinical efficacy on metabolic and immune dysfunctions in obesity animal models. We have also used massive sequencing approaches to distinguish the genome expression patterns in response to carbon sources of different complexity during growth. At the genome-wide level, our analyses demonstrate that B. uniformis strains exhibit an expanded glycolytic capability when compared with other Bacteroides species. Moreover, by studying the growth and whole-genome expression of B. uniformis CECT 7771 in response to different carbon sources, we detected differential growth fitness and expression patterns across the genome depending on the carbon source of the culture media. The dietary fibers used exerted different effects on B. uniformis CECT 7771, activating different molecular pathways and, therefore, allowing the production of different metabolite types with potential impact on gut health. The genome and transcriptome analysis of B. uniformis CECT 7771 in response to different carbon sources shows its high versatility to utilize both dietary and endogenous glycans along with the production of potentially beneficial end products for both the bacterium and the host, pointing to a mechanistic basis of a mutualistic relationship. PMID:28971068
Diffuse Waves and Energy Densities Near Boundaries
NASA Astrophysics Data System (ADS)
Sanchez-Sesma, F. J.; Rodriguez-Castellanos, A.; Campillo, M.; Perton, M.; Luzon, F.; Perez-Ruiz, J. A.
2007-12-01
The Green function can be retrieved from averaging cross correlations of motions within a diffuse field. In fact, it has been shown that for an elastic, inhomogeneous, anisotropic medium under equipartitioned, isotropic illumination, the average cross correlations are proportional to the imaginary part of the Green function. For instance, coda waves are due to multiple scattering and their intensities follow diffusive regimes. Coda waves and the noise sample the medium and effectively carry information along their paths. In this work we explore the consequences of assuming both source and receiver at the same point. From the observational side, the autocorrelation is proportional to the energy density at a given point. On the other hand, the imaginary part of the Green function at the source itself is finite, because the singularity of the Green function is restricted to the real part. The energy density at a point is proportional to the trace of the imaginary part of the Green function tensor at the source itself. The availability of the Green function may allow establishing the theoretical energy density of a seismic diffuse field generated by a background equipartitioned excitation. We study an elastic layer with a free surface overlying a half-space and compute the imaginary part of the Green function for various depths. We show that the resulting spectrum is indeed closely related to the dynamic response of the layer, and the corresponding resonant frequencies are revealed. One implication of the present findings lies in the fact that spatial variations may be useful in detecting the presence of a target by its signature in the distribution of diffuse energy. These results may be useful in assessing the seismic response of a given site if strong ground motions are scarce; it suffices to have reasonable illumination from micro-earthquakes and noise. We consider that the imaginary part of the Green function at the source is a spectral signature of the site. The relative importance of the peaks of this energy spectrum, ruling out non-linear effects, may influence the seismic response for future earthquakes. Partial support from DGAPA-UNAM, Project IN114706, Mexico; from Project MCyT CGL2005-05500-C02/BTE, Spain; from project DyETI of INSU-CNRS, France; and from the Instituto Mexicano del Petróleo is greatly appreciated.
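The two relations at the core of this abstract can be written compactly as follows (standard diffuse-field notation; the proportionality constants and sign convention depend on the normalization used, so this is only a schematic statement, not the authors' exact formulation):

```latex
\[
  \bigl\langle u_i(\mathbf{x}_A,\omega)\, u_j^{*}(\mathbf{x}_B,\omega) \bigr\rangle
  \;\propto\; -\,\operatorname{Im} G_{ij}(\mathbf{x}_A,\mathbf{x}_B,\omega),
\]
\[
  E(\mathbf{x},\omega) \;\propto\;
  -\,\operatorname{Im}\operatorname{Tr} G(\mathbf{x},\mathbf{x},\omega)
  \;=\; -\,\operatorname{Im}\bigl[G_{11}+G_{22}+G_{33}\bigr](\mathbf{x},\mathbf{x},\omega),
\]
```

i.e., the averaged autocorrelation (energy density) at a point is controlled by the trace of the imaginary part of the Green tensor evaluated with source and receiver at that same point.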
Mobit, Paul; Badragan, Iulian
2006-01-01
EGSnrc Monte Carlo simulations were used to calculate the angular and radial dependence of the energy response factor for LiF thermoluminescence dosemeters (TLDs) irradiated with a commercially available 125I permanent brachytherapy source. The LiF-TLDs were modelled as cylindrical micro-rods of length 6 mm and with diameters of 1 mm and 5 mm. The results show that for a LiF-TLD micro-rod of 1 mm diameter, the energy response relative to 60Co gamma rays is 1.406 +/- 0.3% for a polar angle of 90 degrees and a radial distance of 1.0 cm. When the diameter of the micro-rod is increased from 1 to 5 mm, the energy response decreases to 1.32 +/- 0.3% at the same point. The variation with position of the energy response factor is not >5% in a 6 cm x 6 cm x 6 cm calculation grid for the 5 mm diameter micro-rod. The results show that there is a change in the photon spectrum with angle and radial distance, which causes the variation of the energy response.
Ardila-Rey, Jorge Alfredo; Rojas-Moreno, Mónica Victoria; Martínez-Tarifa, Juan Manuel; Robles, Guillermo
2014-02-19
Partial discharge (PD) detection is a standardized technique to qualify electrical insulation in machines and power cables. Several techniques that analyze the waveform of the pulses have been proposed to discriminate noise from PD activity. Among them, spectral power ratio representation shows great flexibility in the separation of the sources of PD. Mapping spectral power ratios in two-dimensional plots leads to clusters of points which group pulses with similar characteristics. The position in the map depends on the nature of the partial discharge, the setup and the frequency response of the sensors. If these clusters are clearly separated, the subsequent task of identifying the source of the discharge is straightforward so the distance between clusters can be a figure of merit to suggest the best option for PD recognition. In this paper, two inductive sensors with different frequency responses to pulsed signals, a high frequency current transformer and an inductive loop sensor, are analyzed to test their performance in detecting and separating the sources of partial discharges.
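A minimal sketch of the spectral-power-ratio representation is given below, assuming a simple definition: the fraction of a pulse's spectral power falling inside a low and a high frequency band. The band edges, sampling rate, and synthetic pulses are placeholders; the actual bands depend on the sensors' frequency responses.

```python
import numpy as np

def spectral_power_ratios(pulse, fs, low_band=(1e6, 10e6), high_band=(10e6, 40e6)):
    """Return (low-band ratio, high-band ratio) of a single PD pulse.

    Each ratio is the spectral power inside the band divided by the total power, so
    pulses of similar origin tend to cluster together in the 2-D ratio map.
    """
    spectrum = np.abs(np.fft.rfft(pulse)) ** 2
    freqs = np.fft.rfftfreq(len(pulse), d=1.0 / fs)
    total = spectrum.sum()
    low = spectrum[(freqs >= low_band[0]) & (freqs < low_band[1])].sum() / total
    high = spectrum[(freqs >= high_band[0]) & (freqs < high_band[1])].sum() / total
    return low, high

# Two synthetic pulses with different rise/decay times, sampled at 100 MS/s.
fs = 100e6
t = np.arange(2048) / fs
fast_pulse = np.exp(-t / 50e-9) * np.sin(2 * np.pi * 20e6 * t)
slow_pulse = np.exp(-t / 500e-9) * np.sin(2 * np.pi * 3e6 * t)
print(spectral_power_ratios(fast_pulse, fs))
print(spectral_power_ratios(slow_pulse, fs))
```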
Kim, Jonathan J; Comstock, Jeff; Ryan, Peter; Heindel, Craig; Koenigsberger, Stephan
2016-11-01
In 2000, elevated nitrate concentrations ranging from 12 to 34 mg/L NO3-N were discovered in groundwater from numerous domestic bedrock wells adjacent to a large dairy farm in central Vermont. Long-term plots and contours of nitrate vs. time for bedrock wells showed "little/no", "moderate", and "large" change patterns that were spatially separable. The metasedimentary bedrock aquifer is strongly anisotropic and groundwater flow is controlled by fractures, bedding/foliation, and basins and ridges in the bedrock surface. Integration of the nitrate concentration vs. time data and the physical and chemical aquifer characterization suggests two nitrate sources: a point source emanating from a waste ravine and a non-point source that encompasses the surrounding fields. Once removed, the point source of NO3 (manure deposited in a ravine) was exhausted and NO3 dropped from 34 mg/L to <10 mg/L after ~10 years; however, persistence of NO3 in the 3 to 8 mg/L range (background) reflects the long-term flux of nitrates from nutrients applied to the farm fields surrounding the ravine over the years predating and including this study. Inferred groundwater flow rates from the waste ravine to either moderate-change wells in basin 2 or to the shallow bedrock zone beneath the large-change wells are 0.05 m/day, well within published bedrock aquifer flow rates. Enrichment of 15N and 18O in nitrate is consistent with lithotrophic denitrification of NO3 in the presence of dissolved Mn and Fe. Once the ravine point source was removed, denitrification and dilution collectively were responsible for the down-gradient decrease of nitrate in this bedrock aquifer. Denitrification was most influential when NO3-N was >10 mg/L. Our multidisciplinary methods of aquifer characterization are applicable to groundwater contamination in any complexly deformed and metamorphosed bedrock aquifer. Copyright © 2016 Elsevier B.V. All rights reserved.
Zhang, Lei; Lu, Wenxi; An, Yonglei; Li, Di; Gong, Lei
2012-01-01
The impacts of climate change on streamflow and non-point source pollutant loads in the Shitoukoumen reservoir catchment are predicted by combining a general circulation model (HadCM3) with the Soil and Water Assessment Tool (SWAT) hydrological model. A statistical downscaling model was used to generate future local scenarios of meteorological variables such as temperature and precipitation. The downscaled meteorological variables were then used as input to the SWAT hydrological model, calibrated and validated with observations, and the corresponding changes of future streamflow and non-point source pollutant loads in the Shitoukoumen reservoir catchment were simulated and analyzed. Results show that daily temperature increases in three future periods (2010-2039, 2040-2069, and 2070-2099) relative to a baseline of 1961-1990, with a rate of increase of 0.63°C per decade. Annual precipitation also shows an apparent increase of 11 mm per decade. The calibration and validation results showed that the SWAT model was able to simulate the streamflow and non-point source pollutant loads well, with a coefficient of determination of 0.7 and a Nash-Sutcliffe efficiency of about 0.7 for both the calibration and validation periods. Future climate change has a significant impact on streamflow and non-point source pollutant loads. The annual streamflow shows a fluctuating upward trend from 2010 to 2099, with an increase rate of 1.1 m³ s⁻¹ per decade, and a significant upward trend in summer, with an increase rate of 1.32 m³ s⁻¹ per decade. The increase in summer contributes the most to the increase of the annual load compared with other seasons. The annual NH4⁺-N load into Shitoukoumen reservoir shows a significant downward trend, with a decrease rate of 40.6 t per decade. The annual TP load shows an insignificant increasing trend, with a change rate of 3.77 t per decade. The results of this analysis provide a scientific basis for effective support of decision makers and strategies of adaptation to climate change.
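For reference, the Nash-Sutcliffe efficiency quoted for the calibration and validation periods is the standard goodness-of-fit measure, written here in generic notation:

```latex
\[
  \mathrm{NSE} \;=\; 1 \;-\;
  \frac{\sum_{t=1}^{T}\left(Q_{\mathrm{obs}}^{\,t}-Q_{\mathrm{sim}}^{\,t}\right)^{2}}
       {\sum_{t=1}^{T}\left(Q_{\mathrm{obs}}^{\,t}-\overline{Q}_{\mathrm{obs}}\right)^{2}},
\]
```

where NSE = 1 indicates a perfect fit; a value of about 0.7, as reported here, means the model explains most of the observed streamflow variance.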
Gallium nitride light sources for optical coherence tomography
NASA Astrophysics Data System (ADS)
Goldberg, Graham R.; Ivanov, Pavlo; Ozaki, Nobuhiko; Childs, David T. D.; Groom, Kristian M.; Kennedy, Kenneth L.; Hogg, Richard A.
2017-02-01
The advent of optical coherence tomography (OCT) has permitted high-resolution, non-invasive, in vivo imaging of the eye, skin and other biological tissue. The axial resolution is limited by the source bandwidth and central wavelength. With the growing demand for short-wavelength imaging, super-continuum sources and non-linear fibre-based light sources have been demonstrated in tissue imaging applications exploiting the near-UV and visible spectrum. Whilst the potential of using gallium nitride devices has been identified, owing to the relative maturity of the laser technology, there have been limited reports of using such low-cost, robust devices in imaging systems. A GaN super-luminescent light emitting diode (SLED) was first reported in 2009, using tilted facets to suppress lasing, with the focus since then on high-power, low-speckle and relatively low-bandwidth applications. In this paper we discuss a method of producing a GaN-based broadband source, including a passive absorber to suppress lasing. The merits of this passive absorber are then discussed with regard to broad-bandwidth applications, rather than power applications. For the first time in GaN devices, the performance of the light sources developed is assessed through the point spread function (PSF) (which describes an imaging system's response to a point source), calculated from the emission spectra. We show that a sub-7 μm resolution is possible without the use of special epitaxial techniques, ultimately outlining the suitability of these short-wavelength, broadband GaN devices for use in OCT applications.
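The link between bandwidth, central wavelength and axial resolution mentioned above follows from the standard Gaussian-spectrum estimate. The sketch below uses that textbook formula with illustrative GaN SLED numbers (not taken from the paper), which happen to land near the quoted sub-7 μm figure.

```python
import numpy as np

def oct_axial_resolution_um(center_wavelength_nm, bandwidth_nm):
    """Gaussian-spectrum estimate of OCT axial resolution (in air), in micrometres:
    dz = (2 ln 2 / pi) * lambda0^2 / dlambda."""
    lam0 = center_wavelength_nm * 1e-9
    dlam = bandwidth_nm * 1e-9
    return (2.0 * np.log(2.0) / np.pi) * lam0 ** 2 / dlam * 1e6

# Illustrative numbers: 420 nm centre wavelength, 12 nm (FWHM) bandwidth.
print(f"axial resolution ~ {oct_axial_resolution_um(420.0, 12.0):.1f} um")
```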
Widmer, Jocelyn M.; Weppelmann, Thomas A.; Alam, Meer T.; Morrissey, B. David; Redden, Edsel; Rashid, Mohammed H.; Diamond, Ulrica; Ali, Afsar; De Rochars, Madsen Beau; Blackburn, Jason K.; Johnson, Judith A.; Morris, J. Glenn
2014-01-01
We inventoried non-surface water sources in the Leogane and Gressier region of Haiti (approximately 270 km2) in 2012 and 2013 and screened water from 345 sites for fecal coliforms and Vibrio cholerae. An international organization/non-governmental organization responsible for construction could be identified for only 56% of water points evaluated. Sixteen percent of water points were non-functional at any given time; 37% had evidence of fecal contamination, with spatial clustering of contaminated sites. Among improved water sources (76% of sites), 24.6% had fecal coliforms versus 80.9% in unimproved sources. Fecal contamination levels increased significantly from 36% to 51% immediately after the passage of Tropical Storm Sandy in October of 2012, with a return to 34% contamination in March of 2013. Long-term sustainability of potable water delivery at a regional scale requires ongoing assessment of water quality, functionality, and development of community-based management schemes supported by a national plan for the management of potable water. PMID:25071005
Sonar Imaging of Elastic Fluid-Filled Cylindrical Shells.
NASA Astrophysics Data System (ADS)
Dodd, Stirling Scott
1995-01-01
Previously, a method of describing spherical acoustic waves in cylindrical coordinates was applied to the problem of point source scattering by an elastic infinite fluid-filled cylindrical shell (S. Dodd and C. Loeffler, J. Acoust. Soc. Am. 97, 3284(A) (1995)). This method is applied to numerically model monostatic oblique-incidence scattering from a truncated cylinder by a narrow-beam high-frequency imaging sonar. The narrow-beam solution results from integrating the point source solution over the spatial extent of a line source and line receiver. The cylinder truncation is treated by the method of images, and assumes that the reflection coefficient at the truncation is unity. The scattering form functions, calculated using this method, are applied as filters to a narrow-bandwidth, high-ka pulse to find the time domain scattering response. The time domain pulses are further processed and displayed in the form of a sonar image. These images compare favorably to experimentally obtained images (G. Kaduchak and C. Loeffler, J. Acoust. Soc. Am. 97, 3289(A) (1995)). The impact of the s0 and a0 Lamb waves is vividly apparent in the images.
Tissue engineering and regenerative medicine as applied to the gastrointestinal tract.
Bitar, Khalil N; Zakhem, Elie
2013-10-01
The gastrointestinal (GI) tract is a complex system characterized by multiple cell types with a determined architectural arrangement. Tissue engineering of the GI tract aims to reinstate the architecture and function of all structural layers. A key point for successful tissue regeneration is the use of cells/biomaterials that elicit a minimal immune response after implantation. Different biomaterial choices and cell sources have been proposed to engineer the GI tract. This review summarizes the recent advances in bioengineering the GI tract with emphasis on cell sources and scaffolding biomaterials. Copyright © 2013 Elsevier Ltd. All rights reserved.
Point-Source Contributions to the Water Quality of an Urban Stream
NASA Astrophysics Data System (ADS)
Little, S. F. B.; Young, M.; Lowry, C.
2014-12-01
Scajaquada Creek, which runs through the heart of the city of Buffalo, is a prime example of the ways in which human intervention and local geomorphology can impact water quality and urban hydrology. Beginning in the 1920s, the Creek has been partially channelized and connected to Buffalo's combined sewer system (CSS). At Forest Lawn Cemetery, where this study takes place, Scajaquada Creek emerges from a 3.5-mile tunnel built to route stream flow under the city. Collocated with the tunnel outlet is a discharge point for Buffalo's CSS, combined sewer outlet (CSO) #53. It is at this point that runoff and sanitary sewage discharge regularly during rain events. Initially, this study endeavored to create a spatial and temporal picture for this portion of the Creek, monitoring such parameters as conductivity, dissolved oxygen, pH, temperature, and turbidity, in addition to measuring Escherichia coli (E. coli) concentrations. As expected, these factors responded directly to seasonality, local geomorphology, and distance from the point source (CSO #53), displaying an overall linear response. However, the addition of nitrate and phosphate testing to the study revealed an entirely separate signal from that previously observed. Concentrations of these parameters did not respond to location in the same manner as E. coli. Instead of decreasing with distance from the CSO, a distinct periodicity was observed, correlating with a series of outflow pipes lining the stream banks. It is hypothesized that the nitrate and phosphate occurring in this stretch of Scajaquada Creek originate not from the CSO, but from fertilizers used to maintain the lawns within the subwatershed. These results provide evidence of the complexity related to water quality issues in urban streams as a result of point- and nonpoint-source hydrologic inputs.
Aerial Measuring System Sensor Modeling
DOE Office of Scientific and Technical Information (OSTI.GOV)
R. S. Detwiler
2002-04-01
This project deals with modeling the Aerial Measuring System (AMS) fixed-wing and rotary-wing sensor systems, which are critical U.S. Department of Energy National Nuclear Security Administration (NNSA) Consequence Management assets. The fixed-wing system is critical in detecting lost or stolen radiography or medical sources, or mixed fission products as from a commercial power plant release, at high flying altitudes. The helicopter is typically used at lower altitudes to determine ground contamination, such as in measuring americium from a plutonium ground dispersal during a cleanup. Since the sensitivity of these instruments as a function of altitude is crucial in estimating detection limits of various ground contaminations and necessary count times, a characterization of their sensitivity as a function of altitude and energy is needed. Experimental data at altitude as well as laboratory benchmarks are important to ensure that the strong effects of air attenuation are modeled correctly. The modeling presented here is the first attempt at such a characterization of the equipment for flying altitudes. The sodium iodide (NaI) sensors utilized with these systems were characterized using the Monte Carlo N-Particle code (MCNP) developed at Los Alamos National Laboratory. For the fixed-wing system, calculations modeled the spectral response of the 3-element NaI detector pod and the High-Purity Germanium (HPGe) detector in the relevant energy range of 50 keV to 3 MeV. NaI detector responses were simulated for both point and distributed surface sources as a function of gamma energy and flying altitude. For point sources, photopeak efficiencies were calculated for a zero radial distance and an offset equal to the altitude. For distributed sources approximating an infinite plane, gross count efficiencies were calculated and normalized to a uniform surface deposition of 1 µCi/m². The helicopter calculations modeled the transport of americium-241 (241Am), as this is the "marker" isotope utilized by the system for Pu detection. The helicopter sensor array consists of two six-element NaI detector pods, and the NaI pod detector response was simulated for a distributed surface source of 241Am as a function of altitude.
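A back-of-the-envelope sketch of why the altitude dependence matters is shown below: the photopeak count rate from a point source directly beneath the aircraft falls off with solid-angle geometry and exponential air attenuation. All constants (activity, detector area, efficiencies, attenuation coefficient) are assumptions for illustration and are unrelated to the MCNP results of this work.

```python
import numpy as np

def point_source_count_rate(activity_bq, gammas_per_decay, altitude_m,
                            detector_area_m2, intrinsic_eff, mu_air_per_m):
    """Rough photopeak count rate from an unshielded point source directly below the
    detector: 1/(4*pi*r^2) solid-angle geometry times exponential air attenuation."""
    geometric = detector_area_m2 / (4.0 * np.pi * altitude_m ** 2)
    attenuation = np.exp(-mu_air_per_m * altitude_m)
    return activity_bq * gammas_per_decay * geometric * intrinsic_eff * attenuation

# Assumed values: 1 GBq source, 0.85 gammas/decay, 0.1 m^2 of NaI, 20% intrinsic
# efficiency, and an energy-dependent air attenuation coefficient of ~6e-3 per metre.
for h in (50.0, 100.0, 200.0, 300.0):
    rate = point_source_count_rate(1e9, 0.85, h, 0.1, 0.2, 6e-3)
    print(f"altitude {h:5.0f} m : ~{rate:8.0f} counts/s")
```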
NASA Astrophysics Data System (ADS)
Li, Jia; Shen, Hua; Zhu, Rihong; Gao, Jinming; Sun, Yue; Wang, Jinsong; Li, Bo
2018-06-01
The precision of measurements of aspheric and freeform surfaces remains the primary factor restricting their manufacture and application. One effective means of measuring such surfaces involves using reference or probe beams with angle modulation, such as the tilted-wave interferometer (TWI). It is necessary to improve the measurement efficiency by obtaining the optimum point source array for different workpieces before TWI measurements. For the purpose of forming a point source array based on the gradients of the different surfaces under test, we established a mathematical model describing the relationship between the point source array and the test surface. However, the optimal point sources are irregularly distributed. In order to achieve a flexible point source array according to the gradient of the test surface, a novel interference setup using a fiber array is proposed in which every point source can be independently switched on and off. Simulations and actual measurement examples of two different surfaces are given in this paper to verify the mathematical model. Finally, we performed an experiment testing an off-axis ellipsoidal surface that proved the validity of the proposed interference system.
Current source density correlates of cerebellar Golgi and Purkinje cell responses to tactile input
Tahon, Koen; Wijnants, Mike; De Schutter, Erik
2011-01-01
The overall circuitry of the cerebellar cortex has been known for over a century, but the function of many synaptic connections remains poorly characterized in vivo. We used a one-dimensional multielectrode probe to estimate the current source density (CSD) of Crus IIa in response to perioral tactile stimuli in anesthetized rats and to correlate current sinks and sources to changes in the spike rate of corecorded Golgi and Purkinje cells. The punctate stimuli evoked two distinct early waves of excitation (at <10 and ∼20 ms) associated with current sinks in the granular layer. The second wave was putatively of corticopontine origin, and its associated sink was located higher in the granular layer than the first trigeminal sink. The distinctive patterns of granular-layer sinks correlated with the spike responses of corecorded Golgi cells. In general, Golgi cell spike responses could be linearly reconstructed from the CSD profile. A dip in simple-spike activity of coregistered Purkinje cells correlated with a current source deep in the molecular layer, probably generated by basket cell synapses, interspersed between sparse early sinks presumably generated by synapses from granule cells. The late (>30 ms) enhancement of simple-spike activity in Purkinje cells was characterized by the absence of simultaneous sinks in the granular layer and by the suppression of corecorded Golgi cell activity, pointing at inhibition of Golgi cells by Purkinje axon collaterals as a likely mechanism of late Purkinje cell excitation. PMID:21228303
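The CSD estimate referred to throughout this abstract is, in its simplest one-dimensional form, the negative second spatial derivative of the recorded potential scaled by tissue conductivity. The sketch below applies that textbook estimator to synthetic data; it is not the authors' analysis code, and the conductivity and spacing values are placeholders.

```python
import numpy as np

def csd_1d(lfp, electrode_spacing_m, sigma_s_per_m=0.3):
    """One-dimensional current source density estimate, CSD = -sigma * d2(phi)/dz2,
    approximated by a second central difference along the electrode axis.

    lfp: array of shape (n_channels, n_samples) of extracellular potentials (V).
    Returns an array of shape (n_channels - 2, n_samples); negative values are sinks.
    """
    d2phi = (lfp[2:, :] - 2.0 * lfp[1:-1, :] + lfp[:-2, :]) / electrode_spacing_m ** 2
    return -sigma_s_per_m * d2phi

# Synthetic example: a localized negativity (sink) in depth on 16 channels, 100 um apart.
depth = np.arange(16)[:, None]
time = np.arange(200)[None, :]
lfp = -1e-4 * np.exp(-((depth - 8) ** 2) / 4.0) * np.exp(-((time - 50) ** 2) / 200.0)
csd = csd_1d(lfp, electrode_spacing_m=100e-6)
print(csd.shape)              # (14, 200)
print(csd[:, 50].round(2))    # depth profile of the CSD at the response peak
```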
Multivariate Probabilistic Analysis of an Hydrological Model
NASA Astrophysics Data System (ADS)
Franceschini, Samuela; Marani, Marco
2010-05-01
Model predictions derived from rainfall measurements and hydrological model results are often limited by the systematic error of measuring instruments, by the intrinsic variability of the natural processes, and by the uncertainty of the mathematical representation. We propose a means to identify such sources of uncertainty and to quantify their effects based on point-estimate approaches, as a valid alternative to cumbersome Monte Carlo methods. We present uncertainty analyses on the hydrologic response to selected meteorological events in the mountain streamflow-generating portion of the Brenta basin at Bassano del Grappa, Italy. The Brenta river catchment has a relatively uniform morphology and quite a heterogeneous rainfall pattern. In the present work, we evaluate two sources of uncertainty: data uncertainty (the uncertainty due to data handling and analysis) and model uncertainty (the uncertainty related to the formulation of the model). We thus evaluate the effects of the measurement error of tipping-bucket rain gauges, the uncertainty in estimating spatially distributed rainfall through block kriging, and the uncertainty associated with estimated model parameters. To this end, we coupled a deterministic model based on the geomorphological theory of the hydrologic response with probabilistic methods. In particular we compare the results of Monte Carlo Simulations (MCS) to the results obtained, in the same conditions, using Li's Point Estimate Method (LiM). The LiM is a probabilistic technique that approximates the continuous probability distribution function of the considered stochastic variables by means of discrete points and associated weights. This allows results to be reproduced satisfactorily with only a few evaluations of the model function. The comparison between the LiM and MCS results highlights the pros and cons of using an approximating method. The LiM is less computationally demanding than MCS, but has limited applicability, especially when the model response is highly nonlinear. Higher-order approximations can provide more accurate estimations, but reduce the numerical advantage of the LiM. The results of the uncertainty analysis identify the main sources of uncertainty in the computation of river discharge. In this particular case, the spatial variability of rainfall and the uncertainty in the model parameters are shown to have the greatest impact on discharge evaluation. This, in turn, highlights the need to support any estimated hydrological response with probability information and risk analysis results in order to provide a robust, systematic framework for decision making.
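The contrast between Monte Carlo simulation and a point-estimate scheme can be illustrated with a toy nonlinear response. The scheme below is a deliberately simplified 2n+1-evaluation point estimate (Li's method uses a different choice of evaluation points and weights); the response function and the distributions are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy nonlinear "hydrologic response": peak discharge as a function of rainfall depth P
# and a runoff parameter k, both treated as independent Gaussian random variables.
def peak_discharge(P, k):
    return k * P ** 1.5

mu = np.array([50.0, 0.8])   # means of (P, k)
sd = np.array([10.0, 0.1])   # standard deviations

# --- Monte Carlo simulation (MCS): many model evaluations.
samples = rng.normal(mu, sd, size=(100_000, 2))
q_mc = peak_discharge(samples[:, 0], samples[:, 1])
print(f"MCS : mean = {q_mc.mean():7.1f}, std = {q_mc.std():6.1f}")

# --- Simplified point-estimate scheme: only 2n + 1 model evaluations.
q0 = peak_discharge(*mu)
mean_pe, var_pe = q0, 0.0
for i in range(len(mu)):
    x_plus, x_minus = mu.copy(), mu.copy()
    x_plus[i] += sd[i]
    x_minus[i] -= sd[i]
    q_plus, q_minus = peak_discharge(*x_plus), peak_discharge(*x_minus)
    mean_pe += 0.5 * (q_plus + q_minus) - q0      # second-order correction to the mean
    var_pe += (0.5 * (q_plus - q_minus)) ** 2     # first-order variance contribution
print(f"PEM : mean = {mean_pe:7.1f}, std = {np.sqrt(var_pe):6.1f}")
```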
Changing Regulations of COD Pollution Load of Weihe River Watershed above TongGuan Section, China
NASA Astrophysics Data System (ADS)
Zhu, Lei; Liu, WanQing
2018-02-01
The TongGuan Section of the Weihe River Watershed is a provincial section between Shaanxi Province and Henan Province, China. The Weihe River Watershed above the TongGuan Section is taken as the research objective in this paper, and COD is chosen as the water quality parameter. According to the discharge characteristics of point source and non-point source pollution, a characteristic section load (CSLD) method is suggested, and the point and non-point source pollution loads of the Weihe River Watershed above the TongGuan Section are calculated for the rainy, normal and dry seasons of 2013. The results show that the monthly point source pollution loads of the Weihe River Watershed above the TongGuan Section are discharged stably, whereas the monthly non-point source pollution loads change greatly, and the proportion of the total COD pollution load contributed by non-point sources decreases in turn across the rainy, wet and normal periods.
Estimation of Phosphorus Emissions in the Upper Iguazu Basin (brazil) Using GIS and the More Model
NASA Astrophysics Data System (ADS)
Acosta Porras, E. A.; Kishi, R. T.; Fuchs, S.; Hilgert, S.
2016-06-01
Pollution emissions into a drainage basin have a direct impact on surface water quality. These emissions result from human activities that turn into pollution loads when they reach the water bodies, as point or diffuse sources. Their pollution potential depends on the characteristics and quantity of the transported materials. The estimation of pollution loads can assist decision-making in basin management. Knowledge about the potential pollution sources allows for a prioritization of pollution control policies to achieve the desired water quality. Consequently, it helps avoid problems such as eutrophication of water bodies. The focus of the research described in this study is phosphorus emissions into river basins. The study area is the upper Iguazu basin, which lies in the northeast region of the State of Paraná, Brazil, covering about 2,965 km², with around 4 million inhabitants concentrated on just 16% of its area. The MoRE (Modeling of Regionalized Emissions) model was used to estimate phosphorus emissions. MoRE is a model that uses empirical approaches to model processes in analytical units, is capable of using spatially distributed parameters, and covers emissions from point sources as well as non-point sources. In order to model the processes, the basin was divided into 152 analytical units with an average size of 20 km². Available data were organized in a GIS environment, using, for example, layers of precipitation, the Digital Terrain Model from a 1:10,000 scale map, and soils and land cover derived from remote sensing imagery. Further data, such as point pollution discharges and statistical socio-economic data, are also used. The model shows that one of the main pollution sources in the upper Iguazu basin is domestic sewage, which enters the river as a point source (effluents of treatment stations) and/or as diffuse pollution caused by failures of sanitary sewer systems or clandestine sewer discharges, accounting for about 56% of the emissions. The second most significant share of emissions comes from direct runoff or groundwater, responsible for 32% of the total emissions. Finally, agricultural erosion and industrial pathways represent 12% of emissions. This study shows that MoRE is capable of producing valid emission estimates on a relatively reduced input data basis.
GARLIC, A SHIELDING PROGRAM FOR GAMMA RADIATION FROM LINE- AND CYLINDER-SOURCES
DOE Office of Scientific and Technical Information (OSTI.GOV)
Roos, M.
1959-06-01
GARLIC is a program for computing the gamma-ray flux or dose rate at a shielded isotropic point detector due to a line source or the line equivalent of a cylindrical source. The source strength distribution along the line must be either uniform or an arbitrary part of the positive half-cycle of a cosine function. The line source can be oriented arbitrarily with respect to the main shield and the detector, except that the detector must not be located on the line source or on its extension. The main shield is a homogeneous plane slab in which scattered radiation is accounted for by multiplying each point element of the line source by a point source buildup factor inside the integral over the point elements. Between the main shield and the line source additional shields can be introduced, which are either plane slabs, parallel to the main shield, or cylindrical rings, coaxial with the line source. Scattered radiation in the additional shields can only be accounted for by constant build-up factors outside the integral. GARLIC-xyz is an extended version particularly suited for the frequently met problem of shielding a room containing a large number of line sources in different positions. The program computes the angles and linear dimensions of a problem for GARLIC when the positions of the detector point and the end points of the line source are given as points in an arbitrary rectangular coordinate system. As an example, the isodose curves in water are presented for a monoenergetic cosine-distributed line source at several source energies and for an operating fuel element of the Swedish reactor R3. (auth)
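The quantity GARLIC evaluates can be sketched numerically: the flux at a point detector obtained by integrating shielded point kernels, with a point-source buildup factor inside the integral, along the line source. The linear buildup form and all numerical values below are illustrative assumptions, not GARLIC's actual data or geometry handling.

```python
import numpy as np

def line_source_flux(source_length_m, emission_per_m, detector_offset_m,
                     mu_per_m, buildup_coeff=1.0, n_points=2000):
    """Photon flux at a point detector on the perpendicular bisector of a uniform
    line source, integrating S_l * B(mu*r) * exp(-mu*r) / (4*pi*r^2) along the line.
    Uses an illustrative linear buildup factor B = 1 + buildup_coeff * mu * r and
    applies the attenuation coefficient along the whole source-detector path."""
    l = np.linspace(-source_length_m / 2.0, source_length_m / 2.0, n_points)
    r = np.sqrt(detector_offset_m ** 2 + l ** 2)   # distance from each line element
    mfp = mu_per_m * r                              # mean free paths traversed
    kernel = emission_per_m * (1.0 + buildup_coeff * mfp) * np.exp(-mfp) / (4.0 * np.pi * r ** 2)
    return np.sum(kernel) * (l[1] - l[0])           # simple rectangle-rule integration

# Assumed case: 2 m line source emitting 1e9 photons/(m*s), detector 0.5 m away,
# effective attenuation of 20 per metre along the path.
print(f"flux ~ {line_source_flux(2.0, 1e9, 0.5, 20.0):.3e} photons/(m^2*s)")
```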
Method and apparatus for calibrating a particle emissions monitor
Flower, W.L.; Renzi, R.F.
1998-07-07
The invention discloses a method and apparatus for calibrating particulate emissions monitors, in particular, and sampling probes, in general, without removing the instrument from the system being monitored. A source of one or more specific metals in aerosol (either solid or liquid) or vapor form is housed in the instrument. The calibration operation is initiated by moving a focusing lens, used to focus a light beam onto an analysis location and collect the output light response, from an operating position to a calibration position such that the focal point of the focusing lens is now within a calibration stream issuing from a calibration source. The output light response from the calibration stream can be compared to that derived from an analysis location in the operating position to more accurately monitor emissions within the emissions flow stream. 6 figs.
Method and apparatus for calibrating a particle emissions monitor
Flower, William L.; Renzi, Ronald F.
1998-07-07
The instant invention discloses method and apparatus for calibrating particulate emissions monitors, in particular, and sampling probes, in general, without removing the instrument from the system being monitored. A source of one or more specific metals in aerosol (either solid or liquid) or vapor form is housed in the instrument. The calibration operation is initiated by moving a focusing lens, used to focus a light beam onto an analysis location and collect the output light response, from an operating position to a calibration position such that the focal point of the focusing lens is now within a calibration stream issuing from a calibration source. The output light response from the calibration stream can be compared to that derived from an analysis location in the operating position to more accurately monitor emissions within the emissions flow stream.
Federal Register 2010, 2011, 2012, 2013, 2014
2012-02-15
... Protection Agency. ACTION: Notice of Final NPDES General Permit. SUMMARY: The Director of the Water Quality... Extraction Point Source Category as authorized by section 402 of the Clean Water Act, 33 U.S.C. 1342 (CWA... change to the proposed permit. A copy of the Region's responses to comments and the final permit may be...
A Novel Field-Deployable Point-of-Care Diagnostic Test for Cutaneous Leishmaniasis
2017-10-01
PRINCIPAL INVESTIGATOR: LT Danett K. Bishop. CONTRACTING ORGANIZATION: The Henry M. Jackson Foundation for the Advancement of Military Medicine, Bethesda...21702-5012. DISTRIBUTION STATEMENT: Approved for Public Release; Distribution Unlimited.
NASA Astrophysics Data System (ADS)
Drouin, Ariane; Michaud, Aubert; Thériault, Georges; Beaudin, Isabelle; Rodrigue, Jean-François; Denault, Jean-Thomas; Desjardins, Jacques; Côté, Noémi
2013-04-01
In Quebec, Canada, water quality improvement in rural areas greatly depends on the reduction of diffuse pollution. Indeed, point source pollution has been reduced significantly in Canada in recent years by creating circumscribed manure pits and removing animals from streams. Diffuse pollution differs from point source pollution because it is spread over large areas. In agricultural areas, sediment loss by soil and riverbank erosion, along with loss of nutrients (phosphorus, nitrogen, etc.) and pesticides from fields, represents the main source of non-point source pollution. The factor mainly responsible for diffuse pollution in agricultural areas is surface runoff occurring in poorly drained areas in fields. The presence of these poorly drained areas is also one of the most limiting factors for crop productivity. Thus, a reconciliation of on-farm objectives (a financial concern for farmers) and off-farm concerns (environmental concerns) is possible. In short, drainage, runoff, erosion, water quality and crop production are all interconnected issues that need to be tackled together. Two complementary data sources are mainly used in the diagnosis of drainage, surface runoff and erosion: elevation data and multispectral satellite images. In this study of two watersheds located in Québec (Canada), LiDAR elevation data and satellite imagery (QuickBird, SPOT and Landsat) were acquired. The studied territories have been partitioned into hydrologic response units (HRUs) according to sub-basins, soils, elevation (topographic index) and land use. These HRUs are afterwards used in P-index software (P-Edit) that calculates the quantities of sediments and phosphorus exported from each HRU. These exports of sediments and phosphorus are validated with hydrometric and water quality data obtained in two sub-basins and are also compared to a soil brightness index derived from the multispectral images. This index is sensitive to soil moisture and thus highlights areas where the soil is wetter. A variety of other indices are used to explain the sediment yields. These indices, such as the average slope, the distance to the stream, the relative position in the landscape, the position relative to the water table, etc., are mainly derived from high-precision elevation data. All these data are used to locate critical source areas, which generally correspond to a restricted part of the territory but account for the bulk of the sediment exports. Once the critical source areas are identified, best management practices (BMPs) (for example: contaminant source control practices, conservation cropping practices and surface runoff control structures) can be planned. This way, money and energy are used where they really count. In this presentation, the complete methodology, including LiDAR data processing, will be explained. The results and the possibility of reproducing the developed method will be discussed.
Nelson, James K; Reuter-Lorenz, Patricia A; Sylvester, Ching-Yune C; Jonides, John; Smith, Edward E
2003-09-16
Cognitive control requires the resolution of interference among competing and potentially conflicting representations. Such conflict can emerge at different points between stimulus input and response generation, with the net effect being that of compromising performance. The goal of this article was to dissociate the neural mechanisms underlying different sources of conflict to elucidate the architecture of the neural systems that implement cognitive control. By using functional magnetic resonance imaging and a verbal working memory task (item recognition), we examined brain activity related to two kinds of conflict with comparable behavioral consequences. In a trial of our item-recognition task, participants saw four letters, followed by a retention interval, and a probe letter that did or did not match one of the letters held in working memory (positive probe and negative probe, respectively). On some trials, conflict arose solely because of the current negative probe having a high familiarity, due to its membership in the immediately preceding trial's target set. On other trials, additional conflict arose because of the current negative probe having also been a positive probe on the immediately preceding trial, producing response-level conflict. Consistent with previous work, conflict due to high familiarity was associated with left prefrontal activation, but not with anterior cingulate activation. The response-conflict condition, when compared with high-familiarity conflict trials, was associated with anterior cingulate cortex activation, but with no additional left prefrontal activation. This double dissociation points to differing contributions of specific cortical areas to cognitive control, which are based on the source of conflict.
Micromachined Thermoelectric Sensors and Arrays and Process for Producing
NASA Technical Reports Server (NTRS)
Foote, Marc C. (Inventor); Jones, Eric W. (Inventor); Caillat, Thierry (Inventor)
2000-01-01
Linear arrays with up to 63 micromachined thermopile infrared detectors on silicon substrates have been constructed and tested. Each detector consists of a suspended silicon nitride membrane with 11 thermocouples of sputtered Bi-Te and Bi-Sb-Te thermoelectric films. At room temperature and under vacuum these detectors exhibit response times of 99 ms, zero-frequency D* values of 1.4 x 10^9 cm Hz^(1/2)/W and responsivity values of 1100 V/W when viewing a 1000 K blackbody source. The only measured source of noise above 20 mHz is Johnson noise from the detector resistance. These results represent the best performance reported to date for an array of thermopile detectors. The arrays are well suited for uncooled dispersive point spectrometers. In another embodiment, also with Bi-Te and Bi-Sb-Te thermoelectric materials on micromachined silicon nitride membranes, detector arrays have been produced with D* values as high as 2.2 x 10^9 cm Hz^(1/2)/W for 83 ms response times.
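The figures of merit quoted here are tied together by the standard definition of specific detectivity for a Johnson-noise-limited detector. The sketch below applies that relation with an assumed pixel area and resistance (neither is given in the abstract), so the output is only illustrative.

```python
import numpy as np

def johnson_limited_dstar(responsivity_v_per_w, detector_area_cm2,
                          resistance_ohm, temperature_k=300.0):
    """Specific detectivity D* (cm Hz^1/2 / W) when the only noise source is
    Johnson noise of the detector resistance:
        D* = R_v * sqrt(A_d) / sqrt(4 k T R_el)
    """
    k_b = 1.380649e-23  # Boltzmann constant, J/K
    noise_density = np.sqrt(4.0 * k_b * temperature_k * resistance_ohm)  # V / Hz^1/2
    return responsivity_v_per_w * np.sqrt(detector_area_cm2) / noise_density

# Assumed pixel: 1100 V/W responsivity, 1e-4 cm^2 active area, 30 kOhm resistance.
print(f"D* ~ {johnson_limited_dstar(1100.0, 1e-4, 30e3):.2e} cm Hz^1/2 / W")
```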
NASA Technical Reports Server (NTRS)
Deloach, R.
1981-01-01
The Fractional Impact Method (FIM), developed by the National Research Council (NRC) for assessing the amount and physiological effect of noise, is described. Here, the number of people exposed to a given level of noise is multiplied by a weighting factor that depends on the noise level. It is pointed out that the Aircraft-noise Levels and Annoyance MOdel (ALAMO), recently developed at NASA Langley Research Center, can perform the NRC fractional impact calculations for given modes of operation at any U.S. airport. The sensitivity of these calculations to errors in estimates of population, noise level, and human subjective response is discussed. It is found that a change in source noise causes a substantially smaller change in contour area than would be predicted simply on the basis of inverse-square-law considerations. Another finding is that the impact calculations are generally less sensitive to source noise errors than to systematic errors in population or subjective response.
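The fractional-impact bookkeeping described above amounts to multiplying the population in each noise band by a level-dependent weight and summing. The sketch below uses a generic logistic weighting curve and invented exposure counts purely to show the arithmetic; it is not the NRC weighting function or ALAMO output.

```python
import math

def weight(level_dnl_db):
    """Hypothetical level-dependent weighting factor (fraction of people impacted)."""
    return 1.0 / (1.0 + math.exp(-(level_dnl_db - 72.0) / 5.0))

exposure_bands = [        # (DNL band midpoint in dB, people exposed), illustrative only
    (57.5, 120_000),
    (62.5, 45_000),
    (67.5, 12_000),
    (72.5, 3_000),
]

impact = sum(people * weight(level) for level, people in exposure_bands)
print(f"noise impact index ~ {impact:,.0f} person-equivalents")
```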
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chofor, N; Poppe, B; Nebah, F
Purpose: In a brachytherapy photon field in water, the fluence-averaged mean photon energy Em at the point of measurement correlates with the radiation quality correction factor kQ of a non-water-equivalent detector. To support the experimental assessment of Em, we show that the normalized signal ratio (NSR) of a pair of radiation detectors, an unshielded silicon diode and a diamond detector, can serve to measure the quantity Em in a water phantom at an Ir-192 unit. Methods: Photon fluence spectra were computed in EGSnrc based on a detailed model of the GammaMed source. Factor kQ was calculated as the ratio of the detector's spectrum-weighted responses under calibration conditions at a 60Co unit and under brachytherapy conditions at various radial distances from the source. The NSR was investigated for a pair consisting of a p-type unshielded silicon diode 60012 and a synthetic single-crystal diamond detector 60019 (both PTW Freiburg). Each detector was positioned according to its effective point of measurement, with its axis facing the source. Lateral signal profiles were scanned under complete scatter conditions, and the NSR was determined as the quotient of the signal ratio under application conditions at position x and that at the reference position r_ref = 1 cm. Results: The radiation quality correction factor kQ shows a close correlation with the mean photon energy Em. The NSR of the diode/diamond pair changes by a factor of two over 0-18 cm from the source, while Em drops from 350 to 150 keV. Theoretical and measured NSR profiles agree within ±2% for points within 5 cm of the source. Conclusion: Given the close correlation between the radiation quality correction factor kQ and the photon mean energy Em, the NSR provides a practical means of assessing Em under clinical conditions. Precise detector positioning is the major challenge.
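In generic notation (the symbols below are chosen here for illustration and are not the authors'), the two quantities compared in this study can be written as:

```latex
\[
  k_Q(r) \;=\; \frac{\bar{s}\!\left(^{60}\mathrm{Co}\right)}{\bar{s}\!\left(Q,\,r\right)},
  \qquad
  \mathrm{NSR}(x) \;=\;
  \frac{M_{\mathrm{diode}}(x)\,/\,M_{\mathrm{diamond}}(x)}
       {M_{\mathrm{diode}}(r_{\mathrm{ref}})\,/\,M_{\mathrm{diamond}}(r_{\mathrm{ref}})},
  \qquad r_{\mathrm{ref}} = 1\,\mathrm{cm},
\]
```

where the numerator and denominator of kQ are the spectrum-weighted detector responses under 60Co calibration conditions and under brachytherapy conditions at distance r, and M denotes the measured signal of each detector.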
Vision in the dimmest habitats on earth.
Warrant, Eric
2004-10-01
A very large proportion of the world's animal species are active in dim light, either under the cover of night or in the depths of the sea. The worlds they see can be dim and extended, with light reaching the eyes from all directions at once, or they can be composed of bright point sources, like the multitudes of stars seen in a clear night sky or the rare sparks of bioluminescence that are visible in the deep sea. The eye designs of nocturnal and deep-sea animals have evolved in response to these two very different types of habitats, being optimised for maximum sensitivity to extended scenes, or to point sources, or to both. After describing the many visual adaptations that have evolved across the animal kingdom for maximising sensitivity to extended and point-source scenes, I then use case studies from the recent literature to show how these adaptations have endowed nocturnal animals with excellent vision. Nocturnal animals can see colour and negotiate dimly illuminated obstacles during flight. They can also navigate using learned terrestrial landmarks, the constellations of stars or the dim pattern of polarised light formed around the moon. The conclusion from these studies is clear: nocturnal habitats are just as rich in visual details as diurnal habitats are, and nocturnal animals have evolved visual systems capable of exploiting them. The same is certainly true of deep-sea animals, as future research will no doubt reveal.
Understanding the factors that effect maximal fat oxidation.
Purdom, Troy; Kravitz, Len; Dokladny, Karol; Mermier, Christine
2018-01-01
Lipids as a fuel source for energy supply during submaximal exercise originate from subcutaneous adipose tissue-derived fatty acids (FA), intramuscular triacylglycerides (IMTG), cholesterol and dietary fat. These sources of fat contribute to fatty acid oxidation (FAox) in various ways. The regulation and maximal utilization of FAs occur primarily at exercise intensities between 45 and 65% VO2max; this is known as maximal fat oxidation (MFO) and is measured in g/min. Fatty acid oxidation occurs during submaximal exercise intensities, but is also complementary to carbohydrate oxidation (CHOox). Due to limitations within FA transport across the cell and mitochondrial membranes, FAox is limited at higher exercise intensities. The point at which FAox reaches its maximum and begins to decline is referred to as the crossover point. Exercise intensities that exceed the crossover point (~65% VO2max) utilize CHO as the predominant fuel source for energy supply. Training status, exercise intensity, exercise duration, sex differences, and nutrition have all been shown to affect the cellular expression responsible for the FAox rate. Each stimulus affects the process of FAox differently, resulting in specific adaptations that influence endurance exercise performance. Endurance training, specifically of long duration (>2 h), facilitates adaptations that alter both the origin of FAs and the FAox rate. Additionally, the influence of sex and nutrition on FAox is discussed. Finally, the role of FAox in the improvement of performance during endurance training is discussed.
40 CFR 51.35 - How can my state equalize the emission inventory effort from year to year?
Code of Federal Regulations, 2012 CFR
2012-07-01
... approach: (1) Each year, collect and report data for all Type A (large) point sources (this is required for all Type A point sources). (2) Each year, collect data for one-third of your sources that are not Type... save 3 years of data and then report all emissions from the sources that are not Type A point sources...
40 CFR 51.35 - How can my state equalize the emission inventory effort from year to year?
Code of Federal Regulations, 2010 CFR
2010-07-01
... approach: (1) Each year, collect and report data for all Type A (large) point sources (this is required for all Type A point sources). (2) Each year, collect data for one-third of your sources that are not Type... save 3 years of data and then report all emissions from the sources that are not Type A point sources...
40 CFR 51.35 - How can my state equalize the emission inventory effort from year to year?
Code of Federal Regulations, 2014 CFR
2014-07-01
... approach: (1) Each year, collect and report data for all Type A (large) point sources (this is required for all Type A point sources). (2) Each year, collect data for one-third of your sources that are not Type... save 3 years of data and then report all emissions from the sources that are not Type A point sources...
nSTAT: Open-Source Neural Spike Train Analysis Toolbox for Matlab
Cajigas, I.; Malik, W.Q.; Brown, E.N.
2012-01-01
Over the last decade there has been a tremendous advance in the analytical tools available to neuroscientists to understand and model neural function. In particular, the point-process generalized linear model (PP-GLM) framework has been applied successfully to problems ranging from neuro-endocrine physiology to neural decoding. However, the lack of freely distributed software implementations of published PP-GLM algorithms, together with the problem-specific modifications required for their use, limits the wide application of these techniques. In an effort to make existing PP-GLM methods more accessible to the neuroscience community, we have developed nSTAT – an open-source neural spike train analysis toolbox for Matlab®. By adopting an Object-Oriented Programming (OOP) approach, nSTAT allows users to easily manipulate data by performing operations on objects that have an intuitive connection to the experiment (spike trains, covariates, etc.), rather than by dealing with data in vector/matrix form. The algorithms implemented within nSTAT address a number of common problems, including computation of peri-stimulus time histograms, quantification of the temporal response properties of neurons, and characterization of neural plasticity within and across trials. nSTAT provides a starting point for exploratory data analysis, allows for simple and systematic building and testing of point process models, and enables decoding of stimulus variables based on point process models of neural function. By providing an open-source toolbox, we hope to establish a platform that can be easily used, modified, and extended by the scientific community to address limitations of current techniques and to extend available techniques to more complex problems. PMID:22981419
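One of the basic operations listed above, the peri-stimulus time histogram, can be written as a short numpy sketch (this is not the toolbox's Matlab API; function and variable names are invented):

```python
import numpy as np

def psth(spike_times, trial_starts, window=(0.0, 1.0), bin_width=0.01):
    """Peri-stimulus time histogram: mean firing rate (spikes/s) in bins aligned to
    each trial start. spike_times and trial_starts are given in seconds."""
    edges = np.arange(window[0], window[1] + bin_width, bin_width)
    counts = np.zeros(len(edges) - 1)
    for t0 in trial_starts:
        counts += np.histogram(spike_times - t0, bins=edges)[0]
    return edges[:-1], counts / (len(trial_starts) * bin_width)

# Synthetic example: ~50 Hz background plus an extra burst 200-400 ms after each stimulus.
rng = np.random.default_rng(1)
trial_starts = np.arange(0.0, 50.0, 1.0)
baseline = rng.uniform(0.0, 50.0, size=2500)
evoked = np.concatenate([t0 + rng.uniform(0.2, 0.4, size=5) for t0 in trial_starts])
spikes = np.sort(np.concatenate([baseline, evoked]))

bins, rate = psth(spikes, trial_starts, window=(0.0, 1.0), bin_width=0.05)
print(np.round(rate, 1))
```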
Single-Point Attachment Wind Damper for Launch Vehicle On-Pad Motion
NASA Technical Reports Server (NTRS)
Hrinda, Glenn A.
2009-01-01
A single-point-attachment wind-damper device is proposed to reduce on-pad motion of a cylindrical launch vehicle. The device is uniquely designed to attach at only one location along the vehicle and is capable of damping out wind gusts from any lateral direction. The only source of damping is two viscous dampers in the device. The effectiveness of the damper design in reducing vehicle displacements is determined from transient analysis results using an Ares I-X launch vehicle. Combinations of different spring stiffnesses and damping values are used to show how the vehicle's displacement response is significantly reduced during a wind gust.
NASA Astrophysics Data System (ADS)
Salançon, Evelyne; Degiovanni, Alain; Lapena, Laurent; Morin, Roger
2018-04-01
An event-counting method using a two-microchannel-plate stack in a low-energy electron point projection microscope is implemented. A detector spatial resolution of 15 μm, i.e., the distance between first-neighbor microchannels, is demonstrated. This leads to a sevenfold improvement in microscope resolution. Compared to previous work with neutrons [Tremsin et al., Nucl. Instrum. Methods Phys. Res., Sect. A 592, 374 (2008)], the large number of detection events achieved with electrons shows that the local response of the detector is mainly governed by the angle between the hexagonal structures of the two microchannel plates. Using this method in point projection microscopy offers the prospect of working with a greater source-object distance (350 nm instead of 50 nm), advancing toward atomic resolution.
RRAWFLOW: Rainfall-Response Aquifer and Watershed Flow Model (v1.11)
NASA Astrophysics Data System (ADS)
Long, A. J.
2014-09-01
The Rainfall-Response Aquifer and Watershed Flow Model (RRAWFLOW) is a lumped-parameter model that simulates streamflow, springflow, groundwater level, solute transport, or cave drip at a measurement point in response to a system input of precipitation, recharge, or solute injection. The RRAWFLOW open-source code is written in the R language and is included in the Supplement to this article along with an example model of springflow. RRAWFLOW includes a time-series process to estimate recharge from precipitation and simulates the response to recharge by convolution, i.e., the unit hydrograph approach. Gamma functions are used for estimation of parametric impulse-response functions (IRFs); a combination of two gamma functions results in a double-peaked IRF. A spline fit to a set of control points is introduced as a new method for estimation of nonparametric IRFs. Other options include the use of user-defined IRFs and different methods to simulate time-variant systems. For many applications, lumped models simulate the system response with accuracy equal to that of distributed models; moreover, the ease of model construction and calibration makes lumped models a good choice for many applications. RRAWFLOW provides professional hydrologists and students with an accessible and versatile tool for lumped-parameter modeling.
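The convolution (unit-hydrograph) step at the heart of RRAWFLOW can be sketched in a few lines. RRAWFLOW itself is written in R and offers double-peaked, spline-based and time-variant IRFs; the Python sketch below shows only the core idea, with an invented recharge series and gamma-IRF parameters.

```python
import numpy as np
from scipy.stats import gamma

def gamma_irf(n_steps, shape, scale):
    """Discrete gamma-function impulse-response function, normalized to unit area."""
    t = np.arange(n_steps, dtype=float)
    irf = gamma.pdf(t, a=shape, scale=scale)
    return irf / irf.sum()

def simulate_response(recharge, irf):
    """Linear-system (unit hydrograph) response: recharge convolved with the IRF,
    truncated to the length of the input series."""
    return np.convolve(recharge, irf)[: len(recharge)]

# Synthetic daily recharge (mm/day) with two events, and an IRF with a multi-week memory.
recharge = np.zeros(365)
recharge[[20, 120]] = [15.0, 30.0]
irf = gamma_irf(200, shape=2.5, scale=12.0)
flow = simulate_response(recharge, irf)
print(f"peak simulated response: {flow.max():.2f} (recharge units per time step)")
```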
High-energy radiographic imaging performance of LYSO
Smalley, Duane; Duke, Dana; Webb, Timothy; ...
2018-05-23
Here, a comprehensive comparison of the dominant sources of radiation-induced blur for radiographic imaging system performance is made. End-point energies of 6, 10, 15, and 20 MeV bremsstrahlung photon radiation produced at the Los Alamos National Laboratory Microtron facility were used to examine the performance of large-panel cerium-doped lutetium yttrium silicon oxide (LYSO:Ce) scintillators 3, 5 and 10 mm thick. The system resolution was measured and compared between the various end-point energies and scintillator thicknesses. Contrary to expectations, it is found that there was only a minor dependence of system resolution on scintillator thickness or beam end-point energy. This indicates that increased scintillator thickness does not have a dramatic effect on system performance. The data are then compared to Geant4 simulations to assess contributions to the system performance through the examination of modulation transfer functions. It was determined that the low-frequency response of the system is dominated by the radiation-induced signal, while the higher-frequency response of the system is dominated by the optical imaging of the scintillation emission.
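The modulation transfer functions used to separate the radiation-induced and optical contributions are, in the simplest treatment, the normalized Fourier magnitude of a measured line-spread function. The sketch below shows that computation on a synthetic Gaussian LSF; the pixel pitch and width are placeholders, not values from this work.

```python
import numpy as np

def mtf_from_lsf(lsf, pixel_pitch_mm):
    """Modulation transfer function from a sampled line-spread function: magnitude of
    its Fourier transform, normalized to unity at zero spatial frequency."""
    spectrum = np.abs(np.fft.rfft(lsf / lsf.sum()))
    freqs = np.fft.rfftfreq(len(lsf), d=pixel_pitch_mm)   # cycles/mm
    return freqs, spectrum / spectrum[0]

# Synthetic Gaussian LSF with ~1 mm FWHM sampled on a 0.1 mm pitch (illustrative).
x = np.arange(-64, 64) * 0.1
lsf = np.exp(-0.5 * (x / (1.0 / 2.355)) ** 2)
freqs, mtf = mtf_from_lsf(lsf, 0.1)
print(f"MTF at 0.5 cycles/mm ~ {np.interp(0.5, freqs, mtf):.2f}")
```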
Coastlines of the past: clues for our future
NASA Astrophysics Data System (ADS)
Reynolds, L.
2017-12-01
Coastlines are constantly evolving due to the long-term effects of sea-level change and human impacts, as well as in response to high-impact, short-duration hazard events such as storms, tsunamis, and earthquakes. The sediments that accumulate in coastal systems such as estuaries, dunes, and beaches archive the environmental record of the past, providing a baseline with which to predict future coastal hazard magnitude and recurrence intervals. We study this record to understand future hazard potential, as well as to aid restoration efforts. Many coastal systems around the world have been degraded in the last few hundred years by human activity; these regions are important breeding grounds for commercially viable species, natural pollution filters, and barriers against inundation. Efforts to restore coastal systems often rely on data from historical sources to reconstruct past coastal conditions; the geological record can extend the timeframe with which we think about possible restoration points. In addition, studying past coastal response to environmental changes can aid the effort to restore systems to a point of sustainability and resilience instead of simply restoring to an arbitrary point in time.
High-energy neutrinos from FR0 radio galaxies?
NASA Astrophysics Data System (ADS)
Tavecchio, F.; Righi, C.; Capetti, A.; Grandi, P.; Ghisellini, G.
2018-04-01
The sources responsible for the emission of high-energy (≳100 TeV) neutrinos detected by IceCube are still unknown. Among the possible candidates, active galactic nuclei with relativistic jets are often examined, since the outflowing plasma seems to offer the ideal environment to accelerate the required parent high-energy cosmic rays. The non-detection of single point sources, or (almost equivalently) the absence in the IceCube events of multiplets originating from the same sky position, constrains the cosmic density and the neutrino output of these sources, pointing to a numerous population of faint sources. Here we explore the possibility that FR0 radio galaxies, the population of compact sources recently identified in large radio and optical surveys and representing the bulk of the radio-loud AGN population, are suitable candidates for neutrino emission. Modelling the spectral energy distribution of an FR0 radio galaxy recently associated with a γ-ray source detected by the Large Area Telescope onboard Fermi, we derive the physical parameters of its jet, in particular the power it carries. We consider the possible mechanisms of neutrino production, concluding that pγ reactions in the jet between protons and ambient radiation are too inefficient to sustain the required output. We propose an alternative scenario, in which protons, accelerated in the jet, escape from it and diffuse in the host galaxy, producing neutrinos as a result of pp scattering with the interstellar gas, in strict analogy with the processes taking place in star-forming galaxies.
NASA Astrophysics Data System (ADS)
Dupas, Rémi; Tittel, Jörg; Jordan, Phil; Musolff, Andreas; Rode, Michael
2018-05-01
A common assumption in phosphorus (P) load apportionment studies is that P loads in rivers consist of flow-independent point source emissions (mainly from domestic and industrial origins) and flow-dependent diffuse source emissions (mainly of agricultural origin). Hence, rivers dominated by point sources will exhibit the highest P concentration during low flow, when flow dilution capacity is minimal, whereas rivers dominated by diffuse sources will exhibit the highest P concentration during high flow, when land-to-river hydrological connectivity is maximal. Here, we show that Soluble Reactive P (SRP) concentrations in three forested catchments free of point sources exhibited seasonal maxima during the summer low-flow period, i.e., a pattern expected in point-source-dominated areas. A load apportionment model (LAM) is used to show how point-source contributions may have been overestimated in previous studies, because a biogeochemical process mimics a point source signal. Almost twenty-two years (March 1995-September 2016) of monthly monitoring data of SRP, dissolved iron (Fe) and nitrate-N (NO3) were used to investigate the underlying mechanisms: SRP and Fe exhibited similar seasonal patterns, opposite to that of NO3. We hypothesise that Fe oxyhydroxide reductive dissolution might be the cause of SRP release during the summer period, and that NO3 might act as a redox buffer, controlling the seasonality of SRP release. We conclude that LAMs may overestimate the contribution of P point sources, especially during the summer low-flow period, when eutrophication risk is maximal.
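One simple way to express the flow-independent/flow-dependent assumption described above is the schematic sketch below; the functional form and parameter values are illustrative assumptions, not the authors' calibrated LAM.

```python
import numpy as np

def lam_concentration(q, point_load, a, b):
    """Schematic load apportionment: concentration as a function of discharge q.

    point term  : constant load diluted by flow -> point_load / q (highest at low flow)
    diffuse term: load increasing with flow     -> a * q**b, b > 0 (highest at high flow)
    """
    return point_load / q + a * q ** b

q = np.logspace(-2, 1, 50)   # discharge range (m3/s), illustrative
c = lam_concentration(q, point_load=0.5, a=0.05, b=0.8)
# A biogeochemical release peaking at summer low flow adds concentration at the
# low-q end and is therefore easily misattributed to the point-source term.
```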
NASA Astrophysics Data System (ADS)
Zhang, S.; Tang, L.
2007-05-01
Panjiakou Reservoir is an important drinking water source in the Haihe River Basin, Hebei Province, People's Republic of China. The upstream watershed area is about 35,000 square kilometers. In recent years, water pollution in the reservoir has become more serious owing to non-point source as well as point source pollution in the upstream watershed. To effectively manage the reservoir and watershed and develop a plan to reduce pollutant loads, the loading of non-point and point source pollution and its distribution over the upstream watershed must be fully understood. The SWAT model is used to simulate the production and transport of non-point source pollutants in the upstream watershed of the Panjiakou Reservoir. The loadings of non-point source pollutants are calculated for different hydrologic years, and the spatial and temporal characteristics of non-point source pollution are studied. The stream network and the topographic characteristics of the stream network and sub-basins are derived from the DEM with ArcGIS software. The soil and land use data are reclassified and a soil physical properties database file is created for the model. The SWAT model was calibrated with observed data from several hydrologic monitoring stations in the study area; the calibration results show that the model performs fairly well. The calibrated model was then used to calculate the loadings of non-point source pollutants for a wet year, a normal year and a dry year. The temporal and spatial distributions of flow, sediment and non-point source pollution were analyzed from the simulated results. The differences in calculated loads among the hydrologic years are dramatic: the loading of non-point source pollution is relatively large in the wet year and small in the dry year, since non-point source pollutants are mainly transported by runoff, and the pollution load within a year is mainly produced in the flood season. Because SWAT is a distributed model, model output can be viewed as it varies across the basin, so critical areas and reaches can be identified in the study area. According to the simulation results, different land uses yield different results, and fertilization in the rainy season has an important impact on non-point source pollution. The limitations of the SWAT model are also discussed, and measures for the control and prevention of non-point source pollution for Panjiakou Reservoir are presented based on the analysis of the model results.
NASA Astrophysics Data System (ADS)
Kellerman, Adam; Makarevich, Roman; Spanswick, Emma; Donovan, Eric; Shprits, Yuri
2016-07-01
Energetic electrons in the tens of keV range precipitate to the upper D- and lower E-region ionosphere, and are responsible for enhanced ionization. The same particles are important in the inner magnetosphere, as they provide a source of energy for waves, and thus relate to relativistic electron enhancements in Earth's radiation belts. In situ observations of plasma populations and waves are usually limited to a single point, which complicates temporal and spatial analysis. Also, the lifespan of satellite missions is often limited to several years, which does not allow one to infer the long-term climatology of particle precipitation important for ionospheric conditions at high latitudes. Multi-point remote sensing of ionospheric plasma conditions can provide a global view of both ionospheric and magnetospheric conditions, and the coupling between magnetospheric and ionospheric phenomena can be examined on time scales that allow comprehensive statistical analysis. In this study we utilize multi-point riometer measurements in conjunction with in situ satellite data and physics-based modeling to investigate the spatio-temporal and energy-dependent response of riometer absorption. Quantifying this relationship may be a key to future advancements in our understanding of the complex D-region ionosphere, and may lead to enhanced specification of auroral precipitation both during individual events and over climatological time scales.
Hong, Peilong; Li, Liming; Liu, Jianji; Zhang, Guoquan
2016-03-29
Young's double-slit or two-beam interference is of fundamental importance for understanding various interference effects, in which the stationary phase difference between two beams plays the key role in the first-order coherence. Different from the case of first-order coherence, in high-order optical coherence the statistical behavior of the optical phase plays the key role. In this article, by employing a fundamental interfering configuration with two classical point sources, we show that the high-order optical coherence between two classical point sources can be actively designed by controlling the statistical behavior of the relative phase difference between the two point sources. Synchronous-position Nth-order subwavelength interference with an effective wavelength of λ/M was demonstrated, in which λ is the wavelength of the point sources and M is an integer not larger than N. Interestingly, we found that the synchronous-position Nth-order interference fringe fingerprints the statistical trace of the random phase fluctuation of the two classical point sources; therefore, it provides an effective way to characterize the statistical properties of phase fluctuation for incoherent light sources.
NASA Astrophysics Data System (ADS)
Karl, S.; Neuberg, J. W.
2012-04-01
Low-frequency seismic signals are one class of volcano-seismic events that have been observed at many volcanoes around the world, and are thought to be associated with resonating fluid-filled conduits or fluid movements. Amongst others, Neuberg et al. (2006) proposed a conceptual model for the trigger of low-frequency events at Montserrat involving the brittle failure of magma in the glass transition in response to high shear stresses during the upward movement of magma in the volcanic edifice. For this study, synthetic seismograms were generated following the concept proposed by Neuberg et al. (2006), using an extended source modelled as an octagonal arrangement of double couples approximating a circular ring fault. For comparison, synthetic seismograms were also generated using single forces only. For both scenarios, the synthetic seismograms used a seismic station distribution as encountered on Soufriere Hills Volcano, Montserrat. To gain a better quantitative understanding of the driving forces of low-frequency events, inversions for the physical source mechanisms have become increasingly common. We therefore perform moment tensor inversions (Dreger, 2003) using the synthetic data as well as a chosen set of seismograms recorded on Soufriere Hills Volcano. The inversions are carried out under the (incorrect) assumption of an underlying point source rather than an extended source as the trigger mechanism of the low-frequency seismic events. We discuss differences between inversion results, and how to interpret the moment tensor components (double couple, isotropic, or CLVD), which were derived assuming a point source, in terms of an extended source.
Iran With Nuclear Weapons: Anticipating the Consequences for U.S. Policy
2008-09-01
per response, including the time for reviewing instructions, searching existing data sources, gathering and maintaining the data needed, and completing...and, at the same time , develop and/ or acquire more sophisticated defensive technologies to protect high-value aim-points, including nuclear weapons...efforts to shape the political agendas of its Persian Gulf neighbors at a time when allies or coalition partners might not agree about the nature and
Metabolic Response to Injury and Role of Anabolic Hormones
2007-01-01
hepatic steatosis . Even in the face of high carbohydrate feeding, far and away the greatest component of re-esterified triglyceride is from the...periphery rather than de-novo synthesized fatty acid [28]. Therefore, hepatic steatosis associated with injury is due to fat substrate cycling from the...dependent tissues are assured an energy source by increased hepatic gluconeogenesis and peripheral resistance to insulin. While this is beneficial, to a point
Velocity: Speed with Direction. The Professional Career of Gen Jerome F. O’Malley
2007-09-01
per response, including the time for reviewing instructions, searching existing data sources, gathering and maintaining the data needed, and completing...other US government agency. Cleared for public release: distribution unlimited. Muir S. Fairchild Research Information Center Cataloging Data Casey...www.west-point.org/users/ usma1983/40768/docs/taylor.html (accessed 24 April 2007). 16. Officer Effectiveness Report, unpublished data , 2 January 1956. 17
Portable Fourier Transform Spectroscopy for Analysis of Surface Contamination and Quality Control
NASA Technical Reports Server (NTRS)
Pugel, Diane
2012-01-01
Progress has been made in adapting and enhancing a commercially available infrared spectrometer to develop a handheld device for in-field measurements of the chemical composition of various material samples. The intent is to duplicate the functionality of a benchtop Fourier transform infrared spectrometer (FTIR) within the compactness of a handheld instrument with significantly improved spectral responsivity. Existing commercial technology, like deuterated L-alanine-doped triglycine sulfate (DLATGS) detectors, is capable of sensitive in-field chemical analysis. The proposed approach compares several subsystem elements of the FTIR in the commercial non-benchtop system to those of commercial benchtop systems. These subsystem elements are the detector, the detector preamplifier and associated electronics, the interferometer, associated readout parameters, and cooling. The effort will examine these detector subsystem elements to look for limitations in each. These limitations will be explored collaboratively with the commercial provider and prioritized to meet the deliverable objectives. The tool design will be that of a handheld gun containing the IR filament source and associated optics. It will operate in a point-and-shoot manner, pointing the source and optics at the sample under test and capturing the reflected response of the material in the same handheld gun. Data will be captured via the gun and ported to a laptop.
Photometric Calibration of Consumer Video Cameras
NASA Technical Reports Server (NTRS)
Suggs, Robert; Swift, Wesley, Jr.
2007-01-01
Equipment and techniques have been developed to implement a method of photometric calibration of consumer video cameras for imaging of objects that are sufficiently narrow or sufficiently distant to be optically equivalent to point or line sources. Heretofore, it has been difficult to calibrate consumer video cameras, especially in cases of image saturation, because they exhibit nonlinear responses with dynamic ranges much smaller than those of scientific-grade video cameras. The present method not only takes this difficulty in stride but also makes it possible to extend effective dynamic ranges to several powers of ten beyond saturation levels. The method will likely be primarily useful in astronomical photometry. There are also potential commercial applications in medical and industrial imaging of point or line sources in the presence of saturation. This development was prompted by the need to measure brightnesses of debris in amateur video images of the breakup of the Space Shuttle Columbia. The purpose of these measurements is to use the brightness values to estimate relative masses of debris objects. In most of the images, the brightness of the main body of Columbia was found to exceed the dynamic ranges of the cameras. A similar problem arose a few years ago in the analysis of video images of Leonid meteors. The present method is a refined version of the calibration method developed to solve the Leonid calibration problem. In this method, one performs an end-to-end calibration of the entire imaging system, including not only the imaging optics and imaging photodetector array but also analog tape recording and playback equipment (if used) and any frame grabber or other analog-to-digital converter (if used). To automatically incorporate the effects of nonlinearity and any other distortions into the calibration, the calibration images are processed in precisely the same manner as are the images of meteors, space-shuttle debris, or other objects that one seeks to analyze. The light source used to generate the calibration images is an artificial variable star comprising a Newtonian collimator illuminated by a light source modulated by a rotating variable neutral-density filter. This source acts as a point source, the brightness of which varies at a known rate. A video camera to be calibrated is aimed at this source. Fixed neutral-density filters are inserted in or removed from the light path as needed to make the video image of the source appear to fluctuate between dark and saturated bright. The resulting video-image data are analyzed by use of custom software that determines the integrated signal in each video frame and determines the system response curve (measured output signal versus input brightness). These determinations constitute the calibration, which is thereafter used in automatic, frame-by-frame processing of the data from the video images to be analyzed.
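A minimal sketch of the response-curve step described above: given frames of the artificial star at known relative input brightnesses, build the measured-output-versus-input curve and invert it by interpolation for later measurements. The frame-integration details and the synthetic, saturating response used here are placeholders, not the custom software of the report.

```python
import numpy as np

# Known relative input brightness for each calibration frame (from the rotating
# neutral-density filter schedule) and the integrated signal measured in the
# corresponding video frame -- placeholder arrays standing in for real data.
input_brightness = np.linspace(0.01, 1.0, 200)
measured_signal = 4000.0 * np.tanh(3.0 * input_brightness)   # fake nonlinear, saturating response

# System response curve: measured output versus input brightness.
# Invert it by interpolation to convert later measurements to brightness.
order = np.argsort(measured_signal)

def signal_to_brightness(signal):
    return np.interp(signal, measured_signal[order], input_brightness[order])

print(signal_to_brightness(2000.0))   # relative brightness implied by 2000 counts
```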
Seismoelectric imaging of shallow targets
Haines, S.S.; Pride, S.R.; Klemperer, S.L.; Biondi, B.
2007-01-01
We have undertaken a series of controlled field experiments to develop seismoelectric experimental methods for near-surface applications and to improve our understanding of seismoelectric phenomena. In a set of off-line geometry surveys (source separated from the receiver line), we place seismic sources and electrode array receivers on opposite sides of a man-made target (two sand-filled trenches) to record separately two previously documented seismoelectric modes: (1) the electromagnetic interface response signal created at the target and (2) the coseismic electric fields located within a compressional seismic wave. With the seismic source point in the center of a linear electrode array, we identify the previously undocumented seismoelectric direct field, and the Lorentz field of the metal hammer plate moving in the earth's magnetic field. We place the seismic source in the center of a circular array of electrodes (radial and circumferential orientations) to analyze the source-related direct and Lorentz fields and to establish that these fields can be understood in terms of simple analytical models. Using an off-line geometry, we create a multifold, 2D image of our trenches as dipping layers, and we also produce a complementary synthetic image through numerical modeling. These images demonstrate that off-line geometry (e.g., crosswell) surveys offer a particularly promising application of the seismoelectric method because they effectively separate the interface response signal from the (generally much stronger) coseismic and source-related fields. © 2007 Society of Exploration Geophysicists.
Lofgren, E.J.
1959-04-14
This patent relates to calutron devices and deals particularly with the mechanism used to produce the beam of ions, wherein a charge material that is a vapor at room temperature is used. A charge container located outside the tank is connected through several conduits to various points along the arc chamber of the ion source. In addition, the rate of flow of the vapor to the arc chamber is controlled by a throttle valve in each conduit. By this arrangement the arc can be regulated accurately and without appreciable time lag, inasmuch as the rate of vapor flow is immediately responsive to manipulation of the throttle valves.
DIRBE External Calibrator (DEC)
NASA Technical Reports Server (NTRS)
Wyatt, Clair L.; Thurgood, V. Alan; Allred, Glenn D.
1987-01-01
Under NASA Contract No. NAS5-28185, the Center for Space Engineering at Utah State University has produced a calibration instrument for the Diffuse Infrared Background Experiment (DIRBE). DIRBE is one of the instruments aboard the Cosmic Background Experiment Observatory (COBE). The calibration instrument is referred to as the DEC (Dirbe External Calibrator). DEC produces a steerable, infrared beam of controlled spectral content and intensity and with selectable point source or diffuse source characteristics, that can be directed into the DIRBE to map fields and determine response characteristics. This report discusses the design of the DEC instrument, its operation and characteristics, and provides an analysis of the systems capabilities and performance.
Reproducibility of Interferon Gamma (IFN-γ) Release Assays. A Systematic Review
Tagmouti, Saloua; Slater, Madeline; Benedetti, Andrea; Kik, Sandra V.; Banaei, Niaz; Cattamanchi, Adithya; Metcalfe, John; Dowdy, David; van Zyl Smit, Richard; Dendukuri, Nandini
2014-01-01
Rationale: Interferon gamma (IFN-γ) release assays for latent tuberculosis infection result in a larger-than-expected number of conversions and reversions in occupational screening programs, and reproducibility of test results is a concern. Objectives: Knowledge of the relative contribution and extent of the individual sources of variability (immunological, preanalytical, or analytical) could help optimize testing protocols. Methods: We performed a systematic review of studies published by October 2013 on all potential sources of variability of commercial IFN-γ release assays (QuantiFERON-TB Gold In-Tube and T-SPOT.TB). The included studies assessed test variability under identical conditions and under different conditions (the latter both overall and stratified by individual sources of variability). Linear mixed effects models were used to estimate within-subject SD. Measurements and Main Results: We identified a total of 26 articles, including 7 studies analyzing variability under the same conditions, 10 studies analyzing variability with repeat testing over time under different conditions, and 19 studies reporting individual sources of variability. Most data were on QuantiFERON (only three studies on T-SPOT.TB). A considerable number of conversions and reversions were seen around the manufacturer-recommended cut-point. The estimated range of variability of IFN-γ response in QuantiFERON under identical conditions was ±0.47 IU/ml (coefficient of variation, 13%) and ±0.26 IU/ml (30%) for individuals with an initial IFN-γ response in the borderline range (0.25–0.80 IU/ml). The estimated range of variability in noncontrolled settings was substantially larger (±1.4 IU/ml; 60%). Blood volume inoculated into QuantiFERON tubes and preanalytic delay were identified as key sources of variability. Conclusions: This systematic review shows substantial variability with repeat IFN-γ release assays testing even under identical conditions, suggesting that reversions and conversions around the existing cut-point should be interpreted with caution. PMID:25188809
Gun Testing Ballistics Issues for Insensitive Munitions Fragment Impact Testing
NASA Astrophysics Data System (ADS)
Baker, Ernest; Schultz, Emmanuel; NATO Munitions Safety Information Analysis Centre Team
2017-06-01
The STANAG 4496 Ed. 1 Fragment Impact, Munitions Test Procedure is normally conducted by gun-launching a projectile against a munition. The purpose of this test is to assess the reaction of a munition impacted by a fragment. The test specifies a standardized projectile (fragment) with a standard test velocity of 2530 ± 90 m/s, or an alternate test velocity of 1830 ± 60 m/s. The standard test velocity can be challenging to achieve and has several loosely defined and undefined characteristics that can affect the test item response. This publication documents the results of an international review of STANAG 4496 related to the fragment impact test. To perform the review, MSIAC created a questionnaire in conjunction with the custodian of this STANAG and sent it to test centers. Fragment velocity variation, projectile tilt upon impact and aim point variation were identified as observed gun testing issues. Achieving 2530 m/s consistently and cost-effectively can be challenging. The aim point of impact of the fragment is chosen with the objective of obtaining the most violent reaction. No tolerance for the aim point is specified, although aim point variation can be a source of IM response variation. Fragment tilt on impact is also unspecified. The standard fragment is fabricated from a variety of different steels, which have a significant margin in mechanical properties. These, as well as other gun testing issues, have significant implications for the resulting IM response.
Thermal actuation of extinguishing systems
NASA Astrophysics Data System (ADS)
Evans, D. D.
1984-03-01
A brief review of the Response Time Index (RTI) method of characterizing the thermal response of commercial sprinklers and heat detectors is presented. Measured ceiling-layer flow temperature and velocity histories from a bedroom fire test are used to illustrate the use of RTI in calculating sprinkler operation times. In small enclosure fires, a quiescent warm gas layer confined by the room walls may accumulate below the ceiling before sprinkler operation. The effects of this warm gas layer on the fire plume and ceiling jet flows are accounted for by substituting an equivalent point source fire. Encouraging agreement was found between measured ceiling jet temperatures from steady fires in a laboratory-scale cylindrical enclosure, put into dimensionless form based on parameters of the substitute fire source, and existing empirical correlations from fire tests in large enclosures in which a quiescent warm upper gas layer does not accumulate.
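The RTI characterization referred to above is commonly expressed as a first-order lag of the sprinkler-link temperature behind the ceiling-jet gas temperature. A minimal sketch of that calculation follows; the gas temperature and velocity histories, RTI value, and activation temperature are illustrative assumptions, not the bedroom-fire test data.

```python
import numpy as np

def sprinkler_activation_time(t, T_gas, u_gas, rti, T0=20.0, T_act=68.0):
    """Integrate dT_link/dt = sqrt(u) / RTI * (T_gas - T_link) with forward Euler.

    t : times (s), T_gas : gas temperature (C), u_gas : gas velocity (m/s),
    rti : Response Time Index (m^0.5 s^0.5). Returns the first time the link
    reaches T_act, or None if it never does.
    """
    T_link = T0
    for i in range(1, len(t)):
        dt = t[i] - t[i - 1]
        T_link += dt * np.sqrt(max(u_gas[i], 0.0)) / rti * (T_gas[i] - T_link)
        if T_link >= T_act:
            return t[i]
    return None

# Illustrative ceiling-jet histories (assumptions, not measured data)
t = np.arange(0, 300, 1.0)
T_gas = 20.0 + 150.0 * (1 - np.exp(-t / 60.0))
u_gas = 0.5 + 1.5 * (1 - np.exp(-t / 60.0))
print(sprinkler_activation_time(t, T_gas, u_gas, rti=100.0))
```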
Children's Vantage Point of Recalling Traumatic Events.
Dawson, Katie S; Bryant, Richard A
2016-01-01
This study investigated the recollections of child survivors of the 2004 Asian tsunami in terms of their vantage point and posttraumatic stress disorder (PTSD) responses. Five years after the tsunami, 110 children (aged 7-13 years) living in Aceh, Indonesia were assessed for the source of their memories of the tsunami (personal memory or second-hand source) and the vantage point of the memory, and were administered the Children's Revised Impact of Event Scale-13. Fifty-three children (48%) met criteria for PTSD. Two-thirds of the children reported direct memories of the tsunami and one-third reported having memories based on reports from other people. More of the children who reported an indirect memory of the tsunami (97%) recalled the event from an onlooker's perspective to some extent than those who recalled the event directly (63%). Boys were more likely to rely on stories from others to reconstruct their memory of the tsunami, and to adopt an observer perspective. Boys who adopted an observer's perspective had less severe PTSD than those who adopted a field perspective. These findings suggest that, at least in the case of boys, an observer perspective of trauma can be associated with lower levels of PTSD.
Ghost imaging with bucket detection and point detection
NASA Astrophysics Data System (ADS)
Zhang, De-Jian; Yin, Rao; Wang, Tong-Biao; Liao, Qing-Hua; Li, Hong-Guo; Liao, Qinghong; Liu, Jiang-Tao
2018-04-01
We experimentally investigate ghost imaging with bucket detection and point detection in which three types of illuminating sources are applied: (a) a pseudo-thermal light source; (b) an amplitude-modulated true thermal light source; (c) an amplitude-modulated laser source. Experimental results show that the quality of ghost images reconstructed with true thermal light or a laser beam is insensitive to the use of a bucket or point detector; however, the quality of ghost images reconstructed with pseudo-thermal light is better in the bucket-detector case than in the point-detector case. Our theoretical analysis shows that this is due to the first-order transverse coherence of the illuminating source.
Distinguishing dark matter from unresolved point sources in the Inner Galaxy with photon statistics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, Samuel K.; Lisanti, Mariangela; Safdi, Benjamin R., E-mail: samuelkl@princeton.edu, E-mail: mlisanti@princeton.edu, E-mail: bsafdi@princeton.edu
2015-05-01
Data from the Fermi Large Area Telescope suggests that there is an extended excess of GeV gamma-ray photons in the Inner Galaxy. Identifying potential astrophysical sources that contribute to this excess is an important step in verifying whether the signal originates from annihilating dark matter. In this paper, we focus on the potential contribution of unresolved point sources, such as millisecond pulsars (MSPs). We propose that the statistics of the photons—in particular, the flux probability density function (PDF) of the photon counts below the point-source detection threshold—can potentially distinguish between the dark-matter and point-source interpretations. We calculate the flux PDF via the method of generating functions for these two models of the excess. Working in the framework of Bayesian model comparison, we then demonstrate that the flux PDF can potentially provide evidence for an unresolved MSP-like point-source population.
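As an illustration of why sub-threshold photon statistics can separate the two hypotheses, the following hedged Monte Carlo (not the authors' generating-function calculation) compares per-pixel count histograms for a smooth, Poisson-like emission model and a point-source population tuned to the same mean flux; all numbers are placeholders.

```python
import numpy as np

rng = np.random.default_rng(1)
n_pix = 100_000
mean_counts = 2.0                     # mean photons per pixel (placeholder)

# Smooth (dark-matter-like) emission: purely Poisson counts per pixel
smooth = rng.poisson(mean_counts, n_pix)

# Unresolved point-source population: few sources per pixel, each relatively
# bright, tuned to the same mean flux. Mean photons per source is a placeholder.
mu_src = 10.0
n_src = rng.poisson(mean_counts / mu_src, n_pix)   # sources per pixel
ps = rng.poisson(mu_src * n_src)                   # photons per pixel

# Same mean, different tails: the point-source model has more empty pixels and
# more very bright pixels, which is the statistical handle encoded in the PDF.
print(smooth.mean(), ps.mean())
print((smooth == 0).mean(), (ps == 0).mean())
print((smooth > 10).mean(), (ps > 10).mean())
```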
NASA Astrophysics Data System (ADS)
Nagasaka, Yosuke; Nozu, Atsushi
2017-02-01
The pseudo point-source model approximates the rupture process on faults with multiple point sources for simulating strong ground motions. A simulation with this point-source model is conducted by combining a simple source spectrum following the omega-square model with a path spectrum, an empirical site amplification factor, and phase characteristics. Realistic waveforms can be synthesized using the empirical site amplification factor and phase models even though the source model is simple. The Kumamoto earthquake occurred on April 16, 2016, with M_JMA 7.3. Many strong motions were recorded at stations around the source region. Some records were considered to be affected by the rupture directivity effect. This earthquake was suitable for investigating the applicability of the pseudo point-source model, the current version of which does not consider the rupture directivity effect. Three subevents (point sources) were located on the fault plane, and the parameters of the simulation were determined. The simulated results were compared with the observed records at K-NET and KiK-net stations. It was found that the synthetic Fourier spectra and velocity waveforms generally explained the characteristics of the observed records, except for underestimation in the low frequency range. Troughs in the observed Fourier spectra were also well reproduced by placing multiple subevents near the hypocenter. The underestimation is presumably due to the following two reasons. The first is that the pseudo point-source model targets subevents that generate strong ground motions and does not consider the shallow large slip. The second reason is that the current version of the pseudo point-source model does not consider the rupture directivity effect. Consequently, strong pulses were not reproduced enough at stations northeast of Subevent 3 such as KMM004, where the effect of rupture directivity was significant, while the amplitude was well reproduced at most of the other stations. This result indicates the necessity for improving the pseudo point-source model, by introducing azimuth-dependent corner frequency for example, so that it can incorporate the effect of rupture directivity.
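A hedged sketch of the spectral combination described above: an omega-square source spectrum multiplied by a simple path term (geometric spreading plus anelastic attenuation) and a site amplification factor. The functional forms and parameter values are generic textbook choices, not the calibrated subevent parameters of this study, and the constant factors (radiation pattern, density, velocity) are omitted.

```python
import numpy as np

def fourier_amplitude(f, m0, fc, r, q0=100.0, beta=3500.0, site_amp=1.0):
    """Schematic far-field acceleration spectrum at hypocentral distance r (m).

    source: omega-square model  (2*pi*f)**2 * m0 / (1 + (f/fc)**2)
    path  : 1/r geometric spreading and exp(-pi f r / (Q beta)) attenuation
    site  : frequency-independent amplification here (empirical factors in practice)
    """
    source = (2 * np.pi * f) ** 2 * m0 / (1.0 + (f / fc) ** 2)
    path = np.exp(-np.pi * f * r / (q0 * beta)) / r
    return source * path * site_amp

f = np.logspace(-1, 1.5, 200)                      # 0.1-30 Hz
spec = fourier_amplitude(f, m0=1e19, fc=0.3, r=30e3)
```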
STATISTICS OF GAMMA-RAY POINT SOURCES BELOW THE FERMI DETECTION LIMIT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Malyshev, Dmitry; Hogg, David W., E-mail: dm137@nyu.edu
2011-09-10
An analytic relation between the statistics of photons in pixels and the number counts of multi-photon point sources is used to constrain the distribution of gamma-ray point sources below the Fermi detection limit at energies above 1 GeV and at latitudes below and above 30 deg. The derived source-count distribution is consistent with the distribution found by the Fermi Collaboration based on the first Fermi point-source catalog. In particular, we find that the contribution of resolved and unresolved active galactic nuclei (AGNs) to the total gamma-ray flux is below 20%-25%. In the best-fit model, the AGN-like point-source fraction is 17% ± 2%. Using the fact that the Galactic emission varies across the sky while the extragalactic diffuse emission is isotropic, we put a lower limit of 51% on Galactic diffuse emission and an upper limit of 32% on the contribution from extragalactic weak sources, such as star-forming galaxies. Possible systematic uncertainties are discussed.
MODELING PHOTOCHEMISTRY AND AEROSOL FORMATION IN POINT SOURCE PLUMES WITH THE CMAQ PLUME-IN-GRID
Emissions of nitrogen oxides and sulfur oxides from the tall stacks of major point sources are important precursors of a variety of photochemical oxidants and secondary aerosol species. Plumes released from point sources exhibit rather limited dimensions and their growth is gradu...
X-ray Point Source Populations in Spiral and Elliptical Galaxies
NASA Astrophysics Data System (ADS)
Colbert, E.; Heckman, T.; Weaver, K.; Ptak, A.; Strickland, D.
2001-12-01
In the era of the Einstein and ASCA satellites, it was known that the total hard X-ray luminosity from non-AGN galaxies was fairly well correlated with the total blue luminosity. However, the origin of this hard component was not well understood. Some possibilities that were considered included X-ray binaries, extended upscattered far-infrared light via the inverse-Compton process, extended hot 10⁷ K gas (especially in elliptical galaxies), or even an active nucleus. Now, for the first time, we know from Chandra images that a significant amount of the total hard X-ray emission comes from individual X-ray point sources. We present here spatial and spectral analyses of Chandra data for X-ray point sources in a sample of ~40 galaxies, including both spiral galaxies (starbursts and non-starbursts) and elliptical galaxies. We discuss the relationship between the X-ray point source population and the properties of the host galaxies. We show that the slopes of the point-source X-ray luminosity functions differ for different host galaxy types and discuss possible reasons why. We also present detailed X-ray spectral analyses of several of the most luminous X-ray point sources (i.e., IXOs, a.k.a. ULXs), and discuss various scenarios for their origin.
Perspectives on geopressured resources within the geothermal program
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dibona, B.
1980-06-01
This work reviews the potential of geothermal energy in the U.S. Current sources of and uses for geothermal energy are described. The study outlines how geopressured resources fit into the geothermal program of the U.S. Department of Energy (DOE). Description of the program status includes progress in drilling and assessing geopressured resources. The Division of Geothermal Energy within DOE is responsible for geothermal resources comprising point heat sources (igneous); high heat flow regions such as those between the Sierras and the Rockies; radiogenic heat sources of moderate temperatures of the eastern U.S. coast; geopressured zones; and hot dry rock systems. Interest in these resources focuses on electric power production, direct heat application, and methane production from the geopressured aquifers.
Saucedo-García, Mariana; Gavilanes-Ruíz, Marina; Arce-Cervantes, Oscar
2015-01-01
Due to their sessile condition, plants have developed sensitive, fast, and effective ways to contend with environmental changes. These mechanisms operate as informational wires forming extensive and intricate networks that are connected at several points. The responses are organized as pathways orchestrated by transducer molecules of both protein and non-protein nature. Their chemical nature imposes selective features, such as specificity, formation rate, and generation site, on the informational routes. Enzymes such as mitogen-activated protein kinases and smaller non-protein molecules, such as long-chain bases, phosphatidic acid, and reactive oxygen species, are recurrent transducers in the pleiotropic responses to biotic and abiotic stresses in plants. In this review, we consider these four components as nodal points of converging signaling pathways that start from very diverse stimuli and evoke very different responses. These pleiotropic effects may be explained by the potential for each of these four mediators to be produced from different sources, cellular locations, temporalities, or magnitudes. Here, we review recent advances in our understanding of the interplay of these four specific signaling components in Arabidopsis cells, with an emphasis on drought, cold and pathogen stresses. PMID:25763001
Reciprocity-based experimental determination of dynamic forces and moments: A feasibility study
NASA Technical Reports Server (NTRS)
Ver, Istvan L.; Howe, Michael S.
1994-01-01
BBN Systems and Technologies was tasked by the Georgia Tech Research Center to carry out Task Assignment No. 7 for the NASA Langley Research Center, to explore the feasibility of 'In-Situ Experimental Evaluation of the Source Strength of Complex Vibration Sources Utilizing Reciprocity.' The task was carried out under NASA Contract No. NAS1-19061. In flight it is not feasible to connect the vibration sources to their mounting points on the fuselage through force gauges to measure dynamic forces and moments directly. However, it is possible to measure the interior sound field or vibration response caused by these structure-borne sound sources at many locations and invoke the principle of reciprocity to predict the dynamic forces and moments. The work carried out in the framework of Task 7 explored the feasibility of reciprocity-based measurements of vibration forces and moments.
NASA Astrophysics Data System (ADS)
Saracco, Ginette; Labazuy, Philippe; Moreau, Frédérique
2004-06-01
This study concerns the fluid circulation associated with magmatic intrusion during volcanic eruptions, investigated through electrical tomography. The objective is to localize and characterize the sources responsible for electrical disturbances during a time-evolution survey of an active volcano, Piton de la Fournaise, between 1993 and 1999. We have applied dipolar probability tomography and a multi-scale analysis to synthetic and experimental self-potential (SP) data. We show the advantage of the complex continuous wavelet transform, which makes it possible to obtain directional information from the phase without a priori information on the sources. In both cases, we point out a translation of potential sources through the upper depths, around specific faults or structural features, during periods preceding a volcanic eruption. The set of parameters obtained (vertical and horizontal localization, multipolar degree and inclination) could be taken into account as criteria for defining volcanic precursors.
Guralnick, M J; Hammond, M A; Neville, B; Connor, R T
2008-12-01
In this longitudinal study, we examined the relationship between the sources and functions of social support and dimensions of child- and parent-related stress for mothers of young children with mild developmental delays. Sixty-three mothers completed assessments of stress and support at two time points. Multiple regression analyses revealed that parenting support during the early childhood period (i.e. advice on problems specific to their child and assistance with child care responsibilities), irrespective of source, consistently predicted most dimensions of parent stress assessed during the early elementary years and contributed unique variance. General support (i.e. primarily emotional support and validation) from various sources had other, less widespread effects on parental stress. The multidimensional perspective of the construct of social support that emerged suggested mechanisms mediating the relationship between support and stress and provided a framework for intervention.
NASA Astrophysics Data System (ADS)
Sarangapani, R.; Jose, M. T.; Srinivasan, T. K.; Venkatraman, B.
2017-07-01
Methods for determining the efficiency of an aged high-purity germanium (HPGe) detector for gaseous sources are presented in this paper. X-ray radiography of the detector was performed to obtain detector dimensions for computational purposes. The dead-layer thickness of the HPGe detector was ascertained from experiments and Monte Carlo computations. Experimental work with standard point and liquid sources in several cylindrical geometries was undertaken to obtain the energy-dependent efficiency. Monte Carlo simulations were performed to compute efficiencies for point, liquid and gaseous sources. Self-absorption correction factors were obtained using mathematical equations for volume sources and MCNP simulations. The self-absorption correction and point-source methods were used to estimate the efficiency for gaseous sources. The efficiencies determined in the present work were used to estimate the activity of a cover gas sample from a fast reactor.
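A minimal sketch of the point-source step in an efficiency determination of this kind: full-energy-peak efficiency from net peak counts, live time, certified activity, and gamma emission probability, with a multiplicative geometry/self-absorption transfer factor applied for the gaseous geometry. All numbers are placeholders, not the paper's measurements, and the transfer factor is assumed to come from a ratio of simulated efficiencies.

```python
def fep_efficiency(net_counts, live_time_s, activity_bq, emission_prob):
    """Full-energy-peak efficiency measured with a standard point source."""
    return net_counts / (activity_bq * emission_prob * live_time_s)

def gas_geometry_efficiency(point_eff, transfer_factor):
    """Apply a geometry/self-absorption transfer factor (e.g. a simulated
    gas-to-point efficiency ratio) to get the gaseous-source efficiency."""
    return point_eff * transfer_factor

# Placeholder numbers for illustration only
eff_point = fep_efficiency(net_counts=152_000, live_time_s=3600,
                           activity_bq=35_000, emission_prob=0.85)
eff_gas = gas_geometry_efficiency(eff_point, transfer_factor=0.62)
print(eff_point, eff_gas)
```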
Alvarsson, Jonathan; Andersson, Claes; Spjuth, Ola; Larsson, Rolf; Wikberg, Jarl E S
2011-05-20
Compound profiling and drug screening generate large amounts of data and are generally based on microplate assays. Current information systems used for handling this are mainly commercial, closed-source, expensive, and heavyweight, so there is a need for a flexible, lightweight, open system for handling plate design, and for validation and preparation of data. A Bioclipse plugin consisting of a client part and a relational database was constructed. A multiple-step, point-and-click plate layout interface was implemented inside Bioclipse. The system contains a data validation step, where outliers can be removed, and finally produces a plate report with all relevant calculated data, including dose-response curves. Brunn is capable of handling the data from microplate assays; it can create dose-response curves and calculate IC50 values. Using a system of this sort facilitates work in the laboratory. Being able to reuse already constructed plates and plate layouts by starting from an earlier step in the plate layout design process saves time and cuts down on error sources.
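The dose-response/IC50 step mentioned above is typically a four-parameter logistic fit; a hedged Python sketch of that calculation (not Brunn's own Bioclipse code) follows, using synthetic plate readings.

```python
import numpy as np
from scipy.optimize import curve_fit

def four_pl(conc, bottom, top, ic50, hill):
    """Four-parameter logistic dose-response curve."""
    return bottom + (top - bottom) / (1.0 + (conc / ic50) ** hill)

# Synthetic plate readings: response (%) versus compound concentration (uM)
conc = np.array([0.01, 0.03, 0.1, 0.3, 1.0, 3.0, 10.0, 30.0])
resp = np.array([98, 95, 88, 70, 48, 25, 12, 6], dtype=float)

popt, _ = curve_fit(four_pl, conc, resp, p0=[5.0, 100.0, 1.0, 1.0])
bottom, top, ic50, hill = popt
print(f"IC50 ~ {ic50:.2f} uM, Hill slope ~ {hill:.2f}")
```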
The Gravity Wave Response Above Deep Convection in a Squall Line Simulation
NASA Technical Reports Server (NTRS)
Alexander, M. J.; Holton, J. R.; Durran, D. R.
1995-01-01
High-frequency gravity waves generated by convective storms likely play an important role in the general circulation of the middle atmosphere. Yet little is known about waves from this source. This work utilizes a fully compressible, nonlinear, numerical, two-dimensional simulation of a midlatitude squall line to study vertically propagating waves generated by deep convection. The model includes a deep stratosphere layer with high enough resolution to characterize the wave motions at these altitudes. A spectral analysis of the stratospheric waves provides an understanding of the necessary characteristics of the spectrum for future studies of their effects on the middle atmosphere in realistic mean wind scenarios. The wave spectrum also displays specific characteristics that point to the physical mechanisms within the storm responsible for their forcing. Understanding these forcing mechanisms and the properties of the storm and atmosphere that control them are crucial first steps toward developing a parameterization of waves from this source. The simulation also provides a description of some observable signatures of convectively generated waves, which may promote observational verification of these results and help tie any such observations to their convective source.
Ardila-Rey, Jorge Alfredo; Rojas-Moreno, Mónica Victoria; Martínez-Tarifa, Juan Manuel; Robles, Guillermo
2014-01-01
Partial discharge (PD) detection is a standardized technique to qualify electrical insulation in machines and power cables. Several techniques that analyze the waveform of the pulses have been proposed to discriminate noise from PD activity. Among them, spectral power ratio representation shows great flexibility in the separation of the sources of PD. Mapping spectral power ratios in two-dimensional plots leads to clusters of points which group pulses with similar characteristics. The position in the map depends on the nature of the partial discharge, the setup and the frequency response of the sensors. If these clusters are clearly separated, the subsequent task of identifying the source of the discharge is straightforward so the distance between clusters can be a figure of merit to suggest the best option for PD recognition. In this paper, two inductive sensors with different frequency responses to pulsed signals, a high frequency current transformer and an inductive loop sensor, are analyzed to test their performance in detecting and separating the sources of partial discharges. PMID:24556674
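A hedged sketch of the spectral-power-ratio mapping described above: each acquired pulse is reduced to a pair of power fractions (low band and high band of its spectrum), and pulses from the same source tend to cluster in this 2-D map. The band edges, sampling rate, and synthetic pulse are illustrative assumptions, not the sensors' actual frequency responses.

```python
import numpy as np

def spectral_power_ratios(pulse, fs, low_band=(1e6, 10e6), high_band=(10e6, 40e6)):
    """Return (low-band, high-band) power fractions of a sampled pulse."""
    spectrum = np.abs(np.fft.rfft(pulse)) ** 2
    freqs = np.fft.rfftfreq(len(pulse), d=1.0 / fs)
    total = spectrum.sum()
    low = spectrum[(freqs >= low_band[0]) & (freqs < low_band[1])].sum() / total
    high = spectrum[(freqs >= high_band[0]) & (freqs < high_band[1])].sum() / total
    return low, high

# Illustrative pulse: damped oscillation plus noise, sampled at 100 MS/s
fs = 100e6
t = np.arange(0, 2e-6, 1.0 / fs)
pulse = np.exp(-t / 2e-7) * np.sin(2 * np.pi * 5e6 * t) + 0.01 * np.random.randn(t.size)

print(spectral_power_ratios(pulse, fs))
```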
In Search of the ‘New Informal Legitimacy’ of Médecins Sans Frontières
Calain, Philippe
2012-01-01
For medical humanitarian organizations, making their sources of legitimacy explicit is a useful exercise, in response to: misperceptions, concerns over the ‘humanitarian space’, controversies about specific humanitarian actions, challenges about resources allocation and moral suffering among humanitarian workers. This is also a difficult exercise, where normative criteria such as international law or humanitarian principles are often misrepresented as primary sources of legitimacy. This essay first argues for a morally principled definition of humanitarian medicine, based on the selfless intention of individual humanitarian actors. Taking Médecins Sans Frontières (MSF) as a case in point, a common source of moral legitimacy for medical humanitarian organizations is their cosmopolitan appeal to distributive justice and collective responsibility. More informally, their legitimacy is grounded in the rightfulness of specific actions and choices. This implies a constant commitment to publicity and accountability. Legitimacy is also generated by tangible support from the public to individual organizations, by commitments to professional integrity, and by academic alliances to support evidence-based practice and operational research. PMID:22442647
Discrimination between diffuse and point sources of arsenic at Zimapán, Hidalgo state, Mexico.
Sracek, Ondra; Armienta, María Aurora; Rodríguez, Ramiro; Villaseñor, Guadalupe
2010-01-01
There are two principal sources of arsenic in Zimapán. Point sources are linked to mining and smelting activities and especially to mine tailings. Diffuse sources are not well defined and are linked to regional flow systems in carbonate rocks. Both sources are caused by the oxidation of arsenic-rich sulfidic mineralization. Point sources are characterized by Ca-SO₄-HCO₃ ground water type and relatively enriched values of δD, δ¹⁸O, and δ³⁴S(SO₄). Diffuse sources are characterized by Ca-Na-HCO₃ type of ground water and more depleted values of δD, δ¹⁸O, and δ³⁴S(SO₄). Values of δD and δ¹⁸O indicate a similar altitude of recharge for both arsenic sources and a stronger impact of evaporation for point sources in mine tailings. There are also different values of δ³⁴S(SO₄) for the two sources, presumably due to different types of mineralization or isotopic zonality in deposits. In Principal Component Analysis (PCA), the principal component 1 (PC1), which describes the impact of sulfide oxidation and neutralization by the dissolution of carbonates, has higher values in samples from point sources. In spite of similar concentrations of As in ground water affected by diffuse sources and point sources (mean values 0.21 mg L⁻¹ and 0.31 mg L⁻¹, respectively, in the years from 2003 to 2008), the diffuse sources have more impact on the health of the population in Zimapán. This is caused by the extraction of ground water from wells tapping the regional flow system. In contrast, wells located in the proximity of mine tailings are not generally used for water supply.
Development of a low background test facility for the SPICA-SAFARI on-ground calibration
NASA Astrophysics Data System (ADS)
Dieleman, P.; Laauwen, W. M.; Ferrari, L.; Ferlet, M.; Vandenbussche, B.; Meinsma, L.; Huisman, R.
2012-09-01
SAFARI is a far-infrared camera to be launched in 2021 onboard the SPICA satellite. SAFARI offers imaging spectroscopy and imaging photometry in the wavelength range of 34 to 210 μm with a detector NEP of 2×10⁻¹⁹ W/√Hz. A cryogenic test facility for SAFARI on-ground calibration and characterization is being developed. The main design driver is the required low background of a few attowatts per pixel. This prohibits optical access to room temperature, and hence all test equipment needs to be inside the cryostat at 4.5 K. The instrument parameters to be verified are interfaces with the SPICA satellite, sensitivity, alignment, image quality, spectral response, frequency calibration, and point spread function. The instrument sensitivity is calibrated by a calibration source providing a spatially homogeneous signal at the attowatt level. This low light intensity is achieved by geometrical dilution of a 150 K source into an integrating sphere. The beam quality and point spread function are measured by a pinhole/mask plate wheel, back-illuminated by a second integrating sphere. This sphere is fed by a stable wide-band source, providing spectral lines via a cryogenic etalon.
Simulation and Spectrum Extraction in the Spectroscopic Channel of the SNAP Experiment
NASA Astrophysics Data System (ADS)
Tilquin, Andre; Bonissent, A.; Gerdes, D.; Ealet, A.; Prieto, E.; Macaire, C.; Aumenier, M. H.
2007-05-01
Pixel-level simulation software is described. It is composed of two modules. The first module applies Fourier optics at each active element of the system to construct the PSF for a large variety of wavelengths and spatial locations of the point source. The input is provided by the engineer's design program (Zemax), which describes the optical path and the distortions. The PSF properties are compressed and interpolated using shapelet decomposition and neural network techniques. A second module is used for production jobs. It uses the output of the first module to reconstruct the relevant PSF and integrate it over the detector pixels. Extended and polychromatic sources are approximated by a combination of monochromatic point sources. For the spectrum extraction, we use a fast simulator based on a multidimensional linear interpolation of the pixel response tabulated on a grid of values of wavelength, position on the sky and slice number. The prediction of the fast simulator is compared to the observed pixel content, and a chi-square minimization in which the parameters are the bin contents is used to build the extracted spectrum. The visible and infrared arms are combined in the same chi-square, providing a single spectrum.
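Because the chi-square described above is linear in the spectral-bin contents, the extraction can be sketched as a weighted linear least-squares problem; the response matrix, noise level, and data below are placeholders standing in for the fast simulator's tabulated pixel responses.

```python
import numpy as np

rng = np.random.default_rng(2)
n_pix, n_bins = 500, 40

# Placeholder response matrix R[i, j]: pixel i's response to unit flux in
# wavelength bin j (in the experiment this comes from the fast simulator).
R = np.abs(rng.normal(size=(n_pix, n_bins)))

true_spectrum = 1.0 + np.sin(np.linspace(0, 3 * np.pi, n_bins)) ** 2
pixels = R @ true_spectrum + rng.normal(scale=0.5, size=n_pix)
sigma = 0.5 * np.ones(n_pix)          # per-pixel noise estimate

# Chi-square minimization, linear in the bin contents: weighted least squares
A = R / sigma[:, None]
b = pixels / sigma
extracted, *_ = np.linalg.lstsq(A, b, rcond=None)
```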
NASA Astrophysics Data System (ADS)
Chhetri, R.; Ekers, R. D.; Morgan, J.; Macquart, J.-P.; Franzen, T. M. O.
2018-06-01
We use Murchison Widefield Array observations of interplanetary scintillation (IPS) to determine the source counts of point (<0.3 arcsecond extent) sources and of all sources with some subarcsecond structure, at 162 MHz. We have developed the methodology to derive these counts directly from the IPS observables, while taking into account changes in sensitivity across the survey area. The counts of sources with compact structure follow the behaviour of the dominant source population above ˜3 Jy, but below this they show Euclidean behaviour. We compare our counts to those predicted by simulations and find good agreement for our counts of sources with compact structure, but significant disagreement for the point source counts. Using low-radio-frequency SEDs from the GLEAM survey, we classify point sources as compact steep-spectrum (CSS), flat-spectrum, or peaked. If we consider the CSS sources to be the more evolved counterparts of the peaked sources, the two categories combined comprise approximately 80% of the point source population. We calculate densities of potential calibrators brighter than 0.4 Jy at low frequencies and find 0.2 sources per square degree for point sources, rising to 0.7 sources per square degree if sources with more complex arcsecond structure are included. We extrapolate to estimate 4.6 sources per square degree at 0.04 Jy. We find that a peaked spectrum is an excellent predictor of compactness at low frequencies, increasing the number of good calibrators by a factor of three compared to the usual flat-spectrum criterion.
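A hedged sketch of how differential source counts are commonly formed from a flux list (fluxes binned, divided by bin width and effective survey area, and Euclidean-normalized by S^2.5); the toy flux distribution and constant effective area used here are placeholders, not the survey's sensitivity-corrected analysis.

```python
import numpy as np

def differential_counts(fluxes_jy, area_sr, bin_edges_jy):
    """Euclidean-normalised differential counts S**2.5 dN/dS per steradian."""
    n, edges = np.histogram(fluxes_jy, bins=bin_edges_jy)
    widths = np.diff(edges)
    centres = np.sqrt(edges[:-1] * edges[1:])   # geometric bin centres
    dnds = n / (widths * area_sr)
    return centres, centres ** 2.5 * dnds

# Placeholder flux list and survey area (not the MWA IPS data)
rng = np.random.default_rng(3)
fluxes = 0.4 * (1.0 + rng.pareto(1.5, 5000))    # Jy, toy power-law-like distribution
edges = np.logspace(np.log10(0.4), np.log10(50.0), 12)
s, euclid_counts = differential_counts(fluxes, area_sr=0.3, bin_edges_jy=edges)
```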
Deaggregation of Probabilistic Ground Motions in the Central and Eastern United States
Harmsen, S.; Perkins, D.; Frankel, A.
1999-01-01
Probabilistic seismic hazard analysis (PSHA) is a technique for estimating the annual rate of exceedance of a specified ground motion at a site due to known and suspected earthquake sources. The relative contributions of the various sources to the total seismic hazard are determined as a function of their occurrence rates and their ground-motion potential. The separation of the exceedance contributions into bins whose base dimensions are magnitude and distance is called deaggregation. We have deaggregated the hazard analyses for the new USGS national probabilistic ground-motion hazard maps (Frankel et al., 1996). For points on a 0.2° grid in the central and eastern United States (CEUS), we show color maps of the geographical variation of mean and modal magnitudes (M̄, M̂) and distances (D̄, D̂) for ground motions having a 2% chance of exceedance in 50 years. These maps are displayed for peak horizontal acceleration and for spectral response accelerations of 0.2, 0.3, and 1.0 sec. We tabulate M̄, D̄, M̂, and D̂ for 49 CEUS cities for 0.2- and 1.0-sec response. Thus, these maps and tables are PSHA-derived estimates of the potential earthquakes that dominate seismic hazard at short and intermediate periods in the CEUS. The contribution to hazard of the New Madrid and Charleston sources dominates over much of the CEUS; for 0.2-sec response, over 40% of the area; for 1.0-sec response, over 80% of the area. For 0.2-sec response, D̄ ranges from 20 to 200 km; for 1.0 sec, 30 to 600 km. For sites influenced by New Madrid or Charleston, D̄ is less than the distance to these sources, and M̄ is less than the characteristic magnitude of these sources, because averaging takes into account the effect of smaller-magnitude and closer sources. On the other hand, D̂ is directly the distance to New Madrid or Charleston, and M̂ for 0.2- and 1.0-sec response corresponds to the dominating source over much of the CEUS. For some cities in the North Atlantic states, short-period seismic hazard is apt to be controlled by local seismicity, whereas intermediate-period (1.0 sec) hazard is commonly controlled by regional seismicity, such as that of the Charlevoix seismic zone.
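A minimal sketch of the deaggregation statistics named above: given hazard contributions binned in magnitude and distance, the mean values (M̄, D̄) are contribution-weighted averages and the modal values (M̂, D̂) come from the largest bin. The bin grid and contribution values are placeholders, not the USGS results.

```python
import numpy as np

def deagg_stats(mags, dists, contrib):
    """Mean (M-bar, D-bar) and modal (M-hat, D-hat) magnitude and distance
    from a deaggregation matrix contrib[i, j] over (mags[i], dists[j])."""
    w = contrib / contrib.sum()
    m_bar = (w.sum(axis=1) * mags).sum()
    d_bar = (w.sum(axis=0) * dists).sum()
    i, j = np.unravel_index(np.argmax(contrib), contrib.shape)
    return m_bar, d_bar, mags[i], dists[j]

# Placeholder deaggregation grid
mags = np.arange(5.0, 8.1, 0.5)
dists = np.arange(25.0, 625.0, 50.0)
contrib = np.random.default_rng(4).random((mags.size, dists.size))
print(deagg_stats(mags, dists, contrib))
```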
NASA Technical Reports Server (NTRS)
Daniels, Janet L.; Smith, G. Louis; Priestley, Kory J.; Thomas, Susan
2014-01-01
The validation of in-orbit instrument performance requires stability in both instrument and calibration source. This paper describes a method of validation using lunar observations scanning near full moon by the Clouds and Earth Radiant Energy System (CERES) instruments. Unlike internal calibrations, the Moon offers an external source whose signal variance is predictable and non-degrading. From 2006 to present, in-orbit observations have become standardized and compiled for the Flight Models-1 and -2 aboard the Terra satellite, for Flight Models-3 and -4 aboard the Aqua satellite, and beginning 2012, for Flight Model-5 aboard Suomi-NPP. Instrument performance parameters which can be gleaned are detector gain, pointing accuracy and static detector point response function validation. Lunar observations are used to examine the stability of all three detectors on each of these instruments from 2006 to present. This validation method has yielded results showing trends per CERES data channel of 1.2% per decade or less.
Solving Laplace equation to investigate the volcanic ground deformation pattern
NASA Astrophysics Data System (ADS)
Brahmi, Mouna; Castaldo, Raffaele; Barone, Andrea; Fedi, Maurizio; Tizzani, Pietro
2017-04-01
Volcanic eruptions are generally preceded by unrest phenomena, which are characterized by variations in the geophysical and geochemical state of the system. The most evident unrest parameters are the spatial and temporal topographic changes, which typically result in uplift or subsidence of the volcano edifice, usually caused by magma accumulation or hot fluid concentration in shallow reservoirs (Denasoquo et al., 2009). If the observed ground deformation phenomenon is very rapid and the time evolution of the process shows a linear tendency, we can approximate the problem by using an elastic rheology model of the crust beneath the volcano. In this scenario, by considering the elastic field theory under the Boussinesq (1885) and Love (1892) approximations, we can evaluate the displacement field induced by a generic source in a homogeneous, elastic half-space at an arbitrary point. For this purpose, we use the depth to extreme points (DEXP) method. With this approach, we are able to estimate the depth and the geometry of the active source responsible for the observed ground deformation.
A Robotic Fish to Emulate the Fast-Start
NASA Astrophysics Data System (ADS)
Currier, Todd; Ma, Ganzhong; Modarres-Sadeghi, Yahya
2017-11-01
An experimental study is conducted on a robotic fish designed to emulate the fast-start response. The fish body is constructed of 3D-printed ribs and a light spring-steel spine. The body is actuated using a series of pressurized pistons. A total of four pistons are supplied with pressure through lightweight high-pressure service lines. The source of pressure is carbon dioxide with a 700 psi peak operating pressure, resulting in a body response that can cycle a C-start maneuver in milliseconds. The motion of the fish is precisely controlled through the use of solenoids with a control signal produced by a programmable microprocessor. The fish is constrained in all translational degrees of freedom but allowed to rotate about a vertical axis. The influence of the point of rotation is studied with different mounting points along the length of the head of the fish. The forces are measured in two perpendicular in-plane directions. A high-speed camera is used to capture the response of the fish and the corresponding flow around it. Comparison is made with the kinematics observed in live fish.
GRAYSKY-A new gamma-ray skyshine code
DOE Office of Scientific and Technical Information (OSTI.GOV)
Witts, D.J.; Twardowski, T.; Watmough, M.H.
1993-01-01
This paper describes a new prototype gamma-ray skyshine code GRAYSKY (Gamma-RAY SKYshine) that has been developed at BNFL, as part of an industrially based master of science course, to overcome the problems encountered with SKYSHINEII and RANKERN. GRAYSKY is a point kernel code based on the use of a skyshine response function. The scattering within source or shield materials is accounted for by the use of buildup factors. This is an approximate method of solution but one that has been shown to produce results that are acceptable for dose rate predictions on operating plants. The novel features of GRAYSKY are as follows: 1. The code is fully integrated with a semianalytical point kernel shielding code, currently under development at BNFL, which offers powerful solid-body modeling capabilities. 2. The geometry modeling also allows the skyshine response function to be used in a manner that accounts for the shielding of air-scattered radiation. 3. Skyshine buildup factors calculated using the skyshine response function have been used as well as dose buildup factors.
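The point-kernel-with-buildup approach that GRAYSKY builds on can be illustrated with a minimal uncollided-plus-buildup flux calculation. This is only a generic sketch with illustrative numbers, not the GRAYSKY skyshine response function:

    import numpy as np

    def point_kernel_flux(S, mu, r, buildup):
        """Photon flux at distance r (cm) from an isotropic point source.

        S       : source strength, photons/s
        mu      : linear attenuation coefficient of the medium, 1/cm
        r       : source-to-detector distance, cm
        buildup : callable B(mu*r) giving the buildup factor
        """
        uncollided = S * np.exp(-mu * r) / (4.0 * np.pi * r**2)
        return buildup(mu * r) * uncollided

    # Simple linear buildup approximation B = 1 + a*mu*r (a is a hypothetical fit constant).
    B = lambda mur: 1.0 + 0.9 * mur
    print(point_kernel_flux(S=1e9, mu=0.06, r=300.0, buildup=B))  # photons / cm^2 / s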
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aartsen, M. G.; Abraham, K.; Ackermann, M.
Observation of a point source of astrophysical neutrinos would be a “smoking gun” signature of a cosmic-ray accelerator. While IceCube has recently discovered a diffuse flux of astrophysical neutrinos, no localized point source has been observed. Previous IceCube searches for point sources in the southern sky were restricted by either an energy threshold above a few hundred TeV or poor neutrino angular resolution. Here we present a search for southern sky point sources with greatly improved sensitivities to neutrinos with energies below 100 TeV. By selecting charged-current ν_μ interactions inside the detector, we reduce the atmospheric background while retaining efficiency for astrophysical neutrino-induced events reconstructed with sub-degree angular resolution. The new event sample covers three years of detector data and leads to a factor of 10 improvement in sensitivity to point sources emitting below 100 TeV in the southern sky. No statistically significant evidence of point sources was found, and upper limits are set on neutrino emission from individual sources. A posteriori analysis of the highest-energy (∼100 TeV) starting event in the sample found that this event alone represents a 2.8 σ deviation from the hypothesis that the data consist only of atmospheric background.
NASA Astrophysics Data System (ADS)
Denolle, M.; Dunham, E. M.; Prieto, G.; Beroza, G. C.
2013-05-01
There is no clearer example of the increase in hazard due to prolonged and amplified shaking in sedimentary basins than the case of Mexico City in the 1985 Michoacan earthquake. It is critically important to identify what other cities might be susceptible to similar basin amplification effects. Physics-based simulations in 3D crustal structure can be used to model and anticipate those effects, but they rely on our knowledge of the complexity of the medium. We propose a parallel approach to validate ground motion simulations using the ambient seismic field. We compute the Earth's impulse response by combining the ambient seismic field and coda waves, enforcing causality and symmetry constraints. We correct the surface impulse responses to account for the source depth, mechanism and duration using a 1D approximation of the local surface-wave excitation. We call the new responses virtual earthquakes. We validate the ground motion predicted from the virtual earthquakes against moderate earthquakes in southern California. We then combine temporary seismic stations on the southern San Andreas Fault and extend the point source approximation of the Virtual Earthquake Approach to model finite kinematic ruptures. We confirm the coupling between source directivity and amplification in downtown Los Angeles seen in simulations.
Early sensory encoding of affective prosody: neuromagnetic tomography of emotional category changes.
Thönnessen, Heike; Boers, Frank; Dammers, Jürgen; Chen, Yu-Han; Norra, Christine; Mathiak, Klaus
2010-03-01
In verbal communication, prosodic codes may be phylogenetically older than lexical ones. Little is known, however, about early, automatic encoding of emotional prosody. This study investigated the neuromagnetic analogue of mismatch negativity (MMN) as an index of early stimulus processing of emotional prosody using whole-head magnetoencephalography (MEG). We applied two different paradigms to study MMN; in addition to the traditional oddball paradigm, the so-called optimum design was adapted to emotion detection. In a sequence of randomly changing disyllabic pseudo-words produced by one male speaker in neutral intonation, a traditional oddball design with emotional deviants (10% happy and angry each) and an optimum design with emotional (17% happy and sad each) and nonemotional gender deviants (17% female) elicited the mismatch responses. The emotional category changes demonstrated early responses (<200 ms) at both auditory cortices with larger amplitudes at the right hemisphere. Responses to the nonemotional change from male to female voices emerged later (approximately 300 ms). Source analysis pointed to bilateral auditory cortex sources without robust contribution from other (e.g., frontal) sources. Conceivably, both auditory cortices encode categorical representations of emotional prosody. Processing of cognitive feature extraction and automatic emotion appraisal may overlap at this level, enabling rapid attentional shifts to important social cues. Copyright (c) 2009 Elsevier Inc. All rights reserved.
Use of speckle for determining the response characteristics of Doppler imaging radars
NASA Technical Reports Server (NTRS)
Tilley, D. G.
1986-01-01
An optical model is developed for imaging radars such as the SAR on Seasat and the Shuttle Imaging Radar (SIR-B) by analyzing the Doppler shift of individual speckles in the image. The signal received at the spacecraft is treated in terms of a Fresnel-Kirchhoff integration over all backscattered radiation within a Huygens aperture at the earth. Account is taken of the movement of the spacecraft along the orbital path between emission and reception. The individual points are described by integration of the point source amplitude with a Green's function scattering kernel. Doppler data at each point furnish the coordinates for visual representations. A Rayleigh-Poisson model of the surface scattering characteristics is used with Monte Carlo methods to generate simulations of Doppler radar speckle that compare well with Seasat SAR and SIR-B data.
In order to protect estuarine resources, managers must be able to discern the effects of natural conditions and non-point source effects, and separate them from multiple anthropogenic point source effects. Our approach was to evaluate benthic community assemblages, riverine nitro...
Code of Federal Regulations, 2010 CFR
2010-07-01
... (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS SUGAR PROCESSING POINT SOURCE CATEGORY Beet Sugar Processing... a point source where the sugar beet processing capacity of the point source does not exceed 1090 kkg... results, in whole or in part, from barometric condensing operations and any other beet sugar processing...
Code of Federal Regulations, 2011 CFR
2011-07-01
... (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS SUGAR PROCESSING POINT SOURCE CATEGORY Beet Sugar Processing... a point source where the sugar beet processing capacity of the point source does not exceed 1090 kkg... results, in whole or in part, from barometric condensing operations and any other beet sugar processing...
Code of Federal Regulations, 2013 CFR
2013-07-01
... (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS SUGAR PROCESSING POINT SOURCE CATEGORY Beet Sugar Processing... a point source where the sugar beet processing capacity of the point source does not exceed 1090 kkg... results, in whole or in part, from barometric condensing operations and any other beet sugar processing...
Code of Federal Regulations, 2014 CFR
2014-07-01
... (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS SUGAR PROCESSING POINT SOURCE CATEGORY Beet Sugar Processing... a point source where the sugar beet processing capacity of the point source does not exceed 1090 kkg... results, in whole or in part, from barometric condensing operations and any other beet sugar processing...
Code of Federal Regulations, 2012 CFR
2012-07-01
... (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS SUGAR PROCESSING POINT SOURCE CATEGORY Beet Sugar Processing... a point source where the sugar beet processing capacity of the point source does not exceed 1090 kkg... results, in whole or in part, from barometric condensing operations and any other beet sugar processing...
Occurrence of Surface Water Contaminations: An Overview
NASA Astrophysics Data System (ADS)
Shahabudin, M. M.; Musa, S.
2018-04-01
Water is a part of our life and is needed by all organisms. Over time, growing human demands have degraded water quality. Surface water becomes contaminated in various ways, classified as point sources and non-point sources. A point source is a distinguishable source such as a drain or a factory outfall, whereas non-point pollution always occurs as a mixture of pollutant elements. This paper reviews the occurrence of surface water contamination and the effects observed around us. Pollutants of natural or anthropogenic origin, such as nutrients, pathogens, and chemical elements, contribute to contamination. Most of the effects of contaminated surface water fall on public health as well as on the environment.
Selection on worker honeybee responses to queen pheromone (Apis mellifera L.)
NASA Astrophysics Data System (ADS)
Pankiw, T.; Winston, Mark L.; Fondrk, M. Kim; Slessor, Keith N.
Disruptive selection for responsiveness to queen mandibular gland pheromone (QMP) in the retinue bioassay resulted in the production of high and low QMP responding strains of honeybees (Apis mellifera L.). Strains differed significantly in their retinue response to QMP after one generation of selection. By the third generation the high strain was on average at least nine times more responsive than the low strain. The strains showed seasonal phenotypic plasticity such that both strains were more responsive to the pheromone in the spring than in the fall. Directional selection for low seasonal variation indicated that phenotypic plasticity was an additional genetic component to retinue response to QMP. Selection for high and low retinue responsiveness to QMP was not an artifact of the synthetic blend because both strains were equally responsive or non-responsive to whole mandibular gland extracts compared with QMP. The use of these strains clearly pointed to an extra-mandibular source of retinue pheromones (Pankiw et al. 1995; Slessor et al. 1998; Keeling et al. 1999).
Mainhagu, Jon; Morrison, C.; Truex, Michael J.; ...
2014-08-05
A method termed vapor-phase tomography has recently been proposed to characterize the distribution of volatile organic contaminant mass in vadose-zone source areas, and to measure associated three-dimensional distributions of local contaminant mass discharge. The method is based on measuring the spatial variability of vapor flux, and thus inherent to its effectiveness is the premise that the magnitudes and temporal variability of vapor concentrations measured at different monitoring points within the interrogated area will be a function of the geospatial positions of the points relative to the source location. A series of flow-cell experiments was conducted to evaluate this premise. A well-defined source zone was created by injection and extraction of a non-reactive gas (SF6). Spatial and temporal concentration distributions obtained from the tests were compared to simulations produced with a mathematical model describing advective and diffusive transport. Tests were conducted to characterize both areal and vertical components of the application. Decreases in concentration over time were observed for monitoring points located on the opposite side of the source zone from the local extraction point, whereas increases were observed for monitoring points located between the local extraction point and the source zone. The results illustrate that comparing the temporal concentration profiles obtained at various monitoring points gives a general indication of the source location with respect to the extraction and monitoring points.
Identifying Enterprise Leverage Points in Defense Acquisition Program Performance
2009-09-01
Source calibrations and SDC calorimeter requirements
DOE Office of Scientific and Technical Information (OSTI.GOV)
Green, D.
Several studies of the problem of calibration of the SDC calorimeter exist. In this note the attempt is made to give a connected account of the requirements on the source calibration from the point of view of the desired, and acceptable, constant term induced in the EM resolution. It is assumed that a "local" calibration resulting from exposing each tower to a beam of electrons is not feasible. It is further assumed that an "in situ" calibration is either not yet performed, or is unavailable due to tracking alignment problems or high luminosity operation rendering tracking inoperative. Therefore, the assumptions used are rather conservative. In this scenario, each scintillator plate of each tower is exposed to a moving radioactive source. That reading is used to "mask" an optical "cookie" in a grey code chosen so as to make the response uniform. The source is assumed to be the sole calibration of the tower. Therefore, the phrase "global" calibration of towers by movable radioactive sources is adopted.
Elliott, P; Westlake, A J; Hills, M; Kleinschmidt, I; Rodrigues, L; McGale, P; Marshall, K; Rose, G
1992-01-01
STUDY OBJECTIVE--The Small Area Health Statistics Unit (SAHSU) was established at the London School of Hygiene and Tropical Medicine in response to a recommendation of the enquiry into the increased incidence of childhood leukaemia near Sellafield, the nuclear reprocessing plant in West Cumbria. The aim of this paper was to describe the Unit's methods for the investigation of health around point sources of environmental pollution in the United Kingdom. DESIGN--Routine data currently including deaths and cancer registrations are held in a large national database which uses a post code based retrieval system to locate cases geographically and link them to the underlying census enumeration districts, and hence to their populations at risk. Main outcome measures were comparison of observed/expected ratios (based on national rates) within bands delineated by concentric circles around point sources of environmental pollution located anywhere in Britain. MAIN RESULTS--The system is illustrated by a study of mortality from mesothelioma and asbestosis near the Plymouth naval dockyards during 1981-87. Within a 3 km radius of the docks the mortality rate for mesothelioma was higher than the national rate by a factor of 8.4, and that for asbestosis was higher by a factor of 13.6. CONCLUSIONS--SAHSU is a new national facility which is rapidly able to provide rates of mortality and cancer incidence for arbitrary circles drawn around any point in Britain. The example around Plymouth of mesothelioma and asbestosis demonstrates the ability of the system to detect an unusual excess of disease in a small locality, although in this case the findings are likely to be related to occupational rather than environmental exposure. PMID:1431704
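The banded observed/expected comparison described above can be sketched as follows; the inputs are synthetic and the helper is hypothetical, not the SAHSU database or its postcode-based retrieval system:

    import numpy as np

    def band_obs_exp(case_dist_km, pop_dist_km, national_rate, band_edges_km):
        """Observed/expected ratios in concentric distance bands around a point source.

        case_dist_km  : distances of observed cases from the source
        pop_dist_km   : distances of population units (one entry per person)
        national_rate : expected cases per person over the study period
        band_edges_km : e.g. [0, 1, 2, 3] -> bands 0-1, 1-2, 2-3 km
        """
        ratios = []
        for lo, hi in zip(band_edges_km[:-1], band_edges_km[1:]):
            obs = np.sum((case_dist_km >= lo) & (case_dist_km < hi))
            pop = np.sum((pop_dist_km >= lo) & (pop_dist_km < hi))
            exp = pop * national_rate
            ratios.append(obs / exp if exp > 0 else np.nan)
        return ratios

    rng = np.random.default_rng(1)
    print(band_obs_exp(rng.uniform(0, 3, 40), rng.uniform(0, 3, 20000), 1e-3, [0, 1, 2, 3]))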
Reiter, G F; Senesi, R; Mayers, J
2010-10-01
The measured changes in the zero-point kinetic energy of the protons are entirely responsible for the binding energy of water molecules to A phase DNA at the concentration of 6 water molecules/base pair. The changes in kinetic energy can be expected to be a significant contribution to the energy balance in intracellular biological processes and the properties of nano-confined water. The shape of the momentum distribution in the dehydrated A phase is consistent with coherent delocalization of some of the protons in a double well potential, with a separation of the wells of 0.2 Å.
Construction of a 1 MeV Electron Accelerator for High Precision Beta Decay Studies
NASA Astrophysics Data System (ADS)
Longfellow, Brenden
2014-09-01
Beta decay energy calibration for detectors is typically established using conversion sources. However, the calibration points from conversion sources are not evenly distributed over the beta energy spectrum, and the foil backing of the conversion sources produces perturbations in the calibration spectrum. To improve this, an external, tunable electron beam coupled by a magnetic field can be used to calibrate the detector. The 1 MeV electron accelerator in development at Triangle Universities Nuclear Laboratory (TUNL) utilizes a pelletron charging system. The electron gun delivers 10⁴ electrons per second with an energy range of 50 keV to 1 MeV and is pulsed at a 10 kHz rate with a few ns width. The magnetic field in the spectrometer is 1 T, and guiding fields of 0.01 to 0.05 T for the electron gun are used to produce a range of pitch angles. This accelerator can be used to calibrate detectors evenly over its energy range and determine the detector response over a range of pitch angles. TUNL REU Program.
Modeling tidal exchange and dispersion in Boston Harbor
Signell, Richard P.; Butman, Bradford
1992-01-01
Tidal dispersion and the horizontal exchange of water between Boston Harbor and the surrounding ocean are examined with a high-resolution (200 m) depth-averaged numerical model. The strongly varying bathymetry and coastline geometry of the harbor generate complex spatial patterns in the modeled tidal currents which are verified by shipboard acoustic Doppler surveys. Lagrangian exchange experiments demonstrate that tidal currents rapidly exchange and mix material near the inlets of the harbor due to asymmetry in the ebb/flood response. This tidal mixing zone extends roughly a tidal excursion from the inlets and plays an important role in the overall flushing of the harbor. Because the tides can only efficiently mix material in this limited region, however, harbor flushing must be considered a two step process: rapid exchange in the tidal mixing zone, followed by flushing of the tidal mixing zone by nontidal residual currents. Estimates of embayment flushing based on tidal calculations alone therefore can significantly overestimate the flushing time that would be expected under typical environmental conditions. Particle-release simulations from point sources also demonstrate that while the tides efficiently exchange material in the vicinity of the inlets, the exact nature of dispersion from point sources is extremely sensitive to the timing and location of the release, and the distribution of particles is streaky and patchlike. This suggests that high-resolution modeling of dispersion from point sources in these regions must be performed explicitly and cannot be parameterized as a plume with Gaussian-spreading in a larger scale flow field.
Gibbons-Hawking radiation of gravitons in the Poincaré and static patches of de Sitter spacetime
NASA Astrophysics Data System (ADS)
Bernar, Rafael P.; Crispino, Luís C. B.; Higuchi, Atsushi
2018-04-01
We discuss the quantization of linearized gravity in the background de Sitter spacetime using a gauge-invariant formalism to write the perturbed gravitational field in the static patch. This field is quantized after fixing the gauge completely. The response rate of this field to monochromatic multipole sources is then computed in the thermal equilibrium state with the well-known Gibbons-Hawking temperature. We compare this response rate with the one obtained in the Bunch-Davies-like vacuum state defined in the Poincaré patch. These response rates are found to be the same as expected. This agreement serves as a verification of the infrared finite graviton two-point function in the static patch of de Sitter spacetime found previously.
Conceptual design of a stray light facility for Earth observation satellites
NASA Astrophysics Data System (ADS)
Stockman, Y.; Hellin, M. L.; Marcotte, S.; Mazy, E.; Versluys, J.; François, M.; Taccola, M.; Zuccaro Marchi, A.
2017-11-01
With the advent of TMA or FMA (Three or Four Mirror Anastigmat) telescope designs in Earth observation systems, stray light is a major contributor to the degradation of image quality. Numerous sources of stray light can be identified and theoretically evaluated. Nevertheless, in order to build a stray light model of the instrument, the Point Spread Function(s) of the instrument, i.e., the flux response of the instrument to the flux received at the instrument entrance from an infinitely distant point source, needs to be determined. This paper presents a conceptual design of a facility placed in a vacuum chamber to eliminate stray light scattered by air particles. The specification of the clean room class or vacuum level will depend on the required rejection to be measured. Once the vacuum chamber is closed, the stray light level from the external environment can be considered negligible. Inside the chamber, a dedicated baffle design is required to eliminate undesired light generated by the set-up itself, e.g. light retro-reflected away from the instrument under test. This implies blackened shrouds all around the specimen. The proposed illumination system is a 400 mm off-axis parabolic mirror with a focal length of 2 m. The off-axis design suppresses the problem of stray light that can be generated by an internal obstruction. A dedicated block source is evaluated in order to avoid any stray light coming from the structure around the source pinhole. Particular attention is required in the selection of the source to achieve the required large measurement dynamic range.
Point and Condensed Hα Sources in the Interior of M33
NASA Astrophysics Data System (ADS)
Moody, J. Ward; Hintz, Eric G.; Roming, Peter; Joner, Michael D.; Bucklein, Brian
2017-01-01
A variety of interesting objects such as Wolf-Rayet stars, tight OB associations, planetary nebulae, X-ray binaries, etc. can be discovered as point or condensed sources in Hα surveys. How these objects are distributed through a galaxy sheds light on the galaxy's star formation rate and history, mass distribution, and dynamics. The nearby galaxy M33 is an excellent place to study the distribution of Hα-bright point sources in a flocculent spiral galaxy. We have reprocessed an archived WIYN continuum-subtracted Hα image of the inner 6.5' of M33 and, employing both eye and machine searches, have tabulated sources with a flux greater than 1 × 10⁻¹⁵ erg cm⁻² s⁻¹. We have identified 152 unresolved point sources and 122 marginally resolved condensed sources, 38 of which have not been previously cataloged. We present a map of these sources and discuss their probable identifications.
A guide to differences between stochastic point-source and stochastic finite-fault simulations
Atkinson, G.M.; Assatourians, K.; Boore, D.M.; Campbell, K.; Motazedian, D.
2009-01-01
Why do stochastic point-source and finite-fault simulation models not agree on the predicted ground motions for moderate earthquakes at large distances? This question was posed by Ken Campbell, who attempted to reproduce the Atkinson and Boore (2006) ground-motion prediction equations for eastern North America using the stochastic point-source program SMSIM (Boore, 2005) in place of the finite-source stochastic program EXSIM (Motazedian and Atkinson, 2005) that was used by Atkinson and Boore (2006) in their model. His comparisons suggested that a higher stress drop is needed in the context of SMSIM to produce an average match, at larger distances, with the model predictions of Atkinson and Boore (2006) based on EXSIM; this is so even for moderate magnitudes, which should be well-represented by a point-source model. Why? The answer to this question is rooted in significant differences between point-source and finite-source stochastic simulation methodologies, specifically as implemented in SMSIM (Boore, 2005) and EXSIM (Motazedian and Atkinson, 2005) to date. Point-source and finite-fault methodologies differ in general in several important ways: (1) the geometry of the source; (2) the definition and application of duration; and (3) the normalization of finite-source subsource summations. Furthermore, the specific implementation of the methods may differ in their details. The purpose of this article is to provide a brief overview of these differences, their origins, and implications. This sets the stage for a more detailed companion article, "Comparing Stochastic Point-Source and Finite-Source Ground-Motion Simulations: SMSIM and EXSIM," in which Boore (2009) provides modifications and improvements in the implementations of both programs that narrow the gap and result in closer agreement. These issues are important because both SMSIM and EXSIM have been widely used in the development of ground-motion prediction equations and in modeling the parameters that control observed ground motions.
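As a point of reference for the stochastic point-source method discussed above, a minimal sketch of the omega-squared source acceleration spectrum is given below. The constants follow commonly used cgs conventions (radiation pattern 0.55, free-surface factor 2, energy partition 0.707) and are illustrative only; they are not taken from SMSIM or EXSIM themselves:

    import numpy as np

    def brune_acc_spectrum(f, M0_dyne_cm, stress_bars, beta_km_s=3.5, rho=2.8, R_km=1.0):
        """Omega-squared point-source Fourier acceleration spectrum (far field).

        M0 in dyne-cm, stress drop in bars, shear velocity beta in km/s,
        density rho in g/cm^3, hypocentral distance R in km.
        """
        # Corner frequency in Hz for the assumed Brune-type source.
        fc = 4.9e6 * beta_km_s * (stress_bars / M0_dyne_cm) ** (1.0 / 3.0)
        # 1e-20 converts beta^3 (km/s)^3 and R (km) to cgs units.
        C = 0.55 * 2.0 * 0.707 / (4.0 * np.pi * rho * beta_km_s**3 * R_km) * 1e-20
        return C * M0_dyne_cm * (2.0 * np.pi * f) ** 2 / (1.0 + (f / fc) ** 2)

    f = np.logspace(-1, 1.5, 100)
    print(brune_acc_spectrum(f, M0_dyne_cm=10**23.7, stress_bars=100.0)[:3])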
Improving Kepler Pipeline Sensitivity with Pixel Response Function Photometry.
NASA Astrophysics Data System (ADS)
Morris, Robert L.; Bryson, Steve; Jenkins, Jon Michael; Smith, Jeffrey C
2014-06-01
We present the results of our investigation into the feasibility and expected benefits of implementing PRF-fitting photometry in the Kepler Science Processing Pipeline. The Kepler Pixel Response Function (PRF) describes the expected system response to a point source at infinity and includes the effects of the optical point spread function, the CCD detector responsivity function, and spacecraft pointing jitter. Planet detection in the Kepler pipeline is currently based on simple aperture photometry (SAP), which is most effective when applied to uncrowded bright stars. Its effectiveness diminishes rapidly as target brightness decreases relative to the effects of noise sources such as detector electronics, background stars, and image motion. In contrast, PRF photometry is based on fitting an explicit model of image formation to the data and naturally accounts for image motion and contributions of background stars. The key to obtaining high-quality photometry from PRF fitting is a high-quality model of the system's PRF, while the key to efficiently processing the large number of Kepler targets is an accurate catalog and accurate mapping of celestial coordinates onto the focal plane. If the CCD coordinates of stellar centroids are known a priori, then the problem of PRF fitting becomes linear. A model of the Kepler PRF was constructed at the time of spacecraft commissioning by fitting piecewise polynomial surfaces to data from dithered full frame images. While this model accurately captured the initial state of the system, the PRF has evolved dynamically since then and has been seen to deviate significantly from the initial (static) model. We construct a dynamic PRF model which is then used to recover photometry for all targets of interest. Both simulation tests and results from Kepler flight data demonstrate the effectiveness of our approach. Kepler was selected as the 10th mission of the Discovery Program. Funding for this mission is provided by NASA's Science Mission Directorate.
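The observation that PRF fitting becomes linear once the centroids are known can be illustrated with a small least-squares sketch; the Gaussian "PRFs" here are stand-ins, not the Kepler PRF model:

    import numpy as np

    def prf_photometry(pixels, prf_images):
        """Solve for stellar fluxes when centroids (hence PRF shapes) are known.

        pixels     : 1-D array of observed pixel values
        prf_images : list of 1-D arrays, each the PRF of one star evaluated
                     on the same pixels
        Returns best-fit fluxes plus a constant background via linear least squares.
        """
        A = np.column_stack(prf_images + [np.ones_like(pixels)])  # design matrix
        coeffs, *_ = np.linalg.lstsq(A, pixels, rcond=None)
        return coeffs[:-1], coeffs[-1]   # star fluxes, background level

    # Toy example: two overlapping Gaussian "PRFs" on a 1-D pixel grid.
    x = np.arange(20.0)
    prf1 = np.exp(-0.5 * ((x - 8) / 1.5) ** 2); prf1 /= prf1.sum()
    prf2 = np.exp(-0.5 * ((x - 12) / 1.5) ** 2); prf2 /= prf2.sum()
    data = 1000 * prf1 + 300 * prf2 + 5.0
    print(prf_photometry(data, [prf1, prf2]))   # ~ fluxes (1000, 300), background 5.0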
NASA Astrophysics Data System (ADS)
Ali, Sk. Saiyad; Bharadwaj, Somnath; Choudhuri, Samir; Ghosh, Abhik; Roy, Nirupam
2016-12-01
The Diffuse Galactic Synchrotron Emission (DGSE) is the most important diffuse foreground component for future cosmological 21-cm observations. The DGSE is also an important probe of the cosmic ray electron and magnetic field distributions in the turbulent interstellar medium (ISM) of our galaxy. In this paper we briefly review the Tapered Gridded Estimator (TGE), which can be used to quantify the angular power spectrum C_ℓ of the sky signal directly from the visibilities measured in radio-interferometric observations. The salient features of the TGE are: (1) it deals with gridded data, which makes it computationally very fast; (2) it avoids a positive noise bias which normally arises from the system noise inherent to the visibility data; and (3) it allows us to taper the sky response and thereby suppress the contribution from unsubtracted point sources in the outer parts and the side lobes of the antenna beam pattern. We also summarize earlier work where the TGE was used to measure the C_ℓ of the DGSE using 150 MHz GMRT data. Earlier measurements of C_ℓ are restricted to ℓ ≤ ℓ_max ≈ 10³ for the DGSE; the signal at larger ℓ values is dominated by the residual point sources after source subtraction. The higher sensitivity of the upcoming SKA1 Low will allow the point sources to be subtracted to a fainter level than is possible with existing telescopes. We predict that it will be possible to measure the C_ℓ of the DGSE to larger values of ℓ_max with SKA1 Low. Our results show that it should be possible to achieve ℓ_max ≈ 10⁴ and ≈ 10⁵ with 2 minutes and 10 hours of observations, respectively.
X-ray Point Source Populations in Spiral and Elliptical Galaxies
NASA Astrophysics Data System (ADS)
Colbert, E.; Heckman, T.; Weaver, K.; Strickland, D.
2002-01-01
The hard-X-ray luminosity of non-active galaxies has been known to be fairly well correlated with the total blue luminosity since the days of the Einstein satellite. However, the origin of this hard component was not well understood. Some possibilities that were considered included X-ray binaries, extended upscattered far-infrared light via the inverse-Compton process, extended hot 10⁷ K gas (especially in elliptical galaxies), or even an active nucleus. Chandra images of normal, elliptical and starburst galaxies now show that a significant amount of the total hard X-ray emission comes from individual point sources. We present here spatial and spectral analyses of the point sources in a small sample of Chandra observations of starburst galaxies, and compare with Chandra point source analyses from comparison galaxies (elliptical, Seyfert and normal galaxies). We discuss possible relationships between the number and total hard luminosity of the X-ray point sources and various measures of the galaxy star formation rate, and discuss possible options for the numerous compact sources that are observed.
RRAWFLOW: Rainfall-Response Aquifer and Watershed Flow Model (v1.15)
Long, Andrew J.
2015-01-01
The Rainfall-Response Aquifer and Watershed Flow Model (RRAWFLOW) is a lumped-parameter model that simulates streamflow, spring flow, groundwater level, or solute transport for a measurement point in response to a system input of precipitation, recharge, or solute injection. I introduce the first version of RRAWFLOW available for download and public use and describe additional options. The open-source code is written in the R language and is available at http://sd.water.usgs.gov/projects/RRAWFLOW/RRAWFLOW.html along with an example model of streamflow. RRAWFLOW includes a time-series process to estimate recharge from precipitation and simulates the response to recharge by convolution, i.e., the unit-hydrograph approach. Gamma functions are used for estimation of parametric impulse-response functions (IRFs); a combination of two gamma functions results in a double-peaked IRF. A spline fit to a set of control points is introduced as a new method for estimation of nonparametric IRFs. Several options are included to simulate time-variant systems. For many applications, lumped models simulate the system response with equal accuracy to that of distributed models, but moreover, the ease of model construction and calibration of lumped models makes them a good choice for many applications (e.g., estimating missing periods in a hydrologic record). RRAWFLOW provides professional hydrologists and students with an accessible and versatile tool for lumped-parameter modeling.
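A minimal sketch of the unit-hydrograph convolution at the core of this kind of lumped model is shown below. RRAWFLOW itself is written in R; this Python sketch with a single gamma impulse-response function (IRF) is only illustrative (a double-peaked IRF would sum two gammas):

    import numpy as np
    from scipy.stats import gamma

    def simulate_flow(recharge, shape, scale, dt=1.0, gain=1.0):
        """Convolve a recharge series with a gamma IRF (unit-hydrograph idea).

        output(t) = sum_k IRF(k) * recharge(t - k)
        """
        t = np.arange(0, 50 * scale, dt)                 # truncated IRF support
        irf = gain * gamma.pdf(t, a=shape, scale=scale) * dt
        return np.convolve(recharge, irf)[: len(recharge)]

    rng = np.random.default_rng(0)
    recharge = np.clip(rng.normal(0.5, 1.0, 365), 0, None)   # synthetic daily recharge
    flow = simulate_flow(recharge, shape=2.0, scale=10.0)
    print(flow[:5])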
Estimating the vibration level of an L-shaped beam using power flow techniques
NASA Technical Reports Server (NTRS)
Cuschieri, J. M.; Mccollum, M.; Rassineux, J. L.; Gilbert, T.
1986-01-01
The response of one component of an L-shaped beam, with point force excitation on the other component, is estimated using the power flow method. The transmitted power from the source component to the receiver component is expressed in terms of the transfer and input mobilities at the excitation point and the joint. The response is estimated both in narrow frequency bands, using the exact geometry of the beams, and as a frequency averaged response using infinite beam models. The results using this power flow technique are compared to the results obtained using finite element analysis (FEA) of the L-shaped beam for the low frequency response and to results obtained using statistical energy analysis (SEA) for the high frequencies. The agreement between the FEA results and the power flow method results at low frequencies is very good. SEA results are in terms of frequency averaged levels and these are in perfect agreement with the results obtained using the infinite beam models in the power flow method. The narrow frequency band results from the power flow method also converge to the SEA results at high frequencies. The advantage of the power flow method is that detail of the response can be retained while reducing computation time, which will allow the narrow frequency band analysis of the response to be extended to higher frequencies.
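The quantity at the heart of the power flow method is the time-averaged power injected by a harmonic point force, which follows from the force amplitude and the complex mobility at that point. A generic sketch (the mobility value below is an assumed placeholder, not taken from the paper's L-beam model):

    import numpy as np

    def input_power(F_amplitude, mobility):
        """Time-averaged power injected by a harmonic point force.

        F_amplitude : peak force amplitude, N
        mobility    : complex driving-point mobility Y = v/F, m/(N*s)
        P = 0.5 * |F|^2 * Re(Y)
        """
        return 0.5 * abs(F_amplitude) ** 2 * np.real(mobility)

    # Example with an assumed (hypothetical) mobility value at one frequency.
    print(input_power(10.0, 2e-4 + 1e-4j))   # watts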
NASA Technical Reports Server (NTRS)
Fares, Nabil; Li, Victor C.
1986-01-01
An image method algorithm is presented for the derivation of elastostatic solutions for point sources in bonded half-spaces, assuming the infinite-space point-source solution is known. Specific cases were worked out and shown to coincide with well-known solutions in the literature.
Code of Federal Regulations, 2010 CFR
2010-07-01
... subcategory of direct discharge point sources that do not use end-of-pipe biological treatment. 414.100... AND STANDARDS ORGANIC CHEMICALS, PLASTICS, AND SYNTHETIC FIBERS Direct Discharge Point Sources That Do Not Use End-of-Pipe Biological Treatment § 414.100 Applicability; description of the subcategory of...
Better Assessment Science Integrating Point and Non-point Sources (BASINS)
Better Assessment Science Integrating Point and Nonpoint Sources (BASINS) is a multipurpose environmental analysis system designed to help regional, state, and local agencies perform watershed- and water quality-based studies.
Stamer, J.K.; Cherry, R.N.; Faye, R.E.; Kleckner, R.L.
1978-01-01
On an average annual basis and during the storm period of March 12-15, 1976, nonpoint-source loads for most constituents were larger than point-source loads at the Whitesburg station, located on the Chattahoochee River about 40 miles downstream from Atlanta, GA. Most of the nonpoint-source constituent loads in the Atlanta to Whitesburg reach were from urban areas. Average annual point-source discharges accounted for about 50 percent of the dissolved nitrogen, total nitrogen, and total phosphorus loads and about 70 percent of the dissolved phosphorus loads at Whitesburg. During a low-flow period, June 1-2, 1977, five municipal point sources contributed 63 percent of the ultimate biochemical oxygen demand and 97 percent of the ammonium nitrogen loads at the Franklin station, at the upstream end of West Point Lake. Dissolved-oxygen concentrations of 4.1 to 5.0 milligrams per liter occurred in a 22-mile reach of the river downstream from Atlanta due about equally to nitrogenous and carbonaceous oxygen demands. The heat load from two thermoelectric powerplants caused a decrease in dissolved-oxygen concentration of about 0.2 milligrams per liter. Phytoplankton concentrations in West Point Lake, about 70 miles downstream from Atlanta, could exceed three million cells per milliliter during extended low-flow periods in the summer with present point-source phosphorus loads. (Woodard-USGS)
Unidentified point sources in the IRAS minisurvey
NASA Technical Reports Server (NTRS)
Houck, J. R.; Soifer, B. T.; Neugebauer, G.; Beichman, C. A.; Aumann, H. H.; Clegg, P. E.; Gillett, F. C.; Habing, H. J.; Hauser, M. G.; Low, F. J.
1984-01-01
Nine bright, point-like 60 micron sources have been selected from the sample of 8709 sources in the IRAS minisurvey. These sources have no counterparts in a variety of catalogs of nonstellar objects. Four objects have no visible counterparts, while five have faint stellar objects visible in the error ellipse. These sources do not resemble objects previously known to be bright infrared sources.
The effect of directivity in a PSHA framework
NASA Astrophysics Data System (ADS)
Spagnuolo, E.; Herrero, A.; Cultrera, G.
2012-09-01
We propose a method to introduce a refined representation of the ground motion in the framework of Probabilistic Seismic Hazard Analysis (PSHA). This study is especially oriented to the incorporation of a priori information about source parameters, focusing on the directivity effect and its influence on seismic hazard maps. Two strategies have been followed. One considers the seismic source as an extended source, and is valid when the PSHA seismogenetic sources are represented as fault segments. We show that the incorporation of variables related to the directivity effect can lead to variations of up to 20 per cent in the hazard level in the case of dip-slip faults with a uniform distribution of hypocentre location, in terms of the 5 s spectral acceleration response with an exceedance probability of 10 per cent in 50 yr. The second concerns the more general problem of seismogenetic areas, where each point is a seismogenetic source having the same chance of nucleating a seismic event. In our proposition the point source is associated with rupture-related parameters, defined using a statistical description. As an example, we consider a source point of an area characterized by a strike-slip faulting style. With the introduction of the directivity correction, the modulation of the hazard map reaches values of up to 100 per cent (for strike-slip, unilateral faults). The introduction of directivity does not increase the hazard level uniformly, but acts more like a redistribution of the estimate that is consistent with the fault orientation. A general increase appears only when no a priori information is available. However, good a priori knowledge now exists on the style of faulting, dip and orientation of faults associated with the majority of the seismogenetic zones of present seismic hazard maps. The percentage of variation obtained is strongly dependent on the type of model chosen to represent the directivity effect analytically. Our aim is therefore to emphasize the methodology by which the collected information may be readily converted into a more comprehensive and meaningful probabilistic seismic hazard formulation.
Ferdous, Jannatul; Sultana, Rebeca; Rashid, Ridwan B; Tasnimuzzaman, Md; Nordland, Andreas; Begum, Anowara; Jensen, Peter K M
2018-01-01
Bangladesh is a cholera endemic country with a population at high risk of cholera. Toxigenic and non-toxigenic Vibrio cholerae (V. cholerae) can cause cholera and cholera-like diarrheal illness and outbreaks. Drinking water is one of the primary routes of cholera transmission in Bangladesh. The aim of this study was to conduct a comparative assessment of the presence of V. cholerae between point-of-drinking water and source water, and to investigate the variability of virulence profile using molecular methods, in a densely populated low-income settlement of Dhaka, Bangladesh. Water samples were collected and tested for V. cholerae from "point-of-drinking" and "source" in 477 study households in routine visits at 6-week intervals over a period of 14 months. We studied the virulence profiles of V. cholerae positive water samples using 22 different virulence gene markers present in toxigenic O1/O139 and non-O1/O139 V. cholerae using polymerase chain reaction (PCR). A total of 1,463 water samples were collected, with 1,082 samples from point-of-drinking water in 388 households and 381 samples from 66 water sources. V. cholerae was detected in 10% of point-of-drinking water samples and in 9% of source water samples. Twenty-three percent of households and 38% of the sources were positive for V. cholerae in at least one visit. Samples collected from point-of-drinking and linked sources within a 7-day interval showed significantly higher odds (P < 0.05) of V. cholerae presence at the point of drinking compared to the source [OR = 17.24 (95% CI = 7.14-42.89)]. Based on the 7-day interval data, 53% (17/32) of source water samples were negative for V. cholerae while linked point-of-drinking water samples were positive. There were significantly higher odds (P < 0.05) of the presence of V. cholerae O1 [OR = 9.13 (95% CI = 2.85-29.26)] and V. cholerae O139 [OR = 4.73 (95% CI = 1.19-18.79)] in source water samples than in point-of-drinking water samples. Contamination of water at the point of drinking is less likely to depend on contamination at the water source. Hygiene education interventions and programs should focus on water at the point of drinking, including repeated cleaning of drinking vessels, which is of paramount importance in preventing cholera.
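The odds ratios quoted above come from standard 2x2-table calculations. A generic sketch with hypothetical counts (the paper's own tables would be needed to reproduce its reported values):

    import numpy as np

    def odds_ratio(a, b, c, d, z=1.96):
        """Odds ratio and Wald 95% CI from a 2x2 table:

                    outcome+   outcome-
        exposed         a          b
        unexposed       c          d
        """
        orr = (a * d) / (b * c)
        se_log = np.sqrt(1/a + 1/b + 1/c + 1/d)
        lo, hi = np.exp(np.log(orr) + np.array([-z, z]) * se_log)
        return orr, (lo, hi)

    # Hypothetical counts only -- not the study's data.
    print(odds_ratio(30, 10, 5, 29))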
Mercury Sources and Fate in the Gulf of Maine
Sunderland, Elsie M.; Amirbahman, Aria; Burgess, Neil M.; Dalziel, John; Harding, Gareth; Jones, Stephen H.; Kamai, Elizabeth; Karagas, Margaret R.; Shi, Xun; Chen, Celia Y.
2012-01-01
Most human exposure to mercury (Hg) in the United States is from consuming marine fish and shellfish. The Gulf of Maine is a complex marine ecosystem comprised of twelve physioregions, including the Bay of Fundy, coastal shelf areas and deeper basins that contain highly productive fishing grounds. Here we review available data on spatial and temporal Hg trends to better understand the drivers of human and biological exposures. Atmospheric Hg deposition from U.S. and Canadian sources has declined since the mid-1990s in concert with emissions reductions but deposition from global sources has increased. Oceanographic circulation is the dominant source of total Hg inputs to the entire Gulf of Maine region (59%), followed by atmospheric deposition (28%), wastewater/industrial sources (8%), and rivers (5%). Resuspension of sediments increases MeHg inputs to overlying waters raising concerns about benthic trawling activities in shelf regions. In the near coastal areas, elevated sediment and mussel Hg levels are co-located in urban embayments and near large historical point sources. Temporal patterns in sentinel species (mussels and birds) have in some cases declined in response to localized point source mercury reductions but overall Hg trends do not show consistent declines. For example, levels of Hg have either declined or remained stable in eggs from four seabird species collected in the Bay of Fundy since 1972. Quantitatively linking Hg exposures from fish harvested from the Gulf of Maine to human health risks is challenging at this time because no data are available on the geographic origin of seafood consumed by coastal residents. In addition, there is virtually no information on Hg levels in commercial species for offshore regions of the Gulf of Maine where some of the most productive fisheries are located. Both of these data gaps should be priorities for future research. PMID:22572623
NASA Astrophysics Data System (ADS)
Kelley, M. C.; Dao, E. V.
2018-05-01
With the increase in solar activity, the Communication/Navigation Outage Forecasting System (C/NOFS) satellite decayed on orbit to below the F peak. As such, we can study the development of convective ionospheric storms and, most importantly, study large-scale seeding of the responsible instability. For decades, gravity waves have been suggested as being responsible for the long wavelengths in the range of 200 to 1,000 km, as are commonly observed using airglow and satellite data. Here we suggest that convective thunderstorms are a likely source of gravity waves and point out that recent theoretical analysis has shown this connection to be quite possible.
The NBS scale of radiance temperature
NASA Technical Reports Server (NTRS)
Waters, William R.; Walker, James H.; Hattenburg, Albert T.
1988-01-01
The measurement methods and instrumentation used in the realization and transfer of the International Practical Temperature Scale (IPTS-68) above the temperature of freezing gold are described. The determination of the ratios of spectral radiance of tungsten-strip lamps to a gold-point blackbody at a wavelength of 654.6 nm is detailed. The response linearity, spectral responsivity, scattering error, and polarization properties of the instrumentation are described. The analysis of the sources of error and estimates of uncertainty are presented. The assigned uncertainties (three standard deviations) in radiance temperature range from ±2 K at 2573 K to ±0.5 K at 1073 K.
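The radiance temperature corresponding to a measured lamp-to-gold-point spectral radiance ratio follows from Planck's law. A sketch assuming the IPTS-68 gold-point value of 1337.58 K and a purely illustrative ratio:

    import numpy as np

    C2 = 1.4388e-2        # second radiation constant, m*K
    LAM = 654.6e-9        # wavelength used for the ratios, m
    T_AU = 1337.58        # gold-point temperature on IPTS-68, K

    def radiance_temperature(ratio, lam=LAM, t_ref=T_AU):
        """Temperature whose Planck spectral radiance at lam exceeds the
        gold-point blackbody radiance by the measured factor `ratio`."""
        x = (np.exp(C2 / (lam * t_ref)) - 1.0) / ratio
        return C2 / (lam * np.log1p(x))

    print(radiance_temperature(ratio=50.0))   # K, for an illustrative ratio of 50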
Appraisal of an Array TEM Method in Detecting a Mined-Out Area Beneath a Conductive Layer
NASA Astrophysics Data System (ADS)
Li, Hai; Xue, Guo-qiang; Zhou, Nan-nan; Chen, Wei-ying
2015-10-01
The transient electromagnetic (TEM) method has been extensively used for the detection of mined-out areas in China over the past few years. In cases where the mined-out area is overlain by a conductive layer, detection of the target layer is difficult with a traditional loop-source TEM method. In order to detect the target layer under this condition, this paper presents a newly developed array TEM method, which uses a grounded wire source. The underground current density distribution and the responses of the grounded-wire-source TEM configuration are modeled to demonstrate that the target layer is detectable under this condition. The 1D OCCAM inversion routine is applied to the synthetic single-station data and the common-midpoint gather. The result reveals that the electric-source TEM method is capable of recovering a resistive target layer beneath a conductive overburden. By contrast, a conductive target layer cannot be recovered unless the distance between the target layer and the conductive overburden is large. Compared with the inversion of the single-station data, inversion of the common-midpoint gather better recovers the resistivity of the target layer. Finally, a case study illustrates that the array TEM method is successfully applied in recovering a water-filled mined-out area beneath a conductive overburden.
Multiband super-resolution imaging of graded-index photonic crystal flat lens
NASA Astrophysics Data System (ADS)
Xie, Jianlan; Wang, Junzhong; Ge, Rui; Yan, Bei; Liu, Exian; Tan, Wei; Liu, Jianjun
2018-05-01
Multiband super-resolution imaging of a point source is achieved with a graded-index photonic crystal flat lens. From calculations of six bands in a common photonic crystal (CPC) constructed with scatterers of different refractive indices, it is found that super-resolution imaging of a point source can be realized through different physical mechanisms in three different bands. In the first band, the imaging of the point source is based on the far-field condition of the spherical wave, while in the second band it is based on the negative effective refractive index and exhibits higher imaging quality than that of the CPC. In the fifth band, the imaging of the point source is mainly based on negative refraction of anisotropic equi-frequency surfaces. This method of employing different physical mechanisms to achieve multiband super-resolution imaging of a point source is highly meaningful for the field of imaging.
Long Term Temporal and Spectral Evolution of Point Sources in Nearby Elliptical Galaxies
NASA Astrophysics Data System (ADS)
Durmus, D.; Guver, T.; Hudaverdi, M.; Sert, H.; Balman, Solen
2016-06-01
We present the results of an archival study of all the point sources detected in the lines of sight of the elliptical galaxies NGC 4472, NGC 4552, NGC 4649, M32, Maffei 1, NGC 3379, IC 1101, M87, NGC 4477, NGC 4621, and NGC 5128, with both the Chandra and XMM-Newton observatories. Specifically, we studied the temporal and spectral evolution of these point sources over the course of the observations of the galaxies, mostly covering the 2000 - 2015 period. In this poster we present the first results of this study, which allows us to further constrain the X-ray source population in nearby elliptical galaxies and also better understand the nature of individual point sources.
Very Luminous X-ray Point Sources in Starburst Galaxies
NASA Astrophysics Data System (ADS)
Colbert, E.; Heckman, T.; Ptak, A.; Weaver, K. A.; Strickland, D.
Extranuclear X-ray point sources in external galaxies with luminosities above 10^39.0 erg/s are quite common in elliptical, disk and dwarf galaxies, with an average of ~0.5 sources per galaxy. These objects may be a new class of object, perhaps accreting intermediate-mass black holes, or beamed stellar-mass black hole binaries. Starburst galaxies tend to have a larger number of these intermediate-luminosity X-ray objects (IXOs), as well as a large number of lower-luminosity (10^37-10^39 erg/s) point sources. These point sources dominate the total hard X-ray emission in starburst galaxies. We present a review of both types of objects and discuss possible schemes for their formation.
Origin of the pulse-like signature of shallow long-period volcano seismicity
Chouet, Bernard A.; Dawson, Phillip B.
2016-01-01
Short-duration, pulse-like long-period (LP) events are a characteristic type of seismicity accompanying eruptive activity at Mount Etna in Italy in 2004 and 2008 and at Turrialba Volcano in Costa Rica and Ubinas Volcano in Peru in 2009. We use the discrete wave number method to compute the free surface response in the near field of a rectangular tensile crack embedded in a homogeneous elastic half space and to gain insights into the origin of the LP pulses. Two source models are considered, including (1) a vertical fluid-driven crack and (2) a unilateral tensile rupture growing at a fixed sub-Rayleigh velocity with constant opening on a vertical crack. We apply cross correlation to the synthetics and data to demonstrate that a fluid-driven crack provides a natural explanation for these data with realistic source sizes and fluid properties. Our modeling points to shallow sources (<1 km depth), whose signatures are representative of the Rayleigh pulse sampled at epicentral distances >∼1 km. While a slow-rupture failure provides another potential model for these events, the synthetics and resulting fits to the data are not optimal in this model compared to a fluid-driven source. We infer that pulse-like LP signatures are parts of the continuum of responses produced by shallow fluid-driven sources in volcanoes.
The resolution of point sources of light as analyzed by quantum detection theory
NASA Technical Reports Server (NTRS)
Helstrom, C. W.
1972-01-01
The resolvability of point sources of incoherent light is analyzed by quantum detection theory in terms of two hypothesis-testing problems. In the first, the observer must decide whether there are two sources of equal radiant power at given locations, or whether there is only one source of twice the power located midway between them. In the second problem, either one, but not both, of two point sources is radiating, and the observer must decide which it is. The decisions are based on optimum processing of the electromagnetic field at the aperture of an optical instrument. In both problems the density operators of the field under the two hypotheses do not commute. The error probabilities, determined as functions of the separation of the points and the mean number of received photons, characterize the ultimate resolvability of the sources.
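For context, the baseline quantity in this kind of quantum hypothesis test is the minimum error probability for discriminating two equally likely states with density operators ρ0 and ρ1, given by the standard Helstrom bound (a textbook result quoted here for orientation, not a formula taken from the abstract above):

$$P_e^{\min} = \frac{1}{2}\left(1 - \frac{1}{2}\left\lVert \rho_1 - \rho_0 \right\rVert_1\right), \qquad \lVert A \rVert_1 = \mathrm{Tr}\sqrt{A^{\dagger}A}.$$

Because non-commuting density operators cannot have orthogonal supports, the error probability in both problems remains strictly positive at any finite mean photon number.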
A NEW METHOD FOR FINDING POINT SOURCES IN HIGH-ENERGY NEUTRINO DATA
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fang, Ke; Miller, M. Coleman
The IceCube collaboration has reported the first detection of high-energy astrophysical neutrinos, including ∼50 high-energy starting events, but no individual sources have been identified. It is therefore important to develop the most sensitive and efficient possible algorithms to identify the point sources of these neutrinos. The most popular current method works by exploring a dense grid of possible directions to individual sources, and identifying the single direction with the maximum probability of having produced multiple detected neutrinos. This method has numerous strengths, but it is computationally intensive and, because it focuses on the single best location for a point source, additional point sources are not included in the evidence. We propose a new maximum likelihood method that uses the angular separations between all pairs of neutrinos in the data. Unlike existing autocorrelation methods for this type of analysis, which also use angular separations between neutrino pairs, our method incorporates information about the point-spread function and can identify individual point sources. We find that if the angular resolution is a few degrees or better, then this approach reduces both false positive and false negative errors compared to the current method, and is also more computationally efficient up to, potentially, hundreds of thousands of detected neutrinos.
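As a rough sketch of the pairwise-separation idea (a toy illustration, not the authors' likelihood, whose point-spread-function treatment and normalisation are more detailed; the function names, the Gaussian PSF width sigma, and the signal fraction are hypothetical):

```python
import numpy as np

def pairwise_separations(ra, dec):
    """Angular separations (radians) between all pairs of event directions,
    given right ascension and declination in radians."""
    v = np.stack([np.cos(dec) * np.cos(ra),
                  np.cos(dec) * np.sin(ra),
                  np.sin(dec)], axis=-1)
    cosang = np.clip(v @ v.T, -1.0, 1.0)
    iu = np.triu_indices(len(ra), k=1)          # each pair counted once
    return np.arccos(cosang[iu])

def pair_log_likelihood(sep, sigma, f_sig):
    """Toy mixture over pair separations: pairs of events from a common point
    source cluster within ~sigma (small-angle Gaussian PSF), while background
    pairs follow the isotropic separation density sin(theta)/2."""
    psf_pair = (sep / (2.0 * sigma**2)) * np.exp(-sep**2 / (4.0 * sigma**2))
    iso_pair = 0.5 * np.sin(sep)
    return np.sum(np.log(f_sig * psf_pair + (1.0 - f_sig) * iso_pair))
```

Maximising such a pairwise likelihood over the signal fraction uses every pair of events at once, rather than committing to a single best-fit source direction.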
NASA Technical Reports Server (NTRS)
Hill, Geoffrey A.; Olson, Erik D.
2004-01-01
Due to the growing problem of noise in today's air transportation system, there is a need to incorporate noise considerations into the conceptual design of revolutionary aircraft. Through the use of response surfaces, complex noise models may be converted into polynomial equations for rapid and simplified evaluation. This conversion allows many of the commonly used response surface-based trade space exploration methods to be applied to noise analysis. The methodology is demonstrated using a noise model of a notional 300-passenger Blended-Wing-Body (BWB) transport. Response surfaces are created relating source noise levels of the BWB vehicle to its corresponding FAR-36 certification noise levels, and the resulting trade space is explored. Methods demonstrated include single-point analysis, parametric study, an optimization technique for inverse analysis, sensitivity studies, and probabilistic analysis. Extended applications of response surface-based methods in noise analysis are also discussed.
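A generic version of the response-surface step can be sketched as a second-order polynomial fit; the design variables, sample counts, and the stand-in "noise model" below are all hypothetical, not the authors' BWB model.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(50, 3))        # 3 hypothetical design variables (coded units)
y = 1.0 + X @ np.array([2.0, -1.0, 0.5]) + 0.3 * X[:, 0] * X[:, 1]   # stand-in for an expensive noise model

# Fit a quadratic response surface once, then evaluate it cheaply anywhere
surface = make_pipeline(PolynomialFeatures(degree=2), LinearRegression())
surface.fit(X, y)
print(surface.predict([[0.2, -0.4, 0.1]]))      # fast surrogate evaluation for trade studies
```

Single-point analysis, parametric sweeps, optimization, and probabilistic (Monte Carlo) analysis can then all be run against the cheap polynomial surrogate instead of the full noise model.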
NASA Astrophysics Data System (ADS)
Gatto, A.; Parolari, P.; Boffi, P.
2018-05-01
Frequency division multiplexing (FDM) is attractive for achieving high capacities in multiple access networks characterized by direct modulation and direct detection. In this paper we consider point-to-point intra- and inter-datacentre connections to compare the performance of FDM operation with that achievable using a standard multicarrier modulation approach based on discrete multitone (DMT). DMT and FDM both allow matching of the non-uniform, bandwidth-limited response of the system under test, associated with the use of low-cost directly-modulated sources, such as VCSELs with high-frequency chirp, and with fibre propagation in the presence of chromatic dispersion. For the very short distances typical of intra-datacentre communications, the large number of DMT subcarriers increases the transported capacity with respect to FDM, whereas for the few-tens-of-km reaches typical of inter-datacentre connections the capabilities of FDM are more evident, providing system performance similar to that of DMT.
Processing Uav and LIDAR Point Clouds in Grass GIS
NASA Astrophysics Data System (ADS)
Petras, V.; Petrasova, A.; Jeziorska, J.; Mitasova, H.
2016-06-01
Today's methods of acquiring Earth surface data, namely lidar and unmanned aerial vehicle (UAV) imagery, non-selectively collect or generate large amounts of points. Point clouds from different sources vary in their properties such as number of returns, density, or quality. We present a set of tools with applications for different types of points clouds obtained by a lidar scanner, structure from motion technique (SfM), and a low-cost 3D scanner. To take advantage of the vertical structure of multiple return lidar point clouds, we demonstrate tools to process them using 3D raster techniques which allow, for example, the development of custom vegetation classification methods. Dense point clouds obtained from UAV imagery, often containing redundant points, can be decimated using various techniques before further processing. We implemented and compared several decimation techniques in regard to their performance and the final digital surface model (DSM). Finally, we will describe the processing of a point cloud from a low-cost 3D scanner, namely Microsoft Kinect, and its application for interaction with physical models. All the presented tools are open source and integrated in GRASS GIS, a multi-purpose open source GIS with remote sensing capabilities. The tools integrate with other open source projects, specifically Point Data Abstraction Library (PDAL), Point Cloud Library (PCL), and OpenKinect libfreenect2 library to benefit from the open source point cloud ecosystem. The implementation in GRASS GIS ensures long term maintenance and reproducibility by the scientific community but also by the original authors themselves.
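For readers unfamiliar with point-cloud decimation, a minimal grid-based variant can be written directly in NumPy; this sketch (one retained point per 2-D cell, here the highest return) only illustrates the general idea and is not the implementation of the GRASS GIS tools described above.

```python
import numpy as np

def grid_decimate(points, cell_size):
    """Keep one point per 2-D grid cell (the highest z in each cell), a
    simple count-reducing decimation for dense UAV/SfM point clouds."""
    ij = np.floor(points[:, :2] / cell_size).astype(np.int64)
    # Sort by cell, then by z, so the highest point of each cell comes last
    order = np.lexsort((points[:, 2], ij[:, 1], ij[:, 0]))
    ij_s, pts_s = ij[order], points[order]
    # A row is the last of its cell where the next row belongs to another cell
    last_in_cell = np.r_[np.any(ij_s[1:] != ij_s[:-1], axis=1), True]
    return pts_s[last_in_cell]

points = np.random.default_rng(1).uniform(0.0, 10.0, size=(100000, 3))  # x, y, z
thinned = grid_decimate(points, cell_size=0.5)
```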
Response Rates and Response Bias for 50 Surveys of Pediatricians
Cull, William L; O'Connor, Karen G; Sharp, Sanford; Tang, Suk-fong S
2005-01-01
Research Objective To track response rates across time for surveys of pediatricians, to explore whether response bias is present for these surveys, and to examine whether response bias increases with lower response rates. Data Source/Study Setting A total of 63,473 cases were gathered from 50 different surveys of pediatricians conducted by the American Academy of Pediatrics (AAP) since 1994. Thirty-one surveys targeted active U.S. members of the AAP, six targeted pediatric residents, and the remaining 13 targeted AAP-member and nonmember pediatric subspecialists. Information for the full target samples, including nonrespondents, was collected using administrative databases of the AAP and the American Board of Pediatrics. Study Design To assess bias for each survey, age, gender, location, and AAP membership type were compared for respondents and the full target sample. Correlational analyses were conducted to examine whether surveys with lower response rates had increasing levels of response bias. Principal Findings Response rates to the 50 surveys examined declined significantly across survey years (1994–2002). Response rates ranged from 52 to 81 percent with an average of 68 percent. Comparisons between respondents and the full target samples showed the respondent group to be younger, to have more females, and to have fewer specialty-fellow members. Response bias was not apparent for pediatricians' geographical location. The average response bias, however, was fairly small for all factors: age (0.45 years younger), gender (1.4 percentage points more females), and membership type (1.1 percentage points fewer specialty-fellow members). Gender response bias was found to be inversely associated with survey response rates (r=−0.38). Even for the surveys with the lowest response rates, the amount of response bias never exceeded 5 percentage points for gender, 3 years for age, or 3 percent for membership type. Conclusions While response biases favoring women, young physicians, and nonspecialty-fellow members were found across the 52–81 percent response rates examined in this study, the amount of bias was minimal for the factors that could be tested. At least for surveys of pediatricians, more attention should be devoted by investigators to assessments of response bias rather than relying on response rates as a proxy for response bias. PMID:15663710
Paper focuses on trading schemes in which regulated point sources are allowed to avoid upgrading their pollution control technology to meet water quality-based effluent limits if they pay for equivalent (or greater) reductions in nonpoint source pollution.
The Microbial Source Module (MSM) estimates microbial loading rates to land surfaces from non-point sources, and to streams from point sources for each subwatershed within a watershed. A subwatershed, the smallest modeling unit, represents the common basis for information consume...
Observations of Intermediate-mass Black Holes and Ultra-Luminous X-ray sources
NASA Astrophysics Data System (ADS)
Colbert, E. J. M.
2003-12-01
I will review various observations that suggest that intermediate-mass black holes (IMBHs) with masses ~10^2-10^4 M⊙ exist in our Universe. I will also discuss some of the limitations of these observations. HST observations of excess dark mass in globular cluster cores suggest IMBHs may be responsible, and some mass estimates from lensing experiments are nearly in the IMBH range. The intriguing Ultra-Luminous X-ray sources (ULXs, or IXOs) are off-nuclear X-ray point sources with X-ray luminosities L_X ≳ 10^39 erg s^-1. ULXs are typically rare (1 in every 5 galaxies), and the nature of their ultra-luminous emission is currently debated. I will discuss the evidence for IMBHs in some ULXs, and briefly outline some phenomenology. Finally, I will discuss future observations that can be made to search for IMBHs.
NASA Astrophysics Data System (ADS)
Lee, Roh Pin
2016-04-01
Misconceptions and biases in energy perception could influence people's support for developments integral to the success of restructuring a nation's energy system. Science education, in equipping young adults with the cognitive skills and knowledge necessary to navigate in the confusing energy environment, could play a key role in paving the way for informed decision-making. This study examined German students' knowledge of the contribution of diverse energy sources to their nation's energy mix as well as their affective energy responses so as to identify implications for science education. Specifically, the study investigated whether and to what extent students hold mistaken beliefs about the role of multiple energy sources in their nation's energy mix, and assessed how misconceptions could act as self-generated reference points to underpin support/resistance of proposed developments. An in-depth analysis of spontaneous affective associations with five key energy sources also enabled the identification of underlying concerns driving people's energy responses and facilitated an examination of how affective perception, in acting as a heuristic, could lead to biases in energy judgment and decision-making. Finally, subgroup analysis differentiated by education and gender supported insights into a 'two culture' effect on energy perception and the challenge it poses to science education.
Qualitative modeling of silica plasma etching using neural network
NASA Astrophysics Data System (ADS)
Kim, Byungwhan; Kwon, Kwang Ho
2003-01-01
An etching of silica thin film is qualitatively modeled by using a neural network. The process was characterized by a 2^3 full factorial experiment plus one center point, in which the experimental factors and ranges include 100-800 W radio-frequency source power, 100-400 W bias power and the CHF3/CF4 gas flow rate ratio. The gas flow rate ratio varied from 0.2 to 5.0. The backpropagation neural network (BPNN) was trained on nine experiments and tested on six experiments not pertaining to the original training data. The prediction ability of the BPNN was optimized as a function of the training parameters. Prediction errors are 180 Å/min and 1.33 for the etch rate and anisotropy models, respectively. Physical etch mechanisms were estimated from the three-dimensional plots generated from the optimized models. Predicted response surfaces were consistent with experimentally measured etch data. The dc bias was correlated with the etch responses to evaluate its contribution. Both the source power (plasma density) and bias power (ion directionality) strongly affected the etch rate. The source power was the most influential factor for the etch rate. A conflicting effect between the source and bias powers was noticed with respect to the anisotropy. The dc bias played an important role in understanding or separating physical etch mechanisms.
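A generic stand-in for this kind of model is a small multilayer perceptron trained on the coded 2^3 factorial design plus centre point; the sketch below uses placeholder responses and scikit-learn rather than the authors' BPNN, so the numbers carry no physical meaning.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Coded factor levels for a 2^3 full factorial design plus one centre point
# (columns: RF source power, bias power, CHF3/CF4 flow ratio, all coded to [-1, 1])
X = np.array([[-1, -1, -1], [-1, -1,  1], [-1,  1, -1], [-1,  1,  1],
              [ 1, -1, -1], [ 1, -1,  1], [ 1,  1, -1], [ 1,  1,  1],
              [ 0,  0,  0]], dtype=float)
# Placeholder etch-rate responses in angstrom/min (NOT the paper's measurements)
y = np.array([1200, 1100, 1500, 1400, 2300, 2100, 2900, 2700, 1900], dtype=float)

model = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
model.fit(X, y)
print(model.predict([[0.5, 0.5, -0.5]]))   # interpolate within the coded design space
```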
Rivard, Mark J; Davis, Stephen D; DeWerd, Larry A; Rusch, Thomas W; Axelrod, Steve
2006-11-01
A new x-ray source, the model S700 Axxent X-Ray Source (Source), has been developed by Xoft Inc. for electronic brachytherapy. Unlike brachytherapy sources containing radionuclides, this Source may be turned on and off at will and may be operated at variable currents and voltages to change the dose rate and penetration properties. The in-water dosimetry parameters for this electronic brachytherapy source have been determined from measurements and calculations at 40, 45, and 50 kV settings. Monte Carlo simulations of radiation transport utilized the MCNP5 code and the EPDL97-based mcplib04 cross-section library. Inter-tube consistency was assessed for 20 different Sources, measured with a PTW 34013 ionization chamber. As the Source is intended to be used for a maximum of ten treatment fractions, tube stability was also assessed. Photon spectra were measured using a high-purity germanium (HPGe) detector, and calculated using MCNP. Parameters used in the two-dimensional (2D) brachytherapy dosimetry formalism were determined. While the Source was characterized as a point due to the small anode size, < 1 mm, use of the one-dimensional (1D) brachytherapy dosimetry formalism is not recommended due to polar anisotropy. Consequently, 1D brachytherapy dosimetry parameters were not sought. Calculated point-source model radial dose functions at gP(5) were 0.20, 0.24, and 0.29 for the 40, 45, and 50 kV voltage settings, respectively. For 1
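For orientation, the g_P values quoted above are the point-source radial dose function of the standard TG-43 1-D brachytherapy formalism, which in its general form (given here only for context, since the authors explicitly advise against the 1-D formalism for this Source because of its polar anisotropy) writes the dose rate at distance r as

$$\dot{D}(r) = S_K\,\Lambda\,\left(\frac{r_0}{r}\right)^{2} g_P(r)\,\phi_{an}(r),$$

with S_K the air-kerma strength, Λ the dose-rate constant, r_0 = 1 cm the reference distance, g_P(r) the point-source radial dose function, and φ_an(r) the 1-D anisotropy function.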
Code of Federal Regulations, 2010 CFR
2010-07-01
... ORGANIC CHEMICALS, PLASTICS, AND SYNTHETIC FIBERS Direct Discharge Point Sources That Use End-of-Pipe... subcategory of direct discharge point sources that use end-of-pipe biological treatment. 414.90 Section 414.90... that use end-of-pipe biological treatment. The provisions of this subpart are applicable to the process...
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 29 2010-07-01 2010-07-01 false BAT and NSPS Effluent Limitations for Priority Pollutants for Direct Discharge Point Sources That use End-of-Pipe Biological Treatment 4 Table 4... Limitations for Priority Pollutants for Direct Discharge Point Sources That use End-of-Pipe Biological...
Multi-rate, real time image compression for images dominated by point sources
NASA Technical Reports Server (NTRS)
Huber, A. Kris; Budge, Scott E.; Harris, Richard W.
1993-01-01
An image compression system recently developed for compression of digital images dominated by point sources is presented. Encoding consists of minimum-mean removal, vector quantization, adaptive threshold truncation, and modified Huffman encoding. Simulations are presented showing that the peaks corresponding to point sources can be transmitted losslessly for low signal-to-noise ratios (SNR) and high point source densities while maintaining a reduced output bit rate. Encoding and decoding hardware has been built and tested which processes 552,960 12-bit pixels per second at compression rates of 10:1 and 4:1. Simulation results are presented for the 10:1 case only.
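A simplified sketch of two of the encoder stages named above (baseline removal and adaptive threshold truncation) is given below; the vector-quantization and Huffman stages are omitted, and the block size and threshold are hypothetical.

```python
import numpy as np

def truncate_block(block, threshold):
    """Remove the block baseline, then zero residuals below the threshold so
    that bright point-source peaks pass through unchanged while the low-level
    background is truncated (and would then be vector-quantized and
    Huffman-coded in a full encoder)."""
    baseline = int(block.min())                    # simplified "minimum-mean" removal
    residual = block.astype(np.int32) - baseline
    truncated = np.where(residual >= threshold, residual, 0)
    return baseline, truncated

block = np.random.default_rng(2).integers(0, 4096, size=(8, 8))   # 12-bit pixels
baseline, truncated = truncate_block(block, threshold=3000)
```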
Zhou, Liang; Xu, Jian-Gang; Sun, Dong-Qi; Ni, Tian-Hua
2013-02-01
Agricultural non-point source pollution is an important cause of river deterioration; thus, identifying and concentrating control on the key source areas is one of the most effective approaches for non-point source pollution control. This study adopts an inventory method to analyze four kinds of pollution sources and their emission intensities for chemical oxygen demand (COD), total nitrogen (TN), and total phosphorus (TP) in 173 counties (cities, districts) in the Huaihe River Basin. The four pollution sources are livestock breeding, rural life, farmland cultivation, and aquaculture. The paper mainly addresses the identification of non-point pollution sensitive areas, key pollution sources, and their spatial distribution characteristics through cluster analysis, sensitivity evaluation, and spatial analysis. A geographic information system (GIS) and SPSS were used to carry out this study. The results show that the COD, TN and TP emissions of agricultural non-point sources in the Huaihe River Basin in 2009 were 206.74 × 10^4 t, 66.49 × 10^4 t, and 8.74 × 10^4 t, respectively; the emission intensities were 7.69, 2.47, and 0.32 t·hm^-2; and the proportions of COD, TN, and TP emissions were 73%, 24%, and 3%. The analysis shows that the major pollution sources of COD, TN and TP were livestock breeding and rural life; the sensitive areas and priority pollution control areas for non-point source pollution are sub-basins of the upper tributaries of the Huaihe River, such as the Shahe, Yinghe, Beiru, Jialu and Qingyi Rivers; and livestock breeding is the key pollution source in the priority pollution control areas. Finally, the paper concludes that rural life has the highest pollution contribution rate, while comprehensive pollution is the type that is hardest to control.
Jiang, Mengzhen; Chen, Haiying; Chen, Qinghui
2013-11-01
To provide a scientific basis for environmental planning on non-point source pollution prevention and control, and to improve the efficiency of pollution regulation, this paper establishes a Grid Landscape Contrast Index based on the Location-weighted Landscape Contrast Index, according to "source-sink" theory. The spatial distribution of non-point source pollution in the Jiulongjiang Estuary could be worked out using high-resolution remote sensing images. The results showed that the "source" area of nitrogen and phosphorus in the Jiulongjiang Estuary was 534.42 km² in 2008, and the "sink" area was 172.06 km². The "source" of non-point source pollution was distributed mainly over Xiamen island, most of Haicang, the east of Jiaomei, and the river banks of Gangwei and Shima; the "sink" was distributed over the southwest of Xiamen island and the west of Shima. Generally speaking, "source" intensity weakens as the distance from the sea boundary increases, while "sink" intensity strengthens. Copyright © 2013 Elsevier Ltd. All rights reserved.
Scintillation detector efficiencies for neutrons in the energy region above 20 MeV
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dickens, J.K.
1991-01-01
The computer program SCINFUL (for SCINtillator FULl response) is a program designed to provide a calculated complete pulse-height response anticipated for neutrons being detected by either an NE-213 (liquid) scintillator or an NE-110 (solid) scintillator in the shape of a right circular cylinder. The point neutron source may be placed at any location with respect to the detector, even inside of it. The neutron source may be monoenergetic, or Maxwellian distributed, or distributed between chosen lower and upper bounds. The calculational method uses Monte Carlo techniques, and it is relativistically correct. Extensive comparisons with a variety of experimental data have been made. There is generally overall good agreement (less than 10% differences) of results for SCINFUL calculations with measured integral detector efficiencies for the design incident neutron energy range of 0.1 to 80 MeV. Calculations of differential detector responses, i.e. yield versus response pulse height, are generally within about 5% on the average for incident neutron energies between 16 and 50 MeV and for the upper 70% of the response pulse height. For incident neutron energies between 50 and 80 MeV, the calculated shape of the response agrees with measurements, but the calculations tend to underpredict the absolute values of the measured responses. Extension of the program to compute responses for incident neutron energies greater than 80 MeV will require new experimental data on neutron interactions with carbon. 32 refs., 6 figs., 2 tabs.
Hernández, Klaudia L; Yannicelli, Beatriz; Olsen, Lasse M; Dorador, Cristina; Menschel, Eduardo J; Molina, Verónica; Remonsellez, Francisco; Hengst, Martha B; Jeffrey, Wade H
2016-01-01
In high altitude environments, extreme levels of solar radiation and important differences of ionic concentrations over narrow spatial scales may modulate microbial activity. In Salar de Huasco, a high-altitude wetland in the Andean mountains, the high diversity of microbial communities has been characterized and associated with strong environmental variability. Communities that differed in light history and environmental conditions, such as nutrient concentrations and salinity from different spatial locations, were assessed for bacterial secondary production (BSP, 3H-leucine incorporation) response from short-term exposures to solar radiation. We sampled during austral spring seven stations categorized as: (a) source stations, with recently emerged groundwater (no previous solar exposure); (b) stream running water stations; (c) stations connected to source waters but far downstream from source points; and (d) isolated ponds disconnected from ground sources or streams with a longer isolation and solar exposure history. Very high values of 0.25 μE m^-2 s^-1, 72 W m^-2 and 12 W m^-2 were measured for PAR, UVA, and UVB incident solar radiation, respectively. The environmental factors measured formed two groups of stations reflected by principal component analyses (near to groundwater sources and isolated systems) where isolated ponds had the highest BSP and microbial abundance (35 microalgae taxa, picoeukaryotes, nanoflagellates, and bacteria) plus higher salinities and PO4^3- concentrations. BSP short-term response (4 h) to solar radiation was measured by 3H-leucine incorporation under four different solar conditions: full sun, no UVB, PAR, and dark. Microbial communities established in waters with the longest surface exposure (e.g., isolated ponds) had the lowest BSP response to solar radiation treatments, and thus were likely best adapted to solar radiation exposure contrary to ground source waters. These results support our light history (solar exposure) hypothesis where the more isolated the community is from ground water sources, the better adapted it is to solar radiation. We suggest that factors other than solar radiation (e.g., salinity, PO4^3-, NO3^-) are also important in determining microbial productivity in heterogeneous environments such as the Salar de Huasco.
CENTAURUS A AS A POINT SOURCE OF ULTRAHIGH ENERGY COSMIC RAYS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, Hang Bae, E-mail: hbkim@hanyang.ac.kr
We probe the possibility that Centaurus A (Cen A) is a point source of ultrahigh energy cosmic rays (UHECRs) observed by the Pierre Auger Observatory (PAO), through a statistical analysis of the arrival direction distribution. For this purpose, we set up the Cen A dominance model for the UHECR sources, in which Cen A contributes the fraction f_C of all UHECRs with energy above 5.5 × 10^19 eV and the isotropic background contributes the remaining 1 - f_C fraction. The effect of the intergalactic magnetic fields on the bending of the trajectories of Cen A-originated UHECRs is parameterized by the Gaussian smearing angle θ_s. For the statistical analysis, we adopted the correlational angular distance distribution (CADD) for the reduction of the arrival direction distribution and the Kuiper test to compare the observed and the expected CADDs. We identify the excess of UHECRs in the Cen A direction and fit the CADD of the observed PAO data by varying the two parameters f_C and θ_s of the Cen A dominance model. The best-fit parameter values are f_C ≈ 0.1 (the corresponding Cen A fraction observed at PAO is f_C,PAO ≈ 0.15, that is, about 10 out of 69 UHECRs) and θ_s = 5° with the maximum likelihood L_max = 0.29. This result supports the existence of a point source smeared by the intergalactic magnetic fields in the direction of Cen A. If Cen A is actually the source responsible for the observed excess of UHECRs, the rms deflection angle of the excess UHECRs implies an intergalactic magnetic field of order 10 nG in the vicinity of Cen A.
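The Kuiper statistic used to compare the observed and expected CADDs can be computed directly from its textbook definition; the sketch below is that generic definition, not the authors' code.

```python
import numpy as np

def kuiper_statistic(sample, reference_cdf):
    """Kuiper statistic V = D+ + D- between the empirical CDF of `sample` and
    a reference CDF; unlike Kolmogorov-Smirnov, V is equally sensitive near
    the tails, which suits distributions of angular distances."""
    x = np.sort(np.asarray(sample))
    n = x.size
    cdf = reference_cdf(x)
    d_plus = np.max(np.arange(1, n + 1) / n - cdf)
    d_minus = np.max(cdf - np.arange(0, n) / n)
    return d_plus + d_minus

# Example: compare normalised angular distances against a uniform reference CDF
v = kuiper_statistic(np.random.default_rng(3).uniform(0, 1, 200), lambda x: x)
```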
Berman, Amanda; Figueroa, Maria Elena; Storey, J Douglas
2017-01-01
During an emerging health crisis like the 2014 Ebola outbreak in West Africa, communicating with communities to learn from them and to provide timely information can be a challenge. Insight into community thinking, however, is crucial for developing appropriate communication content and strategies and for monitoring the progress of the emergency response. In November 2014, the Health Communication Capacity Collaborative partnered with GeoPoll to implement a Short Message Service (SMS)-based survey that could create a link with affected communities and help guide the communication response to Ebola. The ideation metatheory of communication and behavior change guided the design of the survey questionnaire, which produced critical insights into trusted sources of information, knowledge of transmission modes, and perceived risks, all factors relevant to the design of an effective communication response that further catalyzed ongoing community actions. The use of GeoPoll's infrastructure for data collection proved a crucial source of almost-real-time data. It allowed for rapid data collection and processing under chaotic field conditions. Though not a replacement for standard survey methodologies, SMS surveys can provide quick answers within a larger research process to decide on immediate steps for communication strategies when the demand for speedy emergency response is high. They can also help frame additional research as the response evolves and, overall, monitor the pulse of the situation at any point in time.
Electrically-detected magnetic resonance in semiconductor nanostructures inserted in microcavities
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bagraev, Nikolay; Danilovskii, Eduard; Gets, Dmitrii
2013-12-04
We present the first findings of a new electrically-detected electron spin resonance technique (EDESR), which reveals point defects in ultra-narrow silicon quantum wells (Si-QW) confined by superconductor δ-barriers. This technique allows ESR identification without an external cavity, a high-frequency source, or a recorder, by measuring only the magnetoresistance response caused by the microcavities embedded in the Si-QW plane.
Detecting Statistically Significant Communities of Triangle Motifs in Undirected Networks
2015-03-16
moderately-sized networks. As a consequence, throughout this effort, a simulated annealing (SA) algorithm will be employed to effectively search the... then increment k by 1 and repeat the search to find z*3. One can continue to increment k until W < zδ, at which point the algorithm will stop and...
2013-09-01
of sperm whales. Although the methods developed in those papers demonstrate feasibility, they are not applicable to... location clicks (Marques et al., 2009) instead of detecting individual animals or groups of animals; these cue-counting methods will not be specifically
Wilkinson, J R; Yu, J; Abbas, H K; Scheffler, B E; Kim, H S; Nierman, W C; Bhatnagar, D; Cleveland, T E
2007-10-01
Aflatoxins are toxic and carcinogenic polyketide metabolites produced by fungal species, including Aspergillus flavus and A. parasiticus. The biosynthesis of aflatoxins is modulated by many environmental factors, including the availability of a carbon source. The gene expression profile of A. parasiticus was evaluated during a shift from a medium with low concentration of simple sugars, yeast extract (YE), to a similar medium with sucrose, yeast extract sucrose (YES). Gene expression and aflatoxins (B1, B2, G1, and G2) were quantified from fungal mycelia harvested pre- and post-shifting. When compared with YE media, YES caused temporary reduction of the aflatoxin levels detected at 3-h post-shifting and they remained low well past 12 h post-shift. Aflatoxin levels did not exceed the levels in YE until 24 h post-shift, at which time point a tenfold increase was observed over YE. Microarray analysis comparing the RNA samples from the 48-h YE culture to the YES samples identified a total of 2120 genes that were expressed across all experiments, including most of the aflatoxin biosynthesis genes. One-way analysis of variance (ANOVA) identified 56 genes that were expressed with significant variation across all time points. Three genes responsible for converting norsolorinic acid to averantin were identified among these significantly expressed genes. The potential involvement of these genes in the regulation of aflatoxin biosynthesis is discussed.
Trabelsi, H; Gantri, M; Sediki, E
2010-01-01
We present a numerical model for the study of a general, two-dimensional, time-dependent, laser radiation transfer problem in a biological tissue. The model is suitable for many situations, especially when the external laser source is pulsed or continuous. We used a control volume discrete-ordinate method associated with an implicit, three-level, second-order, time-differencing scheme. In medical imaging by laser techniques, this could serve as an optical tomography forward model. We considered a very thin rectangular biological tissue-like medium subjected to a visible or a near-infrared laser source. Different cases were treated numerically. The source was assumed to be monochromatic and collimated. We used either a continuous source or a short-pulsed source. The transmitted radiance was computed at detector points on the boundaries. Also, the distribution of the internal radiation intensity at different instants is presented. According to the source type, we examined either the steady-state response or the transient response of the medium. First, our model was validated by experimental results from the literature for a homogeneous biological tissue. The space and angular grid independence of our results is shown. Next, the proposed model was used to study changes in transmitted radiation for a homogeneous background medium in which two heterogeneous objects were embedded. As a last investigation, we studied a multilayered biological tissue. We simulated near-infrared radiation in human skin, fat and muscle. Some results concerning the effects of fat thickness and of the detector and source positions on the reflected radiation are presented.
Piccolomini, Angelica A; Fiabon, Alex; Borrotti, Matteo; De Lucrezia, Davide
2017-01-01
We optimized the heterologous expression of trans-isoprenyl diphosphate synthase (IDS), the key enzyme involved in the biosynthesis of trans-polyisoprene. trans-Polyisoprene is a particularly valuable compound due to its superior stiffness, excellent insulation, and low thermal expansion coefficient. Currently, trans-polyisoprene is mainly produced through chemical synthesis and no biotechnological processes have been established so far for its large-scale production. In this work, we employed D-optimal design and response surface methodology to optimize the expression of the thermophilic enzyme IDS from Thermococcus kodakaraensis. The design of experiments took into account six factors (preinduction cell density, inducer concentration, postinduction temperature, salt concentration, alternative carbon source, and protein inhibitor) and seven culture media (LB, NZCYM, TB, M9, Ec, Ac, and EDAVIS) at five different pH points. By screening only 109 experimental points, we were able to improve IDS production by 48% in closed-batch fermentation. © 2015 International Union of Biochemistry and Molecular Biology, Inc.
Coherent backscattering enhancement in cavities. Highlights of the role of symmetry.
Gallot, Thomas; Catheline, Stefan; Roux, Philippe
2011-04-01
Through experiments and simulations, the consequences of symmetry on coherent backscattering enhancement (CBE) are studied in cavities. Three main results are highlighted. First, the CBE outside the source is observed: (a) on a single symmetric point in a one-dimensional (1-D) cavity, in a disk and in a symmetric chaotic plate; (b) on three symmetric points in a two-dimensional (2-D) rectangle; and (c) on seven symmetric points in a three-dimensional (3-D) parallelepiped cavity. Second, the existence of enhanced intensity lines and planes in 2-D and 3-D simple-shape cavities is demonstrated. Third, it is shown how the anti-symmetry caused by the special boundary conditions is responsible for the existence of a coherent backscattering decrement with a dimensional dependence of R = (1/2)^d, with d = 1, 2, 3 as the dimensionality of the cavity.
Evaluation of multiple emission point facilities
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miltenberger, R.P.; Hull, A.P.; Strachan, S.
In 1970, the New York State Department of Environmental Conservation (NYSDEC) assumed responsibility for the environmental aspect of the state's regulatory program for by-product, source, and special nuclear material. The major objective of this study was to provide consultation to NYSDEC and the US NRC to assist NYSDEC in determining if broad-based licensed facilities with multiple emission points were in compliance with NYCRR Part 380. Under this contract, BNL would evaluate a multiple emission point facility, identified by NYSDEC, as a case study. The review would be a nonbinding evaluation of the facility to determine likely dispersion characteristics, compliance with specified release limits, and implementation of the ALARA philosophy regarding effluent release practices. From the data collected, guidance as to areas of future investigation and the impact of new federal regulations were to be developed. Reported here is the case study for the University of Rochester, Strong Memorial Medical Center and Riverside Campus.
Application of Least Mean Square Algorithms to Spacecraft Vibration Compensation
NASA Technical Reports Server (NTRS)
Woodard, Stanley E.; Nagchaudhuri, Abhijit
1998-01-01
This paper describes the application of the Least Mean Square (LMS) algorithm in tandem with the Filtered-X Least Mean Square algorithm for controlling a science instrument's line-of-sight pointing. Pointing error is caused by a periodic disturbance and spacecraft vibration. A least mean square algorithm is used on-orbit to produce the transfer function between the instrument's servo-mechanism and error sensor. The result is a set of adaptive transversal filter weights tuned to the transfer function. The Filtered-X LMS algorithm, which is an extension of the LMS, tunes a set of transversal filter weights to the transfer function between the disturbance source and the servo-mechanism's actuation signal. The servo-mechanism's resulting actuation counters the disturbance response and thus maintains accurate science instrument pointing. A simulation model of the Upper Atmosphere Research Satellite is used to demonstrate the algorithms.
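A minimal LMS system-identification loop of the kind referred to above can be sketched in a few lines; this is a generic illustration (filter length and step size are arbitrary), not the flight implementation. The Filtered-X variant would additionally pass the reference signal through the identified secondary-path model before the weight update.

```python
import numpy as np

def lms_identify(x, d, n_taps=32, mu=0.01):
    """Plain LMS: adapt transversal filter weights w so that the filtered
    reference x tracks the desired signal d (e.g. the error-sensor response
    to the servo-mechanism's actuation)."""
    w = np.zeros(n_taps)
    buf = np.zeros(n_taps)
    y = np.zeros(len(d))
    for k in range(len(d)):
        buf = np.roll(buf, 1)
        buf[0] = x[k]                  # most recent reference sample first
        y[k] = w @ buf
        e = d[k] - y[k]                # instantaneous error
        w += 2.0 * mu * e * buf        # steepest-descent weight update
    return w, y
```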
Long range laser traversing system
NASA Technical Reports Server (NTRS)
Caudill, L. O. (Inventor)
1974-01-01
The relative azimuth bearing between first and second spaced terrestrial points, which may be obscured from each other by intervening terrain, is measured by placing at one of the points a laser source projecting a collimated beam upwardly in the vertical plane. The collimated laser beam is detected at the second point by positioning the optical axis of a receiving instrument for the laser beam in such a manner that the beam intercepts the optical axis. In response to the optical axis intercepting the beam, the beam is deflected into two different ray paths by a beam splitter having an apex located on the optical axis. The energy in the ray paths is detected by separate photoresponsive elements that drive logic networks for providing indications of: (1) the optical axis intercepting the beam; (2) the beam being on the left side of the optical axis; and (3) the beam being on the right side of the optical axis.
Ferdous, Jannatul; Sultana, Rebeca; Rashid, Ridwan B.; Tasnimuzzaman, Md.; Nordland, Andreas; Begum, Anowara; Jensen, Peter K. M.
2018-01-01
Bangladesh is a cholera endemic country with a population at high risk of cholera. Toxigenic and non-toxigenic Vibrio cholerae (V. cholerae) can cause cholera and cholera-like diarrheal illness and outbreaks. Drinking water is one of the primary routes of cholera transmission in Bangladesh. The aim of this study was to conduct a comparative assessment of the presence of V. cholerae between point-of-drinking water and source water, and to investigate the variability of virulence profiles using molecular methods, in a densely populated low-income settlement of Dhaka, Bangladesh. Water samples were collected and tested for V. cholerae from “point-of-drinking” and “source” in 477 study households in routine visits at 6-week intervals over a period of 14 months. We studied the virulence profiles of V. cholerae positive water samples using 22 different virulence gene markers present in toxigenic O1/O139 and non-O1/O139 V. cholerae using polymerase chain reaction (PCR). A total of 1,463 water samples were collected, with 1,082 samples from point-of-drinking water in 388 households and 381 samples from 66 water sources. V. cholerae was detected in 10% of point-of-drinking water samples and in 9% of source water samples. Twenty-three percent of households and 38% of the sources were positive for V. cholerae in at least one visit. Samples collected from point-of-drinking and linked sources within a 7-day interval showed significantly higher odds (P < 0.05) of V. cholerae presence in point-of-drinking water compared to source water [OR = 17.24 (95% CI = 7.14–42.89)]. Based on the 7-day interval data, 53% (17/32) of source water samples were negative for V. cholerae while linked point-of-drinking water samples were positive. There were significantly higher odds (p < 0.05) of the presence of V. cholerae O1 [OR = 9.13 (95% CI = 2.85–29.26)] and V. cholerae O139 [OR = 4.73 (95% CI = 1.19–18.79)] in source water samples than in point-of-drinking water samples. Contamination of water at the point-of-drinking is less likely to depend on the contamination at the water source. Hygiene education interventions and programs should focus on and emphasize water at the point-of-drinking, including repeated cleaning of drinking vessels, which is of paramount importance in preventing cholera. PMID:29616005
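For reference, the odds ratios and 95% confidence intervals quoted above follow the standard 2×2-table calculation; the sketch below applies that generic calculation to hypothetical counts, not the study's data.

```python
import numpy as np

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI for a 2x2 table:
    a = exposed & positive, b = exposed & negative,
    c = unexposed & positive, d = unexposed & negative."""
    or_ = (a * d) / (b * c)
    se_log = np.sqrt(1.0/a + 1.0/b + 1.0/c + 1.0/d)
    lo, hi = np.exp(np.log(or_) + np.array([-z, z]) * se_log)
    return or_, lo, hi

print(odds_ratio_ci(30, 70, 5, 95))   # hypothetical counts, for illustration only
```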
Searches for point sources in the Galactic Center region
NASA Astrophysics Data System (ADS)
di Mauro, Mattia; Fermi-LAT Collaboration
2017-01-01
Several groups have demonstrated the existence of an excess in the gamma-ray emission around the Galactic Center (GC) with respect to the predictions from a variety of Galactic Interstellar Emission Models (GIEMs) and point source catalogs. The origin of this excess, peaked at a few GeV, is still under debate. A possible interpretation is that it comes from a population of unresolved Millisecond Pulsars (MSPs) in the Galactic bulge. We investigate the detection of point sources in the GC region using new tools which the Fermi-LAT Collaboration is developing in the context of searches for Dark Matter (DM) signals. These new tools perform very fast scans, iteratively testing for additional point sources at each pixel of the region of interest. We also show how to discriminate between point sources and structural residuals from the GIEM. We apply these methods to the GC region considering different GIEMs and testing the DM and MSP interpretations for the GC excess. Additionally, we create a list of promising MSP candidates that could represent the brightest sources of an MSP bulge population.
NASA Astrophysics Data System (ADS)
Fang, Huaiyang; Lu, Qingshui; Gao, Zhiqiang; Shi, Runhe; Gao, Wei
2013-09-01
China's economy has grown rapidly since 1978. This rapid economic growth led to a fast increase in fertilizer and pesticide consumption. A significant portion of fertilizers and pesticides entered the water and caused water quality degradation. At the same time, rapid economic growth also increased point source pollution discharges into the water. Eutrophication has become a major threat to the water bodies. Worsening environmental problems forced governments to take measures to control water pollution. We extracted land cover from Landsat TM images, calculated point source pollution with the export coefficient method, and then ran the SWAT model to simulate non-point source pollution. We found that the annual TP load from industrial pollution into rivers is 115.0 t in the entire watershed. Average annual TP loads from each sub-basin ranged from 0 to 189.4 tons. Higher TP loads from livestock and rural domestic sources mainly occur in areas that are far from large towns or cities and where the TP loads from industry are relatively low. The mean annual TP load delivered to the streams was 246.4 tons; the highest TP loads occurred in the northern part of this area, and the lowest TP loads are mainly distributed in the middle part. Therefore, point source pollution accounts for a high proportion of the load in this area, and governments should take measures to control point source pollution.
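The export-coefficient step mentioned above reduces to a weighted sum over source areas or counts; the coefficients and areas in the sketch below are placeholders, not values calibrated for this watershed.

```python
# Generic export-coefficient load estimate: L = sum_i(E_i * A_i), where E_i is
# the export coefficient of source/land-use i and A_i its area (or unit count).
# All numbers are placeholders for illustration only.
export_coeff_kg_per_ha_yr = {"cropland": 29.0, "residential": 11.0, "forest": 2.4}
area_ha = {"cropland": 1200.0, "residential": 300.0, "forest": 800.0}

tp_load_kg_yr = sum(export_coeff_kg_per_ha_yr[k] * area_ha[k] for k in area_ha)
print(f"Estimated load: {tp_load_kg_yr:.0f} kg/yr")
```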
NASA Astrophysics Data System (ADS)
Song, Seok Goo; Kwak, Sangmin; Lee, Kyungbook; Park, Donghee
2017-04-01
Predicting the intensity and variability of strong ground motions is a critical element of seismic hazard assessment. The characteristics and variability of the earthquake rupture process may be a dominant factor in determining the intensity and variability of near-source strong ground motions. Song et al. (2014) demonstrated that the variability of earthquake rupture scenarios can be effectively quantified in the framework of 1-point and 2-point statistics of earthquake source parameters, constrained by rupture dynamics and past events. The developed pseudo-dynamic source modeling schemes were also validated against recorded ground motion data from past events and empirical ground motion prediction equations (GMPEs) on the broadband platform (BBP) developed by the Southern California Earthquake Center (SCEC). Recently we improved the computational efficiency of the developed pseudo-dynamic source-modeling scheme by adopting the nonparametric co-regionalization algorithm, initially introduced and applied in geostatistics. We also investigated the effect of the earthquake rupture process on near-source ground motion characteristics in the framework of 1-point and 2-point statistics, focusing particularly on the forward directivity region. Finally, we will discuss whether pseudo-dynamic source modeling can reproduce the variability (standard deviation) of empirical GMPEs, and the efficiency of 1-point and 2-point statistics in addressing the variability of ground motions.
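To illustrate what 1-point and 2-point statistics of a source parameter mean in practice, the sketch below computes the marginal moments and a spatial autocorrelation estimate for a hypothetical 1-D slip profile; it is not the authors' pseudo-dynamic source generator.

```python
import numpy as np

def one_and_two_point_stats(slip, max_lag):
    """1-point statistics: marginal mean and standard deviation of slip.
    2-point statistics: normalised spatial autocorrelation along strike."""
    mean, std = slip.mean(), slip.std()
    fluct = slip - mean
    corr = np.array([np.mean(fluct[:len(fluct) - lag] * fluct[lag:]) / fluct.var()
                     for lag in range(max_lag)])
    return mean, std, corr

slip = np.random.default_rng(4).gamma(2.0, 1.0, size=512)   # hypothetical slip samples
mean, std, corr = one_and_two_point_stats(slip, max_lag=50)
```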
NASA Astrophysics Data System (ADS)
Chu, Zhigang; Yang, Yang; He, Yansong
2015-05-01
Spherical Harmonics Beamforming (SHB) with solid spherical arrays has become a particularly attractive tool for acoustic source identification in cabin environments. However, it presents some intrinsic limitations, specifically poor spatial resolution and severe sidelobe contamination. This paper focuses on overcoming these limitations effectively by deconvolution. First and foremost, a new formulation is proposed that expresses SHB's output as a convolution of the true source strength distribution and the point spread function (PSF), defined as SHB's response to a unit-strength point source. Additionally, the typical deconvolution methods initially suggested for planar arrays, the deconvolution approach for the mapping of acoustic sources (DAMAS), nonnegative least-squares (NNLS), Richardson-Lucy (RL) and CLEAN, are successfully adapted to SHB and are capable of producing highly resolved and deblurred maps. Finally, the merits of the deconvolution methods are validated, and the relationships of the source strength and pressure contribution reconstructed by the deconvolution methods vs. focus distance are explored both with computer simulations and experimentally. Several interesting results have emerged from this study: (1) compared with SHB, DAMAS, NNLS, RL and CLEAN can all not only improve the spatial resolution dramatically but also reduce or even eliminate the sidelobes effectively, allowing clear and unambiguous identification of a single source or incoherent sources. (2) The applicability of RL to coherent sources is highest, followed by DAMAS and NNLS, while that of CLEAN is lowest due to its failure to suppress sidelobes. (3) The previous two results hold whether or not the real distance from the source to the array center equals the assumed one, referred to as the focus distance. (4) The true source strength can be recovered by dividing the reconstructed one by a coefficient that is the square of the focus distance divided by the real distance from the source to the array center. (5) The reconstructed pressure contribution is almost unaffected by the focus distance, always approximating the true one. This study is of great significance for the accurate localization and quantification of acoustic sources in cabin environments.
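Of the four methods compared above, NNLS is the simplest to sketch: with a matrix A whose columns hold the beamformer's PSF for a unit-strength source at each grid point and the measured map b, the deconvolved source-strength map solves a nonnegative least-squares problem. The sketch below is generic (the toy PSF and map are hypothetical), not the paper's adaptation to SHB.

```python
import numpy as np
from scipy.optimize import nnls

def deconvolve_nnls(psf_matrix, beamform_map):
    """Solve min ||A q - b||_2 subject to q >= 0, where column j of A is the
    beamformer's response (PSF) to a unit-strength source at grid point j and
    b is the flattened measured beamforming output on the same grid."""
    q, _residual_norm = nnls(psf_matrix, beamform_map)
    return q

A = np.eye(4) + 0.1                     # toy PSF with uniform sidelobe leakage
b = A @ np.array([0.0, 1.0, 0.0, 0.5])  # synthetic map produced by two sources
print(deconvolve_nnls(A, b))
```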
NASA Astrophysics Data System (ADS)
Drummond, J. D.; Bernal, S.; Meredith, W.; Schumer, R.; Martí Roca, E.
2017-12-01
Waste water treatment plant (WWTP) effluents constitute point source inputs of fine sediment, nutrients, carbon, and microbes to stream ecosystems. A range of responses to these inputs may be observed in recipient streams, including increases in respiration rates, which augment CO2 emissions to the atmosphere. Yet, little is known about which fractions of organic carbon (OC) contribute the most to stream metabolism in WWTP-influenced streams. Fine particulate OC (POC) represents ca. 40% of the total mass of OC in river networks, and is generally more labile than dissolved OC. Therefore, POC inputs from WWTPs could contribute disproportionately to higher rates of heterotrophic metabolism by stream microbial communities. The aim of this study was to investigate the influence of POC inputs from a WWTP effluent on the metabolism of a Mediterranean stream over a wide range of hydrologic conditions. We hypothesized that POC inputs would have a positive effect on respiration rates, and that the response to POC availability would be larger during low flows when the dilution capacity of the recipient stream is negligible. We focused on the easily resuspended fine sediment near the sediment-water interface (top 3 cm), as this region is a known hot spot for biogeochemical processes. For one year, samples of resuspended sediment were collected bimonthly at 7 sites from 0 to 800 m downstream of the WWTP point source. We measured total POC, organic matter (OM) content (%), and the associated metabolic activity of the resuspended sediment using the resazurin-resorufin smart tracer system as a proxy for aerobic ecosystem respiration. Resuspended sediment showed no difference in total POC over the year, while the OM content increased with decreasing discharge. This result together with the decreasing trend of total POC observed downstream of the point source during autumn after a long dry period, suggests that the WWTP effluent was the main contributor to stream POC. Furthermore, there was a positive relationship between aerobic ecosystem respiration and OM content in resuspended sediments. Our results suggest that WWTP effluents can be important sources of POC to recipient streams, and that the increased availability of POC enhances aerobic ecosystem respiration, especially when the dilution capacity of the recipient streams is low.
Point and Compact Hα Sources in the Interior of M33
NASA Astrophysics Data System (ADS)
Moody, J. Ward; Hintz, Eric G.; Joner, Michael D.; Roming, Peter W. A.; Hintz, Maureen L.
2017-12-01
A variety of interesting objects such as Wolf-Rayet stars, tight OB associations, planetary nebulae, X-ray binaries, etc., can be discovered as point or compact sources in Hα surveys. How these objects are distributed through a galaxy sheds light on the galaxy's star formation rate and history, mass distribution, and dynamics. The nearby galaxy M33 is an excellent place to study the distribution of Hα-bright point sources in a flocculent spiral galaxy. We have reprocessed an archived WIYN continuum-subtracted Hα image of the inner 6.5' × 6.5' of M33 and, employing both eye and machine searches, have tabulated sources with a flux greater than approximately 10^-15 erg cm^-2 s^-1. We have effectively recovered previously mapped H II regions and have identified 152 unresolved point sources and 122 marginally resolved compact sources, of which 39 have not been previously identified in any archive. An additional 99 Hα sources were found to have sufficient archival flux values to generate a Spectral Energy Distribution (SED). Using the SED, flux values, Hα flux value, and compactness, we classified 67 of these sources.
Xu, Hua-Shan; Xu, Zong-Xue; Liu, Pin
2013-03-01
One of the key techniques in establishing and implementing a TMDL (total maximum daily load) is to use a hydrological model to quantify non-point source pollutant loads, establish BMP scenarios, and reduce those loads. Non-point source pollutant loads in different year types (wet, normal and dry years) were estimated using the SWAT model in the Zhangweinan River basin, and the spatial distribution characteristics of the loads were analyzed on the basis of the simulation results. During wet years, total nitrogen (TN) and total phosphorus (TP) accounted for 0.07% and 27.24% of the total non-point source pollutant loads, respectively. Spatially, agricultural and residential land with steep slopes contributes the most non-point source pollutant load in the basin. Relative to the non-point source pollutant loads during the baseline period, 47 BMP scenarios were set up to simulate the reduction efficiency for 5 kinds of pollutants (organic nitrogen, organic phosphorus, nitrate nitrogen, dissolved phosphorus and mineral phosphorus) in 8 priority control subbasins. By comparing cost-effectiveness among the different BMP scenarios, constructing vegetated ditches was identified as the best measure to reduce TN and TP, with unit pollutant reduction costs of 16.11-151.28 yuan·kg^-1 for TN and 100-862.77 yuan·kg^-1 for TP, making it the most cost-effective of the 47 BMP scenarios. The results could provide a scientific basis and technical support for environmental protection and sustainable utilization of water resources in the Zhangweinan River basin.
Rikkerink, Erik H A
2018-03-08
Organisms face stress from multiple sources simultaneously and require mechanisms to respond to these scenarios if they are to survive in the long term. This overview focuses on a series of key points that illustrate how disorder and post-translational changes can combine to play a critical role in orchestrating the response of organisms to the stress of a changing environment. Increasingly, protein complexes are thought of as dynamic multi-component molecular machines able to adapt through compositional, conformational and/or post-translational modifications to control their largely metabolic outputs. These metabolites then feed into cellular physiological homeostasis or the production of secondary metabolites with novel anti-microbial properties. The control of adaptations to stress operates at multiple levels including the proteome, and the dynamic nature of proteomic changes suggests a parallel with the equally dynamic epigenetic changes at the level of nucleic acids. Given their properties, I propose that some disordered protein platforms specifically enable organisms to sense and react rapidly as the first line of response to change. Using examples from highly dynamic host-pathogen and host-stress responses, I illustrate how disordered proteins are key to fulfilling the need for multiple levels of integration of response at different time scales to create robust control points.
Pediatric vision screening using binocular retinal birefringence scanning
NASA Astrophysics Data System (ADS)
Nassif, Deborah S.; Gramatikov, Boris; Guyton, David L.; Hunter, David G.
2003-07-01
Amblyopia, a leading cause of vision loss in childhood, is responsive to treatment if detected early in life. Risk factors for amblyopia, such as refractive error and strabismus, may be difficult to identify clinically in young children. Our laboratory has developed retinal birefringence scanning (RBS), in which a small spot of polarized light is scanned in a circle on the retina, and the returning light is measured for changes in polarization caused by the pattern of birefringent fibers that comprise the fovea. Binocular RBS (BRBS) detects the fixation of both eyes simultaneously and thus screens for strabismus, one of the risk factors of amblyopia. We have also developed a technique to automatically detect when the eye is in focus without measuring refractive error. This focus detection system utilizes a bull's eye photodetector optically conjugate to a point fixation source. Reflected light is focused back toward the point source by the optical system of the eye, and if the subject focuses on the fixation source, the returning light will be focused on the detector. We have constructed a hand-held prototype combining BRBS and focus detection measurements in one quick (< 0.5 second) and accurate (theoretically detecting ±1° of misalignment) measurement. This approach has the potential to reliably identify children at risk for amblyopia.
Minet, E P; Goodhue, R; Meier-Augenstein, W; Kalin, R M; Fenton, O; Richards, K G; Coxon, C E
2017-11-01
Excessive nitrate (NO₃⁻) concentration in groundwater raises health and environmental issues that must be addressed by all European Union (EU) member states under the Nitrates Directive and the Water Framework Directive. The identification of NO₃⁻ sources is critical to efficiently control or reverse NO₃⁻ contamination that affects many aquifers. In that respect, the use of the stable isotope ratios ¹⁵N/¹⁴N and ¹⁸O/¹⁶O in NO₃⁻ (expressed as δ¹⁵N-NO₃⁻ and δ¹⁸O-NO₃⁻, respectively) has long shown its value. However, limitations exist in complex environments where multiple nitrogen (N) sources coexist. This two-year study explores a method for improved NO₃⁻ source investigation in a shallow unconfined aquifer with mixed N inputs and a long-established NO₃⁻ problem. In this tillage-dominated area of free-draining soil and subsoil, suspected NO₃⁻ sources were diffuse applications of artificial fertiliser and organic point sources (septic tanks and farmyards). Bearing in mind that artificial diffuse sources were ubiquitous, groundwater samples were first classified according to a combination of two indicators relevant to point source contamination: presence/absence of organic point sources (i.e. septic tank and/or farmyard) near sampling wells and exceedance/non-exceedance of a contamination threshold value for sodium (Na⁺) in groundwater. This classification identified three contamination groups: agricultural diffuse source but no point source (D+P-), agricultural diffuse and point source (D+P+), and agricultural diffuse but point source occurrence ambiguous (D+P±). Thereafter, δ¹⁵N-NO₃⁻ and δ¹⁸O-NO₃⁻ data were superimposed on the classification. As δ¹⁵N-NO₃⁻ was plotted against δ¹⁸O-NO₃⁻, comparisons were made between the different contamination groups. Overall, both δ variables were significantly and positively correlated (p < 0.0001, r_s = 0.599, slope of 0.5), which was indicative of denitrification. An inspection of the contamination groups revealed that denitrification did not occur in the absence of point source contamination (group D+P-). In fact, strong significant denitrification lines occurred only in the D+P+ and D+P± groups (p < 0.0001, r_s > 0.6, 0.53 ≤ slope ≤ 0.76), i.e. where point source contamination was characterised or suspected. These lines originated from the 2-6‰ range for δ¹⁵N-NO₃⁻, which suggests that i) NO₃⁻ contamination was dominated by an agricultural diffuse N source (most likely the large organic matter pool that has incorporated ¹⁵N-depleted nitrogen from artificial fertiliser in agricultural soils and whose nitrification is stimulated by ploughing and fertilisation) rather than point sources, and ii) denitrification was possibly favoured by high dissolved organic carbon (DOC) from point sources. Combining contamination indicators and a large stable isotope dataset collected over a large study area could therefore improve our understanding of NO₃⁻ contamination processes in groundwater for better land use management. We hypothesise that in future research, additional contamination indicators (e.g. pharmaceutical molecules) could also be combined to disentangle NO₃⁻ contamination from animal and human wastes. Copyright © 2017 Elsevier Ltd. All rights reserved.
Probing infrared detectors through energy-absorption interferometry
NASA Astrophysics Data System (ADS)
Moinard, Dan; Withington, Stafford; Thomas, Christopher N.
2017-08-01
We describe an interferometric technique capable of fully characterizing the optical response of few-mode and multi-mode detectors using only power measurements, and its implementation at 1550 nm wavelength. Energy-absorption interferometry (EAI) is an experimental procedure in which the system under test is excited with two coherent, phase-locked sources. As the relative phase between the sources is varied, a fringe is observed in the detector output. Iterating over source positions, the fringes' complex visibilities allow the two-point detector response function to be retrieved: this correlation function corresponds to the state of coherence to which the detector is maximally sensitive. This detector response function can then be decomposed into a set of natural modes, in which the detector is incoherently sensitive to power. EAI therefore allows the reconstruction of the individual degrees of freedom through which the detector can absorb energy, including their relative sensitivities and full spatial forms. Coupling mechanisms into absorbing structures and their underlying solid-state phenomena can thus be studied, with direct applications in improving current infrared detector technology. EAI has previously been demonstrated at millimeter wavelengths. Here, we outline the theoretical basis of EAI and present a room-temperature 1550 nm wavelength infrared experiment we have constructed. Finally, we discuss how this experimental system will allow us to study optical coupling into fiber-based systems and near-infrared detectors.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wicaksono, S.; Yoon, S.F.; Loke, W.K.
2006-05-15
GaAsSbN layers closely lattice-matched to GaAs were studied for application as the intrinsic layer in a GaAs-based 1.3 μm p-i-n photodetector. The GaAsSbN was grown as the intrinsic layer for the GaAs/GaAsSbN/GaAs photodetector structure using solid-source molecular beam epitaxy in conjunction with a radio frequency plasma-assisted nitrogen source and valved antimony cracker source. The lattice mismatch of the GaAsSbN layer to GaAs was kept below 4000 ppm, which is sufficient to maintain coherent growth of ~0.45 μm thick GaAsSbN on the GaAs substrate. The growth temperature of the GaAsSbN layer was varied from 420 to 480 °C. All samples exhibit room-temperature photocurrent response in the 1.3 μm wavelength region, with dark current density of ~0.3-0.5 mA/cm² and responsivity of up to 33 mA/W at 2 V reverse bias. Reciprocal space maps reveal traces of point defects and segregation (clustering) of N and Sb, which may have a detrimental effect on the photocurrent responsivity.
Skyshine at neutron energies less than or equal to 400 MeV
DOE Office of Scientific and Technical Information (OSTI.GOV)
Alsmiller, A.G. Jr.; Barish, J.; Childs, R.L.
1980-10-01
The dose equivalent at an air-ground interface as a function of distance from an assumed azimuthally symmetric point source of neutrons can be calculated as a double integral. The integration is over the source strength as a function of energy and polar angle, weighted by an importance function that depends on the source variables and on the distance from the source to the field point. The neutron importance function for a source 15 m above the ground emitting only into the upper hemisphere has been calculated using the two-dimensional discrete ordinates code, DOT, and the first collision source code, GRTUNCL, in the adjoint mode. This importance function is presented for neutron energies less than or equal to 400 MeV, for source cosine intervals of 1 to 0.8, 0.8 to 0.6, 0.6 to 0.4, 0.4 to 0.2, and 0.2 to 0, and for various distances from the source to the field point. As part of the adjoint calculations a photon importance function is also obtained. This importance function for photon energies less than or equal to 14 MeV and for various source cosine intervals and source-to-field-point distances is also presented. These importance functions may be used to obtain skyshine dose equivalent estimates for any known source energy-angle distribution.
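The dose calculation summarized here is a double integral of source strength weighted by an importance function over neutron energy and emission cosine. A schematic numerical version, with made-up source and importance functions standing in for the tabulated adjoint results:

```python
import numpy as np

# Schematic skyshine estimate: H(d) = ∫ dE ∫ dμ  S(E, μ) · I(E, μ, d),
# with S the source strength per unit energy and direction cosine μ and
# I an importance function. Both functions below are arbitrary placeholders;
# the actual calculation uses the tabulated adjoint (DOT/GRTUNCL) results.

E = np.linspace(1.0, 400.0, 400)           # neutron energy grid (MeV)
mu = np.linspace(0.0, 1.0, 50)             # upward direction cosines
dE, dmu = E[1] - E[0], mu[1] - mu[0]

def source(E, mu):
    # hypothetical spectrum: soft in energy, mildly peaked toward the vertical
    return np.exp(-E[:, None] / 50.0) * (0.5 + mu[None, :])

def importance(E, mu, distance_m):
    # hypothetical importance: grows with energy, decays with distance
    return (E[:, None] / 400.0) * np.exp(-distance_m / (200.0 + 300.0 * mu[None, :]))

for d in (100.0, 400.0, 1000.0):
    dose = np.sum(source(E, mu) * importance(E, mu, d)) * dE * dmu
    print(f"distance {d:6.0f} m : relative dose equivalent {dose:.3e}")
```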
Analysis of the sources of uncertainty for EDR2 film‐based IMRT quality assurance
Shi, Chengyu; Papanikolaou, Nikos; Yan, Yulong; Weng, Xuejun; Jiang, gyu
2006-01-01
In our institution, patient-specific quality assurance (QA) for intensity-modulated radiation therapy (IMRT) is usually performed by measuring the dose to a point using an ion chamber and the dose to a plane using film. In order to perform absolute dose comparison measurements using film, an accurate calibration curve should be used. In this paper, we investigate the sources of uncertainty in the film response (calibration) curve, including film batch differences, film processor temperature effects, film digitization, and the treatment unit. In addition, we reviewed 50 patient-specific IMRT QA procedures performed in our institution in order to quantify the sources of error in film-based dosimetry. Our study showed that EDR2 film dosimetry can be done with less than 3% uncertainty. The EDR2 film response was not affected by the choice of treatment unit provided the nominal energy was the same. This investigation of the different sources of uncertainty in the film calibration procedure can provide a better understanding of film-based dosimetry and can improve quality control for IMRT QA. PACS numbers: 87.86.Cd, 87.53.Xd, 87.57.Nk PMID:17533329
Outlier Resistant Predictive Source Encoding for a Gaussian Stationary Nominal Source.
1987-09-18
breakdown point and influence function. The proposed sequence of predictive encoders attains strictly positive breakdown point and uniformly bounded... influence function, at the expense of increased mean difference-squared distortion and differential entropy, at the Gaussian nominal source.
Tian, Xing; Poeppel, David; Huber, David E
2011-01-01
The open-source toolbox "TopoToolbox" is a suite of functions that use sensor topography to calculate psychologically meaningful measures (similarity, magnitude, and timing) from multisensor event-related EEG and MEG data. Using a GUI and data visualization, TopoToolbox can be used to calculate and test the topographic similarity between different conditions (Tian and Huber, 2008). This topographic similarity indicates whether different conditions involve a different distribution of underlying neural sources. Furthermore, this similarity calculation can be applied at different time points to discover when a response pattern emerges (Tian and Poeppel, 2010). Because the topographic patterns are obtained separately for each individual, these patterns are used to produce reliable measures of response magnitude that can be compared across individuals using conventional statistics (Davelaar et al., submitted; Huber et al., 2008). TopoToolbox can be freely downloaded. It runs under MATLAB (The MathWorks, Inc.) and supports user-defined data structures as well as standard EEG/MEG data import using EEGLAB (Delorme and Makeig, 2004).
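The central quantity, topographic similarity, can be thought of as the normalized projection (angle) between two sensor-space patterns, with the projection of single trials onto a template giving a magnitude measure suitable for conventional statistics. A minimal numpy sketch of that general idea, not the toolbox's actual MATLAB implementation:

```python
import numpy as np

def topographic_similarity(pattern_a, pattern_b):
    """Cosine of the angle between two sensor topographies.

    pattern_a, pattern_b: 1-D arrays of length n_sensors (e.g. mean amplitude
    per sensor in a time window). Values near 1 suggest the same configuration
    of underlying sources; lower values suggest different source distributions.
    Mean-centering before the projection is a choice made here for illustration.
    """
    a = pattern_a - pattern_a.mean()
    b = pattern_b - pattern_b.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def projected_magnitude(trial_pattern, template_pattern):
    """Signed magnitude of a single-trial pattern along a template topography."""
    t = template_pattern / np.linalg.norm(template_pattern)
    return float(trial_pattern @ t)

# toy example with 64 sensors
rng = np.random.default_rng(0)
template = rng.normal(size=64)
trial = 2.0 * template + 0.5 * rng.normal(size=64)
print(topographic_similarity(trial, template), projected_magnitude(trial, template))
```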
Fienen, Michael N.; Nolan, Bernard T.; Feinstein, Daniel T.
2016-01-01
For decision support, the insights and predictive power of numerical process models can be hampered by the expertise and computational resources required to evaluate system response to new stresses. An alternative is to emulate the process model with a statistical "metamodel." Built on a dataset of collocated numerical model inputs and outputs, a groundwater flow model was emulated using a Bayesian network, an artificial neural network, and a gradient boosted regression tree. The response of interest was surface-water depletion expressed as the source of water-to-wells. The results have application for managing the allocation of groundwater. Each technique was tuned using cross-validation and further evaluated using a held-out dataset. A numerical MODFLOW-USG model of the Lake Michigan Basin, USA, was used for the evaluation. The performance and interpretability of each technique were compared, pointing to the advantages of each. The metamodel can extend to unmodeled areas.
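A stripped-down version of that metamodeling workflow, fitting one of the three emulators (a gradient boosted regression tree) to synthetic stand-in data, tuning it by cross-validation, and scoring it on a held-out set; the predictor names and response function are invented for illustration:

```python
# Sketch of emulating a process model with a gradient boosted regression tree.
# Synthetic data stand in for the collocated MODFLOW-USG inputs and outputs.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
n = 2000
X = np.column_stack([
    rng.uniform(0, 2000, n),    # e.g. pumping rate
    rng.uniform(10, 5000, n),   # e.g. distance from well to nearest stream
    rng.uniform(1, 100, n),     # e.g. aquifer transmissivity
])
# Hypothetical response: fraction of pumped water derived from surface water
y = np.exp(-X[:, 1] / 2000.0) * (0.5 + 0.5 * np.tanh(X[:, 2] / 50.0)) + rng.normal(0, 0.02, n)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

search = GridSearchCV(
    GradientBoostingRegressor(random_state=0),
    param_grid={"n_estimators": [100, 300], "max_depth": [2, 3], "learning_rate": [0.05, 0.1]},
    cv=5, scoring="r2",
)
search.fit(X_train, y_train)

print("best cross-validated parameters:", search.best_params_)
print("held-out R^2:", round(r2_score(y_test, search.best_estimator_.predict(X_test)), 3))
```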
Zhang, Jie; Wang, Peng; Li, Jingyi; Mendola, Pauline; Sherman, Seth; Ying, Qi
2016-12-01
A revised Community Multiscale Air Quality (CMAQ) model was developed to simulate the emission, reactions, transport, deposition, and gas-to-particle partitioning processes of 16 priority polycyclic aromatic hydrocarbons (PAHs), as described in Part I of this two-part series. The updated CMAQ model was applied in this study to quantify the contributions of different emission sources to the predicted PAH concentrations and excess cancer risk in the United States (US) in 2011. The cancer risk in the continental US due to inhalation exposure to outdoor naphthalene (NAPH) and seven larger carcinogenic PAHs (cPAHs) was predicted to be significant. The incremental lifetime cancer risk (ILCR) exceeds 1 × 10⁻⁵ in many urban and industrial areas. Exposure to PAHs was estimated to result in 5704 (608-10,800) excess lifetime cancer cases. Point sources not related to energy generation or oil and gas processing account for approximately 31% of the excess cancer cases, followed by non-road engines with an 18.6% contribution. The contribution of residential wood combustion (16.2%) is similar to that of transportation-related sources (mostly motor vehicles, with small contributions from railways and marine vessels; 13.4%). Oil and gas industry emissions, although large contributors to high concentrations of cPAHs regionally, are responsible for only 4.3% of the excess cancer cases, which is similar to the contributions of non-US sources (6.8%) and non-point sources (7.2%). Power generation units have the smallest impact on excess cancer risk, with a contribution of approximately 2.3%. Copyright © 2016 Elsevier Ltd. All rights reserved.
Storm water runoff for the Y-12 Plant and selected parking lots
DOE Office of Scientific and Technical Information (OSTI.GOV)
Collins, E.T.
1996-01-01
A comparison of storm water runoff from the Y-12 Plant and selected employee vehicle parking lots to various industry data is provided in this document. This work is an outgrowth of and part of the continuing Non-Point Source Pollution Elimination Project that was initiated in the late 1980s. This project seeks to identify area pollution sources and remediate these areas through the Resource Conservation and Recovery Act/Comprehensive Environmental Response, Compensation, and Liability Act (RCRA/CERCLA) process as managed by the Environmental Restoration Organization staff. This work is also driven by the Clean Water Act Section 402(p) which, in part, deals with establishing a National Pollutant Discharge Elimination System (NPDES) permit for storm water discharges. Storm water data from events occurring in 1988 through 1991 were analyzed in two reports: Feasibility Study for the Best Management Practices to Control Area Source Pollution Derived from Parking Lots at the DOE Y-12 Plant, September 1992, and Feasibility Study of Best Management Practices for Non-Point Source Pollution Control at the Oak Ridge Y-12 Plant, February 1993. These data consisted of analysis of outfalls discharging to upper East Fork Poplar Creek (EFPC) within the confines of the Y-12 Plant (see Appendixes D and E). These reports identified the major characteristics of concern as copper, iron, lead, manganese, mercury, nitrate (as nitrogen), zinc, biological oxygen demand (BOD), chemical oxygen demand (COD), total suspended solids (TSS), fecal coliform, and aluminum. Specific sources of these contaminants were not identifiable because flows upstream of outfalls were not sampled. In general, many of these contaminants were a concern in many outfalls. Therefore, separate sampling exercises were executed to assist in identifying (or eliminating) specific suspected sources as areas of concern.
Mapping algorithm for freeform construction using non-ideal light sources
NASA Astrophysics Data System (ADS)
Li, Chen; Michaelis, D.; Schreiber, P.; Dick, L.; Bräuer, A.
2015-09-01
Using conventional mapping algorithms for the construction of illumination freeform optics, arbitrary target patterns can be obtained for idealized sources, e.g. collimated light or point sources. Each freeform surface element generates an image point at the target, and the light intensity of an image point corresponds to the area of the freeform surface element that generates it. For sources with a pronounced extension and ray divergence, e.g. an LED at a small source-freeform distance, the image points are blurred, and the blurred patterns may differ between points. In addition, due to Fresnel losses and vignetting, the relationship between the light intensity of image points and the area of freeform surface elements becomes complicated. These individual light distributions of each freeform element are taken into account in a mapping algorithm. To this end, steepest descent procedures are used to adapt the mapping goal. A structured target pattern for an optics system with an ideal source is computed by applying the corresponding linear optimization matrices. Weighting and smoothing factors are included in the procedure to achieve certain edge conditions and to ensure the manufacturability of the freeform surface. The corresponding linear optimization matrices, which are the light distribution patterns of each of the freeform surface elements, are obtained by conventional ray tracing with a realistic source. Nontrivial source geometries, like LED irregularities due to bonding or source fine structures, and complex ray divergence behavior can be easily considered. Additionally, Fresnel losses, vignetting, and even stray light are taken into account. After optimization iterations with the realistic source, the initial mapping goal can be achieved by the optics system that provides a structured target pattern with an ideal source. The algorithm is applied to several design examples. A few simple tasks are presented to discuss the abilities and limitations of this method. We also present a homogeneous LED-illumination system design in which, despite a strongly tilted incident direction, a homogeneous distribution is achieved with a rather compact optics system and short working distance using a relatively large LED source. It is shown that the light distribution patterns from the freeform surface elements can differ significantly from one another. The generation of a structured target pattern applying the weighting and smoothing factors is discussed. Finally, freeform designs for much more complex sources, like clusters of LED sources, are presented.
Song, Min-Ho; Choi, Jung-Woo; Kim, Yang-Hann
2012-02-01
A focused source can provide an auditory illusion of a virtual source placed between the loudspeaker array and the listener. When a focused source is generated by a time-reversed acoustic focusing solution, its use as a virtual source is limited by artifacts caused by convergent waves traveling towards the focusing point. This paper proposes an array activation method to reduce the artifacts for a selected listening point inside an array of arbitrary shape. Results show that the energy of convergent waves can be reduced by up to 60 dB over a large region including the selected listening point. © 2012 Acoustical Society of America
Chandra Observations of the M31
NASA Technical Reports Server (NTRS)
Garcia, Michael; Lavoie, Anthony R. (Technical Monitor)
2000-01-01
We report on Chandra observations of the nearest spiral galaxy, M31. The nuclear source seen with previous X-ray observatories is resolved into five point sources. One of these sources is within 1 arcsec of the M31 central super-massive black hole. Compared to the other point sources in M31, this nuclear source has an unusually soft spectrum. Based on the spatial coincidence and the unusual spectrum, we identify this source with the central black hole. A bright transient is detected 26 arcsec to the west of the nucleus, which may be associated with a stellar-mass black hole. We will also report on a comparison of the X-ray spectra of the diffuse emission and point sources seen in the central few arcminutes.
Waveform inversion of volcano-seismic signals for an extended source
Nakano, M.; Kumagai, H.; Chouet, B.; Dawson, P.
2007-01-01
We propose a method to investigate the dimensions and oscillation characteristics of the source of volcano-seismic signals based on waveform inversion for an extended source. An extended source is realized by a set of point sources distributed on a grid surrounding the centroid of the source in accordance with the source geometry and orientation. The source-time functions for all point sources are estimated simultaneously by waveform inversion carried out in the frequency domain. We apply a smoothing constraint to suppress short-scale noisy fluctuations of source-time functions between adjacent sources. The strength of the smoothing constraint we select is that which minimizes the Akaike Bayesian Information Criterion (ABIC). We perform a series of numerical tests to investigate the capability of our method to recover the dimensions of the source and reconstruct its oscillation characteristics. First, we use synthesized waveforms radiated by a kinematic source model that mimics the radiation from an oscillating crack. Our results demonstrate almost complete recovery of the input source dimensions and source-time function of each point source, but also point to a weaker resolution of the higher modes of crack oscillation. Second, we use synthetic waveforms generated by the acoustic resonance of a fluid-filled crack, and consider two sets of waveforms dominated by the modes with wavelengths 2L/3 and 2W/3, or L and 2L/5, where W and L are the crack width and length, respectively. Results from these tests indicate that the oscillating signature of the 2L/3 and 2W/3 modes are successfully reconstructed. The oscillating signature of the L mode is also well recovered, in contrast to results obtained for a point source for which the moment tensor description is inadequate. However, the oscillating signature of the 2L/5 mode is poorly recovered owing to weaker resolution of short-scale crack wall motions. The triggering excitations of the oscillating cracks are successfully reconstructed. Copyright 2007 by the American Geophysical Union.
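At a single frequency, the inversion outlined above is a linear least-squares problem for the point-source source-time functions with a roughness penalty; in the paper the penalty weight is chosen by minimizing ABIC, which the sketch below replaces with a fixed value for brevity. The Green's functions and data are synthetic placeholders:

```python
import numpy as np

rng = np.random.default_rng(1)
n_sta, n_src = 12, 9            # stations, grid of point sources along the crack

# Hypothetical complex Green's functions and data at one frequency
G = rng.normal(size=(n_sta, n_src)) + 1j * rng.normal(size=(n_sta, n_src))
m_true = np.sin(np.pi * np.arange(1, n_src + 1) / (n_src + 1))   # smooth "crack" mode
d = G @ m_true + 0.05 * (rng.normal(size=n_sta) + 1j * rng.normal(size=n_sta))

# Second-difference (roughness) operator between adjacent point sources
L = np.zeros((n_src - 2, n_src))
for i in range(n_src - 2):
    L[i, i:i + 3] = [1.0, -2.0, 1.0]

def invert(G, d, L, lam):
    """Damped least squares: minimize |G m - d|^2 + lam * |L m|^2."""
    A = G.conj().T @ G + lam * L.T @ L
    return np.linalg.solve(A, G.conj().T @ d)

# In the actual method the smoothing weight is selected by minimizing ABIC;
# here a fixed value stands in for that selection step.
m_est = invert(G, d, L, lam=1.0)
print(np.round(np.abs(m_est), 3))
```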
Whittington, Richard J; Paul-Pont, Ika; Evans, Olivia; Hick, Paul; Dhand, Navneet K
2018-04-10
Marine herpesviruses are responsible for epizootics in economically, ecologically and culturally significant taxa. The recent emergence of microvariants of Ostreid herpesvirus 1 (OsHV-1) in Pacific oysters Crassostrea gigas has resulted in socioeconomic losses in Europe, New Zealand and Australia; however, there is no information on their origin or mode of transmission. These factors need to be understood because they influence the way the disease may be prevented and controlled. Mortality data obtained from experimental populations of C. gigas during natural epizootics of OsHV-1 disease in Australia were analysed qualitatively. In addition, we compared actual mortality data with those from a Reed-Frost model of direct transmission and analysed incubation periods using Sartwell's method to test for the type of epizootic, point source or propagating. We concluded that outbreaks were initiated from an unknown environmental source, which is unlikely to be farmed oysters in the same estuary. While direct oyster-to-oyster transmission may occur in larger oysters if they are in close proximity (< 40 cm), it did not explain the observed epizootics; point source exposure and indirect transmission were more common and important. A conceptual model is proposed for the OsHV-1 index case source and transmission, leading to endemicity with recurrent seasonal outbreaks. The findings suggest that prevention and control of OsHV-1 in C. gigas will require multiple interventions. OsHV-1 in C. gigas, which is a sedentary animal once beyond the larval stage, is an informative model when considering marine host-herpesvirus relationships.
Multi-Disciplinary Approach to Trace Contamination of Streams and Beaches
Nickles, James
2008-01-01
Concentrations of fecal-indicator bacteria in urban streams and ocean beaches in and around Santa Barbara occasionally can exceed public-health standards for recreation. The U.S. Geological Survey (USGS), working with the City of Santa Barbara, has used multi-disciplinary science to trace the sources of the bacteria. This research is helping local agencies take steps to improve recreational water quality. The USGS used an approach that combined traditional hydrologic and microbiological data, with state-of-the-art genetic, molecular, and chemical tracer analysis. This research integrated physical data on streamflow, ground water, and near-shore oceanography, and made extensive use of modern geophysical and isotopic techniques. Using those techniques, the USGS was able to evaluate the movement of water and the exchange of ground water with near-shore ocean water. The USGS has found that most fecal bacteria in the urban streams came from storm-drain discharges, with the highest concentrations occurring during storm flow. During low streamflow, the concentrations varied as much as three-fold, owing to variable contribution of non-point sources such as outdoor water use and urban runoff to streamflow. Fecal indicator bacteria along ocean beaches were from both stream discharge to the ocean and from non-point sources such as bird fecal material that accumulates in kelp and sand at the high-tide line. Low levels of human-specific Bacteroides, suggesting fecal material from a human source, were consistently detected on area beaches. One potential source, a local sewer line buried beneath the beach, was found not to be responsible for the fecal bacteria.
Distributed optimization system and method
Hurtado, John E.; Dohrmann, Clark R.; Robinett, III, Rush D.
2003-06-10
A search system and method for controlling multiple agents to optimize an objective using distributed sensing and cooperative control. The search agent can be one or more physical agents, such as a robot, and can be software agents for searching cyberspace. The objective can be: chemical sources, temperature sources, radiation sources, light sources, evaders, trespassers, explosive sources, time dependent sources, time independent sources, function surfaces, maximization points, minimization points, and optimal control of a system such as a communication system, an economy, a crane, and a multi-processor computer.
Distributed Optimization System
Hurtado, John E.; Dohrmann, Clark R.; Robinett, III, Rush D.
2004-11-30
A search system and method for controlling multiple agents to optimize an objective using distributed sensing and cooperative control. The search agent can be one or more physical agents, such as a robot, and can be software agents for searching cyberspace. The objective can be: chemical sources, temperature sources, radiation sources, light sources, evaders, trespassers, explosive sources, time dependent sources, time independent sources, function surfaces, maximization points, minimization points, and optimal control of a system such as a communication system, an economy, a crane, and a multi-processor computer.
Tsunami Forecasting in the Atlantic Basin
NASA Astrophysics Data System (ADS)
Knight, W. R.; Whitmore, P.; Sterling, K.; Hale, D. A.; Bahng, B.
2012-12-01
The mission of the West Coast and Alaska Tsunami Warning Center (WCATWC) is to provide advance tsunami warning and guidance to coastal communities within its Area-of-Responsibility (AOR). Predictive tsunami models, based on the shallow water wave equations, are an important part of the Center's guidance support. An Atlantic-based counterpart to the long-standing forecasting ability in the Pacific known as the Alaska Tsunami Forecast Model (ATFM) is now developed. The Atlantic forecasting method is based on ATFM version 2 which contains advanced capabilities over the original model; including better handling of the dynamic interactions between grids, inundation over dry land, new forecast model products, an optional non-hydrostatic approach, and the ability to pre-compute larger and more finely gridded regions using parallel computational techniques. The wide and nearly continuous Atlantic shelf region presents a challenge for forecast models. Our solution to this problem has been to develop a single unbroken high resolution sub-mesh (currently 30 arc-seconds), trimmed to the shelf break. This allows for edge wave propagation and for kilometer scale bathymetric feature resolution. Terminating the fine mesh at the 2000m isobath keeps the number of grid points manageable while allowing for a coarse (4 minute) mesh to adequately resolve deep water tsunami dynamics. Higher resolution sub-meshes are then included around coastal forecast points of interest. The WCATWC Atlantic AOR includes eastern U.S. and Canada, the U.S. Gulf of Mexico, Puerto Rico, and the Virgin Islands. Puerto Rico and the Virgin Islands are in very close proximity to well-known tsunami sources. Because travel times are under an hour and response must be immediate, our focus is on pre-computing many tsunami source "scenarios" and compiling those results into a database accessible and calibrated with observations during an event. Seismic source evaluation determines the order of model pre-computation - starting with those sources that carry the highest risk. Model computation zones are confined to regions at risk to save computation time. For example, Atlantic sources have been shown to not propagate into the Gulf of Mexico. Therefore, fine grid computations are not performed in the Gulf for Atlantic sources. Outputs from the Atlantic model include forecast marigrams at selected sites, maximum amplitudes, drawdowns, and currents for all coastal points. The maximum amplitude maps will be supplemented with contoured energy flux maps which show more clearly the effects of bathymetric features on tsunami wave propagation. During an event, forecast marigrams will be compared to observations to adjust the model results. The modified forecasts will then be used to set alert levels between coastal breakpoints, and provided to emergency management.
The differential mice response to cat and snake odor.
de Oliveira Crisanto, Karen; de Andrade, Wylqui Mikael Gomes; de Azevedo Silva, Kayo Diogenes; Lima, Ramón Hypolito; de Oliveira Costa, Miriam Stela Maris; de Souza Cavalcante, Jeferson; de Lima, Ruthnaldo Rodrigues Melo; do Nascimento, Expedito Silva; Cavalcante, Judney Cley
2015-12-01
Studies from the last two decades have pointed to multiple mechanisms of fear. For responding to predators, there is a group of highly interconnected hypothalamic nuclei formed by the anterior hypothalamic nucleus, the ventromedial hypothalamic nucleus and the dorsal premammillary nucleus—the predator-responsive hypothalamic circuit. This circuit expresses Fos in response to predator presence or its odor. Lesion of any component of this system blocks or reduces the expression of fear and consequently defensive behavior when faced with a predator or its cue. However, most of the knowledge about this circuit has been obtained using the rat as a model prey and the cat as the source of predator cues. In the present study, we exposed mice to strong odors from cats or snakes, two known mouse predators, and then we used the rat exposure test (RET) to study their behavior when confronted with each predator's odor. Our data point to a differential response of mice exposed to these odors. When Swiss mice were exposed to the cat odor, they showed defensive behavior and the predator-responsive hypothalamic circuit expressed Fos. The opposite was seen when they faced the snake odor: the acute odor exposure was not sufficient to activate the mouse predator-responsive hypothalamic circuit, and the mice acted as if they were not in a stressful situation, showing almost no sign of fear or defensive posture. This leads us to the conclusion that not all predator cues are sufficient to activate the predator-responsive hypothalamic circuit of mice and that their response depends on the danger that these predators represent in the natural history of the prey.
NASA Astrophysics Data System (ADS)
Zhang, Shou-ping; Xin, Xiao-kang
2017-07-01
Identification of pollutant sources in river pollution incidents is an important and difficult task in emergency response, and intelligent optimization methods can effectively compensate for the weaknesses of traditional methods. An intelligent model for pollutant source identification has been established using the basic genetic algorithm (BGA) as the optimization search tool and applying an analytic solution of the one-dimensional unsteady water quality equation to construct the objective function. Experimental tests show that the identification model is effective and efficient: the model can accurately determine the pollutant amounts or positions whether there is a single pollution source or multiple sources. In particular, when the population size of the BGA is set to 10, the computed results agree well with analytic results for single-source amount and position identification, with relative errors of no more than 5%. For cases with multiple point sources and multiple variables, there are some errors in the computed results because many possible combinations of pollution sources exist. However, with the help of prior experience to narrow the search scope, the relative errors of the identification results are less than 5%, which indicates that the established source identification model can be used to direct emergency response.
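A compact sketch of the same idea under simplifying assumptions (instantaneous point release, decay neglected, illustrative river parameters): an analytic one-dimensional solution serves as the forward model inside the objective function, and a basic genetic algorithm searches for the source mass and position.

```python
import numpy as np

rng = np.random.default_rng(0)

# Forward model: analytic 1-D solution for an instantaneous point release of
# mass M (kg) at position x0 (m) in a river of cross-section A (m^2),
# velocity u (m/s) and dispersion coefficient D (m^2/s). Decay is ignored.
A, u, D = 20.0, 0.3, 5.0

def conc(x, t, M, x0):
    return M / (A * np.sqrt(4 * np.pi * D * t)) * np.exp(-(x - x0 - u * t) ** 2 / (4 * D * t))

# Synthetic "observations" from a hypothetical true source
x_obs = np.array([500.0, 1000.0, 1500.0])       # monitoring sections (m)
t_obs = np.array([1800.0, 3600.0, 5400.0])      # sampling times (s)
M_true, x0_true = 50.0, 200.0
obs = np.array([[conc(x, t, M_true, x0_true) for x in x_obs] for t in t_obs])

def fitness(ind):
    M, x0 = ind
    sim = np.array([[conc(x, t, M, x0) for x in x_obs] for t in t_obs])
    return -np.sum((sim - obs) ** 2)            # higher is better

# Basic genetic algorithm: keep the best half as parents, blend crossover, mutate
pop = np.column_stack([rng.uniform(1, 200, 40), rng.uniform(0, 1000, 40)])
for gen in range(200):
    order = np.argsort([fitness(ind) for ind in pop])[::-1]
    parents = pop[order[:20]]
    i, j = rng.integers(0, 20, 40), rng.integers(0, 20, 40)
    w = rng.random((40, 1))
    children = w * parents[i] + (1 - w) * parents[j]          # crossover
    children += rng.normal(0, [2.0, 10.0], size=children.shape) * (rng.random((40, 1)) < 0.2)
    pop = np.clip(children, [1, 0], [200, 1000])

best = max(pop, key=fitness)
print(f"estimated mass {best[0]:.1f} kg, estimated position {best[1]:.0f} m")
```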
Surface-water nutrient conditions and sources in the United States Pacific Northwest
Wise, D.R.; Johnson, H.M.
2011-01-01
The SPAtially Referenced Regressions On Watershed attributes (SPARROW) model was used to perform an assessment of surface-water nutrient conditions and to identify important nutrient sources in watersheds of the Pacific Northwest region of the United States (U.S.) for the year 2002. Our models included variables representing nutrient sources as well as landscape characteristics that affect nutrient delivery to streams. Annual nutrient yields were higher in watersheds on the wetter, west side of the Cascade Range compared to watersheds on the drier, east side. High nutrient enrichment (relative to the U.S. Environmental Protection Agency's recommended nutrient criteria) was estimated in watersheds throughout the region. Forest land was generally the largest source of total nitrogen stream load and geologic material was generally the largest source of total phosphorus stream load generated within the 12,039 modeled watersheds. These results reflected the prevalence of these two natural sources and the low input from other nutrient sources across the region. However, the combined input from agriculture, point sources, and developed land, rather than natural nutrient sources, was responsible for most of the nutrient load discharged from many of the largest watersheds. Our results provided an understanding of the regional patterns in surface-water nutrient conditions and should be useful to environmental managers in future water-quality planning efforts.
2010-01-14
removed and a connector added for the use of external battery packs to extend measurement times. A rigid carbon-fiber pole was provided by the vendor... responses found in areas containing strongly ferromagnetic soils or bedrock have been well documented [5]. Fresh basaltic bedrock, like that found in... 8650 (November 27, 2002). 5. "Demonstration of Basalt-UXO Discrimination by Advanced Analysis of Multi-Channel EM63 Data at Kaho'olawe, Hawaii," G
Why there is no supernatural morality: response to Miller's opening statement.
Shermer, Michael
2016-11-01
If one is going to argue that objective morality depends on an Archimedean point outside the natural world, then it would seem to imply that this source is necessarily supernatural. Thus, Christian Miller begins by defining precisely who he thinks this supernatural moral law giver is: the omniscient, omnipotent, and omnibenevolent creator of the universe who is still actively involved with human affairs-Elohim, Jehovah, Yahweh, or Allah-aka God. Already I'm skeptical. © 2016 New York Academy of Sciences.
NASA Astrophysics Data System (ADS)
Fu, Shihang; Zhang, Li; Hu, Yao; Ding, Xiang
2018-01-01
Confocal Raman microscopy (CRM) has matured to become one of the most powerful instruments in analytical science because of its molecular sensitivity and high spatial resolution. Compared with conventional Raman microscopy, CRM can perform three-dimensional mapping of tiny samples and has the advantage of high spatial resolution thanks to its unique pinhole. With the wide application of the instrument, there is a growing requirement for evaluating the imaging performance of the system. The point-spread function (PSF) is an important approach to evaluating the imaging capability of an optical instrument. Among the various methods for measuring the PSF, the point source method has been widely used because it is easy to operate and the measurement results approximate the true PSF. In the point source method, the point source size has a significant impact on the final measurement accuracy. In this paper, the influence of point source size on the measurement accuracy of the PSF is analyzed and verified experimentally. A theoretical model of the lateral PSF for CRM is established, and the effect of point source size on the full width at half maximum (FWHM) of the lateral PSF is simulated. For long-term preservation and measurement convenience, a PSF measurement phantom made of polydimethylsiloxane resin doped with polystyrene microspheres of different sizes is designed. The PSFs of the CRM measured with the different microsphere sizes are compared with the simulation results. The results provide a guide for measuring the PSF of a CRM.
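The effect under study, a finite "point" source inflating the apparent lateral PSF width, can be illustrated in one dimension by convolving an assumed Gaussian PSF with a top-hat bead profile and re-measuring the FWHM; the numbers below are illustrative and the paper's theoretical model is more complete:

```python
import numpy as np

def fwhm(x, y):
    """Full width at half maximum of a sampled, single-peaked profile."""
    half = y.max() / 2.0
    above = x[y >= half]
    return above[-1] - above[0]

x = np.linspace(-2.0, 2.0, 4001)              # micrometres
dx = x[1] - x[0]
true_fwhm = 0.30                              # assumed lateral PSF FWHM (µm)
sigma = true_fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))
psf = np.exp(-x**2 / (2.0 * sigma**2))

for bead_diam in (0.05, 0.1, 0.2, 0.5):       # polystyrene bead diameters (µm)
    bead = (np.abs(x) <= bead_diam / 2.0).astype(float)   # 1-D top-hat "bead"
    image = np.convolve(psf, bead, mode="same") * dx       # measured profile
    print(f"bead {bead_diam:4.2f} µm -> apparent FWHM {fwhm(x, image):.3f} µm")
```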
Area Source Emission Measurements Using EPA OTM 10
Measurement of air pollutant emissions from area and non-point sources is an emerging environmental concern. Due to the spatial extent and non-homogenous nature of these sources, assessment of fugitive emissions using point sampling techniques can be difficult. To help address th...
BACTERIA SOURCE TRACKING AND HOST SPECIES SPECIFICITY ANALYSIS
Point and non-point pollution sources of fecal pollution on a watershed adversely impact the quality of drinking source waters and recreational waters. States are required to develop total maximum daily loads (TMDLs) and devise best management practices (BMPs) to reduce the pollu...
Studying Dust Scattering Halos with Galactic X-ray Binaries
NASA Astrophysics Data System (ADS)
Beeler, Doreen; Corrales, Lia; Heinz, Sebastian
2018-01-01
Dust is an important part of the interstellar medium (ISM) and contributes to the formation of stars and planets. Since the advent of modern X-ray telescopes, Galactic X-ray point sources have permitted a closer look at all phases of the ISM. Interstellar metals from oxygen to iron — in both gas and dust form — are responsible for absorption and scattering of X-ray light. Dust scatters the light in a forward direction and creates a diffuse halo image surrounding many bright Galactic X-ray binaries. We use all the bright X-ray point sources available in the Chandra HETG archive to study dust scattering halos from the local ISM. We have described a data analysis pipeline using a combination of the data reduction software CIAO and Python. We compare our results from Chandra HETG and ACIS-I observations of a well studied dust scattering halo around GX 13+1, in order to characterize any systematic errors associated with the HETG data set. We describe how our data products will be used to measure ISM scaling relations for X-ray extinction, dust abundance, and dust-to-metal ratios.
Optimized Reduction of Unsteady Radial Forces in a Singlechannel Pump for Wastewater Treatment
NASA Astrophysics Data System (ADS)
Kim, Jin-Hyuk; Cho, Bo-Min; Choi, Young-Seok; Lee, Kyoung-Yong; Peck, Jong-Hyeon; Kim, Seon-Chang
2016-11-01
A single-channel pump for wastewater treatment was optimized to reduce unsteady radial force sources caused by impeller-volute interactions. The steady and unsteady Reynolds-averaged Navier-Stokes equations using the shear-stress transport turbulence model were discretized by finite volume approximations and solved on tetrahedral grids to analyze the flow in the single-channel pump. The sweep area of the radial force during one revolution and the distance of the sweep-area center of mass from the origin were selected as the objective functions; the two design variables were related to the internal flow cross-sectional area of the volute. These objective functions were combined into a single objective function by applying a weighting factor for the optimization. Latin hypercube sampling was employed to generate twelve design points within the design space. A response-surface approximation model was constructed as a surrogate model for the objectives, based on the objective function values at the generated design points. The optimized results showed a considerable reduction of the unsteady radial force sources in the optimum design relative to the reference design.
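A stripped-down version of the surrogate loop described above: Latin hypercube sampling of the two design variables, a weighted single objective, a quadratic response-surface fit, and a search for its minimum. The variable ranges, weight, and the analytic stand-in objective are assumptions; in the study the objective values come from the unsteady CFD runs.

```python
import numpy as np
from scipy.stats import qmc
from scipy.optimize import minimize

# Two design variables (normalized volute cross-sectional area parameters).
lower, upper = np.array([0.8, 0.8]), np.array([1.2, 1.2])

# Stand-in for the CFD-derived objectives: sweep area of the radial force and
# distance of its center of mass from the origin, combined with a weight w.
def objective(x, w=0.5):
    sweep_area = (x[0] - 1.05) ** 2 + 0.3 * (x[1] - 0.95) ** 2
    com_distance = 0.5 * (x[0] - 0.9) ** 2 + (x[1] - 1.1) ** 2
    return w * sweep_area + (1 - w) * com_distance

# Twelve Latin hypercube design points, as in the optimization set-up.
sampler = qmc.LatinHypercube(d=2, seed=1)
X = qmc.scale(sampler.random(12), lower, upper)
y = np.array([objective(x) for x in X])

# Quadratic response-surface approximation fitted by least squares.
def basis(x):
    x1, x2 = x
    return np.array([1.0, x1, x2, x1 * x2, x1**2, x2**2])

coeff, *_ = np.linalg.lstsq(np.array([basis(x) for x in X]), y, rcond=None)
surrogate = lambda x: basis(x) @ coeff

res = minimize(surrogate, x0=[1.0, 1.0], bounds=list(zip(lower, upper)))
print("surrogate optimum:", res.x, "predicted objective:", res.fun)
```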
RRAWFLOW: Rainfall-Response Aquifer and Watershed Flow Model (v1.15)
NASA Astrophysics Data System (ADS)
Long, A. J.
2015-03-01
The Rainfall-Response Aquifer and Watershed Flow Model (RRAWFLOW) is a lumped-parameter model that simulates streamflow, spring flow, groundwater level, or solute transport for a measurement point in response to a system input of precipitation, recharge, or solute injection. I introduce the first version of RRAWFLOW available for download and public use and describe additional options. The open-source code is written in the R language and is available at http://sd.water.usgs.gov/projects/RRAWFLOW/RRAWFLOW.html along with an example model of streamflow. RRAWFLOW includes a time-series process to estimate recharge from precipitation and simulates the response to recharge by convolution, i.e., the unit-hydrograph approach. Gamma functions are used for estimation of parametric impulse-response functions (IRFs); a combination of two gamma functions results in a double-peaked IRF. A spline fit to a set of control points is introduced as a new method for estimation of nonparametric IRFs. Several options are included to simulate time-variant systems. For many applications, lumped models simulate the system response with equal accuracy to that of distributed models, but moreover, the ease of model construction and calibration of lumped models makes them a good choice for many applications (e.g., estimating missing periods in a hydrologic record). RRAWFLOW provides professional hydrologists and students with an accessible and versatile tool for lumped-parameter modeling.
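The convolution (unit-hydrograph) step at the heart of RRAWFLOW can be sketched in a few lines: a gamma impulse-response function convolved with a recharge series yields the simulated response. The parameter values and synthetic recharge series below are illustrative only and are not taken from RRAWFLOW's R code:

```python
import numpy as np
from scipy.stats import gamma

dt = 1.0                                   # time step (days)
t = np.arange(0.0, 365.0, dt)

# Gamma impulse-response function (IRF); shape/scale values are illustrative.
# A double-peaked IRF would be a weighted sum of two such gamma functions.
irf = gamma.pdf(t, a=3.0, scale=10.0)

rng = np.random.default_rng(0)
recharge = rng.gamma(shape=0.3, scale=2.0, size=t.size)   # synthetic recharge (mm/day)

# Convolve recharge with the IRF to get the simulated response (e.g. spring flow).
response = np.convolve(recharge, irf)[: t.size] * dt

print(f"IRF peaks at t = {t[np.argmax(irf)]:.0f} days; "
      f"simulated response peaks at {response.max():.2f} (arbitrary units)")
```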
Birmingham, Wendy C; Holt-Lunstad, Julianne
2018-04-05
There is a rich literature on social support and physical health, but research has focused primarily on the protective effects of social relationship. The stress buffering model asserts that relationships may be protective by being a source of support when coping with stress, thereby blunting health relevant physiological responses. Research also indicates relationships can be a source of stress, also influencing health. In other words, the social buffering influence may have a counterpart, a social aggravating influence that has an opposite or opposing effect. Drawing upon existing conceptual models, we expand these to delineate how social relationships may influence stress processes and ultimately health. This review summarizes the existing literature that points to the potential deleterious physiological effects of our relationships when they are sources of stress or exacerbate stress. Copyright © 2018 Elsevier B.V. All rights reserved.
Relationship between landscape characteristics and surface water quality.
Chang, C L; Kuan, W H; Lui, P S; Hu, C Y
2008-12-01
The effects of landscape characteristics on surface water quality were evaluated in terms of land-use condition, soil type, and slope. The case area, the Chichiawan stream in the Wulin catchment in Taiwan, is the natural habitat of the Formosan landlocked salmon. Due to agricultural practices and other human activities, water and environmental quality have gradually worsened. This study applied the WinVAST model to predict hydrological responses and non-point source pollution (NPSP) exports in the Wulin catchment. The land-use condition and the slope of the land surface in a catchment are major factors affecting watershed responses, including flows and pollutant exports. This work discusses the possible variation in watershed responses induced by changes in land-use condition, soil type, and slope. The results show that hydrological responses are strongly related to the value of the Curve Number (CN), and pollutant exports are strongly related to the average slope of the land surface in the Wulin catchment.
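The dependence of hydrological response on the Curve Number mentioned above follows the standard SCS-CN runoff relation; a small sketch in metric units shows how strongly predicted direct runoff shifts with CN (the storm depth and CN values are illustrative):

```python
def scs_runoff_mm(rain_mm, cn, ia_ratio=0.2):
    """Direct runoff depth from the SCS Curve Number method (metric units).

    S = 25400/CN - 254 is the potential maximum retention (mm), and the
    initial abstraction is taken as ia_ratio * S (commonly 0.2).
    """
    s = 25400.0 / cn - 254.0
    ia = ia_ratio * s
    if rain_mm <= ia:
        return 0.0
    return (rain_mm - ia) ** 2 / (rain_mm - ia + s)

for cn in (55, 70, 85):              # e.g. forest, mixed, agricultural/urban land
    print(f"CN={cn}: runoff from an 80 mm storm = {scs_runoff_mm(80.0, cn):.1f} mm")
```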
Super-Resolution Imagery by Frequency Sweeping.
1980-08-15
IMAGE RETRIEVAL The above considerations of multiwavelength holography have led us to determining a means by which the 3-D Fourier space of the... it at a distant bright point source. The point source used need not be derived from a laser. In fact, it is preferable for safety purposes to use an LED... noise and therefore higher reconstructed image quality can be attained by using nonlaser point sources in the reconstruction, such as an LED or miniature
Remotely measuring populations during a crisis by overlaying two data sources
Bharti, Nita; Lu, Xin; Bengtsson, Linus; Wetter, Erik; Tatem, Andrew J.
2015-01-01
Background Societal instability and crises can cause rapid, large-scale movements. These movements are poorly understood and difficult to measure but strongly impact health. Data on these movements are important for planning response efforts. We retrospectively analyzed movement patterns surrounding a 2010 humanitarian crisis caused by internal political conflict in Côte d'Ivoire using two different methods. Methods We used two remote measures, nighttime lights satellite imagery and anonymized mobile phone call detail records, to assess average population sizes as well as dynamic population changes. These data sources detect movements across different spatial and temporal scales. Results The two data sources showed strong agreement in average measures of population sizes. Because the spatiotemporal resolution of the data sources differed, we were able to obtain measurements on long- and short-term dynamic elements of populations at different points throughout the crisis. Conclusions Using complementary, remote data sources to measure movement shows promise for future use in humanitarian crises. We conclude with challenges of remotely measuring movement and provide suggestions for future research and methodological developments. PMID:25733558
A comprehensive experimental characterization of the iPIX gamma imager
NASA Astrophysics Data System (ADS)
Amgarou, K.; Paradiso, V.; Patoz, A.; Bonnet, F.; Handley, J.; Couturier, P.; Becker, F.; Menaa, N.
2016-08-01
The results of more than 280 different experiments aimed at exploring the main features and performance of a newly developed gamma imager, called iPIX, are summarized in this paper. iPIX is designed to quickly localize radioactive sources while estimating the ambient dose equivalent rate at the measurement point. It integrates a 1 mm thick CdTe detector directly bump-bonded to a Timepix chip, a tungsten coded-aperture mask, and a mini RGB camera. It also represents a major technological breakthrough in terms of lightness, compactness, usability, response sensitivity, and angular resolution. As an example of its key strengths, a ²⁴¹Am source with a dose rate of only a few nSv/h can be localized in less than one minute.
The VLITE Post-Processing Pipeline
NASA Astrophysics Data System (ADS)
Richards, Emily E.; Clarke, Tracy; Peters, Wendy; Polisensky, Emil; Kassim, Namir E.
2018-01-01
A post-processing pipeline to adaptively extract and catalog point sources is being developed to enhance the scientific value and accessibility of data products generated by the VLA Low-band Ionosphere and Transient Experiment (VLITE;
NASA Astrophysics Data System (ADS)
Stark, Dominic; Launet, Barthelemy; Schawinski, Kevin; Zhang, Ce; Koss, Michael; Turp, M. Dennis; Sartori, Lia F.; Zhang, Hantian; Chen, Yiru; Weigel, Anna K.
2018-06-01
The study of unobscured active galactic nuclei (AGN) and quasars depends on the reliable decomposition of the light from the AGN point source and the extended host galaxy light. The problem is typically approached using parametric fitting routines using separate models for the host galaxy and the point spread function (PSF). We present a new approach using a Generative Adversarial Network (GAN) trained on galaxy images. We test the method using Sloan Digital Sky Survey r-band images with artificial AGN point sources added that are then removed using the GAN and with parametric methods using GALFIT. When the AGN point source is more than twice as bright as the host galaxy, we find that our method, PSFGAN, can recover point source and host galaxy magnitudes with smaller systematic error and a lower average scatter (49 per cent). PSFGAN is more tolerant to poor knowledge of the PSF than parametric methods. Our tests show that PSFGAN is robust against a broadening in the PSF width of ± 50 per cent if it is trained on multiple PSFs. We demonstrate that while a matched training set does improve performance, we can still subtract point sources using a PSFGAN trained on non-astronomical images. While initial training is computationally expensive, evaluating PSFGAN on data is more than 40 times faster than GALFIT fitting two components. Finally, PSFGAN is more robust and easy to use than parametric methods as it requires no input parameters.
Chen, Li-ding; Peng, Hong-jia; Fu, Bo-Jie; Qiu, Jun; Zhang, Shu-rong
2005-01-01
Surface waters can be contaminated by human activities in two ways: (1) by point sources, such as sewage treatment discharge and storm-water runoff; and (2) by non-point sources, such as runoff from urban and agricultural areas. With point-source pollution effectively controlled, non-point source pollution has become the most important environmental concern in the world. The formation of non-point source pollution is related both to the sources themselves, such as soil nutrients, the amount of fertilizer and pesticide applied, and the amount of refuse, and to the complex spatial combination of land uses within a heterogeneous landscape. Land-use change, dominated by human activities, has a significant impact on water resources and quality. In this study, fifteen surface water monitoring points in the Yuqiao Reservoir Basin, Zunhua, Hebei Province, northern China, were chosen to study the seasonal variation of nitrogen concentration in the surface water. Water samples were collected in the low-flow period (June), high-flow period (July), and mean-flow period (October) from 1999 to 2000. The results indicated that the seasonal variation of nitrogen concentration in the surface water among the fifteen monitoring points was more complex in the rainfall-rich year than in the rainfall-deficit year. It was found that land use, the characteristics of the surface river system, rainfall, and human activities play an important role in the seasonal variation of N concentration in surface water.
NASA Astrophysics Data System (ADS)
Voss, Anja; Bärlund, Ilona; Punzet, Manuel; Williams, Richard; Teichert, Ellen; Malve, Olli; Voß, Frank
2010-05-01
Although catchment-scale modelling of water and solute transport and transformations is a widely used technique to study pollution pathways and the effects of natural changes, policies, and mitigation measures, there are only a few examples of global water quality modelling. This work will provide a description of the new continental-scale water quality model WorldQual and an analysis of model simulations under changed climate and anthropogenic conditions with respect to changes in diffuse and point loading as well as surface water quality. BOD is used as an indicator of the level of organic pollution and its oxygen-depleting potential, and of the overall health of aquatic ecosystems. The first application of this new water quality model is to the river systems of Europe. The model itself is being developed as part of the EU-funded SCENES Project, which has the principal goal of developing new scenarios of the future of freshwater resources in Europe. The aim of the model is to determine chemical fluxes along different pathways, combining the analysis of water quantity with water quality. Simple equations, consistent with the availability of data on the continental scale, are used to simulate the response of in-stream BOD concentrations to diffuse and anthropogenic point loadings as well as flow dilution. Point sources are divided into manufacturing, domestic, and urban loadings, whereas diffuse loadings come from scattered settlements, agricultural inputs (for instance livestock farming), and natural background sources. The model is tested against measured longitudinal gradients and time series data at specific river locations with different loading characteristics, such as the Thames, which is dominated by domestic loading, and the Ebro, which has a relatively high share of diffuse loading. With scenario studies, the influence of climate and anthropogenic changes on European water resources shall be investigated, addressing the following questions: 1. What percentage of river systems will have degraded water quality due to different driving forces? 2. How will climate change and changes in wastewater discharges affect water quality? The analysis includes the following scenario aspects: 1. climate, with changed runoff (affecting diffuse pollution and loading from sealed areas), river discharge (causing dilution or concentration of point source pollution), and water temperature (affecting BOD degradation); 2. point sources, with changed population (affecting domestic pollution) and connectivity to treatment plants (influencing domestic and manufacturing pollution as well as input from sealed areas and scattered settlements).
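The "simple equations" approach can be pictured as a mass balance at the point-source outfall followed by first-order, temperature-dependent decay downstream. The sketch below uses generic textbook relations with illustrative parameter values; WorldQual's actual formulation may differ in detail:

```python
import numpy as np

# Mixing of a point-source BOD load into the river, then first-order decay.
Q_river = 20.0             # upstream discharge (m^3/s)
C_river = 2.0              # upstream BOD (mg/L)
Q_eff, C_eff = 1.5, 60.0   # effluent discharge (m^3/s) and BOD (mg/L)

C0 = (Q_river * C_river + Q_eff * C_eff) / (Q_river + Q_eff)   # mixed concentration

# First-order decay with a temperature correction (theta model).
k20, theta, T = 0.23, 1.047, 15.0          # rate at 20 C (1/day), theta, water temp (C)
k = k20 * theta ** (T - 20.0)
u = 0.4                                    # mean velocity (m/s)

x = np.linspace(0.0, 50_000.0, 6)          # distance downstream (m)
travel_days = x / u / 86_400.0
bod = C0 * np.exp(-k * travel_days)

for xi, ci in zip(x, bod):
    print(f"{xi/1000.0:5.1f} km downstream: BOD = {ci:5.2f} mg/L")
```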
DNA BASED MOLECULAR METHODS FOR BACTERIAL SOURCE TRACKING IN WATERSHEDS
Point and non-point pollution sources of fecal pollution on a watershed adversely impact the quality of drinking source waters and recreational waters. States are required to develop total maximum daily loads (TMDLs) and devise best management practices (BMPs) to reduce the po...
AIR TOXICS ASSESSMENT REFINEMENT IN RAPCA'S JURISDICTION - DAYTON, OH AREA
RAPCA has received two grants to conduct this project. As part of the original project, RAPCA has improved and expanded their point source inventory by converting the following area sources to point sources: dry cleaners, gasoline throughput processes and halogenated solvent clea...
"Stereo Compton cameras" for the 3-D localization of radioisotopes
NASA Astrophysics Data System (ADS)
Takeuchi, K.; Kataoka, J.; Nishiyama, T.; Fujita, T.; Kishimoto, A.; Ohsuka, S.; Nakamura, S.; Adachi, S.; Hirayanagi, M.; Uchiyama, T.; Ishikawa, Y.; Kato, T.
2014-11-01
The Compton camera is a viable and convenient tool used to visualize the distribution of radioactive isotopes that emit gamma rays. After the nuclear disaster in Fukushima in 2011, there has been a particularly urgent need to develop "gamma cameras", which can visualize the distribution of such radioisotopes. In response, we propose a portable Compton camera, which comprises 3-D position-sensitive GAGG scintillators coupled with thin monolithic MPPC arrays. The pulse-height ratio of two MPPC arrays located at both ends of the scintillator block determines the depth of interaction (DOI), which dramatically improves the position resolution of the scintillation detectors. We report on the detailed optimization of the detector design, based on Geant4 simulation. The results indicate that detection efficiency reaches up to 0.54%, or more than 10 times that of other cameras being tested in Fukushima, along with a moderate angular resolution of 8.1° (FWHM). By applying the triangular surveying method, we also propose a new concept for the stereo measurement of gamma rays by using two Compton cameras, thus enabling the 3-D positional measurement of radioactive isotopes for the first time. Using simulation data for a single point source, we verified that the source position and its distance can typically be determined to within 2 meters' accuracy, and, using simulation data for two point sources, we also confirmed that multiple sources can be clearly separated by event selection.
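The stereo (triangulation) idea can be illustrated with a small sketch: assuming each Compton camera yields a unit direction toward the source from its reconstructed image peak, the 3-D position follows from the closest approach of the two lines of sight. This is an assumption-laden illustration, not the authors' reconstruction code.

```python
# Hedged sketch of stereo localization from two cameras: the 3-D source
# position is estimated as the midpoint of the shortest segment between the
# two lines of sight. Camera positions and directions are made-up examples.

import numpy as np

def triangulate(p1, d1, p2, d2):
    """Closest-approach midpoint of rays p1 + t*d1 and p2 + s*d2."""
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    r = p1 - p2
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ r, d2 @ r
    denom = a * c - b * b            # ~0 if the lines of sight are parallel
    t = (b * e - c * d) / denom
    s = (a * e - b * d) / denom
    return 0.5 * ((p1 + t * d1) + (p2 + s * d2))

# Two cameras 10 m apart, both looking at a source placed at (3, 4, 20) m.
src = np.array([3.0, 4.0, 20.0])
c1, c2 = np.array([0.0, 0.0, 0.0]), np.array([10.0, 0.0, 0.0])
print(triangulate(c1, src - c1, c2, src - c2))   # -> approx. [3. 4. 20.]
```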
Comparing stochastic point-source and finite-source ground-motion simulations: SMSIM and EXSIM
Boore, D.M.
2009-01-01
Comparisons of ground motions from two widely used point-source and finite-source ground-motion simulation programs (SMSIM and EXSIM) show that the following simple modifications in EXSIM will produce agreement in the motions from a small earthquake at a large distance for the two programs: (1) base the scaling of high frequencies on the integral of the squared Fourier acceleration spectrum; (2) do not truncate the time series from each subfault; (3) use the inverse of the subfault corner frequency for the duration of motions from each subfault; and (4) use a filter function to boost spectral amplitudes at frequencies near and less than the subfault corner frequencies. In addition, for SMSIM an effective distance is defined that accounts for geometrical spreading and anelastic attenuation from various parts of a finite fault. With these modifications, the Fourier and response spectra from SMSIM and EXSIM are similar to one another, even close to a large earthquake (M 7), when the motions are averaged over a random distribution of hypocenters. The modifications to EXSIM remove most of the differences in the Fourier spectra from simulations using pulsing and static subfaults; they also essentially eliminate any dependence of the EXSIM simulations on the number of subfaults. Simulations with the revised programs suggest that the results of Atkinson and Boore (2006), computed using an average stress parameter of 140 bars and the original version of EXSIM, are consistent with the revised EXSIM with a stress parameter near 250 bars.
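For readers unfamiliar with the underlying model, the following hedged sketch evaluates the standard omega-squared point-source Fourier acceleration spectrum (Brune corner frequency, geometrical spreading, anelastic attenuation and kappa diminution) on which stochastic codes of this family are built. Constants and parameter values are illustrative and are not SMSIM's or EXSIM's defaults.

```python
# Illustrative omega-squared point-source Fourier acceleration spectrum.
# Constants and parameter values are assumptions, not the codes' defaults.

import numpy as np

def fourier_accel_spectrum(f, m0_dyne_cm, stress_bars, r_km,
                           beta_km_s=3.5, rho_g_cm3=2.8, q0=200.0, kappa=0.04):
    """Point-source Fourier acceleration spectrum (arbitrary overall units)."""
    # Brune corner frequency: fc = 4.9e6 * beta * (dsigma / M0)^(1/3).
    fc = 4.9e6 * beta_km_s * (stress_bars / m0_dyne_cm) ** (1.0 / 3.0)
    source = (2 * np.pi * f) ** 2 * m0_dyne_cm / (1.0 + (f / fc) ** 2)
    geom = 1.0 / r_km                      # simple 1/R geometrical spreading
    anelastic = np.exp(-np.pi * f * r_km / (q0 * beta_km_s))
    site = np.exp(-np.pi * kappa * f)      # kappa high-frequency diminution
    const = 1.0 / (4 * np.pi * rho_g_cm3 * beta_km_s ** 3)  # radiation terms lumped
    return const * source * geom * anelastic * site

f = np.logspace(-1, 1.5, 6)
print(fourier_accel_spectrum(f, m0_dyne_cm=10 ** 24, stress_bars=140, r_km=40.0))
```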
NASA Technical Reports Server (NTRS)
Smith, Wayne Farrior
1973-01-01
The effect of finite source size on the power statistics in a reverberant room for pure tone excitation was investigated. Theoretical results indicate that the standard deviation of low frequency, pure tone finite sources is always less than that predicted by point source theory and considerably less when the source dimension approaches one-half an acoustic wavelength or greater. A supporting experimental study was conducted utilizing an eight inch loudspeaker and a 30 inch loudspeaker at eleven source positions. The resulting standard deviation of sound power output of the smaller speaker is in excellent agreement with both the derived finite source theory and existing point source theory, if the theoretical data is adjusted to account for experimental incomplete spatial averaging. However, the standard deviation of sound power output of the larger speaker is measurably lower than point source theory indicates, but is in good agreement with the finite source theory.
Probing dim point sources in the inner Milky Way using PCAT
NASA Astrophysics Data System (ADS)
Daylan, Tansu; Portillo, Stephen K. N.; Finkbeiner, Douglas P.
2017-01-01
Poisson regression of the Fermi-LAT data in the inner Milky Way reveals an extended gamma-ray excess. An important question is whether the signal is coming from a collection of unresolved point sources, possibly old recycled pulsars, or constitutes a truly diffuse emission component. Previous analyses have relied on non-Poissonian template fits or wavelet decomposition of the Fermi-LAT data, which find evidence for a population of dim point sources just below the 3FGL flux limit. In order to be able to draw conclusions about the flux distribution of point sources at the dim end, we employ a Bayesian trans-dimensional MCMC framework by taking samples from the space of catalogs consistent with the observed gamma-ray emission in the inner Milky Way. The software implementation, PCAT (Probabilistic Cataloger), is designed to efficiently explore that catalog space in the crowded field limit such as in the galactic plane, where the model PSF, point source positions and fluxes are highly degenerate. We thus generate fair realizations of the underlying MSP population in the inner galaxy and constrain the population characteristics such as the radial and flux distribution of such sources.
Lopiano, Kenneth K; Young, Linda J; Gotway, Carol A
2014-09-01
Spatially referenced datasets arising from multiple sources are routinely combined to assess relationships among various outcomes and covariates. The geographical units associated with the data, such as the geographical coordinates or areal-level administrative units, are often spatially misaligned, that is, observed at different locations or aggregated over different geographical units. As a result, the covariate is often predicted at the locations where the response is observed. The method used to align disparate datasets must be accounted for when subsequently modeling the aligned data. Here we consider the case where kriging is used to align datasets in point-to-point and point-to-areal misalignment problems when the response variable is non-normally distributed. If the relationship is modeled using generalized linear models, the additional uncertainty induced from using the kriging mean as a covariate introduces a Berkson error structure. In this article, we develop a pseudo-penalized quasi-likelihood algorithm to account for the additional uncertainty when estimating regression parameters and associated measures of uncertainty. The method is applied to a point-to-point example assessing the relationship between low birth weights and PM2.5 levels after the onset of the largest wildfire in Florida history, the Bugaboo scrub fire. A point-to-areal misalignment problem is presented where the relationship between asthma events in Florida's counties and PM2.5 levels after the onset of the fire is assessed. Finally, the method is evaluated using a simulation study. Our results indicate that the method performs well in terms of coverage for 95% confidence intervals and that naive methods that ignore the additional uncertainty tend to underestimate the variability associated with parameter estimates. The underestimation is most profound in Poisson regression models. © 2014, The International Biometric Society.
Extending the Search for Neutrino Point Sources with IceCube above the Horizon
NASA Astrophysics Data System (ADS)
Abbasi, R.; Abdou, Y.; Abu-Zayyad, T.; Adams, J.; Aguilar, J. A.; Ahlers, M.; Andeen, K.; Auffenberg, J.; Bai, X.; Baker, M.; Barwick, S. W.; Bay, R.; Alba, J. L. Bazo; Beattie, K.; Beatty, J. J.; Bechet, S.; Becker, J. K.; Becker, K.-H.; Benabderrahmane, M. L.; Berdermann, J.; Berghaus, P.; Berley, D.; Bernardini, E.; Bertrand, D.; Besson, D. Z.; Bissok, M.; Blaufuss, E.; Boersma, D. J.; Bohm, C.; Botner, O.; Bradley, L.; Braun, J.; Breder, D.; Carson, M.; Castermans, T.; Chirkin, D.; Christy, B.; Clem, J.; Cohen, S.; Cowen, D. F.; D'Agostino, M. V.; Danninger, M.; Day, C. T.; de Clercq, C.; Demirörs, L.; Depaepe, O.; Descamps, F.; Desiati, P.; de Vries-Uiterweerd, G.; Deyoung, T.; Díaz-Vélez, J. C.; Dreyer, J.; Dumm, J. P.; Duvoort, M. R.; Edwards, W. R.; Ehrlich, R.; Eisch, J.; Ellsworth, R. W.; Engdegård, O.; Euler, S.; Evenson, P. A.; Fadiran, O.; Fazely, A. R.; Feusels, T.; Filimonov, K.; Finley, C.; Foerster, M. M.; Fox, B. D.; Franckowiak, A.; Franke, R.; Gaisser, T. K.; Gallagher, J.; Ganugapati, R.; Gerhardt, L.; Gladstone, L.; Goldschmidt, A.; Goodman, J. A.; Gozzini, R.; Grant, D.; Griesel, T.; Groß, A.; Grullon, S.; Gunasingha, R. M.; Gurtner, M.; Ha, C.; Hallgren, A.; Halzen, F.; Han, K.; Hanson, K.; Hasegawa, Y.; Helbing, K.; Herquet, P.; Hickford, S.; Hill, G. C.; Hoffman, K. D.; Homeier, A.; Hoshina, K.; Hubert, D.; Huelsnitz, W.; Hülß, J.-P.; Hulth, P. O.; Hultqvist, K.; Hussain, S.; Imlay, R. L.; Inaba, M.; Ishihara, A.; Jacobsen, J.; Japaridze, G. S.; Johansson, H.; Joseph, J. M.; Kampert, K.-H.; Kappes, A.; Karg, T.; Karle, A.; Kelley, J. L.; Kemming, N.; Kenny, P.; Kiryluk, J.; Kislat, F.; Klein, S. R.; Knops, S.; Kohnen, G.; Kolanoski, H.; Köpke, L.; Koskinen, D. J.; Kowalski, M.; Kowarik, T.; Krasberg, M.; Krings, T.; Kroll, G.; Kuehn, K.; Kuwabara, T.; Labare, M.; Lafebre, S.; Laihem, K.; Landsman, H.; Lauer, R.; Lehmann, R.; Lennarz, D.; Lundberg, J.; Lünemann, J.; Madsen, J.; Majumdar, P.; Maruyama, R.; Mase, K.; Matis, H. S.; McParland, C. P.; Meagher, K.; Merck, M.; Mészáros, P.; Meures, T.; Middell, E.; Milke, N.; Miyamoto, H.; Montaruli, T.; Morse, R.; Movit, S. M.; Nahnhauer, R.; Nam, J. W.; Nießen, P.; Nygren, D. R.; Odrowski, S.; Olivas, A.; Olivo, M.; Ono, M.; Panknin, S.; Patton, S.; Paul, L.; de Los Heros, C. Pérez; Petrovic, J.; Piegsa, A.; Pieloth, D.; Pohl, A. C.; Porrata, R.; Potthoff, N.; Price, P. B.; Prikockis, M.; Przybylski, G. T.; Rawlins, K.; Redl, P.; Resconi, E.; Rhode, W.; Ribordy, M.; Rizzo, A.; Rodrigues, J. P.; Roth, P.; Rothmaier, F.; Rott, C.; Roucelle, C.; Rutledge, D.; Ruzybayev, B.; Ryckbosch, D.; Sander, H.-G.; Sarkar, S.; Schatto, K.; Schlenstedt, S.; Schmidt, T.; Schneider, D.; Schukraft, A.; Schulz, O.; Schunck, M.; Seckel, D.; Semburg, B.; Seo, S. H.; Sestayo, Y.; Seunarine, S.; Silvestri, A.; Slipak, A.; Spiczak, G. M.; Spiering, C.; Stamatikos, M.; Stanev, T.; Stephens, G.; Stezelberger, T.; Stokstad, R. G.; Stoufer, M. C.; Stoyanov, S.; Strahler, E. A.; Straszheim, T.; Sullivan, G. W.; Swillens, Q.; Taboada, I.; Tamburro, A.; Tarasova, O.; Tepe, A.; Ter-Antonyan, S.; Terranova, C.; Tilav, S.; Toale, P. A.; Tooker, J.; Tosi, D.; Turčan, D.; van Eijndhoven, N.; Vandenbroucke, J.; van Overloop, A.; van Santen, J.; Voigt, B.; Walck, C.; Waldenmaier, T.; Wallraff, M.; Walter, M.; Wendt, C.; Westerhoff, S.; Whitehorn, N.; Wiebe, K.; Wiebusch, C. H.; Wiedemann, A.; Wikström, G.; Williams, D. R.; Wischnewski, R.; Wissing, H.; Woschnagg, K.; Xu, C.; Xu, X. W.; Yodh, G.; Yoshida, S.
2009-11-01
Point source searches with the IceCube neutrino telescope have been restricted to one hemisphere, due to the exclusive selection of upward going events as a way of rejecting the atmospheric muon background. We show that the region above the horizon can be included by suppressing the background through energy-sensitive cuts. This improves the sensitivity above PeV energies, previously not accessible for declinations of more than a few degrees below the horizon due to the absorption of neutrinos in Earth. We present results based on data collected with 22 strings of IceCube, extending its field of view and energy reach for point source searches. No significant excess above the atmospheric background is observed in a sky scan and in tests of source candidates. Upper limits are reported, which for the first time cover point sources in the southern sky up to EeV energies.
NASA Technical Reports Server (NTRS)
Krasowski, Michael J. (Inventor); Prokop, Norman F. (Inventor)
2017-01-01
A current source logic gate with depletion mode field effect transistor ("FET") transistors and resistors may include a current source, a current steering switch input stage, and a resistor divider level shifting output stage. The current source may include a transistor and a current source resistor. The current steering switch input stage may include a transistor to steer current to set an output stage bias point depending on an input logic signal state. The resistor divider level shifting output stage may include a first resistor and a second resistor to set the output stage point and produce valid output logic signal states. The transistor of the current steering switch input stage may function as a switch to provide at least two operating points.
NASA Astrophysics Data System (ADS)
Keawprasert, T.; Anhalt, K.; Taubert, D. R.; Sperling, A.; Schuster, M.; Nevas, S.
2013-09-01
An LP3 radiation thermometer was absolutely calibrated at a newly developed monochromator-based set-up and the TUneable Lasers in Photometry (TULIP) facility of PTB in the wavelength range from 400 nm to 1100 nm. At both facilities, the spectral radiation of the respective sources irradiates an integrating sphere, thus generating uniform radiance across its precision aperture. The spectral irradiance of the integrating sphere is determined via the effective area of a precision aperture and a Si trap detector, traceable to the primary cryogenic radiometer of PTB. Due to the limited output power from the monochromator, the absolute calibration was performed with a measurement uncertainty of 0.17 % (k = 1), while the respective uncertainty at the TULIP facility is 0.14 %. Calibration results obtained by the two facilities were compared in terms of spectral radiance responsivity, effective wavelength and integral responsivity. It was found that the measurement results in integral responsivity at both facilities are in agreement within the expanded uncertainty (k = 2). To verify the calibration accuracy, the absolutely calibrated radiation thermometer was used to measure the thermodynamic freezing temperatures of the PTB gold fixed-point blackbody.
Clark, Deborah A
2004-03-29
How tropical rainforests are responding to the ongoing global changes in atmospheric composition and climate is little studied and poorly understood. Although rising atmospheric carbon dioxide (CO2) could enhance forest productivity, increased temperatures and drought are likely to diminish it. The limited field data have produced conflicting views of the net impacts of these changes so far. One set of studies has seemed to point to enhanced carbon uptake; however, questions have arisen about these findings, and recent experiments with tropical forest trees indicate carbon saturation of canopy leaves and no biomass increase under enhanced CO2. Other field observations indicate decreased forest productivity and increased tree mortality in recent years of peak temperatures and drought (strong El Niño episodes). To determine current climatic responses of forests around the world tropics will require careful annual monitoring of ecosystem performance in representative forests. To develop the necessary process-level understanding of these responses will require intensified experimentation at the whole-tree and stand levels. Finally, a more complete understanding of tropical rainforest carbon cycling is needed for determining whether these ecosystems are carbon sinks or sources now, and how this status might change during the next century.
40 CFR 428.96 - Pretreatment standards for new sources.
Code of Federal Regulations, 2011 CFR
2011-07-01
... GUIDELINES AND STANDARDS RUBBER MANUFACTURING POINT SOURCE CATEGORY Pan, Dry Digestion, and Mechanical... pollutant properties, controlled by this section, which may be discharged to a publicly owned treatment works by a new point source subject to the provisions of this subpart: Pollutant or pollutant property...
FECAL BACTERIA SOURCE TRACKING AND BACTEROIDES SPP. HOST SPECIES SPECIFICITY ANALYSIS
Point and non-point pollution sources of fecal pollution on a watershed adversely impact the quality of drinking source waters and recreational waters. States are required to develop total maximum daily loads (TMDLs) and devise best management practices (BMPs) to reduce the po...
Coronal bright points at 6cm wavelength
NASA Technical Reports Server (NTRS)
Fu, Qijun; Kundu, M. R.; Schmahl, E. J.
1988-01-01
Results are presented from observations of bright points at a wavelength of 6 cm using the VLA with a spatial resolution of 1.2 arcsec. During two hours of observations, 44 sources were detected with brightness temperatures between 2000 and 30,000 K. Of these sources, 27 are associated with weak dark He 10830 Å features at distances less than 40 arcsec. Consideration is given to variations in the source parameters and the relationship between ephemeral regions and bright points.
Innovations in the Analysis of Chandra-ACIS Observations
NASA Astrophysics Data System (ADS)
Broos, Patrick S.; Townsley, Leisa K.; Feigelson, Eric D.; Getman, Konstantin V.; Bauer, Franz E.; Garmire, Gordon P.
2010-05-01
As members of the instrument team for the Advanced CCD Imaging Spectrometer (ACIS) on NASA's Chandra X-ray Observatory and as Chandra General Observers, we have developed a wide variety of data analysis methods that we believe are useful to the Chandra community, and have constructed a significant body of publicly available software (the ACIS Extract package) addressing important ACIS data and science analysis tasks. This paper seeks to describe these data analysis methods for two purposes: to document the data analysis work performed in our own science projects and to help other ACIS observers judge whether these methods may be useful in their own projects (regardless of what tools and procedures they choose to implement those methods). The ACIS data analysis recommendations we offer here address much of the workflow in a typical ACIS project, including data preparation, point source detection via both wavelet decomposition and image reconstruction, masking point sources, identification of diffuse structures, event extraction for both point and diffuse sources, merging extractions from multiple observations, nonparametric broadband photometry, analysis of low-count spectra, and automation of these tasks. Many of the innovations presented here arise from several, often interwoven, complications that are found in many Chandra projects: large numbers of point sources (hundreds to several thousand), faint point sources, misaligned multiple observations of an astronomical field, point source crowding, and scientifically relevant diffuse emission.
Yi, Qitao; Chen, Qiuwen; Hu, Liuming; Shi, Wenqing
2017-05-16
This research developed an innovative approach to reveal nitrogen sources, transformation, and transport in large and complex river networks in the Taihu Lake basin using measurement of dual stable isotopes of nitrate. The spatial patterns of δ15N corresponded to the urbanization level, and the nitrogen cycle was associated with the hydrological regime at the basin level. During the high flow season of summer, nonpoint sources from fertilizer/soils and atmospheric deposition constituted the highest proportion of the total nitrogen load. The point sources from sewage/manure, with high ammonium concentrations and high δ15N and δ18O contents in the form of nitrate, accounted for the largest inputs among all sources during the low flow season of winter. Hot spot areas with heavy point source pollution were identified, and the pollutant transport routes were revealed. Nitrification occurred widely during the warm seasons, with decreased δ18O values, whereas great potential for denitrification existed during the low flow seasons of autumn and spring. The study showed that point source reduction could have effects over the short term; however, long-term efforts to substantially control agricultural nonpoint sources are essential to eutrophication alleviation for the receiving lake, which clarifies the relationship between point and nonpoint source control.
Stamer, J.K.; Cherry, Rodney N.; Faye, R.E.; Kleckner, R.L.
1979-01-01
During the period April 1975 to June 1978, the U.S. Geological Survey conducted a river-quality assessment of the Upper Chattahoochee River basin in Georgia. One objective of the study was to assess the magnitudes, nature, and effects of point and non-point discharges in the Chattahoochee River basin from Atlanta to the West Point Dam. On an average annual basis and during the storm period of March 12-15, 1976, non-point-source loads for most constituents analyzed were larger than point-source loads at the Whitesburg station, located on the Chattahoochee River about 40 river miles downstream of Atlanta. Most of the non-point-source constituent loads in the Atlanta-to-Whitesburg reach were from urban areas. Average annual point-source discharges accounted for about 50 percent of the dissolved nitrogen, total nitrogen, and total phosphorus loads, and about 70 percent of the dissolved phosphorus loads at Whitesburg. During weekends, power generation at the upstream Buford Dam hydroelectric facility is minimal. Streamflow at the Atlanta station during dry-weather weekends is estimated to be about 1,200 ft³/s (cubic feet per second). Average daily dissolved-oxygen concentrations of less than 5.0 mg/L (milligrams per liter) occurred often in the river, about 20 river miles downstream from Atlanta during these periods from May to November. During a low-flow period, June 1-2, 1977, five municipal point sources contributed 63 percent of the ultimate biochemical oxygen demand, 97 percent of the ammonium nitrogen, 78 percent of the total nitrogen, and 90 percent of the total phosphorus loads at the Franklin station, at the upstream end of West Point Lake. Average daily concentrations of 13 mg/L of ultimate biochemical oxygen demand and 1.8 mg/L of ammonium nitrogen were observed about 2 river miles downstream from two of the municipal point sources. Carbonaceous and nitrogenous oxygen demands caused dissolved-oxygen concentrations between 4.1 and 5.0 mg/L to occur in a 22-mile reach of the river downstream from Atlanta. Nitrogenous oxygen demands were greater than carbonaceous oxygen demands in the reach from river mile 303 to 271, and carbonaceous demands were greater from river mile 271 to 235. The heat load from the Atkinson-McDonough thermoelectric power-plants caused a decrease in the dissolved-oxygen concentrations of about 0.2 mg/L. During a critical low-flow period, a streamflow at Atlanta of about 1,800 ft³/s, with present (1977) point-source flows of 185 ft³/s containing concentrations of 45 mg/L of ultimate biochemical oxygen demand and 15 mg/L of ammonium nitrogen, results in a computed minimum dissolved-oxygen concentration of 4.7 mg/L in the river downstream from Atlanta. In the year 2000, a streamflow at Atlanta of about 1,800 ft³/s with point-source flows of 373 ft³/s containing concentrations of 45 mg/L of ultimate biochemical oxygen demand and 5.0 mg/L of ammonium nitrogen, will result in a computed minimum dissolved-oxygen concentration of 5.0 mg/L. A streamflow of about 1,050 ft³/s at Atlanta in the year 2000 will result in a dissolved-oxygen concentration of 5.0 mg/L if point-source flows contain concentrations of 15 mg/L of ultimate biochemical oxygen demand and 5.0 mg/L of ammonium nitrogen. Phytoplankton concentrations in West Point Lake, about 70 river miles downstream from Atlanta, could exceed 3 million cells per milliliter during extended low-flow periods in the summer with present point- and non-point-source nitrogen and phosphorus loads.
In the year 2000, phytoplankton concentrations in West Point Lake are not likely to exceed 700,000 cells per milliliter during extended low-flow periods in the summer, if phosphorus concentrations do not exceed 1.0 mg/L in point-source discharges.
Pilot points method for conditioning multiple-point statistical facies simulation on flow data
NASA Astrophysics Data System (ADS)
Ma, Wei; Jafarpour, Behnam
2018-05-01
We propose a new pilot points method for conditioning discrete multiple-point statistical (MPS) facies simulation on dynamic flow data. While conditioning MPS simulation on static hard data is straightforward, calibration against nonlinear flow data is nontrivial. The proposed method generates conditional models from a conceptual model of geologic connectivity, known as a training image (TI), by strategically placing and estimating pilot points. To place pilot points, a score map is generated based on three sources of information: (i) the uncertainty in facies distribution, (ii) the model response sensitivity information, and (iii) the observed flow data. Once the pilot points are placed, the facies values at these points are inferred from production data and then are used, along with available hard data at well locations, to simulate a new set of conditional facies realizations. While facies estimation at the pilot points can be performed using different inversion algorithms, in this study the ensemble smoother (ES) is adopted to update permeability maps from production data, which are then used to statistically infer facies types at the pilot point locations. The developed method combines the information in the flow data and the TI by using the former to infer facies values at selected locations away from the wells and the latter to ensure consistent facies structure and connectivity away from measurement locations. Several numerical experiments are used to evaluate the performance of the developed method and to discuss its important properties.
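A hedged sketch of the score-map idea is given below: facies uncertainty, sensitivity and data mismatch are normalized, combined with assumed weights, and pilot points are placed greedily subject to a minimum spacing. The weighting and spacing rule are illustrative assumptions, not the authors' exact formulation.

```python
# Illustrative pilot-point placement from a combined score map. The weights,
# spacing rule and entropy-based uncertainty measure are assumptions.

import numpy as np

def place_pilot_points(facies_prob, sensitivity, mismatch, n_points=5,
                       min_spacing=5, weights=(1.0, 1.0, 1.0)):
    """facies_prob, sensitivity, mismatch: 2-D arrays on the model grid."""
    p = np.clip(facies_prob, 1e-6, 1 - 1e-6)
    entropy = -(p * np.log(p) + (1 - p) * np.log(1 - p))   # binary-facies uncertainty
    norm = lambda a: (a - a.min()) / (a.max() - a.min() + 1e-12)
    score = (weights[0] * norm(entropy) + weights[1] * norm(sensitivity)
             + weights[2] * norm(mismatch))
    # Greedy placement: walk cells in decreasing score, enforce a minimum spacing.
    chosen = []
    order = np.dstack(np.unravel_index(np.argsort(score, axis=None)[::-1], score.shape))[0]
    for ij in order:
        if all(np.hypot(*(ij - c)) >= min_spacing for c in chosen):
            chosen.append(ij)
        if len(chosen) == n_points:
            break
    return np.array(chosen), score

rng = np.random.default_rng(0)
pts, _ = place_pilot_points(rng.random((50, 50)), rng.random((50, 50)), rng.random((50, 50)))
print(pts)
```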
NASA Astrophysics Data System (ADS)
Ma, W.; Jafarpour, B.
2017-12-01
We develop a new pilot points method for conditioning discrete multiple-point statistical (MPS) facies simulation on dynamic flow data. While conditioning MPS simulation on static hard data is straightforward, calibration against nonlinear flow data is nontrivial. The proposed method generates conditional models from a conceptual model of geologic connectivity, known as a training image (TI), by strategically placing and estimating pilot points. To place pilot points, a score map is generated based on three sources of information: (i) the uncertainty in facies distribution, (ii) the model response sensitivity information, and (iii) the observed flow data. Once the pilot points are placed, the facies values at these points are inferred from production data and are used, along with available hard data at well locations, to simulate a new set of conditional facies realizations. While facies estimation at the pilot points can be performed using different inversion algorithms, in this study the ensemble smoother (ES) and its multiple data assimilation variant (ES-MDA) are adopted to update permeability maps from production data, which are then used to statistically infer facies types at the pilot point locations. The developed method combines the information in the flow data and the TI by using the former to infer facies values at select locations away from the wells and the latter to ensure consistent facies structure and connectivity away from measurement locations. Several numerical experiments are used to evaluate the performance of the developed method and to discuss its important properties.
Neurient: An Algorithm for Automatic Tracing of Confluent Neuronal Images to Determine Alignment
Mitchel, J.A.; Martin, I.S.
2013-01-01
A goal of neural tissue engineering is the development and evaluation of materials that guide neuronal growth and alignment. However, the methods available to quantitatively evaluate the response of neurons to guidance materials are limited and/or expensive, and may require manual tracing to be performed by the researcher. We have developed an open source, automated Matlab-based algorithm, building on previously published methods, to trace and quantify alignment of fluorescent images of neurons in culture. The algorithm is divided into three phases, including computation of a lookup table which contains directional information for each image, location of a set of seed points which may lie along neurite centerlines, and tracing neurites starting with each seed point and indexing into the lookup table. This method was used to obtain quantitative alignment data for complex images of densely cultured neurons. Complete automation of tracing allows for unsupervised processing of large numbers of images. Following image processing with our algorithm, available metrics to quantify neurite alignment include angular histograms, percent of neurite segments in a given direction, and mean neurite angle. The alignment information obtained from traced images can be used to compare the response of neurons to a range of conditions. This tracing algorithm is freely available to the scientific community under the name Neurient, and its implementation in Matlab allows a wide range of researchers to use a standardized, open source method to quantitatively evaluate the alignment of dense neuronal cultures. PMID:23384629
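The three-phase structure (directional lookup table, seed points, tracing) can be sketched compactly in Python; the version below uses a simple structure-tensor orientation estimate and greedy stepping, and is an illustrative stand-in rather than the Neurient Matlab implementation.

```python
# Illustrative three-phase neurite tracing: orientation lookup, seed points,
# and centerline tracing. Thresholds and the orientation estimator are
# assumptions; this is not the Neurient code.

import numpy as np
from scipy.ndimage import gaussian_filter, sobel

def orientation_lookup(img, sigma=2.0):
    """Phase 1: per-pixel orientation of elongated bright structures."""
    sm = gaussian_filter(img.astype(float), sigma)
    gx, gy = sobel(sm, axis=1), sobel(sm, axis=0)
    # Structure-tensor orientation: ridge direction is perpendicular to the gradient.
    theta = 0.5 * np.arctan2(2 * gaussian_filter(gx * gy, sigma),
                             gaussian_filter(gx ** 2 - gy ** 2, sigma))
    return theta + np.pi / 2.0, sm

def seed_points(sm, n_seeds=20):
    """Phase 2: candidate centerline seeds at the brightest smoothed pixels."""
    idx = np.argsort(sm, axis=None)[::-1][:n_seeds]
    return np.column_stack(np.unravel_index(idx, sm.shape))

def trace(theta, sm, seed, step=1.0, max_len=200, min_intensity=0.1):
    """Phase 3: march along the local orientation from a seed point."""
    y, x = float(seed[0]), float(seed[1])
    path = [(y, x)]
    for _ in range(max_len):
        iy, ix = int(round(y)), int(round(x))
        if not (0 <= iy < sm.shape[0] and 0 <= ix < sm.shape[1]):
            break
        if sm[iy, ix] < min_intensity:
            break
        ang = theta[iy, ix]
        y, x = y + step * np.sin(ang), x + step * np.cos(ang)
        path.append((y, x))
    return np.array(path)

# Toy image: one bright diagonal "neurite" on a dark background.
img = np.zeros((100, 100))
rr = np.arange(10, 90)
img[rr, rr] = 1.0
theta, sm = orientation_lookup(img)
segments = [trace(theta, sm, s) for s in seed_points(sm)]
angles = [np.degrees(np.arctan2(*(p[-1] - p[0]))) % 180 for p in segments if len(p) > 5]
print("mean neurite angle (deg):", round(float(np.mean(angles)), 1))
```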
NASA Technical Reports Server (NTRS)
Leavy, Donald Lucien
1975-01-01
The electrical conductivity structure of a spherically layered moon consistent with the very low frequency magnetic data collected on the lunar surface and by Explorer 35 was studied. In order to obtain good agreement with the lunar surface magnetometer observations, the inclusion of a void cavity behind the moon requires a conductivity at shallow depths higher than that of models having the solar wind impinging on all sides. By varying only the source parameters, a conductivity model can be found that yields a good fit to both the tangential response upstream and the radial response downstream. This model also satisfies the dark side tangential response in the frequency range above 0.006 Hz, but the few data points presently available below this range do not seem to agree with the theory.
Peng, Nie; Bang-Fa, Ni; Wei-Zhi, Tian
2013-02-01
Application of the effective interaction depth (EID) principle for parametric normalization of full energy peak efficiencies at different counting positions, originally for quasi-point sources, has been extended to bulky sources (within Ø30 mm × 40 mm) with arbitrary matrices. It is also proved that the EID function for a quasi-point source can be directly used for cylindrical bulky sources (within Ø30 mm × 40 mm) with the geometric center as the effective point source for low atomic number (Z) and low density (D) media and high energy γ-rays. It is also found that in general the EID for bulky sources is dependent upon Z and D of the medium and the energy of the γ-rays in question. In addition, the EID principle was theoretically verified by MCNP calculations. Copyright © 2012 Elsevier Ltd. All rights reserved.
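One common reading of an effective interaction depth is that the detector behaves as a virtual point located at some depth inside the crystal, so that the full-energy-peak efficiency follows an inverse-square law in the source-to-cap distance plus that depth. The sketch below fits such a depth from hypothetical efficiencies and uses it to normalize between counting positions; it illustrates that general idea only and is not the paper's parametrization.

```python
# Hedged illustration: fit an effective interaction depth d_eff assuming
# efficiency ~ 1 / (distance + d_eff)^2, then predict efficiency at another
# counting position. Distances and efficiencies below are made up.

import numpy as np
from scipy.optimize import curve_fit

def eff_model(distance_mm, d_eff_mm, scale):
    return scale / (distance_mm + d_eff_mm) ** 2

# Hypothetical calibration data: source-to-cap distances (mm) and peak efficiencies.
distances = np.array([50.0, 100.0, 150.0, 250.0])
effs = np.array([2.05e-2, 6.90e-3, 3.45e-3, 1.38e-3])

(d_eff, scale), _ = curve_fit(eff_model, distances, effs, p0=(10.0, 50.0))
print(f"fitted effective interaction depth: {d_eff:.1f} mm")

# Normalize: predict the efficiency at a new counting position, e.g. 200 mm.
print(f"predicted efficiency at 200 mm: {eff_model(200.0, d_eff, scale):.2e}")
```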
MacBurn's cylinder test problem
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shestakov, Aleksei I.
2016-02-29
This note describes a test problem for MacBurn that illustrates its performance. The source is centered inside a cylinder whose axial-extent-to-radius ratio is such that each end receives 1/4 of the thermal energy. The source (fireball) is modeled either as a point or as a disk of finite radius, as described by Marrs et al. For the latter, the disk is divided into 13 equal-area segments, each approximated as a point source, and models a partially occluded fireball. If the source is modeled as a single point, one obtains very nearly the expected deposition, e.g., 1/4 of the flux on each end, and energy is conserved. If the source is modeled as a disk, both conservation and the energy fraction degrade. However, the errors decrease as the ratio of source radius to domain size decreases. Modeling the source as a disk increases run-times.
On-line calibration of high-response pressure transducers during jet-engine testing
NASA Technical Reports Server (NTRS)
Armentrout, E. C.
1974-01-01
Jet-engine testing concerned with the effect of inlet pressure and temperature distortions on engine performance is reported; such testing involves the use of numerous miniature pressure transducers. Despite recent improvements in the manufacture of miniature pressure transducers, they still exhibit sensitivity change and zero-shift with temperature and time. To obtain meaningful data, a calibration system is needed to determine these changes. A system has been developed which provides for computer selection of appropriate reference pressures from nine different sources to provide a two- or three-point calibration. Calibrations are made on command, before and sometimes after each data point. A unique no-leak matrix valve design is used in the reference pressure system. Zero-shift corrections are measured and the values are automatically inserted into the data reduction program.
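The two- or three-point calibration described here amounts to a small linear fit: gain (sensitivity) and offset (zero shift) are estimated from readings at known reference pressures and then applied to subsequent readings. The following sketch uses made-up reference values purely for illustration.

```python
# Illustrative two- or three-point calibration: fit gain and offset from
# readings at known reference pressures, then correct subsequent readings.
# Reference pressures and raw counts are hypothetical.

import numpy as np

def fit_calibration(reference_psi, raw_counts):
    """Least-squares straight line: pressure = gain * counts + offset."""
    gain, offset = np.polyfit(raw_counts, reference_psi, deg=1)
    return gain, offset

def apply_calibration(raw_counts, gain, offset):
    return gain * np.asarray(raw_counts) + offset

# Three-point calibration taken on command before a data point.
ref_psi = np.array([5.0, 15.0, 30.0])
counts = np.array([1010.0, 3075.0, 6180.0])          # drifted transducer output
gain, offset = fit_calibration(ref_psi, counts)
print(f"sensitivity = {gain:.4f} psi/count, zero-shift = {offset:.2f} psi")
print("corrected reading (psi):", round(float(apply_calibration(4000.0, gain, offset)), 2))
```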
HerMES: point source catalogues from Herschel-SPIRE observations II
NASA Astrophysics Data System (ADS)
Wang, L.; Viero, M.; Clarke, C.; Bock, J.; Buat, V.; Conley, A.; Farrah, D.; Guo, K.; Heinis, S.; Magdis, G.; Marchetti, L.; Marsden, G.; Norberg, P.; Oliver, S. J.; Page, M. J.; Roehlly, Y.; Roseboom, I. G.; Schulz, B.; Smith, A. J.; Vaccari, M.; Zemcov, M.
2014-11-01
The Herschel Multi-tiered Extragalactic Survey (HerMES) is the largest Guaranteed Time Key Programme on the Herschel Space Observatory. With a wedding cake survey strategy, it consists of nested fields with varying depth and area totalling ~380 deg². In this paper, we present deep point source catalogues extracted from Herschel-Spectral and Photometric Imaging Receiver (SPIRE) observations of all HerMES fields, except for the later addition of the 270 deg² HerMES Large-Mode Survey (HeLMS) field. These catalogues constitute the second Data Release (DR2) made in 2013 October. A sub-set of these catalogues, which consists of bright sources extracted from Herschel-SPIRE observations completed by 2010 May 1 (covering ~74 deg²), was released earlier in the first extensive data release in 2012 March. Two different methods are used to generate the point source catalogues, the SUSSEXTRACTOR point source extractor used in two earlier data releases (EDR and EDR2) and a new source detection and photometry method. The latter combines an iterative source detection algorithm, STARFINDER, and a De-blended SPIRE Photometry algorithm. We use end-to-end Herschel-SPIRE simulations with realistic number counts and clustering properties to characterize basic properties of the point source catalogues, such as the completeness, reliability, photometric and positional accuracy. Over 500 000 catalogue entries in HerMES fields (except HeLMS) are released to the public through the HeDAM (Herschel Database in Marseille) website (http://hedam.lam.fr/HerMES).
Laursen, J.; Normark, W.R.
2003-01-01
The Valparaiso Basin constitutes a unique and prominent deep-water forearc basin underlying a 40-km by 60-km mid-slope terrace at 2.5-km water depth on the central Chile margin. Seismic-reflection data, collected as part of the CONDOR investigation, image a 3-3.5-km thick sediment succession that fills a smoothly sagged, margin-parallel, elongated trough at the base of the upper slope. In response to underthrusting of the Juan Fernández Ridge on the Nazca plate, the basin fill is increasingly deformed in the seaward direction above seaward-vergent outer forearc compressional highs. Syn-depositional growth of a large, margin-parallel monoclinal high in conjunction with sagging of the inner trough of the basin created stratal geometries similar to those observed in forearc basins bordered by large accretionary prisms. Margin-parallel compressional ridges diverted turbidity currents along the basin axis and exerted a direct control on sediment depositional processes. As structural depressions became buried, transverse input from point sources on the adjacent upper slope formed complex fan systems with sediment waves characterising the overbank environment, common on many Pleistocene turbidite systems. Mass failure as a result of local topographic inversion formed a prominent mass-flow deposit, and ultimately resulted in canyon formation and hence a new focused point source feeding the basin. The Valparaiso Basin is presently filled to the spill point of the outer forearc highs, causing headward erosion of incipient canyons into the basin fill and allowing bypass of sediment to the Chile Trench. Age estimates that are constrained by subduction-related syn-depositional deformation of the upper 700-800 m of the basin fill suggest that glacio-eustatic sea-level lowstands, in conjunction with accelerated denudation rates, within the past 350 ka may have contributed to the increase in simultaneously active point sources along the upper slope as well as an increased complexity of proximal depositional facies.
Sampayan, Stephen E.
2016-11-22
Apparatus, systems, and methods that provide an X-ray interrogation system having a plurality of stationary X-ray point sources arranged to substantially encircle an area or space to be interrogated. A plurality of stationary detectors are arranged to substantially encircle the area or space to be interrogated. A controller is adapted to control the stationary X-ray point sources to emit X-rays one at a time, and to control the stationary detectors to detect the X-rays emitted by the stationary X-ray point sources.
We investigated the efficacy of metabolomics for field-monitoring of fish exposed to waste water treatment plant (WWTP) effluents and non-point sources of chemical contamination. Lab-reared male fathead minnows (Pimephales promelas, FHM) were held in mobile monitoring units and e...
NASA Astrophysics Data System (ADS)
Zaldívar Huerta, Ignacio E.; Pérez Montaña, Diego F.; Nava, Pablo Hernández; Juárez, Alejandro García; Asomoza, Jorge Rodríguez; Leal Cruz, Ana L.
2013-12-01
We experimentally demonstrate the use of an electro-optical transmission system for distribution of video over long-haul optical point-to-point links using a microwave photonic filter in the frequency range of 0.01-10 GHz. The frequency response of the microwave photonic filter consists of four band-pass windows centered at frequencies that can be tailored as a function of the free spectral range of the optical source, the chromatic dispersion parameter of the optical fiber used, and the length of the optical link. In particular, the filtering effect is obtained by the interaction of an externally modulated multimode laser diode emitting at 1.5 μm with a length of dispersive optical fiber. Filtered microwave signals are used as electrical carriers to transmit TV signals over long-haul point-to-point optical links. Transmission of TV signals coded on the microwave band-pass windows located at 4.62, 6.86, 4.0 and 6.0 GHz is achieved over optical links of 25.25 km and 28.25 km, respectively. Practical applications for this approach lie in the field of the FTTH access network for distribution of services such as video, voice, and data.
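For such a sliced-source/dispersive-fiber filter, the structure behaves like a transversal filter whose tap delay is the product of the source mode spacing, the fiber dispersion and the link length, so passbands recur at the inverse of that delay. The back-of-envelope sketch below uses assumed parameter values, not those of the reported experiment.

```python
# Illustrative passband calculation for a multimode-laser + dispersive-fiber
# microwave photonic filter. Parameter values are assumptions.

def passband_frequencies(fsr_nm, dispersion_ps_nm_km, length_km, n_bands=4):
    tap_delay_ps = dispersion_ps_nm_km * length_km * fsr_nm
    f_ghz = 1.0e3 / tap_delay_ps            # 1/ps is THz; multiply by 1e3 for GHz
    return [k * f_ghz for k in range(1, n_bands + 1)]

# Example: 1.1 nm mode spacing, standard fiber (~17 ps/nm/km), 25.25 km link.
for k, f in enumerate(passband_frequencies(1.1, 17.0, 25.25), start=1):
    print(f"passband {k}: {f:.2f} GHz")
```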
XMM-Newton Observations of NGC 253: Resolving the Emission Components in the Disk and Nuclear Area
NASA Technical Reports Server (NTRS)
Pietsch, W.; Borozdin, K. N.; Branduardi-Raymont, G.; Cappi, M.; Ehle, M.; Ferrando, P.; Freyberg, M. J.; Kahn, S. M.; Ponman, T. J.; Ptak, A.
2000-01-01
We describe the first XMM-Newton observations of the starburst galaxy NGC 253. As known from previous X-ray observations, NGC 253 shows a mixture of extended (disk and halo) and point-source emission. The high XMM-Newton throughput allows for the first time a detailed investigation of the spatial, spectral and variability properties of these components simultaneously. We detect a bright X-ray transient approx. 70 sec SSW of the nucleus and show the spectrum and light curve of the brightest point source (approx. 30 sec S of the nucleus, most likely a black-hole X-ray binary, BHXRB). The unprecedented combination of RGS and EPIC also sheds new light on the emission of the complex nuclear region, the X-ray plume and the disk diffuse emission. In particular, EPIC images reveal that the limb-brightening of the plume is mostly seen in higher ionization emission lines, while in the lower ionization lines, and below 0.5 keV, the plume is more homogeneously structured, pointing to new interpretations as to the make up of the starburst-driven outflow. Assuming that type IIa supernova remnants (SNRs) are mostly responsible for the E greater than 4 keV emission, the detection with EPIC of the 6.7 keV line allows us to estimate a supernova rate within the nuclear starburst of 0.2 /yr.
NASA Astrophysics Data System (ADS)
Domagalski, J. L.
2013-12-01
The SPARROW (Spatially Referenced Regressions On Watershed Attributes) model allows for the simulation of nutrient transport at un-gauged catchments on a regional scale. The model was used to understand natural and anthropogenic factors affecting phosphorus transport in developed, undeveloped, and mixed watersheds. The SPARROW model is a statistical tool that allows for mass balance calculation of constituent sources, transport, and aquatic decay based upon a calibration of a subset of stream networks, where concentrations and discharge have been measured. Calibration is accomplished using potential sources for a given year and may include fertilizer, geological background (based on bed-sediment samples and aggregated with geochemical map units), point source discharge, and land use categories. NHD Plus version 2 was used to model the hydrologic system. Land to water transport variables tested were precipitation, permeability, soil type, tile drains, and irrigation. For this study area, point sources, cultivated land, and geological background are significant phosphorus sources to streams. Precipitation and clay content of soil are significant land to water transport variables and various stream sizes show significance with respect to aquatic decay. Specific rock types result in different levels of phosphorus loading and watershed yield. Some important geological sources are volcanic rocks (andesite and basalt), granodiorite, glacial deposits, and Mesozoic to Cenozoic marine deposits. Marine sediments vary in their phosphorus content, but are responsible for some of the highest natural phosphorus yields, especially along the Central and Southern California coast. The Miocene Monterey Formation was found to be an especially important local source in southern California. In contrast, mixed metamorphic and igneous assemblages such as argillites, peridotite, and shales of the Trinity Mountains of northern California result in some of the lowest phosphorus yields. The agriculturally productive Central Valley of California has a low amount of background phosphorus in spite of inputs from streams draining upland areas. Many years of intensive agriculture may be responsible for the decrease of soil phosphorus in that area. Watersheds with significant background sources of phosphorus and large amounts of cultivated land had some of the highest per hectare yields. Seven different stream systems important for water management, or to describe transport processes, were investigated in detail for downstream changes in sources and loads. For example, the Klamath River (Oregon and California) has intensive agriculture and andesite-derived phosphorus in the upper reach. The proportion of agricultural-derived phosphorus decreases as the river flows into California before discharge to the ocean. The river flows through at least three different types of geological background sources from high to intermediate to very low. Knowledge of the role of natural sources in developed watersheds is critical for developing nutrient management strategies and these model results will have applicability for the establishment of realistic nutrient criteria.
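The general SPARROW-type mass balance referred to above can be sketched as: source masses scaled by source coefficients, attenuated by a land-to-water delivery term driven by catchment attributes, then decayed in-stream over the travel time. The coefficients and values below are illustrative assumptions, not the calibrated model.

```python
# Illustrative SPARROW-style reach load: sources x coefficients, exponential
# land-to-water delivery, and first-order in-stream decay. All numbers are
# assumptions for demonstration only.

import numpy as np

def reach_load(source_masses, source_coefs, delivery_vars, delivery_coefs,
               travel_time_d, decay_per_d):
    """Phosphorus load (kg/yr) delivered to the reach outlet."""
    land_to_water = np.exp(-np.dot(delivery_coefs, delivery_vars))
    generated = np.dot(source_coefs, source_masses) * land_to_water
    return generated * np.exp(-decay_per_d * travel_time_d)

# Sources: fertilizer applied (kg), point-source discharge (kg), geologic background (kg).
sources = np.array([120000.0, 8000.0, 30000.0])
coefs = np.array([0.02, 1.0, 0.01])             # point sources delivered ~directly
# Land-to-water variables: precipitation and soil clay content (standardized).
print(round(reach_load(sources, coefs, np.array([0.5, -0.2]),
                       np.array([-0.3, 0.4]), travel_time_d=2.0,
                       decay_per_d=0.05), 1), "kg/yr")
```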
Characterisation of a resolution enhancing image inversion interferometer.
Wicker, Kai; Sindbert, Simon; Heintzmann, Rainer
2009-08-31
Image inversion interferometers have the potential to significantly enhance the lateral resolution and light efficiency of scanning fluorescence microscopes. Self-interference of a point source's coherent point spread function with its inverted copy leads to a reduction in the integrated signal for off-axis sources compared to sources on the inversion axis. This can be used to enhance the resolution in a confocal laser scanning microscope. We present a simple image inversion interferometer relying solely on reflections off planar surfaces. Measurements of the detection point spread function for several types of light sources confirm the predicted performance and suggest its usability for scanning confocal fluorescence microscopy.
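The signal reduction for off-axis sources can be illustrated numerically: the coherent PSF and its inverted copy are summed and the resulting intensity integrated, using a 1-D Gaussian amplitude PSF as a stand-in for the true coherent PSF. This is an illustration of the principle, not the paper's model.

```python
# Illustrative 1-D model of an image inversion interferometer: integrated
# intensity in the constructive output versus source offset. A Gaussian
# amplitude PSF is an assumption standing in for the real coherent PSF.

import numpy as np

x = np.linspace(-10.0, 10.0, 4001)          # detector coordinate (a.u.)
dx = x[1] - x[0]

def coherent_psf(x, sigma=1.0):
    return np.exp(-x ** 2 / (2 * sigma ** 2))

def integrated_signal(offset):
    """Integrated intensity behind the constructive output for a source at `offset`."""
    h_direct = coherent_psf(x - offset)       # ordinary image
    h_inverted = coherent_psf(-x - offset)    # image seen through the inverting arm
    field = 0.5 * (h_direct + h_inverted)     # 50/50 interferometric recombination
    return np.sum(np.abs(field) ** 2) * dx

on_axis = integrated_signal(0.0)
for off in (0.0, 0.5, 1.0, 2.0, 4.0):
    print(f"offset {off:3.1f}: relative signal {integrated_signal(off) / on_axis:.3f}")
# The integrated signal falls toward 0.5 for large offsets, with a narrow
# central peak around the inversion axis.
```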
NASA Astrophysics Data System (ADS)
Onabolu, B.; Jimoh, O. D.; Igboro, S. B.; Sridhar, M. K. C.; Onyilo, G.; Gege, A.; Ilya, R.
In many Sub-Saharan countries such as Nigeria, inadequate access to safe drinking water is a serious problem, with 37% of the region's population and 58% of rural Nigeria using unimproved sources. The global challenge of measuring household water quality as a determinant of safety is further compounded in Nigeria by the possibility of deterioration from source to point of use. This is associated with the use of decentralised water supply systems in rural areas, which are not fully reticulated to the household taps, creating a need for an integrated water quality monitoring system. As an initial step towards establishing the system in the north-west and north-central zones of Nigeria, the Katsina State Rural Water and Sanitation Agency, responsible for ensuring access to safe water and adequate sanitation for about 6 million people, carried out a three-pronged study with the support of UNICEF Nigeria. Part I was an assessment of the legislative and policy framework, institutional arrangements and capacity for drinking water quality monitoring through desktop reviews and Key Informant Interviews (KII) to ascertain the institutional capacity requirements for developing the water quality monitoring system. Part II was a water quality study in 700 households of 23 communities in four local government areas. The objectives were to assess the safety of drinking water, compare the safety at source and household level and assess the possible contributory role of end users' knowledge, attitudes and practices. These were achieved through water analysis, household water quality tracking, KII and questionnaires. Part III was the production of a visual documentary as an advocacy tool to increase awareness among policy makers of the linkages between source management, treatment and end user water quality. The results indicate that except for pH, conductivity and manganese, the improved water sources were safe at source. However, there was a deterioration in water quality between source and point of use in 18%, 12.5%, 27% and 50% of hand-pump-fitted boreholes, motorised boreholes, hand-dug wells and streams respectively. Although no statistical correlation could be drawn between water management practices and water quality deterioration, the survey of the study households gave an indication of the possible contributory role of their knowledge, attitudes and practices to water contamination after provision. Some of the potential water-related sources of contamination were poor source protection and location, use of unimproved water sources, poor knowledge and practice of household water treatment methods, and poor hand-washing practices, in terms of the percentage who wash their hands and use soap. Consequently, 34 WASH departments have been created at the local government level towards establishment of a community-based monitoring system, and piloting has begun in Kaita local government area.
NASA Astrophysics Data System (ADS)
Petr, Rodney; Bykanov, Alexander; Freshman, Jay; Reilly, Dennis; Mangano, Joseph; Roche, Maureen; Dickenson, Jason; Burte, Mitchell; Heaton, John
2004-08-01
A high average power dense plasma focus (DPF) x-ray point source has been used to produce ~70 nm line features in AlGaAs-based monolithic millimeter-wave integrated circuits (MMICs). The DPF source has produced up to 12 J per pulse of x-ray energy into 4π steradians at ~1 keV effective wavelength in ~2 Torr neon at pulse repetition rates up to 60 Hz, with an effective x-ray yield efficiency of ~0.8%. Plasma temperature and electron concentration are estimated from the x-ray spectrum to be ~170 eV and ~5 × 10¹⁹ cm⁻³, respectively. The x-ray point source utilizes solid-state pulse power technology to extend the operating lifetime of electrodes and insulators in the DPF discharge. By eliminating current reversals in the DPF head, an anode electrode has demonstrated a lifetime of more than 5 million shots. The x-ray point source has also been operated continuously for 8 h run times at 27 Hz average pulse recurrent frequency. Measurements of shock waves produced by the plasma discharge indicate that overpressure pulses must be attenuated before a collimator can be integrated with the DPF point source.
NASA Astrophysics Data System (ADS)
Feld, R.; Slob, E. C.; Thorbecke, J.
2015-12-01
Creating virtual sources at locations where physical receivers have measured a response is known as seismic interferometry. A much appreciated benefit of interferometry is its independence from the actual source locations. The use of ambient noise as the actual source is therefore not uncommon in this field. Ambient noise can be commercial noise, such as mobile phone signals. For GPR this can be useful in cases where it is not possible to place a source, for instance when it is prohibited by laws and regulations. A mono-static GPR antenna can measure ambient noise. Interferometry by auto-correlation (AC) places a virtual source at this antenna's position, without actually transmitting anything. This can be used for pavement damage inspection. Earlier work showed very promising results with 2D numerical models of damaged pavement. 1D and 2D heterogeneities were compared, both modelled in a 2D pavement world. In a 1D heterogeneous model, energy leaks away to the sides, whereas in a 2D heterogeneous model rays can reflect and therefore still add to the signal reconstruction (see illustration). In the first case the number of stationary points is strictly limited, while in the other case the number of stationary points is very large. We extend these models to a 3D world and optimise an experimental configuration. The illustration originates from the journal article under submission 'Non-destructive pavement damage inspection by mono-static GPR without transmitting anything' by R. Feld, E.C. Slob, and J.W. Thorbecke. (a) 2D heterogeneous pavement model with three irregular-shaped misalignments between the base and subbase layer (marked by arrows). Mono-antenna B-scan positions are shown schematically. (b) Ideal output: a real source at the receiver's position. The difference w.r.t. the trace found in the middle is shown. (c) AC output: a virtual source at the receiver's position. There is a clear overlap with the ideal output.
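A hedged 1-D illustration of interferometry by auto-correlation: a receiver records ambient noise plus a delayed, scaled reflection of it, and the auto-correlation of that single recording peaks at the two-way travel time, as if a virtual source had fired at the receiver. The geometry and numbers below are made up.

```python
# Illustrative 1-D auto-correlation interferometry: retrieve the two-way
# reflection delay from a single passive recording. Numbers are assumptions.

import numpy as np

rng = np.random.default_rng(4)
noise = rng.standard_normal(20000)            # ambient-noise source signature
delay, refl = 120, 0.4                        # two-way delay (samples), reflection coeff.
trace = noise.copy()
trace[delay:] += refl * noise[:-delay]        # direct field + one reflection

ac = np.correlate(trace, trace, mode="full")[trace.size - 1:]
ac /= ac[0]
lag = int(np.argmax(ac[10:300])) + 10         # ignore the zero-lag peak
print(f"retrieved two-way delay: {lag} samples (true: {delay})")
```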
Chandra ACIS Sub-pixel Resolution
NASA Astrophysics Data System (ADS)
Kim, Dong-Woo; Anderson, C. S.; Mossman, A. E.; Allen, G. E.; Fabbiano, G.; Glotfelty, K. J.; Karovska, M.; Kashyap, V. L.; McDowell, J. C.
2011-05-01
We investigate how to achieve the best possible ACIS spatial resolution by binning at the ACIS sub-pixel level and applying an event repositioning algorithm after removing pixel-randomization from the pipeline data. We quantitatively assess the improvement in spatial resolution by (1) measuring point source sizes and (2) detecting faint point sources. The size of a bright (but not piled-up), on-axis point source can be reduced by about 20-30%. With the improved resolution, we detect 20% more faint sources when they are embedded in extended, diffuse emission in a crowded field. We further discuss the false source rate of about 10% among the newly detected sources, using a few ultra-deep observations. We also find that the new algorithm does not introduce a grid structure by an aliasing effect for dithered observations and does not worsen the positional accuracy.
Time-frequency approach to underdetermined blind source separation.
Xie, Shengli; Yang, Liu; Yang, Jun-Mei; Zhou, Guoxu; Xiang, Yong
2012-02-01
This paper presents a new time-frequency (TF) underdetermined blind source separation approach based on the Wigner-Ville distribution (WVD) and the Khatri-Rao product to separate N non-stationary sources from M (M < N) mixtures. First, an improved method is proposed for estimating the mixing matrix, where the negative value of the auto WVD of the sources is fully considered. Then, after extracting all the auto-term TF points, the auto WVD value of the sources at every auto-term TF point can be found exactly with the proposed approach, no matter how many active sources there are, as long as N ≤ 2M - 1. Further discussion of the extraction of auto-term TF points is provided, and numerical simulation results are presented to show the superiority of the proposed algorithm in comparison with existing ones.
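A simplified stand-in for the general idea of exploiting source-dominated TF points is sketched below: directions of energetic STFT mixture vectors are clustered to recover the mixing-matrix columns. This is an assumption-laden, STFT-based illustration, not the paper's WVD/Khatri-Rao algorithm.

```python
# Illustrative TF mixing-matrix estimation: cluster the directions of
# energetic STFT points (a crude proxy for single-source auto-term points).
# Sources, mixing matrix and thresholds are made-up examples.

import numpy as np
from scipy.signal import stft
from scipy.cluster.vq import kmeans2

rng = np.random.default_rng(1)
fs, t = 8000, np.arange(8000) / 8000
# Three "sources" occupying mostly different TF regions, two mixtures (M=2 < N=3).
s = np.stack([np.sin(2 * np.pi * 440 * t) * (t < 0.5),
              np.sign(np.sin(2 * np.pi * 3 * t)) * np.sin(2 * np.pi * 1200 * t),
              rng.standard_normal(t.size) * (t > 0.6)])
A = np.array([[1.0, 0.6, 0.2],
              [0.3, 0.8, 1.0]])
x = A @ s

f, tt, X = stft(x, fs=fs, nperseg=256)            # X has shape (2, n_freq, n_frames)
mag = np.abs(X).sum(axis=0)
mask = mag > 0.2 * mag.max()                       # keep energetic TF points only
vecs = X[:, mask].T                                # (n_points, 2) complex mixture vectors
dirs = np.abs(vecs) / np.linalg.norm(vecs, axis=1, keepdims=True)
centroids, _ = kmeans2(dirs, 3, seed=2, minit="++")
A_est = centroids.T / np.linalg.norm(centroids, axis=1)
print("true (column-normalized):\n", A / np.linalg.norm(A, axis=0))
print("estimated (up to column order):\n", A_est)
```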
First Near-infrared Imaging Polarimetry of Young Stellar Objects in the Circinus Molecular Cloud
NASA Astrophysics Data System (ADS)
Kwon, Jungmi; Nakagawa, Takao; Tamura, Motohide; Hough, James H.; Choi, Minho; Kandori, Ryo; Nagata, Tetsuya; Kang, Miju
2018-02-01
We present the results of near-infrared (NIR) linear imaging polarimetry in the J, H, and Ks bands of the low-mass star cluster-forming region in the Circinus Molecular Cloud Complex. Using aperture polarimetry of point-like sources, positive detection of 314, 421, and 164 sources in the J, H, and Ks bands, respectively, was determined from among 749 sources whose photometric magnitudes were measured. For the source classification of the 133 point-like sources whose polarization could be measured in all 3 bands, a color–color diagram was used. While most of the NIR polarizations of point-like sources are well-aligned and can be explained by dichroic polarization produced by aligned interstellar dust grains in the cloud, 123 highly polarized sources have also been identified based on several criteria. The projected direction on the sky of the magnetic field in the Cir-MMS region is indicated by the mean polarization position angles (70°) of the point-like sources in the observed region, corresponding to approximately 1.6 × 1.6 pc². In addition, the magnetic field direction is compared with the outflow orientations associated with Infrared Astronomy Satellite sources, in which two sources were found to be aligned with each other and one source was not. We also show prominent polarization nebulosities over the Cir-MMS region for the first time. Our polarization data have revealed one clear infrared reflection nebula (IRN) and several candidate IRNe in the Cir-MMS field. In addition, the illuminating sources of the IRNe are identified with near- and mid-infrared sources.
Malling, Bente; Mortensen, Lene S; Scherpbier, Albert J J; Ringsted, Charlotte
2010-09-21
The educational climate is crucial in postgraduate medical education. Although leaders are in a position to influence the educational climate, the relationship between leadership skills and educational climate is unknown. This study investigates the relationship between the educational climate in clinical departments and the leadership skills of clinical consultants responsible for education. The study was a trans-sectional correlation study. The educational climate was investigated by a survey among all doctors (specialists and trainees) in the departments. Leadership skills of the consultants responsible for education were measured by multi-source feedback scores from heads of departments, peer consultants, and trainees. Doctors from 42 clinical departments representing 21 specialties participated. The response rate of the educational climate investigation was moderate, 52% (420/811); the response rate in the multi-source feedback process was high, 84.3% (420/498). The educational climate was scored quite high, with a mean of 3.9 (SD 0.3) on a five-point Likert scale. Likewise, the leadership skills of the clinical consultants responsible for education were considered good, with a mean of 5.4 (SD 0.6) on a seven-point Likert scale. There was no significant correlation between the scores concerning the educational climate and the scores on leadership skills, r = 0.17 (p = 0.29). This study found no relation between the educational climate and the leadership skills of the clinical consultants responsible for postgraduate medical education in clinical departments with the instruments used. Our results indicate that consultants responsible for education are in a weak position to influence the educational climate in the clinical department. Further studies are needed to explore how heads of departments and other factors related to the clinical organisation could influence the educational climate.
NASA Astrophysics Data System (ADS)
Tong, Daniel Quansong; Kang, Daiwen; Aneja, Viney P.; Ray, John D.
2005-01-01
We present in this study both measurement-based and modeling analyses to elucidate the source attribution, influence areas, and process budget of reactive nitrogen oxides at two rural southeast United States sites (Great Smoky Mountains National Park (GRSM) and Mammoth Cave National Park (MACA)). Availability of nitrogen oxides is considered the limiting factor for ozone production in these areas, so the relative contributions of reactive nitrogen oxides from point and mobile sources are important in understanding why these areas experience high ozone. Using two independent observation-based techniques, multiple linear regression analysis and emission inventory analysis, we demonstrate that point sources contribute a minimum of 23% of total NOy at GRSM and 27% at MACA. The influence areas for these two sites, i.e., the origins of nitrogen oxides, are investigated using trajectory-cluster analysis. The results show that air masses from the west and southwest sweep over GRSM most frequently, while pollutants transported from the eastern half (i.e., east, northeast, and southeast) have limited influence (<10% of all air masses) on air quality at GRSM. The processes responsible for the formation and removal of reactive nitrogen oxides are investigated using a comprehensive 3-D air quality model (Multiscale Air Quality SImulation Platform, MAQSIP). Based on the process budget analysis, the NOy contribution associated with chemical transformation is 32% and 84% for NOz, and 26% and 80% for O3, at GRSM and MACA, respectively. The similarity between the NOz and O3 process budgets suggests a close association between nitrogen oxides and effective O3 production at these rural locations.
Auger, E.; D'Auria, L.; Martini, M.; Chouet, B.; Dawson, P.
2006-01-01
We present a comprehensive processing tool for the real-time analysis of the source mechanism of very long period (VLP) seismic data based on waveform inversions performed in the frequency domain for a point source. A search for the source providing the best-fitting solution is conducted over a three-dimensional grid of assumed source locations, in which the Green's functions associated with each point source are calculated by finite differences using the reciprocal relation between source and receiver. Tests performed on 62 nodes of a Linux cluster indicate that the waveform inversion and search for the best-fitting signal over 100,000 point sources require roughly 30 s of processing time for a 2-min-long record. The procedure is applied to post-processing of a data archive and to continuous automatic inversion of real-time data at Stromboli, providing insights into different modes of degassing at this volcano. Copyright 2006 by the American Geophysical Union.
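The core of such a scheme — a grid search over candidate point-source locations, each fitted by least squares in the frequency domain using precomputed Green's functions — can be sketched as follows. This is a minimal illustration, not the authors' production code; the array names, shapes, and the synthetic example are hypothetical.

```python
import numpy as np

def grid_search_point_source(data_fft, greens_fft):
    """Frequency-domain waveform inversion over a grid of candidate point sources.

    data_fft   : (n_receivers, n_freqs) complex spectra of the observed VLP signals
    greens_fft : (n_sources, n_receivers, n_freqs, n_mech) complex spectra of
                 precomputed Green's functions (one column per mechanism component,
                 e.g. 6 moment-tensor terms + 3 single-force terms)
    Returns (best grid-node index, mechanism coefficients, normalized misfit).
    """
    best = (None, None, np.inf)
    d = data_fft.reshape(-1)
    for i_src in range(greens_fft.shape[0]):
        # Stack receivers and frequencies into one linear system G m = d
        G = greens_fft[i_src].reshape(-1, greens_fft.shape[-1])
        m, *_ = np.linalg.lstsq(G, d, rcond=None)        # least-squares mechanism
        misfit = np.linalg.norm(d - G @ m) / np.linalg.norm(d)
        if misfit < best[2]:
            best = (i_src, m, misfit)
    return best

# Hypothetical example: 8 receivers, 128 frequency bins, 200 grid nodes, 9 mechanisms
rng = np.random.default_rng(0)
greens = (rng.standard_normal((200, 8, 128, 9))
          + 1j * rng.standard_normal((200, 8, 128, 9)))
data = greens[57] @ rng.standard_normal(9)               # synthesize data from node 57
print(grid_search_point_source(data, greens)[0])         # recovers 57
```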
Dazard, Jean-Eudes; Choe, Michael; LeBlanc, Michael; Rao, J. Sunil
2015-01-01
PRIMsrc is a novel implementation of a non-parametric bump hunting procedure, based on the Patient Rule Induction Method (PRIM), offering a unified treatment of outcome variables, including censored time-to-event (Survival), continuous (Regression) and discrete (Classification) responses. To fit the model, it uses a recursive peeling procedure with specific peeling criteria and stopping rules depending on the response. To validate the model, it provides an objective function based on prediction error or another response-specific statistic, as well as two alternative cross-validation techniques adapted to the task of decision-rule making and estimation in the three types of settings. PRIMsrc comes as an open-source R package, including at this point: (i) a main function for fitting a Survival Bump Hunting model, with various options allowing cross-validated model selection to control model size (#covariates) and model complexity (#peeling steps) and to generate cross-validated end-point estimates; (ii) parallel computing; (iii) various S3-generic and specific plotting functions for data visualization, diagnostics, prediction, summary and display of results. It is available on CRAN and GitHub. PMID:26798326
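The recursive peeling idea behind PRIM-style bump hunting can be illustrated with a minimal sketch for a plain continuous response, one peeling criterion, and no cross-validation. PRIMsrc itself is an R package with far richer options (censored and discrete outcomes, cross-validated stopping), so this Python snippet is only a schematic analogue; the function and variable names are hypothetical.

```python
import numpy as np

def prim_peel(X, y, alpha=0.1, min_support=0.05):
    """Greedy top-down peeling: at each step remove the alpha-fraction slice
    (lowest or highest values of one covariate) that most increases the mean
    response inside the remaining box. Returns per-variable bounds and a mask."""
    inside = np.ones(len(y), dtype=bool)
    bounds = [[-np.inf, np.inf] for _ in range(X.shape[1])]
    while inside.mean() > min_support:
        best_gain, best_move = 0.0, None
        current_mean = y[inside].mean()
        for j in range(X.shape[1]):
            xj = X[inside, j]
            for side, q in (("low", np.quantile(xj, alpha)),
                            ("high", np.quantile(xj, 1 - alpha))):
                keep = inside & ((X[:, j] >= q) if side == "low" else (X[:, j] <= q))
                if keep.sum() == 0:
                    continue
                gain = y[keep].mean() - current_mean
                if gain > best_gain:
                    best_gain, best_move = gain, (j, side, q, keep)
        if best_move is None:          # no peel improves the box mean: stop
            break
        j, side, q, keep = best_move
        bounds[j][0 if side == "low" else 1] = q
        inside = keep
    return bounds, inside

# Toy example: the response is high inside the box x0 > 0.7 and x1 < 0.3
rng = np.random.default_rng(1)
X = rng.uniform(size=(2000, 3))
y = ((X[:, 0] > 0.7) & (X[:, 1] < 0.3)).astype(float) + 0.1 * rng.standard_normal(2000)
print(prim_peel(X, y)[0])    # bounds should roughly recover the true box
```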
Performance of Different Light Sources for the Absolute Calibration of Radiation Thermometers
NASA Astrophysics Data System (ADS)
Martín, M. J.; Mantilla, J. M.; del Campo, D.; Hernanz, M. L.; Pons, A.; Campos, J.
2017-09-01
The evolving mise en pratique for the definition of the kelvin (MeP-K) [1, 2] will, in its forthcoming edition, encourage the realization and dissemination of thermodynamic temperature either directly (primary thermometry) or indirectly (relative primary thermometry) via fixed points with assigned reference thermodynamic temperatures. In recent years, the Centro Español de Metrología (CEM), in collaboration with the Instituto de Óptica of the Consejo Superior de Investigaciones Científicas (IO-CSIC), has developed several setups for the absolute calibration of standard radiation thermometers using the radiance method, to allow CEM to disseminate thermodynamic temperature directly and to assign thermodynamic temperatures to several fixed points. Different calibration facilities, based on a monochromator and/or a laser and an integrating sphere, have been developed to calibrate CEM's standard radiation thermometers (KE-LP2 and KE-LP4) and filter radiometer (FIRA2). The system is based on the one described in [3], located at IO-CSIC. Different light sources have been tried and tested for measuring absolute spectral radiance responsivity: a 500 W Xe-Hg lamp, an NKT SuperK-EXR20 supercontinuum laser and a diode laser emitting at 647.3 nm with a typical maximum power of 120 mW. Their advantages and disadvantages have been studied, such as sensitivity to interference effects generated by the laser inside the filter, the stability of the flux generated by the radiant sources, and so forth. This paper describes the setups used, the uncertainty budgets and the results obtained for the absolute temperatures of the Cu, Co-C, Pt-C and Re-C fixed points, measured with the three thermometers with central wavelengths around 650 nm.
A spatial and seasonal assessment of river water chemistry across North West England.
Rothwell, J J; Dise, N B; Taylor, K G; Allott, T E H; Scholefield, P; Davies, H; Neal, C
2010-01-15
This paper presents information on the spatial and seasonal patterns of river water chemistry at approximately 800 sites in North West England, based on data from the Environment Agency regional monitoring programme. Within a GIS framework, the linkages between average water chemistry (pH, sulphate, base cations, nutrients and metals), catchment characteristics (topography, land cover, soil hydrology, base flow index and geology), rainfall, deposition chemistry and geo-spatial information on discharge consents (point sources) are examined. Water quality maps reveal a clear distinction between the uplands and lowlands. Upland waters are acidic and have low concentrations of base cations, explained by background geological sources and land cover. Localised high concentrations of metals occur in areas of the Cumbrian Fells that are subject to mining effluent inputs. Nutrient concentrations are low in the uplands, with the exception of sites receiving effluent inputs from rural point sources. In the lowlands, both past and present human activities have a major impact on river water chemistry, especially in the urban and industrial heartlands of Greater Manchester, south Lancashire and Merseyside. Over 40% of the sites have average orthophosphate concentrations >0.1 mg-P l⁻¹. Results suggest that the dominant control on orthophosphate concentrations is point source contributions from sewage effluent inputs. Diffuse agricultural sources are also important, although this influence is masked by the impact of point sources. Average nitrate concentrations are linked to the coverage of arable land, although sewage effluent inputs have a significant effect on nitrate concentrations. Metal concentrations in the lowlands are linked to both diffuse and point sources. The study demonstrates that point sources, as well as diffuse sources, need to be considered when targeting measures for the effective reduction of river nutrient concentrations. This issue is clearly important with regard to the European Union Water Framework Directive, eutrophication and river water quality. Copyright 2009 Elsevier B.V. All rights reserved.
Sampling Singular and Aggregate Point Sources of Carbon Dioxide from Space Using OCO-2
NASA Astrophysics Data System (ADS)
Schwandner, F. M.; Gunson, M. R.; Eldering, A.; Miller, C. E.; Nguyen, H.; Osterman, G. B.; Taylor, T.; O'Dell, C.; Carn, S. A.; Kahn, B. H.; Verhulst, K. R.; Crisp, D.; Pieri, D. C.; Linick, J.; Yuen, K.; Sanchez, R. M.; Ashok, M.
2016-12-01
Anthropogenic carbon dioxide (CO2) sources increasingly tip the balance between natural carbon sources and sinks. Space-borne measurements offer opportunities to detect and analyze point source emission signals anywhere on Earth. Singular continuous point source plumes from power plants or volcanoes turbulently mix into their proximal background fields. In contrast, plumes of aggregate point sources such as cities, and transportation or fossil fuel distribution networks, mix into each other and may therefore result in broader and more persistent excess signals of total column averaged CO2 (XCO2). NASA's first satellite dedicated to atmospheric CO2 observation, the Orbiting Carbon Observatory-2 (OCO-2), launched in July 2014 and now leads the afternoon constellation of satellites (A-Train). While continuously collecting measurements in eight footprints across a narrow (<10 km) swath, it occasionally cross-cuts coincident emission plumes. For singular point sources like volcanoes and coal-fired power plants, we have developed OCO-2 data discovery tools and a proxy detection method for plumes using SO2-sensitive TIR imaging data (ASTER). This approach offers a path toward automating plume detections with subsequent matching and mining of OCO-2 data. We found several distinct singular-source CO2 signals. For aggregate point sources, we investigated whether OCO-2's multi-sounding swath observing geometry can reveal intra-urban spatial emission structures in the observed variability of XCO2 data. OCO-2 data demonstrate that we can detect localized excess XCO2 signals of 2 to 6 ppm against suburban and rural backgrounds. Compared to single-shot GOSAT soundings, which detected urban/rural XCO2 differences in megacities (Kort et al., 2012), the OCO-2 swath geometry opens the path to future capabilities enabling urban characterization of greenhouse gases using hundreds of soundings over a city at each satellite overpass.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kobulnicky, Henry A.; Alexander, Michael J.; Babler, Brian L.
We characterize the completeness of point source lists from Spitzer Space Telescope surveys in the four Infrared Array Camera (IRAC) bandpasses, emphasizing the Galactic Legacy Infrared Mid-Plane Survey Extraordinaire (GLIMPSE) programs (GLIMPSE I, II, 3D, 360; Deep GLIMPSE) and their resulting point source Catalogs and Archives. The analysis separately addresses effects of incompleteness resulting from high diffuse background emission and incompleteness resulting from point source confusion (i.e., crowding). An artificial star addition and extraction analysis demonstrates that completeness is strongly dependent on local background brightness and structure, with high-surface-brightness regions suffering up to five magnitudes of reduced sensitivity to point sources. This effect is most pronounced at the IRAC 5.8 and 8.0 μm bands, where UV-excited polycyclic aromatic hydrocarbon emission produces bright, complex structures (photodissociation regions). With regard to diffuse background effects, we provide the completeness as a function of stellar magnitude and diffuse background level in graphical and tabular formats. These data are suitable for estimating completeness in the low-source-density limit in any of the four IRAC bands in GLIMPSE Catalogs and Archives and some other Spitzer IRAC programs that employ similar observational strategies and are processed by the GLIMPSE pipeline. By performing the same analysis on smoothed images, we show that the point source incompleteness is primarily a consequence of structure in the diffuse background emission rather than photon noise. With regard to source confusion in the high-source-density regions of the Galactic Plane, we provide figures illustrating the 90% completeness levels as a function of point source density at each band. We caution that completeness of the GLIMPSE 360/Deep GLIMPSE Catalogs is suppressed relative to the corresponding Archives as a consequence of rejecting stars that lie in the point-spread function wings of saturated sources. This effect is minor in regions of low saturated star density, such as toward the Outer Galaxy; this effect is significant along sightlines having a high density of saturated sources, especially for Deep GLIMPSE and other programs observing closer to the Galactic center using 12 s or longer exposure times.
Thorenz, Ute R; Kundel, Michael; Müller, Lars; Hoffmann, Thorsten
2012-11-01
In this work, we describe a simple diffusion capillary device for the generation of various organic test gases. Using a set of basic equations, the output rate of the test gas devices can easily be predicted based only on the molecular formula and the boiling point of the compounds of interest. Since these parameters are easily accessible for a large number of potential analytes, even for compounds that are not typically listed in physico-chemical handbooks or internet databases, adjusting the test gas source to the concentration range required for an individual analytical application is straightforward. The agreement between predicted and measured values is shown to hold for different groups of chemicals, such as halocarbons, alkanes, alkenes, and aromatic compounds, and for different dimensions of the diffusion capillaries. The limits of this predictability are explored; the output rates are underpredicted when very thin capillaries are used, and it is demonstrated that pressure variations are responsible for the observed deviation. To overcome the influence of pressure variations and, at the same time, to establish a suitable test gas source for highly volatile compounds, the usability of permeation sources is also explored, for example for the generation of molecular bromine test gases.
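For context, the classical steady-state expression for the output rate of a diffusion capillary — the generic form such predictions are usually built on; the paper's own working equations, which start from the molecular formula and boiling point, may differ in detail — is:

```latex
Q \;=\; \frac{D\,M\,P\,A}{R\,T\,L}\,\ln\!\left(\frac{P}{P-p_v}\right)
```

where Q is the mass output rate, D the diffusion coefficient of the analyte vapour in the carrier gas, M the molar mass, P the total pressure, p_v the analyte vapour pressure at temperature T, A and L the cross-sectional area and length of the capillary, and R the gas constant.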
Restoration of the ASCA Source Position Accuracy
NASA Astrophysics Data System (ADS)
Gotthelf, E. V.; Ueda, Y.; Fujimoto, R.; Kii, T.; Yamaoka, K.
2000-11-01
We present a calibration of the absolute pointing accuracy of the Advanced Satellite for Cosmology and Astrophysics (ASCA) which allows us to compensate for a large error (up to 1') in the derived source coordinates. We parameterize a temperature-dependent deviation of the attitude solution which is responsible for this error. By analyzing ASCA coordinates of 100 bright active galactic nuclei, we show that it is possible to reduce the uncertainty in the sky position for any given observation by a factor of 4. The revised 90% error circle radius is then 12", consistent with preflight specifications, effectively restoring the full ASCA pointing accuracy. Herein, we derive an algorithm which compensates for this attitude error and present an internet-based table that can be used to correct post facto the coordinates of all ASCA observations. While the above error circle is strictly applicable to data taken with the on-board Solid-state Imaging Spectrometers (SISs), similar coordinate corrections are derived for data obtained with the Gas Imaging Spectrometers (GISs), which, however, have additional instrumental uncertainties. The 90% error circle radius for the central 20' diameter of the GIS is 24". The large reduction in the error circle area for the two instruments offers the opportunity to greatly enhance the search for X-ray counterparts at other wavelengths. This has important implications for current and future ASCA source catalogs and surveys.
Magnetic Topology of Coronal Hole Linkages
NASA Technical Reports Server (NTRS)
Titov, V. S.; Mikic, Z.; Linker, J. A.; Lionello, R.; Antiochos, S. K.
2010-01-01
In recent work, Antiochos and coworkers argued that the boundary between the open and closed field regions on the Sun can be extremely complex with narrow corridors of open flux connecting seemingly disconnected coronal holes from the main polar holes, and that these corridors may be the sources of the slow solar wind. We examine, in detail, the topology of such magnetic configurations using an analytical source surface model that allows for analysis of the field with arbitrary resolution. Our analysis reveals three important new results: First, a coronal hole boundary can join stably to the separatrix boundary of a parasitic polarity region. Second, a single parasitic polarity region can produce multiple null points in the corona and, more important, separator lines connecting these points. Such topologies are extremely favorable for magnetic reconnection, because it can now occur over the entire length of the separators rather than being confined to a small region around the nulls. Finally, the coronal holes are not connected by an open-field corridor of finite width, but instead are linked by a singular line that coincides with the separatrix footprint of the parasitic polarity. We investigate how the topological features described above evolve in response to motion of the parasitic polarity region. The implications of our results for the sources of the slow solar wind and for coronal and heliospheric observations are discussed.
System for controlling the operating temperature of a fuel cell
Fabis, Thomas R.; Makiel, Joseph M.; Veyo, Stephen E.
2006-06-06
A method and system are provided for improved control of the operating temperature of a fuel cell (32) utilizing an improved temperature control system (30) that varies the flow rate of inlet air entering the fuel cell (32) in response to changes in the operating temperature of the fuel cell (32). Consistent with the invention, an improved temperature control system (30) is provided that includes a controller (37) that receives an indication of the temperature of the inlet air from a temperature sensor (39) and varies the heat output by at least one heat source (34, 36) to maintain the temperature of the inlet air at a set-point T_inset. The controller (37) also receives an indication of the operating temperature of the fuel cell (32) and varies the flow output by an adjustable air mover (33), within a predetermined range around a set-point F_set, in order to maintain the operating temperature of the fuel cell (32) at a set-point T_opset.
Generalized cable equation model for myelinated nerve fiber.
Einziger, Pinchas D; Livshitz, Leonid M; Mizrahi, Joseph
2005-10-01
Herein, the well-known cable equation for the nonmyelinated axon model is extended analytically to a myelinated axon formulation. The myelinated membrane conductivity is represented via a Fourier series expansion. The classical cable equation is thereby modified into a linear second-order ordinary differential equation with periodic coefficients, known as Hill's equation. The general internal source response, expressed via repeated convolutions, converges uniformly provided that the entire periodic membrane is passive. The solution can be interpreted as an extended source response in an equivalent nonmyelinated axon (i.e., the response is governed by the classical cable equation). The extended source consists of the original source and a novel activation function, replacing the periodic membrane in the myelinated axon model. Hill's equation is explicitly integrated for the specific choice of a piecewise constant membrane conductivity profile, resulting in an explicit closed-form expression for the transmembrane potential in terms of trigonometric functions. The Floquet modes are recognized as the nerve fiber activation modes, which are conventionally associated with the nonlinear Hodgkin-Huxley formulation. They can also be incorporated in our linear model, provided that the point-wise passivity constraint on the periodic membrane is properly modified. Indeed, the modified condition, which enforces the passivity constraint on the average conductivity only, leads for the first time to the inclusion of the nerve fiber activation modes in our novel model. The validity of the generalized transmission-line and cable equation models for a myelinated nerve fiber is verified through a rigorous Green's function formulation and numerical simulations of the transmembrane potential induced in a three-dimensional myelinated cylindrical cell. It is shown that the dominant pole contribution of the exact modal expansion is the transmembrane potential solution of our generalized model.
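In schematic form (one common sign convention, not necessarily the authors' exact notation), the uniform-membrane cable equation and the Hill-equation form obtained when the membrane conductivity becomes periodic can be written as:

```latex
\lambda^{2}\,\frac{\partial^{2} V}{\partial x^{2}}
  \;-\; \tau_{m}\,\frac{\partial V}{\partial t} \;-\; V
  \;=\; -\,r_{m}\, i_{s}(x,t),
\qquad
\frac{d^{2} V}{d x^{2}} \;-\; r_{a}\, g(x)\, V \;=\; -\,r_{a}\, i_{s}(x),
\quad g(x)=g(x+\Lambda)
```

Here λ and τ_m are the length and time constants, r_m and r_a the membrane and axial resistances per unit length, g(x) the periodic membrane conductivity per unit length (period Λ, the myelination period), and i_s the source current; the second (steady-state) equation becomes Hill's equation once g(x) is expanded in a Fourier series.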
Surface-Water Nutrient Conditions and Sources in the United States Pacific Northwest
Wise, Daniel R; Johnson, Henry M
2011-01-01
The SPAtially Referenced Regressions On Watershed attributes (SPARROW) model was used to perform an assessment of surface-water nutrient conditions and to identify important nutrient sources in watersheds of the Pacific Northwest region of the United States (U.S.) for the year 2002. Our models included variables representing nutrient sources as well as landscape characteristics that affect nutrient delivery to streams. Annual nutrient yields were higher in watersheds on the wetter, west side of the Cascade Range compared to watersheds on the drier, east side. High nutrient enrichment (relative to the U.S. Environmental Protection Agency's recommended nutrient criteria) was estimated in watersheds throughout the region. Forest land was generally the largest source of total nitrogen stream load and geologic material was generally the largest source of total phosphorus stream load generated within the 12,039 modeled watersheds. These results reflected the prevalence of these two natural sources and the low input from other nutrient sources across the region. However, the combined input from agriculture, point sources, and developed land, rather than natural nutrient sources, was responsible for most of the nutrient load discharged from many of the largest watersheds. Our results provided an understanding of the regional patterns in surface-water nutrient conditions and should be useful to environmental managers in future water-quality planning efforts. PMID:22457584
Antennal pointing at a looming object in the cricket Acheta domesticus.
Yamawaki, Yoshifumi; Ishibashi, Wakako
2014-01-01
Antennal pointing responses to approaching objects were observed in the house cricket Acheta domesticus. In response to a ball approaching from the lateral side, crickets oriented the antenna ipsilateral to the ball towards it. In response to a ball approaching from the front, crickets oriented both antennae forward. Response rates of antennal pointing were higher when the ball was approaching from the front than from behind. The antennal angle ipsilateral to the approaching ball was positively correlated with approaching angle of the ball. Obstructing the cricket's sight decreased the response rate of antennal pointing, suggesting that this response was elicited mainly by visual stimuli. Although the response rates of antennal pointing decreased when the object ceased its approach at a great distance from the cricket, antennal pointing appeared to be resistant to habituation and was not substantially affected by the velocity, size and trajectory of an approaching ball. When presented with computer-generated visual stimuli, crickets frequently showed the antennal pointing response to a darkening stimulus as well as looming and linearly-expanding stimuli. Drifting gratings rarely elicited the antennal pointing. These results suggest that luminance change is sufficient to elicit antennal pointing. Copyright © 2013 Elsevier Ltd. All rights reserved.
Pôrto, Angela
2006-01-01
Although the nineteenth century saw numerous attempts to deter the slave trade, it was also the period when Brazil imported the greatest number of slaves in its history. The conditions under which slaves were transported, worked, and lived were largely responsible for their state of health. Yet this topic barely makes an appearance in the field of history, and many disputed points remain to be settled. My research cross-references sources and topics in order to gather data on the hygienic lives of nineteenth-century slaves. By analyzing archival documents from hospitals, notary public offices, and church bodies, iconographic sources, and the medical literature, I have retrieved information that can be used towards writing a history of the healthcare system available to slaves.
Opportunities and Challenges for Natural Products as Novel Antituberculosis Agents.
Farah, Shrouq I; Abdelrahman, Abd Almonem; North, E Jeffrey; Chauhan, Harsh
2016-01-01
Current tuberculosis (TB) treatment suffers from complex dosage regimens, lengthy treatment, and toxicity risks. Many natural products have shown activity against drug-susceptible, drug-resistant, and latent/dormant Mycobacterium tuberculosis, the pathogen responsible for TB infections. Natural sources, including plants, fungi, and bacteria, provide a rich supply of chemically diverse compounds equipped with unique pharmacological, pharmacokinetic, and pharmacodynamic properties. This review focuses on natural products as starting points for the discovery and development of novel anti-TB chemotherapy and classifies them based on their chemical nature. The classes discussed are divided into alkaloids, chalcones, flavonoids, peptides, polyketides, steroids, and terpenes. This review also highlights the importance of collaboration among phytochemistry, medicinal chemistry, and physical chemistry for the development of these natural compounds.
Point kernel calculations of skyshine exposure rates
DOE Office of Scientific and Technical Information (OSTI.GOV)
Roseberry, M.L.; Shultis, J.K.
1982-02-01
A simple point kernel model is presented for the calculation of skyshine exposure rates arising from the atmospheric reflection of gamma radiation produced by a vertically collimated or a shielded point source. This model is shown to be in good agreement with benchmark experimental data from a ⁶⁰Co source for distances out to 700 m.
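The elementary point-kernel building block that such models are constructed from — shown here in generic form with a buildup factor; the paper's skyshine-specific kernel and collimation geometry are more involved — gives the exposure rate at distance r from a point isotropic source as:

```latex
\dot{X}(r) \;\propto\; \frac{S\,E\,(\mu_{en}/\rho)_{\mathrm{air}}}{4\pi r^{2}}\;
  B(\mu r)\, e^{-\mu r}
```

where S is the photon emission rate, E the photon energy, μ the total attenuation coefficient of air, (μ_en/ρ)_air the mass energy-absorption coefficient, and B(μr) the buildup factor; a units-conversion constant is omitted.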
In order to effectively control inputs of contamination to coastal recreational waters, an improved understanding of the impact of both point and non-point sources of urban runoff is needed. In this study, we focused on the effect of non-point source urban runoff on the enterococ...
A deeper look at the X-ray point source population of NGC 4472
NASA Astrophysics Data System (ADS)
Joseph, T. D.; Maccarone, T. J.; Kraft, R. P.; Sivakoff, G. R.
2017-10-01
In this paper we discuss the X-ray point source population of NGC 4472, an elliptical galaxy in the Virgo cluster. We used recent deep Chandra data combined with archival Chandra data to obtain a 380 ks exposure time. We find 238 X-ray point sources within 3.7 arcmin of the galaxy centre, with a completeness flux F_X,0.5-2 keV = 6.3 × 10^-16 erg s^-1 cm^-2. Most of these sources are expected to be low-mass X-ray binaries. We find that, using data from a single galaxy that is both complete and has a large number of objects (~100) below 10^38 erg s^-1, the X-ray luminosity function is well fitted with a single power-law model. By cross-matching our X-ray data with both space-based and ground-based optical data for NGC 4472, we find that 80 of the 238 sources are in globular clusters. We compare the red and blue globular cluster subpopulations and find that red clusters are nearly six times more likely to host an X-ray source than blue clusters. We show that there is evidence that these two subpopulations have significantly different X-ray luminosity distributions. Source catalogues for all X-ray point sources, as well as any corresponding optical data for globular cluster sources, are also presented here.
A Boltzmann constant determination based on Johnson noise thermometry
NASA Astrophysics Data System (ADS)
Flowers-Jacobs, N. E.; Pollarolo, A.; Coakley, K. J.; Fox, A. E.; Rogalla, H.; Tew, W. L.; Benz, S. P.
2017-10-01
A value for the Boltzmann constant was measured electronically using an improved version of the Johnson Noise Thermometry (JNT) system at the National Institute of Standards and Technology (NIST), USA. This system differs from prior ones, including the system used for the 2011 determination at NIST and those used for the 2015 and 2017 determinations at the National Institute of Metrology (NIM), China. As in all three previous determinations, the main contribution to the combined uncertainty is the statistical uncertainty of the noise measurement, which is mitigated by accumulating and integrating many weeks of cross-correlated measured data. The second major uncertainty contribution still results from variations in the frequency response of the ratio of the measured spectral noise of the two noise sources: the sense resistor at the triple point of water and the superconducting quantum voltage noise source. In this paper, we briefly describe the major differences between our JNT system and previous systems, in particular the input circuit and the approach we used to match the frequency responses of the two noise sources. After analyzing and integrating 50 d of accumulated data, we determined a value k = 1.380 642 9(69) × 10^-23 J K^-1, with a relative standard uncertainty of 5.0 × 10^-6 and a relative offset of -4.05 × 10^-6 from the CODATA 2014 recommended value.
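JNT rests on the Nyquist relation for the thermal noise of a resistor. In its idealized low-frequency form (the actual determination is a ratio measurement of the sense-resistor noise against a quantum-accurate synthesized noise waveform, with careful matching of the two frequency responses):

```latex
\langle V^{2} \rangle \;=\; 4\,k\,T\,R\,\Delta f
\quad\Longrightarrow\quad
k \;=\; \frac{S_{V}}{4\,T\,R},
\qquad S_{V} \equiv \frac{\langle V^{2}\rangle}{\Delta f}
```

with T the triple-point-of-water temperature of the sense resistor R and S_V the measured voltage noise power spectral density.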
Electrically-detected ESR in silicon nanostructures inserted in microcavities
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bagraev, Nikolay; Danilovskii, Eduard; Gets, Dmitrii
2014-02-21
We present the first findings from a new electrically-detected electron spin resonance (EDESR) technique, which reveals point defects in ultra-narrow silicon quantum wells (Si-QW) confined by superconducting δ-barriers. This technique allows ESR identification without an external cavity, a high-frequency source, or a recorder: only the magnetoresistance response is measured, with internal GHz Josephson emission arising within the framework of normal-mode coupling (NMC) caused by the microcavities embedded in the Si-QW plane.
NASA Technical Reports Server (NTRS)
Rosenstein, H.; Mcveigh, M. A.; Mollenkof, P. A.
1973-01-01
A mathematical model for a real-time simulation of a tilt rotor aircraft was developed. The mathematical model is used for evaluating aircraft performance and handling qualities. The model is based on an eleven-degree-of-freedom total force representation. The rotor is treated as a point source of forces and moments with appropriate response time lags and actuator dynamics. The aerodynamics of the wing, tail, rotors, landing gear, and fuselage are included.
Rio Grande Valley, Colorado, New Mexico, and Texas
Ellis, Sherman R.; Levings, Gary W.; Carter, Lisa F.; Richey, Steven F.; Radell, Mary Jo
1993-01-01
Two structural settings are found in the study unit: alluvial basins and bedrock basins. The alluvial basins can have through-flowing surface water or be closed basins. The discussion of streamflow and water quality for the surface-water system is based on four river reaches covering the 750 miles of the main stem. The quality of the ground water is affected by both natural processes and human activities, and by nonpoint and point sources. Nonpoint sources for surface water include agriculture, hydromodification, and mining operations; point sources are mainly discharges from wastewater treatment plants. Nonpoint sources for ground water include agriculture and septic tanks and cesspools; point sources include leaking underground storage tanks, unlined or manure-lined holding ponds used for disposal of dairy wastes, landfills, and mining operations.
Spherical-earth Gravity and Magnetic Anomaly Modeling by Gauss-legendre Quadrature Integration
NASA Technical Reports Server (NTRS)
Vonfrese, R. R. B.; Hinze, W. J.; Braile, L. W.; Luca, A. J. (Principal Investigator)
1981-01-01
The anomalous potentials of gravity and magnetic fields, and their spatial derivatives, on a spherical Earth were calculated for an arbitrary body represented by an equivalent point-source distribution of gravity poles or magnetic dipoles. The distribution of equivalent point sources was determined directly from the coordinate limits of the source volume. Variable integration limits for an arbitrarily shaped body are derived from interpolation of points which approximate the body's surface envelope. The versatility of the method is enhanced by the ability to treat physical property variations within the source volume and to consider variable magnetic fields over the source and observation surface. A number of examples verify and illustrate the capabilities of the technique, including preliminary modeling of potential field signatures for Mississippi embayment crustal structure at satellite elevations.
NASA Astrophysics Data System (ADS)
Pikridas, Michael; Sciare, Jean; Vrekoussis, Mihalis; Oikonomou, Konstantina; Merabet, Hamza; Mihalopoulos, Nikos; Yassaa, Nouredine; Savvides, Chrysanthos
2016-04-01
As part of the MISTRALS-ChArMEx (Chemistry-Aerosol Mediterranean Experiment, http://charmex.lsce.ipsl.fr/) and MISTRALS-ENVI-Med "CyAr" (Cyprus Aerosols and gas precursors) programs, a 1-month intensive field campaign was performed in December 2014 at an urban background site in Nicosia (Cyprus) - a typical European city of the Eastern Mediterranean - with the objective of documenting the major (local versus imported) sources responsible for wintertime particulate (PM1) pollution. Several near-real-time analyzers were deployed for that purpose (TEOM 1400, OPC Grimm 1.108, Q-ACSM, Aethalometer AE31), allowing us to investigate in near-real time the major chemical components of submicron aerosols (black carbon, organics, sulphate, nitrate, ammonium). Quality control of the Q-ACSM and Aethalometer datasets was performed through closure studies (using the co-located TEOM / OPC Grimm). Comparisons were also performed with other on-line / off-line measurements made by the local air quality network (DLI) at other locations in Nicosia, with the objective of checking the consistency and representativeness of our observations. Very high levels of black carbon and OA were systematically observed every night (with maximum concentrations around 22:00 local time), pointing to local combustion sources most probably related to domestic heating. Source apportionment of organic aerosols (OA) was performed using the SourceFinder software (SoFi, http://www.psi.ch/acsm-stations/me-2), allowing the distinction between various primary/secondary OA sources and helping us to better characterize the combustion sources responsible for the observed elevated nighttime PM1 levels. Acknowledgements: This campaign has been funded by MISTRALS (ChArMEx and ENVI-Med CyAr programs), CNRS-INSU, CEA, CyI, DLI, CDER and ECPL.
Using EMAP data from the NE Wadeable Stream Survey and state datasets (CT, ME), assessment tools were developed to predict diffuse NPS effects from watershed development and distinguish these from local impacts (point sources, contaminated sediments). Classification schemes were...
Assessment tools are being developed to predict diffuse NPS effects from watershed development and distinguish these from local impacts (point sources, contaminated sediments). Using EMAP data from the New England Wadeable Stream Survey and two state datasets (CT, ME), we are de...
Brooke, Russell J; Kretzschmar, Mirjam E E; Hackert, Volker; Hoebe, Christian J P A; Teunis, Peter F M; Waller, Lance A
2017-01-01
We develop a novel approach to study an outbreak of Q fever in 2009 in the Netherlands by combining a human dose-response model with geostatistical prediction to relate the probability of infection, and the associated probability of illness, to an effective dose of Coxiella burnetii. The spatial distribution of the 220 notified cases in the at-risk population is translated into a smooth spatial field of dose. Based on these symptomatic cases, the dose-response model predicts a median of 611 asymptomatic infections (95% range: 410, 1,084) for the 220 reported symptomatic cases in the at-risk population; 2.78 (95% range: 1.86, 4.93) asymptomatic infections for each reported case. The attack rates observed during the outbreak are low (the specific range is given as equations in the full-text article). The estimated peak levels of exposure extend to the north-east from the point source, with an increasing proportion of asymptomatic infections further from the source. Our work combines established methodology from model-based geostatistics and dose-response modeling, allowing a novel approach to studying outbreaks. Unobserved infections and the spatially varying effective dose can be predicted using this flexible framework without assuming any underlying spatial structure of the outbreak process. Such predictions are important for targeting interventions during an outbreak, estimating future disease burden, and determining acceptable risk levels.
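A generic single-hit exponential dose-response model of the kind used in such outbreak analyses (the paper's own formulation, which separates infection from illness and couples the model to a geostatistical dose field, is more elaborate) relates the probability of infection to the effective dose d by:

```latex
P_{\mathrm{inf}}(d) \;=\; 1 - e^{-r\,d}
```

where r is the per-organism probability of initiating infection; the probability of illness is then obtained by multiplying by a conditional probability of illness given infection.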
Wade, Mark; Madigan, Sheri; Akbari, Emis; Jenkins, Jennifer M
2015-01-01
At 18 months, children show marked variability in their social-cognitive skill development, and the preponderance of past research has focused on constitutional and contextual factors in explaining this variability. Extending this literature, the current study examined whether cumulative biomedical risk represents another source of variability in social cognition at 18 months. Further, we aimed to determine whether responsive parenting moderated the association between biomedical risk and social cognition. A prospective community birth cohort of 501 families was recruited at the time of the child's birth. Cumulative biomedical risk was measured as a count of 10 prenatal/birth complications. Families were followed up at 18 months, at which point social-cognitive data was collected on children's joint attention, empathy, cooperation, and self-recognition using previously validated tasks. Concurrently, responsive maternal behavior was assessed through observational coding of mother-child interactions. After controlling for covariates (e.g., age, gender, child language, socioeconomic variables), both cumulative biomedical risk and maternal responsivity significantly predicted social cognition at 18 months. Above and beyond these main effects, there was also a significant interaction between biomedical risk and maternal responsivity, such that higher biomedical risk was significantly associated with compromised social cognition at 18 months, but only in children who experienced low levels of responsive parenting. For those receiving comparatively high levels of responsive parenting, there was no apparent effect of biomedical risk on social cognition. This study shows that cumulative biomedical risk may be one source of inter-individual variability in social cognition at 18 months. However, positive postnatal experiences, particularly high levels of responsive parenting, may protect children against the deleterious effects of these risks on social cognition.
Testing the seismology-based landquake monitoring system
NASA Astrophysics Data System (ADS)
Chao, Wei-An
2016-04-01
I have developed a real-time landquake monitoring (RLM) system, which monitors large-scale landquake activity in Taiwan using the real-time seismic network of the Broadband Array in Taiwan for Seismology (BATS). The RLM system applies a grid-based general source inversion (GSI) technique to obtain a preliminary source location and force mechanism. A 2-D virtual source grid covering the island of Taiwan is created with an interval of 0.2° in both latitude and longitude. The depth of each grid point is fixed to the free-surface topography. A database of synthetics is stored on disk; these are obtained using Green's functions computed by the propagator matrix approach for a 1-D average velocity model, at all stations and for each virtual source grid point, for nine elementary source components: six elementary moment tensors and three orthogonal (north, east and vertical) single forces. The RLM system was run offline for events detected in previous studies. An important aspect of the RLM system is the implementation of the GSI approach for different source types (e.g., full moment tensor, double-couple faulting, and explosion source) by a grid search through the 2-D virtual source grid, to automatically identify landquake events based on the improvement in waveform fit and to evaluate the best-fitting solution in the monitoring area. With this approach, not only the force mechanisms but also the event occurrence time and location can be obtained simultaneously, about 6-8 min after an event occurs. To improve the limited accuracy of the GSI-determined location, I further apply a landquake epicenter determination (LED) method that maximizes the coherency of the high-frequency (1-3 Hz) horizontal envelope functions to determine the final source location. With good knowledge of the source location, I perform a landquake force history (LFH) inversion to investigate the source dynamics (e.g., trajectory) for relatively large landquake events. By providing the aforementioned source information in real time, the government and emergency response agencies have sufficient reaction time for rapid assessment of and response to landquake hazards. The RLM system has operated online since 2016.
SXDF-UDS-CANDELS-ALMA 1.5 arcmin2 deep survey
NASA Astrophysics Data System (ADS)
Kohno, Kotaro; Tamura, Yoichi; Yamaguchi, Yuki; Umehata, Hideki; Rujopakarn, Wiphu; Lee, Minju; Motohara, Kentaro; Makiya, Ryu; Izumi, Takuma; Ivison, Rob; Ikarashi, Soh; Tadaki, Ken-ichi; Kodama, Tadayuki; Hatsukade, Bunyo; Yabe, Kiyoto; Hayashi, Masao; Iono, Daisuke; Matsuda, Yuichi; Nakanishi, Kouichiro; Kawabe, Ryohei; Wilson, Grant; Yun, Min S.; Hughes, David; Caputi, Karina; Dunlop, James
2015-08-01
We have conducted 1.1 mm ALMA observations of a contiguous 105″ × 50″ (1.5 arcmin2) window (achieved with a 19-pointing mosaic) in the SXDF-UDS-CANDELS field. We achieved a 5σ sensitivity of 0.28 mJy, giving a flat census of dusty star-forming galaxies with L_IR ~ 6 × 10^11 L⊙ (if T_dust = 40 K) or SFR ~ 100 M⊙ yr^-1 out to z ~ 10, thanks to the negative K-correction at this wavelength. We detect the 5 brightest sources (S/N > 6) and 18 lower-significance sources (5 > S/N > 4; these may include spurious detections) in the field. We find that these discrete sources are responsible for a faint filamentary emission seen in low-resolution (~30″), heavily confused AzTEC 1.1 mm and SPIRE 0.5 mm images. One of the 5 brightest ALMA sources is very dark in deep WFC3 and HAWK-I NIR images as well as in VLA 1.4 GHz images, demonstrating that deep ALMA imaging can unveil a new obscured star-forming galaxy population.
Accuracy of a simplified method for shielded gamma-ray skyshine sources
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bassett, M.S.; Shultis, J.K.
1989-11-01
Rigorous transport or Monte Carlo methods for estimating far-field gamma-ray skyshine doses generally are computationally intensive. Consequently, several simplified techniques such as point-kernel methods and methods based on beam response functions have been proposed. For unshielded skyshine sources, these simplified methods have been shown to be quite accurate from comparisons to benchmark problems and to benchmark experimental results. For shielded sources, the simplified methods typically use exponential attenuation and photon buildup factors to describe the effect of the shield. However, the energy and directional redistribution of photons scattered in the shield is usually ignored, i.e., scattered photons are assumed to emerge from the shield with the same energy and direction as the uncollided photons. The accuracy of this shield treatment is largely unknown due to the paucity of benchmark results for shielded sources. In this paper, the validity of such a shield treatment is assessed by comparison to a composite method, which accurately calculates the energy and angular distribution of photons penetrating the shield.
Oak Ridge Spallation Neutron Source (ORSNS) target station design integration
DOE Office of Scientific and Technical Information (OSTI.GOV)
McManamy, T.; Booth, R.; Cleaves, J.
1996-06-01
The conceptual design for a 1- to 3-MW short-pulse spallation source with a liquid mercury target has been started recently. The design tools and methods being developed to define requirements, integrate the work, and provide early cost guidance will be presented with a summary of the current target station design status. The initial design point was selected with performance and cost estimate projections by a systems code. This code was developed recently using cost estimates from the Brookhaven Pulsed Spallation Neutron Source study and experience from the Advanced Neutron Source Project's conceptual design. It will be updated and improved as the design develops. Performance was characterized by a simplified figure of merit based on a ratio of neutron production to costs. A work breakdown structure was developed, with simplified systems diagrams used to define interfaces and system responsibilities. A risk assessment method was used to identify potential problems, to identify required research and development (R&D), and to aid contingency development. Preliminary 3-D models of the target station are being used to develop remote maintenance concepts and to estimate costs.
Tian, Xing; Poeppel, David; Huber, David E.
2011-01-01
The open-source toolbox “TopoToolbox” is a suite of functions that use sensor topography to calculate psychologically meaningful measures (similarity, magnitude, and timing) from multisensor event-related EEG and MEG data. Using a GUI and data visualization, TopoToolbox can be used to calculate and test the topographic similarity between different conditions (Tian and Huber, 2008). This topographic similarity indicates whether different conditions involve a different distribution of underlying neural sources. Furthermore, the similarity calculation can be applied at different time points to discover when a response pattern emerges (Tian and Poeppel, 2010). Because the topographic patterns are obtained separately for each individual, these patterns can be used to produce reliable measures of response magnitude that can be compared across individuals using conventional statistics (Davelaar et al., submitted; Huber et al., 2008). TopoToolbox can be freely downloaded. It runs under MATLAB (The MathWorks, Inc.) and supports user-defined data structures as well as standard EEG/MEG data import using EEGLAB (Delorme and Makeig, 2004). PMID:21577268
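The kind of topographic-similarity measure described — the projection/angle between two sensor-pattern vectors, evaluated time point by time point — can be sketched in a few lines. This is a schematic analogue in Python with hypothetical data; the toolbox itself is a MATLAB implementation with additional statistics and visualization.

```python
import numpy as np

def topo_similarity(pattern_a, pattern_b):
    """Cosine of the angle between two sensor topographies (length n_sensors).
    1 = identical spatial pattern, 0 = orthogonal patterns, sign-sensitive."""
    a = pattern_a - pattern_a.mean()
    b = pattern_b - pattern_b.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def similarity_time_course(erp_a, erp_b):
    """Apply the similarity measure at every time point of two ERP/ERF arrays
    shaped (n_sensors, n_times), to see when a shared pattern emerges."""
    return np.array([topo_similarity(erp_a[:, t], erp_b[:, t])
                     for t in range(erp_a.shape[1])])

# Hypothetical example with random data: 64 sensors, 200 time samples
rng = np.random.default_rng(0)
erp1 = rng.standard_normal((64, 200))
erp2 = 0.8 * erp1 + 0.2 * rng.standard_normal((64, 200))   # similar spatial pattern
print(similarity_time_course(erp1, erp2).mean())
```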
MANAGING MICROBIAL CONTAMINATION IN URBAN WATERSHEDS
This paper presents different approaches for controlling pathogen contamination in urban watersheds for contamination resulting from point and diffuse sources. Point sources of pathogens can be treated by a disinfection technology of known effectiveness, and a desired reduction ...
Suo, An-ning; Wang, Tian-ming; Wang, Hui; Yu, Bo; Ge, Jian-ping
2006-12-01
Non-point source pollution is one of the main modes of pollution affecting the Earth's surface environment. Focusing on soil and water loss (a typical non-point source pollution problem) on the Loess Plateau in China, this paper applies a landscape pattern evaluation method to twelve watersheds of the Jinghe River Basin on the Loess Plateau, using the location-weighted landscape contrast index (LCI) and the landscape slope index (LSI). The results showed that the LSI of farmland, low-density grassland and forest land, as well as the LCI, responded significantly to the soil erosion modulus and responded to the depth of runoff, while the relationships between these landscape indices and the runoff variation and erosion variation indices were not statistically significant. This suggests that LSI and LCI are good indicators of soil and water loss and thus have great potential in non-point source pollution risk evaluation.
Spherical-earth gravity and magnetic anomaly modeling by Gauss-Legendre quadrature integration
NASA Technical Reports Server (NTRS)
Von Frese, R. R. B.; Hinze, W. J.; Braile, L. W.; Luca, A. J.
1981-01-01
Gauss-Legendre quadrature integration is used to calculate the anomalous potential of gravity and magnetic fields and their spatial derivatives on a spherical earth. The procedure involves representation of the anomalous source as a distribution of equivalent point gravity poles or point magnetic dipoles. The distribution of equivalent point sources is determined directly from the volume limits of the anomalous body. The variable limits of integration for an arbitrarily shaped body are obtained from interpolations performed on a set of body points which approximate the body's surface envelope. The versatility of the method is shown by its ability to treat physical property variations within the source volume as well as variable magnetic fields over the source and observation surface. Examples are provided which illustrate the capabilities of the technique, including a preliminary modeling of potential field signatures for the Mississippi embayment crustal structure at 450 km.
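A minimal sketch of the underlying idea — replacing a volume integral with a Gauss-Legendre sum of equivalent point sources — is shown below for the vertical gravity effect of a homogeneous rectangular prism. Cartesian coordinates are used here for brevity and the example geometry is hypothetical; the paper works in spherical coordinates, handles arbitrary body envelopes, and also treats magnetic dipoles.

```python
import numpy as np

G_CONST = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def prism_gz_glq(obs, bounds, density, n=8):
    """Vertical gravity (m/s^2) of a homogeneous rectangular prism at point `obs`,
    computed by 3-D Gauss-Legendre quadrature over equivalent point masses.
    obs: (x, y, z); bounds: ((x1, x2), (y1, y2), (z1, z2)); z positive downward."""
    nodes, weights = np.polynomial.legendre.leggauss(n)
    (x1, x2), (y1, y2), (z1, z2) = bounds
    # Map Gauss nodes from [-1, 1] onto each coordinate interval
    xs = 0.5 * (x2 - x1) * nodes + 0.5 * (x2 + x1)
    ys = 0.5 * (y2 - y1) * nodes + 0.5 * (y2 + y1)
    zs = 0.5 * (z2 - z1) * nodes + 0.5 * (z2 + z1)
    jac = 0.125 * (x2 - x1) * (y2 - y1) * (z2 - z1)   # Jacobian of the mapping
    gz = 0.0
    for wi, xi in zip(weights, xs):
        for wj, yj in zip(weights, ys):
            for wk, zk in zip(weights, zs):
                dx, dy, dz = obs[0] - xi, obs[1] - yj, zk - obs[2]
                r3 = (dx * dx + dy * dy + dz * dz) ** 1.5
                # each node acts as an equivalent point mass of weight wi*wj*wk*jac*density
                gz += G_CONST * density * wi * wj * wk * jac * dz / r3
    return gz

# Example: 1 km cube, density contrast 500 kg/m^3, observed 1 km above its top face
print(prism_gz_glq((0.0, 0.0, -1000.0), ((-500, 500), (-500, 500), (0, 1000)), 500.0))
```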
Inland area contingency plan and maps for Pennsylvania (on CD-ROM). Data file
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
1996-12-01
EPA Region III has assembled on this CD a multitude of environmental data, in both visual and textual formats. While targeted for Facility Response Planning under the Oil Pollution Act of 1990, this information will prove helpful to anyone in the environmental arena. Specifically, the CD will aid contingency planning and emergency response personnel. Combining innovative GIS technology with EPA's state-specific data allows you to display maps, find and identify map features, look at tabular information about map features, and print out maps. The CD was designed to be easy to use and incorporates example maps as well as help sections describing the use of the environmental data on the CD, and introduces you to the IACP Viewer and its capabilities. These help features will make it easy for you to conduct analysis, produce maps, and browse the IACP Plan. The IACP data are included in two formats: shapefiles, which can be viewed with the IACP Viewer or ESRI's ArcView software (Version 2.1 or higher), and ARC/INFO export files, which can be imported into ARC/INFO or converted to other GIS data formats. Point Data Sources: Sensitive Areas, Surface Drinking Water Intakes, Groundwater Intakes, Groundwater Supply Facilities, NPL (National Priority List) Sites, FRP (Facility Response Plan) Facilities, NPDES (National Pollutant Discharge Elimination System) Facilities, Hospitals, RCRA (Resource Conservation and Recovery Act) Sites, TRI (Toxic Release Inventory) Sites, CERCLA (Comprehensive Environmental Response, Compensation, and Liability Act) Sites. Line Data Sources: TIGER Roads, TIGER Railroads, TIGER Hydrography, Pipelines. Polygon Data Sources: State Boundaries, County Boundaries, Watershed Boundaries (8-digit HUC), TIGER Hydrography, Public Lands, Populated Places, IACP Boundaries, Coast Guard Boundaries, Forest Types, US Congressional Districts, One-half Mile Buffer of Surface Drinking Water Intakes.
Inland area contingency plan and maps for Virginia (on CD-ROM). Data file
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
1996-12-01
EPA Region III has assembled on this CD a multitude of environmental data, in both visual and textual formats. While targeted for Facility Response Planning under the Oil Pollution Act of 1990, this information will prove helpful to anyone in the environmental arena. Specifically, the CD will aid contingency planning and emergency response personnel. Combining innovative GIS technology with EPA's state-specific data allows you to display maps, find and identify map features, look at tabular information about map features, and print out maps. The CD was designed to be easy to use and incorporates example maps as well as help sections describing the use of the environmental data on the CD, and introduces you to the IACP Viewer and its capabilities. These help features will make it easy for you to conduct analysis, produce maps, and browse the IACP Plan. The IACP data are included in two formats: shapefiles, which can be viewed with the IACP Viewer or ESRI's ArcView software (Version 2.1 or higher), and ARC/INFO export files, which can be imported into ARC/INFO or converted to other GIS data formats. Point Data Sources: Sensitive Areas, Surface Drinking Water Intakes, Groundwater Intakes, Groundwater Supply Facilities, NPL (National Priority List) Sites, FRP (Facility Response Plan) Facilities, NPDES (National Pollutant Discharge Elimination System) Facilities, Hospitals, RCRA (Resource Conservation and Recovery Act) Sites, TRI (Toxic Release Inventory) Sites, CERCLA (Comprehensive Environmental Response, Compensation, and Liability Act) Sites. Line Data Sources: TIGER Roads, TIGER Railroads, TIGER Hydrography, Pipelines. Polygon Data Sources: State Boundaries, County Boundaries, Watershed Boundaries (8-digit HUC), TIGER Hydrography, Public Lands, Populated Places, IACP Boundaries, Coast Guard Boundaries, Forest Types, US Congressional Districts, One-half Mile Buffer of Surface Drinking Water Intakes.
Determination of acoustical transfer functions using an impulse method
NASA Astrophysics Data System (ADS)
MacPherson, J.
1985-02-01
The transfer function of a system may be defined as the relationship of the output response to the input of the system. Although recent advances in digital processing systems have enabled impulse transfer functions to be determined by computation of the Fast Fourier Transform, little work has been done to apply these techniques to room acoustics. Acoustical transfer functions have been determined for auditoria using an impulse method. The technique is based on the computation of the Fast Fourier Transform (FFT) of a non-ideal impulsive source, both at the source and at the receiver point. The impulse transfer function (ITF) is obtained by dividing the FFT at the receiver position by the FFT of the source. This quantity is presented both as linear-frequency-scale plots and as synthesized one-third-octave-band data. The technique enables a considerable quantity of data to be obtained from a small number of impulsive signals recorded in the field, thereby minimizing the time and effort required on site. As the characteristics of the source are taken into account in the calculation, the choice of impulsive source is non-critical. The digital analysis equipment required for the analysis is readily available commercially.
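A minimal sketch of the FFT-division step described is shown below. The signal arrays and sampling rate are hypothetical, and a small regularization term is added to avoid dividing by near-empty spectral bins, a detail the abstract does not discuss.

```python
import numpy as np

def impulse_transfer_function(source_sig, receiver_sig, fs, eps=1e-12):
    """Estimate the transfer function between a recorded impulsive source signal
    and the signal recorded at a receiver position, by spectral division.

    source_sig, receiver_sig : 1-D arrays sampled at fs (Hz), same length
    Returns (frequencies, complex transfer function)."""
    n = len(source_sig)
    S = np.fft.rfft(source_sig)
    R = np.fft.rfft(receiver_sig)
    # Regularized division: H = R * conj(S) / (|S|^2 + eps)
    H = R * np.conj(S) / (np.abs(S) ** 2 + eps)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    return freqs, H

# Hypothetical example: the "room" is a pure delay plus attenuation
fs = 8000
t = np.arange(4096) / fs
source = np.exp(-2000 * t) * np.sin(2 * np.pi * 1000 * t)   # crude impulsive source
receiver = 0.5 * np.roll(source, 40)                         # delayed, attenuated copy
freqs, H = impulse_transfer_function(source, receiver, fs)
print(np.round(np.abs(H[1:20]).mean(), 3))                   # magnitude close to 0.5
```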
Remotely measuring populations during a crisis by overlaying two data sources.
Bharti, Nita; Lu, Xin; Bengtsson, Linus; Wetter, Erik; Tatem, Andrew J
2015-03-01
Societal instability and crises can cause rapid, large-scale movements. These movements are poorly understood and difficult to measure but strongly impact health. Data on these movements are important for planning response efforts. We retrospectively analyzed movement patterns surrounding a 2010 humanitarian crisis caused by internal political conflict in Côte d'Ivoire using two different methods. We used two remote measures, nighttime lights satellite imagery and anonymized mobile phone call detail records, to assess average population sizes as well as dynamic population changes. These data sources detect movements across different spatial and temporal scales. The two data sources showed strong agreement in average measures of population sizes. Because the spatiotemporal resolution of the data sources differed, we were able to obtain measurements on long- and short-term dynamic elements of populations at different points throughout the crisis. Using complementary, remote data sources to measure movement shows promise for future use in humanitarian crises. We conclude with challenges of remotely measuring movement and provide suggestions for future research and methodological developments. © The Author 2015. Published by Oxford University Press on behalf of Royal Society of Tropical Medicine and Hygiene.
Nitrate concentrations under irrigated agriculture
Zaporozec, A.
1983-01-01
In recent years, considerable interest has been expressed in the nitrate content of water supplies. The most notable toxic effect of nitrate is infant methemoglobinemia. The risk of this disease increases significantly at nitrate-nitrogen levels exceeding 10 mg/l. For this reason, this concentration has been established as a limit for drinking water in many countries. In natural waters, nitrate is a minor ionic constituent and seldom accounts for more than a few percent of the total anions. However, nitrate in a significant concentration may occur in the vicinity of some point sources such as septic tanks, manure pits, and waste-disposal sites. Non-point sources contributing to groundwater pollution are numerous and a majority of them are related to agricultural activities. The largest single anthropogenic input of nitrate into the groundwater is fertilizer. Even though it has not been proven that nitrogen fertilizers are responsible for much of nitrate pollution, they are generally recognized as the main threat to groundwater quality, especially when inefficiently applied to irrigated fields on sandy soils. The biggest challenge facing today's agriculture is to maintain the balance between the enhancement of crop productivity and the risk of groundwater pollution. © 1982 Springer-Verlag New York Inc.
An Integrated Chemical Environment to Support 21st-Century Toxicology.
Bell, Shannon M; Phillips, Jason; Sedykh, Alexander; Tandon, Arpit; Sprankle, Catherine; Morefield, Stephen Q; Shapiro, Andy; Allen, David; Shah, Ruchir; Maull, Elizabeth A; Casey, Warren M; Kleinstreuer, Nicole C
2017-05-25
SUMMARY: Access to high-quality reference data is essential for the development, validation, and implementation of in vitro and in silico approaches that reduce and replace the use of animals in toxicity testing. Currently, these data must often be pooled from a variety of disparate sources to efficiently link a set of assay responses and model predictions to an outcome or hazard classification. To provide a central access point for these purposes, the National Toxicology Program Interagency Center for the Evaluation of Alternative Toxicological Methods developed the Integrated Chemical Environment (ICE) web resource. The ICE data integrator allows users to retrieve and combine data sets and to develop hypotheses through data exploration. Open-source computational workflows and models will be available for download and application to local data. ICE currently includes curated in vivo test data, reference chemical information, in vitro assay data (including Tox21™/ToxCast™ high-throughput screening data), and in silico model predictions. Users can query these data collections focusing on end points of interest such as acute systemic toxicity, endocrine disruption, skin sensitization, and many others. ICE is publicly accessible at https://ice.ntp.niehs.nih.gov. https://doi.org/10.1289/EHP1759.
A programmable metasurface with dynamic polarization, scattering and focusing control
NASA Astrophysics Data System (ADS)
Yang, Huanhuan; Cao, Xiangyu; Yang, Fan; Gao, Jun; Xu, Shenheng; Li, Maokun; Chen, Xibi; Zhao, Yi; Zheng, Yuejun; Li, Sijia
2016-10-01
Diverse electromagnetic (EM) responses of a programmable metasurface with a relatively large scale have been investigated, where multiple functionalities are obtained on the same surface. The unit cell in the metasurface is integrated with one PIN diode, and thus a binary coded phase is realized for a single polarization. Exploiting this anisotropic characteristic, reconfigurable polarization conversion is presented first. Then the dynamic scattering performance for two kinds of sources, i.e. a plane wave and a point source, is carefully elaborated. To tailor the scattering properties, genetic algorithm, normally based on binary coding, is coupled with the scattering pattern analysis to optimize the coding matrix. Besides, inverse fast Fourier transform (IFFT) technique is also introduced to expedite the optimization process of a large metasurface. Since the coding control of each unit cell allows a local and direct modulation of EM wave, various EM phenomena including anomalous reflection, diffusion, beam steering and beam forming are successfully demonstrated by both simulations and experiments. It is worthwhile to point out that a real-time switch among these functionalities is also achieved by using a field-programmable gate array (FPGA). All the results suggest that the proposed programmable metasurface has great potentials for future applications.
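As a rough illustration of the optimization loop described above, the sketch below evaluates the scattering of a 1-bit coding matrix with a zero-padded 2-D FFT (an array-factor approximation standing in for the full scattering pattern analysis) and applies a simple genetic-algorithm step that mutates the coding to suppress the peak scattered lobe. The lattice size, phase states, population settings, and fitness definition are assumptions for illustration only:

```python
import numpy as np
rng = np.random.default_rng(0)

N = 16            # assumed 16 x 16 coding lattice
POP, GEN = 30, 200

def pattern(code):
    # 1-bit coding -> element phases of 0 or pi; array factor via zero-padded 2-D FFT
    field = np.exp(1j * np.pi * code)
    return np.abs(np.fft.fftshift(np.fft.fft2(field, s=(128, 128))))

def fitness(code):
    # diffusion-style objective: minimize the peak scattered lobe relative to the total
    p = pattern(code)
    return -p.max() / p.sum()

pop = rng.integers(0, 2, size=(POP, N, N))
for _ in range(GEN):
    scores = np.array([fitness(c) for c in pop])
    parents = pop[np.argsort(scores)[-POP // 2:]]                  # keep the best half
    children = parents[rng.integers(0, len(parents), POP - len(parents))].copy()
    flips = rng.random(children.shape) < 0.02                      # bit-flip mutation
    children[flips] ^= 1
    pop = np.concatenate([parents, children])

best = pop[np.argmax([fitness(c) for c in pop])]
```

In a real design the FFT step would be replaced (or calibrated) by the measured unit-cell responses, which is exactly where the IFFT acceleration mentioned in the abstract pays off for large apertures.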
Enhanced gamma ray sensitivity in bismuth triiodide sensors through volumetric defect control
Johns, Paul M.; Baciak, James E.; Nino, Juan C.
2016-09-02
Some of the more attractive semiconducting compounds for ambient-temperature radiation detector applications are impacted by low charge-collection efficiency due to the presence of point and volumetric defects. This has been particularly true in the case of BiI3, which features very attractive properties (density, atomic number, band gap, etc.) for serving as a gamma-ray detector, but has yet to demonstrate its full potential. Here, we show that by applying growth techniques tailored to reduce defects, the spectral performance of this promising semiconductor can be realized. Gamma-ray spectra from >100 keV source emissions are now obtained from high-quality Sb:BiI3 bulk crystals with limited concentrations of defects (point and extended). The spectra acquired in these high-quality crystals feature photopeaks with a resolution of 2.2% at 662 keV. Infrared microscopy is used to compare the local microstructure between radiation-sensitive and non-responsive crystals. Our work demonstrates that BiI3 can be prepared in melt-grown, detector-grade samples of superior quality and can acquire spectra from a variety of gamma-ray sources.
NPDES (National Pollutant Discharge Elimination System) Minor Dischargers
As authorized by the Clean Water Act, the National Pollutant Discharge Elimination System (NPDES) permit program controls water pollution by regulating point sources that discharge pollutants into waters of the United States. The NPDES permit program regulates direct discharges from municipal and industrial wastewater treatment facilities that discharge directly into surface waters. The NPDES permit program is part of the Permit Compliance System (PCS), which issues, records, tracks, and regulates point source discharge facilities. Individual homes that are connected to a municipal system, use a septic system, or do not have a surface discharge do not need an NPDES permit. Facilities in PCS are identified as either major or minor. Within the major/minor classification, facilities are grouped into municipals or non-municipals. In many cases, non-municipals are industrial facilities. This data layer contains minor dischargers. Major municipal dischargers include all facilities with design flows of greater than one million gallons per day; minor dischargers are less than one million gallons per day. Essentially, a minor discharger does not meet the discharge criteria for a major. Since its introduction in 1972, the NPDES permit program has been responsible for significant improvements to our Nation's water quality.
Point focusing using loudspeaker arrays from the perspective of optimal beamforming.
Bai, Mingsian R; Hsieh, Yu-Hao
2015-06-01
Sound focusing aims to create a concentrated acoustic field in the region surrounded by a loudspeaker array. This problem has previously been tackled via the Helmholtz integral approach, brightness control, acoustic contrast control, etc. In this paper, the same problem is revisited from the perspective of beamforming. A source array model is reformulated in terms of the steering matrix between the sources and the field points, which lends itself to the use of beamforming algorithms such as minimum variance distortionless response (MVDR) and linearly constrained minimum variance (LCMV), originally intended for sensor arrays. The beamforming methods are compared with the conventional methods in terms of beam pattern, directional index, and control effort. Objective tests are conducted to assess the audio quality by using perceptual evaluation of audio quality (PEAQ). Experiments on the produced sound field and listening tests are conducted in a listening room, with results processed using analysis of variance and regression analysis. In contrast to the conventional energy-based methods, the results show that the proposed methods are phase-sensitive, in light of the distortionless constraint in formulating the array filters, which helps enhance audio quality and focusing performance.
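A minimal sketch of the beamforming viewpoint, assuming free-field monopole transfer functions, a circular 16-element array, and a single focal point: MVDR-style weights minimize the mean-square pressure over a grid of control points subject to a distortionless constraint at the focus. The geometry, frequency, and regularization below are illustrative assumptions, not the paper's experimental setup:

```python
import numpy as np

c, f = 343.0, 1000.0              # speed of sound [m/s], frequency [Hz]
k = 2 * np.pi * f / c

def green(src, pts):
    """Free-field monopole transfer functions from one source to the field points."""
    r = np.linalg.norm(pts - src, axis=1)
    return np.exp(-1j * k * r) / (4 * np.pi * r)

# Circular array of 16 loudspeakers, radius 1.5 m (assumed geometry)
M = 16
ang = 2 * np.pi * np.arange(M) / M
sources = 1.5 * np.c_[np.cos(ang), np.sin(ang)]

# Control points on a grid inside the array; the focus is one interior point
grid = np.array([[x, y] for x in np.linspace(-1, 1, 21)
                         for y in np.linspace(-1, 1, 21)])
focus = np.array([0.3, 0.0])

G = np.stack([green(s, grid) for s in sources], axis=1)        # (n_points, M) steering matrix
d = np.array([green(s, focus[None, :])[0] for s in sources])   # steering vector to the focus

R = G.conj().T @ G / len(grid) + 1e-6 * np.eye(M)              # spatial correlation + loading
w = np.linalg.solve(R, d.conj())
w /= d @ w                                                     # distortionless at the focus

p = G @ w        # reproduced pressure at the control points; the focus should dominate
```

The diagonal loading plays the role of a control-effort constraint; increasing it trades focusing sharpness for lower drive levels, which is the same compromise the abstract evaluates via beam pattern and control effort.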
Young and Old X-ray Binary and IXO Populations in Spiral and Elliptical Galaxies
NASA Astrophysics Data System (ADS)
Colbert, E.; Heckman, T.; Ptak, A.; Strickland, D.; Weaver, K.
2003-03-01
We have analyzed Chandra ACIS observations of 32 nearby spiral and elliptical galaxies and present the results for 1441 X-ray point sources, which are presumed to be mostly X-ray binaries (XRBs) and Intermediate-luminosity X-ray Objects (IXOs, a.k.a. ULXs). The X-ray luminosity functions (XLFs) of the point sources show that the slopes of the elliptical galaxy XLFs are significantly steeper than those of the spiral galaxy XLFs, indicating grossly different types of point sources, or different stages in their evolution. Since the spiral galaxy XLF is so shallow, the most luminous point sources (usually the IXOs) dominate the total X-ray point source luminosity LXP. We show that the galaxy total B-band and K-band light (proxies for the stellar mass) are well correlated with LXP for both spirals and ellipticals, but the FIR and UV emission is only correlated for the spirals. We deconvolve LXP into two components, one that is proportional to the galaxy stellar mass (pop II), and another that is proportional to the galaxy SFR (pop I). We also note that IXOs (and nearly all of the other point sources) in both spirals and ellipticals have X-ray colors that are most consistent with power-law slopes of Gamma ~ 1.5-3.0, which is inconsistent with high-mass XRBs (HMXBs). Thus, HMXBs are not important contributors to LXP. We have also found that IXOs in spiral galaxies may have a slightly harder X-ray spectrum than those in elliptical galaxies. The implications of these findings will be discussed.
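The two-component deconvolution described above amounts to a linear least-squares fit of the form LXP ≈ α·M* + β·SFR across the galaxy sample. The sketch below illustrates that fit on synthetic numbers; the coefficients and scatter are placeholders, not the paper's measurements:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic galaxy sample: stellar mass [1e10 Msun] and star formation rate [Msun/yr]
n = 32
mass = rng.uniform(0.5, 20.0, n)
sfr = rng.uniform(0.0, 10.0, n)

alpha_true, beta_true = 1.0e38, 7.0e38        # illustrative coefficients [erg/s per unit]
lxp = (alpha_true * mass + beta_true * sfr) * rng.lognormal(0.0, 0.2, n)  # with scatter

# Least-squares solution of LXP ~ alpha*M + beta*SFR (no intercept)
A = np.column_stack([mass, sfr])
(alpha, beta), *_ = np.linalg.lstsq(A, lxp, rcond=None)
pop2 = alpha * mass     # component tracking stellar mass (old, pop II sources)
pop1 = beta * sfr       # component tracking star formation (young, pop I sources)
```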
Nonpoint and Point Sources of Nitrogen in Major Watersheds of the United States
Puckett, Larry J.
1994-01-01
Estimates of nonpoint and point sources of nitrogen were made for 107 watersheds located in the U.S. Geological Survey's National Water-Quality Assessment Program study units throughout the conterminous United States. The proportions of nitrogen originating from fertilizer, manure, atmospheric deposition, sewage, and industrial sources were found to vary with climate, hydrologic conditions, land use, population, and physiography. Fertilizer sources of nitrogen are proportionally greater in agricultural areas of the West and the Midwest than in other parts of the Nation. Animal manure contributes large proportions of nitrogen in the South and parts of the Northeast. Atmospheric deposition of nitrogen is generally greatest in areas of greatest precipitation, such as the Northeast. Point sources (sewage and industrial) generally are predominant in watersheds near cities, where they may account for large proportions of the nitrogen in streams. The transport of nitrogen in streams increases as amounts of precipitation and runoff increase and is greatest in the Northeastern United States. Because no single nonpoint nitrogen source is dominant everywhere, approaches to control nitrogen must vary throughout the Nation. Watershed-based approaches to understanding nonpoint and point sources of contamination, as used by the National Water-Quality Assessment Program, will aid water-quality and environmental managers to devise methods to reduce nitrogen pollution.
32 CFR 806.20 - Records of non-U.S. government source.
Code of Federal Regulations, 2010 CFR
2010-07-01
... ADMINISTRATION AIR FORCE FREEDOM OF INFORMATION ACT PROGRAM § 806.20 Records of non-U.S. government source. (a...-mail address of: their own FOIA office point of contact; the Air Force record OPR point of contact, the... 32 National Defense 6 2010-07-01 2010-07-01 false Records of non-U.S. government source. 806.20...
32 CFR 806.20 - Records of non-U.S. government source.
Code of Federal Regulations, 2011 CFR
2011-07-01
... ADMINISTRATION AIR FORCE FREEDOM OF INFORMATION ACT PROGRAM § 806.20 Records of non-U.S. government source. (a...-mail address of: their own FOIA office point of contact; the Air Force record OPR point of contact, the... 32 National Defense 6 2011-07-01 2011-07-01 false Records of non-U.S. government source. 806.20...
32 CFR 806.20 - Records of non-U.S. government source.
Code of Federal Regulations, 2013 CFR
2013-07-01
... ADMINISTRATION AIR FORCE FREEDOM OF INFORMATION ACT PROGRAM § 806.20 Records of non-U.S. government source. (a...-mail address of: their own FOIA office point of contact; the Air Force record OPR point of contact, the... 32 National Defense 6 2013-07-01 2013-07-01 false Records of non-U.S. government source. 806.20...
Code of Federal Regulations, 2010 CFR
2010-07-01
... Flexible Polyurethane Foam Production Pt. 63, Subpt. III, Table 5 Table 5 to Subpart III of Part 63—Compliance Requirements for Molded and Rebond Foam Production Affected Sources Emission point Emission point... Rebond Foam Production Affected Sources 5 Table 5 to Subpart III of Part 63 Protection of Environment...
Code of Federal Regulations, 2011 CFR
2011-07-01
... Flexible Polyurethane Foam Production Pt. 63, Subpt. III, Table 5 Table 5 to Subpart III of Part 63—Compliance Requirements for Molded and Rebond Foam Production Affected Sources Emission point Emission point... Rebond Foam Production Affected Sources 5 Table 5 to Subpart III of Part 63 Protection of Environment...
Code of Federal Regulations, 2012 CFR
2012-07-01
... Flexible Polyurethane Foam Production Pt. 63, Subpt. III, Table 5 Table 5 to Subpart III of Part 63—Compliance Requirements for Molded and Rebond Foam Production Affected Sources Emission point Emission point... Rebond Foam Production Affected Sources 5 Table 5 to Subpart III of Part 63 Protection of Environment...
Kenow, Kevin P.; Ge, Zhongfu; Fara, Luke J.; Houdek, Steven C.; Lubinski, Brian R.
2016-01-01
Avian botulism type E is responsible for extensive waterbird mortality on the Great Lakes, yet the actual site of toxin exposure remains unclear. Beached carcasses are often used to describe the spatial aspects of botulism mortality outbreaks, but lack specificity of offshore toxin source locations. We detail methodology for developing a neural network model used for predicting waterbird carcass motions in response to wind, wave, and current forcing, in lieu of a complex analytical relationship. This empirically trained model uses current velocity, wind velocity, significant wave height, and wave peak period in Lake Michigan simulated by the Great Lakes Coastal Forecasting System. A detailed procedure is further developed to use the model for back-tracing waterbird carcasses found on beaches in various parts of Lake Michigan, which was validated using drift data for radiomarked common loon (Gavia immer) carcasses deployed at a variety of locations in northern Lake Michigan during September and October of 2013. The back-tracing model was further used on 22 non-radiomarked common loon carcasses found along the shoreline of northern Lake Michigan in October and November of 2012. The model-estimated origins of those cases pointed to some common source locations offshore that coincide with concentrations of common loons observed during aerial surveys. The neural network source tracking model provides a promising approach for identifying locations of botulinum neurotoxin type E intoxication and, in turn, contributes to developing an understanding of the dynamics of toxin production and possible trophic transfer pathways.
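A minimal sketch of the kind of empirical surrogate described above: a small feed-forward network maps the forcing variables (current velocity, wind velocity, significant wave height, wave peak period) to a drift velocity, which can then be stepped backwards in time. The training data here are synthetic, generated from an assumed leeway-style rule, so every coefficient is illustrative rather than taken from the study:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Synthetic forcing: current (u, v), wind (u, v), significant wave height, peak period
n = 5000
X = np.column_stack([
    rng.normal(0, 0.2, (n, 2)),    # current velocity [m/s]
    rng.normal(0, 6.0, (n, 2)),    # wind velocity [m/s]
    rng.uniform(0.1, 3.0, n),      # significant wave height [m]
    rng.uniform(2.0, 10.0, n),     # wave peak period [s]
])

# Assumed "truth" for illustration only: drift = current + ~3% wind leeway + noise
y = X[:, :2] + 0.03 * X[:, 2:4] + rng.normal(0, 0.01, (n, 2))

model = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=2000, random_state=0)
model.fit(X[:4000], y[:4000])
print("held-out R^2:", model.score(X[4000:], y[4000:]))

# Back-tracing sketch: step a beached carcass position backwards by one hour
dt = -3600.0                               # negative time step [s]
pos = np.array([0.0, 0.0])                 # beaching location (local metres, illustrative)
forcing = X[0]                             # forcing at that time and place (illustrative)
pos = pos + model.predict(forcing[None, :])[0] * dt
```

In the actual application the forcing at each back-traced position and time would be interpolated from the Great Lakes Coastal Forecasting System fields rather than drawn at random.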
Woodchip bioreactors effectively treat aquaculture effluent
USDA-ARS?s Scientific Manuscript database
Nutrients, in particular nitrogen and phosphorus, can create eutrophication problems in any watershed. Preventing water quality impairment requires controlling nutrients from both point-source and non-point source discharges. Woodchip bioreactors are one relatively new approach that can be utilized ...
DoOR 2.0 - Comprehensive Mapping of Drosophila melanogaster Odorant Responses
NASA Astrophysics Data System (ADS)
Münch, Daniel; Galizia, C. Giovanni
2016-02-01
Odors elicit complex patterns of activated olfactory sensory neurons. Knowing the complete olfactome, i.e. the responses in all sensory neurons for all relevant odorants, is desirable to understand olfactory coding. The DoOR project combines all available Drosophila odorant response data into a single consensus response matrix. Since its first release many studies were published: receptors were deorphanized and several response profiles were expanded. In this study, we add unpublished data to the odor-response profiles for four odorant receptors (Or10a, Or42b, Or47b, Or56a). We deorphanize Or69a, showing a broad response spectrum with the best ligands including 3-hydroxyhexanoate, alpha-terpineol, 3-octanol and linalool. We include all of these datasets into DoOR, provide a comprehensive update of both code and data, and new tools for data analyses and visualizations. The DoOR project has a web interface for quick queries (http://neuro.uni.kn/DoOR), and a downloadable, open source toolbox written in R, including all processed and original datasets. DoOR now gives reliable odorant-responses for nearly all Drosophila olfactory responding units, listing 693 odorants, for a total of 7381 data points.
ter Waarbeek, Henriëtte L G; Dukers-Muijrers, Nicole H T M; Vennema, Harry; Hoebe, Christian J P A
2010-03-01
A cross-border gastroenteritis outbreak at a scouting camp was associated with drinking water from a farmer's well. A retrospective cohort study was performed to identify size and source of the outbreak, as well as other characteristics. Epidemiological investigation included standardized questionnaires about sex, age, risk exposures, illness and family members. Stool and water (100mL) samples were analyzed for bacteria, viruses and parasites. Questionnaires were returned by 84 scouts (response rate 82%), mean age of 13 years. The primary attack rate was 85% (diarrhoea and/or vomiting). Drinking water was the strongest independent risk factor showing a dose-response effect with 50%, 75%, 75%, 93% and 96% case prevalence for 0, 1, 2-3, 4-5 and >5 glasses consumed, respectively. Norovirus (GI.2 Southampton and GII.7 Leeds) was detected in 51 stool specimens (75%) from ill scouts. Water analysis showed fecal contamination, but no norovirus. The secondary attack rate was 20%. This remarkable outbreak was caused by a point-source infection with two genogroups of noroviruses most likely transmitted by drinking water from a well. Finding a dose-response relationship was striking. Specific measures to reduce the risk of waterborne diseases, outbreak investigation and a good international public health network are important.
Multiagency Urban Search Experiment Detector and Algorithm Test Bed
NASA Astrophysics Data System (ADS)
Nicholson, Andrew D.; Garishvili, Irakli; Peplow, Douglas E.; Archer, Daniel E.; Ray, William R.; Swinney, Mathew W.; Willis, Michael J.; Davidson, Gregory G.; Cleveland, Steven L.; Patton, Bruce W.; Hornback, Donald E.; Peltz, James J.; McLean, M. S. Lance; Plionis, Alexander A.; Quiter, Brian J.; Bandstra, Mark S.
2017-07-01
In order to provide benchmark data sets for radiation detector and algorithm development, a particle transport test bed has been created using experimental data as model input and validation. A detailed radiation measurement campaign at the Combined Arms Collective Training Facility in Fort Indiantown Gap, PA (FTIG), USA, provides sample background radiation levels for a variety of materials present at the site (including cinder block, gravel, asphalt, and soil) using long dwell high-purity germanium (HPGe) measurements. In addition, detailed light detection and ranging data and ground-truth measurements inform model geometry. This paper describes the collected data and the application of these data to create background and injected source synthetic data for an arbitrary gamma-ray detection system using particle transport model detector response calculations and statistical sampling. In the methodology presented here, HPGe measurements inform model source terms while detector response calculations are validated via long dwell measurements using 2"×4"×16" NaI(Tl) detectors at a variety of measurement points. A collection of responses, along with sampling methods and interpolation, can be used to create data sets to gauge radiation detector and algorithm (including detection, identification, and localization) performance under a variety of scenarios. Data collected at the FTIG site are available for query, filtering, visualization, and download at muse.lbl.gov.
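The injection step described above can be sketched as folding an assumed source term through a detector response matrix, adding a background expectation, and drawing Poisson realizations. The response shape, efficiency, rates, and live time below are placeholders rather than values from the FTIG campaign:

```python
import numpy as np

rng = np.random.default_rng(0)
n_true, n_meas = 200, 128          # true-energy and measured-energy bins (assumed)

# Placeholder detector response: each true-energy line smears into a Gaussian
# photopeak plus a flat Compton-like continuum in measured energy.
true_e = np.linspace(50, 3000, n_true)
meas_e = np.linspace(20, 3000, n_meas)
R = np.zeros((n_meas, n_true))
for j, e in enumerate(true_e):
    peak = np.exp(-0.5 * ((meas_e - e) / (0.03 * e)) ** 2)
    continuum = (meas_e < e) * 0.2
    col = peak + continuum
    R[:, j] = 0.3 * col / col.sum()          # 30% assumed detection efficiency

background_rate = 5.0 * np.exp(-meas_e / 800.0)   # counts/s per bin (stand-in for HPGe-informed terms)
source_true = np.zeros(n_true)
source_true[np.argmin(np.abs(true_e - 662.0))] = 50.0   # injected 662 keV line [ph/s]

live_time = 10.0                                          # seconds per synthetic measurement
expected = live_time * (background_rate + R @ source_true)
synthetic = rng.poisson(expected, size=(100, n_meas))     # 100 sampled spectra
```

Swapping different injected sources, dwell times, or standoff-scaled source strengths into `expected` is what lets such a test bed score detection, identification, and localization algorithms over many scenarios.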
Frequency-response mismatch effects in Johnson noise thermometry
NASA Astrophysics Data System (ADS)
White, D. R.; Qu, J.-F.
2018-02-01
Johnson noise thermometry is of considerable interest at present due to the planned redefinition of the kelvin in 2019, and several determinations of the Boltzmann constant have recently been published in support of the redefinition. To determine the Boltzmann constant by noise thermometry, the thermal noise from a sensing resistor at the triple point of water is compared to a pseudo-random noise with a calculable power spectral density traceable to quantum electrical standards. In all the measurements to date, the two dominant sources of measurement uncertainty are strongly influenced by a single factor: the frequency-response mismatch between the sets of leads connecting the thermometer to the two noise sources. In the most recent determination at the National Institute of Metrology, China, substantial changes were made to the connecting leads to reduce the mismatch effects. The aims of this paper are, firstly, to describe and explain the rationale for the changes, and secondly, to better understand the effects of the least-squares fits and the bias-variance compromise in the analysis of measurements affected by the mismatch effects. While significant improvements can be made to the connecting leads to lessen the effects of the frequency-response mismatch, the efforts are unlikely to be rewarded by a significant increase in bandwidth or a significant reduction in uncertainty.
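To make the mismatch effect concrete: the measured quantity is essentially a ratio of band-averaged power spectral densities seen through two sets of leads, and any difference between the leads' transfer functions biases that ratio. The sketch below models each lead set as a single-pole low-pass response with slightly different cutoffs; the resistance, analysis band, and cutoff values are illustrative assumptions:

```python
import numpy as np

k_B = 1.380649e-23            # Boltzmann constant [J/K]
T, R = 273.16, 200.0          # water triple point [K], sensing resistance [ohm]
S_thermal = 4 * k_B * T * R   # one-sided Nyquist PSD in the low-frequency limit [V^2/Hz]

f = np.linspace(1e3, 1e6, 100_000)          # analysis band [Hz], uniform grid

def lead_response(f, fc):
    """Magnitude-squared of a single-pole low-pass lead model (illustrative)."""
    return 1.0 / (1.0 + (f / fc) ** 2)

fc_thermal, fc_reference = 3.0e6, 3.3e6     # assumed mismatched cutoff frequencies

# Band-averaged PSDs seen through each set of leads, given identical input noise.
avg_thermal = (S_thermal * lead_response(f, fc_thermal)).mean()
avg_reference = (S_thermal * lead_response(f, fc_reference)).mean()

# With perfectly matched leads this ratio would be exactly 1; the residual is the
# relative bias the mismatch introduces into the inferred temperature (to first order).
bias = avg_thermal / avg_reference - 1.0
print(f"fractional bias from lead mismatch: {bias:.2e}")
```

Narrowing the analysis band reduces this bias but raises the statistical uncertainty, which is the bias-variance compromise the abstract refers to.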
Understanding and Using the Fermi Science Tools
NASA Astrophysics Data System (ADS)
Asercion, Joseph
2018-01-01
The Fermi Science Support Center (FSSC) provides information, documentation, and tools for the analysis of Fermi science data, including both the Large-Area Telescope (LAT) and the Gamma-ray Burst Monitor (GBM). Source and binary versions of the Fermi Science Tools can be downloaded from the FSSC website, and are supported on multiple platforms. An overview document, the Cicerone, provides details of the Fermi mission, the science instruments and their response functions, the science data preparation and analysis process, and interpretation of the results. Analysis Threads and a reference manual available on the FSSC website provide the user with step-by-step instructions for many different types of data analysis: point source analysis - generating maps, spectra, and light curves, pulsar timing analysis, source identification, and the use of python for scripting customized analysis chains. We present an overview of the structure of the Fermi science tools and documentation, and how to acquire them. We also provide examples of standard analyses, including tips and tricks for improving Fermi science analysis.
Temporal and frequency characteristics of a narrow light beam in sea water.
Luchinin, Alexander G; Kirillin, Mikhail Yu
2016-09-20
The structure of a light field in sea water excited by a unidirectional point-sized pulsed source is studied by Monte Carlo technique. The pulse shape registered at the distances up to 120 m from the source on the beam axis and in its axial region is calculated with a time resolution of 1 ps. It is shown that with the increase of the distance from the source the pulse splits into two parts formed by components of various scattering orders. Frequency and phase responses of the beam are calculated by means of the fast Fourier transform. It is also shown that for higher frequencies, the attenuation of harmonic components of the field is larger. In the range of parameters corresponding to pulse splitting on the beam axis, the attenuation of harmonic components in particular spectral ranges exceeds the attenuation predicted by Bouguer law. In this case, the transverse distribution of the amplitudes of these harmonics is minimal on the beam axis.
An integral equation formulation for the diffraction from convex plates and polyhedra.
Asheim, Andreas; Svensson, U Peter
2013-06-01
A formulation of the problem of scattering from obstacles with edges is presented. The formulation is based on decomposing the field into geometrical acoustics, first-order, and multiple-order edge diffraction components. An existing secondary-source model for edge diffraction from finite edges is extended to handle multiple diffraction of all orders. It is shown that the multiple-order diffraction component can be found via the solution to an integral equation formulated on pairs of edge points. This gives what can be called an edge source signal. In a subsequent step, this edge source signal is propagated to yield a multiple-order diffracted field, taking all diffraction orders into account. Numerical experiments demonstrate accurate response for frequencies down to 0 for thin plates and a cube. No problems with irregular frequencies, as happen with the Kirchhoff-Helmholtz integral equation, are observed for this formulation. For the axisymmetric scattering from a circular disc, a highly effective symmetric formulation results, and results agree with reference solutions across the entire frequency range.
Absolute Calibration of the AXAF Telescope Effective Area
NASA Technical Reports Server (NTRS)
Kellogg, E.; Cohen, L.; Edgar, R.; Evans, I.; Freeman, M.; Gaetz, T.; Jerius, D.; McDermott, W. C.; McKinnon, P.; Murray, S.;
1997-01-01
The prelaunch calibration of AXAF encompasses many aspects of the telescope. In principle, all that is needed is the complete point response function. This is, however, a function of energy, off-axis angle of the source, and operating mode of the facility. No single measurement would yield the entire result. Also, any calibration made prior to launch will be affected by changes in conditions after launch, such as the change from one g to zero g. The reflectivity of the mirror and perhaps even the detectors can change as well, for example by addition or removal of small amounts of material deposited on their surfaces. In this paper, we give a broad view of the issues in performing such a calibration, and discuss how they are being addressed in prelaunch preparation of AXAF. As our title indicates, we concentrate here on the total throughput of the observatory. This can be thought of as the integral of the point response function, i.e. the encircled energy, out to the largest practical solid angle for an observation. Since there is no standard x-ray source in the sky whose flux is known to the ~1% accuracy we are trying to achieve, we must do this calibration on the ground. We also must provide a means for monitoring any possible changes in this calibration from pre-launch until on-orbit operation can transfer the calibration to a celestial x-ray source whose emission is stable. In this paper, we analyze the elements of the absolute throughput calibration, which we call Effective Area. We review the requirements for calibrations of components or subsystems of the AXAF facility, including the mirror, detectors, and gratings. We show how it is necessary to calibrate this ground-based detection system at standard man-made x-ray sources, such as electron storage rings. We present the status of all these calibrations, with indications of the measurements remaining to be done, even though the measurements on the AXAF flight optics and detectors will have been completed by the time this paper is presented. We evaluate progress toward the goal of making 1% measurements of the absolute x-ray flux from astrophysical sources, so that comparisons can be made with their emission at other wavelengths, in support of observations such as the Sunyaev-Zel'dovich effect, which can give absolute distance measurements independent of the traditional distance-measuring techniques in astronomy.
Quiamzade, Alain; Mugny, Gabriel; Darnon, Céline
2009-03-01
Previous research has shown that low-competence sources, compared to highly competent sources, can exert influence in aptitude tasks inasmuch as they induce people to focus on the task and to solve it more deeply. Two experiments tested the coordination between one's own and the source's problem-solving strategies as a main explanation of this difference in influence. The influence of a low- versus high-competence source was examined in an anagram task that allows three response strategies to be distinguished, including one that corresponds to the coordination between the source's strategy and participants' own strategy. In Study 1 the strategy suggested by the source was either relevant and useful or irrelevant and useless for solving the task. Results indicated that participants used the coordination strategy to a greater extent when they had been confronted with a low-competence rather than a highly competent source, but only when the source displayed a strategy that was useful for solving the task. In Study 2 the source's strategy was always relevant and useful, but a decentring procedure was introduced for half of the participants. This procedure induced participants to consider points of view other than their own. Results replicated the difference observed in Study 1 when no decentring was introduced. The difference, however, disappeared when decentring was induced, because of an increase in the high-competence source's influence. These results highlight coordination of strategies as one mechanism underlying influence from low-competence sources.
Sadeghi, Mohammad Hosein; Sina, Sedigheh; Mehdizadeh, Amir; Faghihi, Reza; Moharramzadeh, Vahed; Meigooni, Ali Soleimani
2018-02-01
The dosimetry procedure by simple superposition accounts only for the self-shielding of the source and does not take into account the attenuation of photons by the applicators. The purpose of this investigation is to estimate the effects of the tandem and ovoid applicator on the dose distribution inside the phantom by MCNP5 Monte Carlo simulations. In this study, the superposition method is used to obtain the dose distribution in the phantom without the applicator for a typical gynecological brachytherapy treatment (superposition-1). Then, the sources are simulated inside the tandem and ovoid applicator to identify the effect of applicator attenuation (superposition-2), and the doses at points A and B, the bladder, and the rectum were compared with the results of the superposition method. The exact dwell positions and times of the source, and the positions of the dosimetry points, were determined from the images and treatment data of an adult female patient from a cancer center. The MCNP5 Monte Carlo (MC) code was used for simulation of the phantoms, applicators, and sources. The results of this study showed no significant differences between the results of the superposition method and the MC simulations for the different dosimetry points. The difference at all important dosimetry points was found to be less than 5%. According to the results, applicator attenuation has no significant effect on the calculated point doses; the superposition method, which adds the dose of each source obtained by MC simulation, can estimate the dose to points A and B, the bladder, and the rectum with good accuracy.
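The superposition described above is simply a dwell-time-weighted sum of single-source dose contributions. The sketch below uses a bare inverse-square point kernel purely as a stand-in for the per-dwell-position dose-rate data a Monte Carlo code (or the TG-43 formalism) would supply; the geometry and dwell times are illustrative, not patient data:

```python
import numpy as np

def point_kernel(point, source, strength=1.0):
    """Placeholder dose-rate kernel: inverse-square fall-off from a point source.
    A real calculation would use MC-derived or TG-43 dose-rate tables instead."""
    r = np.linalg.norm(np.asarray(point) - np.asarray(source))
    return strength / max(r * r, 1e-6)

def superposition_dose(point, dwell_positions, dwell_times):
    """Total dose at `point` as the sum over dwell positions weighted by dwell time."""
    return sum(t * point_kernel(point, p) for p, t in zip(dwell_positions, dwell_times))

# Illustrative geometry (cm) and dwell times (s)
dwell_positions = [(0.0, 0.0, z) for z in np.arange(0.0, 3.0, 0.5)]
dwell_times = [12.0, 10.5, 9.0, 8.0, 8.5, 11.0]
point_A = (2.0, 0.0, 2.0)

print("dose at point A (arbitrary units):",
      superposition_dose(point_A, dwell_positions, dwell_times))
```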
Alberti, Luca; Colombo, Loris; Formentin, Giovanni
2018-04-15
The Lombardy Region in Italy is one of the most urbanized and industrialized areas in Europe. The presence of countless sources of groundwater pollution is therefore a matter of environmental concern. The sources of groundwater contamination can be classified into two different categories: 1) Point Sources (PS), which correspond to areas releasing plumes of high concentrations (i.e. hot-spots), and 2) Multiple-Point Sources (MPS), consisting of a series of unidentifiable small sources clustered within large areas and generating anthropogenic diffuse contamination. The latter category frequently predominates in European Functional Urban Areas (FUA) and cannot be managed through standard remediation techniques, mainly because detecting the many different source areas releasing small contaminant mass in groundwater is unfeasible. A specific legislative action has recently been enacted at the Regional level (DGR IX/3510-2012) in order to identify areas prone to anthropogenic diffuse pollution and their level of contamination. With a view to defining a management plan, it is necessary to find where MPS are most likely positioned. This paper describes a methodology devised to identify the areas with the highest likelihood of hosting potential MPS. A groundwater flow model was implemented for a pilot area located in the Milan FUA, and through the PEST code a Null-Space Monte Carlo method was applied in order to generate a suite of several hundred hydraulic conductivity field realizations, each maintaining the model in a calibrated state and each consistent with the modelers' expert knowledge. Thereafter, the MODPATH code was applied to generate back-traced advective flowpaths for each of the models built using the conductivity field realizations. Maps were then created displaying the number of backtracked particles that crossed each model cell in each stochastic calibrated model. The result is considered to be representative of the FUA areas with the highest likelihood of hosting MPS responsible for diffuse contamination. Copyright © 2017 Elsevier B.V. All rights reserved.
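The mapping step described above reduces to counting, cell by cell, how many back-traced particle paths from the stochastic realizations cross each model cell. The sketch below does this bookkeeping on synthetic paths; a real application would read the MODPATH pathline output for each calibrated conductivity realization instead of the random-walk stand-in used here:

```python
import numpy as np

rng = np.random.default_rng(0)
nx, ny = 50, 50                       # model grid (assumed cell counts)
crossings = np.zeros((ny, nx), dtype=int)

# Stand-in for MODPATH output: for each stochastic realization, a set of
# back-traced particle paths expressed as sequences of (col, row) cell indices.
n_realizations, n_particles = 200, 30
for _ in range(n_realizations):
    drift = rng.normal([0.3, 0.1], 0.05)              # realization-specific flow direction
    for _ in range(n_particles):
        cell = np.array([rng.integers(0, nx), rng.integers(0, ny)], dtype=float)
        visited = set()
        for _ in range(100):                           # back-trace a fixed number of steps
            c = (int(cell[0]) % nx, int(cell[1]) % ny)
            visited.add(c)
            cell += drift + rng.normal(0, 0.2, 2)      # advection + dispersion stand-in
        for cx, cy in visited:
            crossings[cy, cx] += 1                     # count each cell once per path

likelihood = crossings / crossings.max()               # relative likelihood of hosting MPS
```

Counting each cell only once per path avoids over-weighting slow-moving particles, which is one reasonable convention for turning crossing counts into a relative likelihood map.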
NASA Astrophysics Data System (ADS)
Bliss, Donald; Franzoni, Linda; Rouse, Jerry; Manning, Ben
2005-09-01
An analysis method for time-dependent broadband diffuse sound fields in enclosures is described. Beginning with a formulation utilizing time-dependent broadband intensity boundary sources, the strength of these wall sources is expanded in a series in powers of an absorption parameter, thereby giving a separate boundary integral problem for each power. The temporal behavior is characterized by a Taylor expansion in the delay time for a source to influence an evaluation point. The lowest-order problem has a uniform interior field proportional to the reciprocal of the absorption parameter, as expected, and exhibits relatively slow exponential decay. The next-order problem gives a mean-square pressure distribution that is independent of the absorption parameter and is primarily responsible for the spatial variation of the reverberant field. This problem, which is driven by input sources and the lowest-order reverberant field, depends on source location and the spatial distribution of absorption. Additional problems proceed at integer powers of the absorption parameter, but are essentially higher-order corrections to the spatial variation. Temporal behavior is expressed in terms of an eigenvalue problem, with boundary source strength distributions expressed as eigenmodes. Solutions exhibit rapid short-time spatial redistribution followed by long-time decay of a predominant spatial mode.
Strategies for satellite-based monitoring of CO2 from distributed area and point sources
NASA Astrophysics Data System (ADS)
Schwandner, Florian M.; Miller, Charles E.; Duren, Riley M.; Natraj, Vijay; Eldering, Annmarie; Gunson, Michael R.; Crisp, David
2014-05-01
Atmospheric CO2 budgets are controlled by the strengths, as well as the spatial and temporal variabilities of CO2 sources and sinks. Natural CO2 sources and sinks are dominated by the vast areas of the oceans and the terrestrial biosphere. In contrast, anthropogenic and geogenic CO2 sources are dominated by distributed area and point sources, which may constitute as much as 70% of anthropogenic (e.g., Duren & Miller, 2012), and over 80% of geogenic emissions (Burton et al., 2013). Comprehensive assessments of CO2 budgets necessitate robust and highly accurate satellite remote sensing strategies that address the competing and often conflicting requirements for sampling over disparate space and time scales. Spatial variability: The spatial distribution of anthropogenic sources is dominated by patterns of production, storage, transport and use. In contrast, geogenic variability is almost entirely controlled by endogenic geological processes, except where surface gas permeability is modulated by soil moisture. Satellite remote sensing solutions will thus have to vary greatly in spatial coverage and resolution to address distributed area sources and point sources alike. Temporal variability: While biogenic sources are dominated by diurnal and seasonal patterns, anthropogenic sources fluctuate over a greater variety of time scales from diurnal, weekly and seasonal cycles, driven by both economic and climatic factors. Geogenic sources typically vary in time scales of days to months (geogenic sources sensu stricto are not fossil fuels but volcanoes, hydrothermal and metamorphic sources). Current ground-based monitoring networks for anthropogenic and geogenic sources record data on minute- to weekly temporal scales. Satellite remote sensing solutions would have to capture temporal variability through revisit frequency or point-and-stare strategies. Space-based remote sensing offers the potential of global coverage by a single sensor. However, no single combination of orbit and sensor provides the full range of temporal sampling needed to characterize distributed area and point source emissions. For instance, point source emission patterns will vary with source strength, wind speed and direction. Because wind speed, direction and other environmental factors change rapidly, short term variabilities should be sampled. For detailed target selection and pointing verification, important lessons have already been learned and strategies devised during JAXA's GOSAT mission (Schwandner et al, 2013). The fact that competing spatial and temporal requirements drive satellite remote sensing sampling strategies dictates a systematic, multi-factor consideration of potential solutions. Factors to consider include vista, revisit frequency, integration times, spatial resolution, and spatial coverage. No single satellite-based remote sensing solution can address this problem for all scales. It is therefore of paramount importance for the international community to develop and maintain a constellation of atmospheric CO2 monitoring satellites that complement each other in their temporal and spatial observation capabilities: Polar sun-synchronous orbits (fixed local solar time, no diurnal information) with agile pointing allow global sampling of known distributed area and point sources like megacities, power plants and volcanoes with daily to weekly temporal revisits and moderate to high spatial resolution. 
Extensive targeting of distributed area and point sources comes at the expense of reduced mapping or spatial coverage, and the important contextual information that comes with large-scale contiguous spatial sampling. Polar sun-synchronous orbits with push-broom swath-mapping but limited pointing agility may allow mapping of individual source plumes and their spatial variability, but will depend on fortuitous environmental conditions during the observing period. These solutions typically have longer times between revisits, limiting their ability to resolve temporal variations. Geostationary and non-sun-synchronous low-Earth orbits (precessing local solar time, diurnal information possible) with agile pointing have the potential to provide comprehensive mapping of distributed area sources such as megacities with longer stare times and multiple revisits per day, at the expense of global access and spatial coverage. An ad hoc CO2 remote sensing constellation is emerging. NASA's OCO-2 satellite (launch July 2014) joins JAXA's GOSAT satellite in orbit. These will be followed by GOSAT-2 and NASA's OCO-3 on the International Space Station as early as 2017. Additional polar-orbiting satellites (e.g., CarbonSat, under consideration at ESA) and geostationary platforms may also become available. However, the individual assets have been designed with independent science goals and requirements, and limited consideration of coordinated observing strategies. Every effort must be made to maximize the science return from this constellation. We discuss the opportunities to exploit the complementary spatial and temporal coverage provided by these assets as well as the crucial gaps in the capabilities of this constellation. References: Burton, M.R., Sawyer, G.M., and Granieri, D. (2013). Deep carbon emissions from volcanoes. Rev. Mineral. Geochem. 75: 323-354. Duren, R.M., Miller, C.E. (2012). Measuring the carbon emissions of megacities. Nature Climate Change 2, 560-562. Schwandner, F.M., Oda, T., Duren, R., Carn, S.A., Maksyutov, S., Crisp, D., Miller, C.E. (2013). Scientific Opportunities from Target-Mode Capabilities of GOSAT-2. NASA Jet Propulsion Laboratory, California Institute of Technology, Pasadena CA, White Paper, 6p., March 2013.
Bueno, I; Williams-Nguyen, J; Hwang, H; Sargeant, J M; Nault, A J; Singer, R S
2018-02-01
Point sources such as wastewater treatment plants and agricultural facilities may have a role in the dissemination of antibiotic-resistant bacteria (ARB) and antibiotic resistance genes (ARG). To analyse the evidence for increases in ARB in the natural environment associated with these point sources of ARB and ARG, we conducted a systematic review. We evaluated 5,247 records retrieved through database searches, including both studies that ascertained ARG and ARB outcomes. All studies were subjected to a screening process to assess relevance to the question and methodology to address our review question. A risk of bias assessment was conducted upon the final pool of studies included in the review. This article summarizes the evidence only for those studies with ARB outcomes (n = 47). Thirty-five studies were at high (n = 11) or at unclear (n = 24) risk of bias in the estimation of source effects due to lack of information and/or failure to control for confounders. Statistical analysis was used in ten studies, of which one assessed the effect of multiple sources using modelling approaches; none reported effect measures. Most studies reported higher ARB prevalence or concentration downstream/near the source. However, this evidence was primarily descriptive and it could not be concluded that there is a clear impact of point sources on increases in ARB in the environment. To quantify increases in ARB in the environment due to specific point sources, there is a need for studies that stress study design, control of biases and analytical tools to provide effect measure estimates. © 2017 Blackwell Verlag GmbH.
Rong-Mullins, Xiaoqing; Ayers, Michael C.; Summers, Mahmoud; Gallagher, Jennifer E. G.
2017-01-01
Cellular metabolism can change the potency of a chemical's tumorigenicity. 4-nitroquinoline-1-oxide (4NQO) is a tumorigenic drug widely used in animal models for cancer research. Polymorphisms of the transcription factor Yrr1 confer different levels of resistance to 4NQO in Saccharomyces cerevisiae. To study how different Yrr1 alleles regulate gene expression leading to resistance, transcriptomes of three isogenic S. cerevisiae strains carrying different Yrr1 alleles were profiled via RNA sequencing (RNA-Seq) and chromatin immunoprecipitation coupled with sequencing (ChIP-Seq) in the presence and absence of 4NQO. In response to 4NQO, all alleles of Yrr1 drove the expression of SNQ2 (a multidrug transporter), which was highest in the presence of 4NQO resistance-conferring alleles, and overexpression of SNQ2 alone was sufficient to overcome 4NQO-sensitive growth. Using shape metrics to refine the ChIP-Seq peaks, Yrr1 strongly associated with three loci including SNQ2. In addition to a known Yrr1 target, SNG1, Yrr1 also bound upstream of RPL35B; however, overexpression of these genes did not confer 4NQO resistance. RNA-Seq data also implicated nucleotide synthesis: the de novo purine pathway and the ribonucleotide reductase pathway were downregulated in response to 4NQO. Conversion of a 4NQO-sensitive allele to a 4NQO-resistant allele by a single point mutation mimicked the 4NQO-resistant allele in phenotype, and while the 4NQO-resistant allele increased the expression of the ADE genes in the de novo purine biosynthetic pathway, the mutant Yrr1 increased expression of ADE genes even in the absence of 4NQO. These same ADE genes were only increased in the wild-type alleles in the presence of 4NQO, indicating that the point mutation activated Yrr1 to upregulate a pathway normally only activated in response to stress. The various Yrr1 alleles also influenced growth on different carbon sources by altering the function of the mitochondria. Hence, the complement to 4NQO resistance was poor growth on nonfermentable carbon sources, which in turn varied depending on the allele of Yrr1 expressed in the isogenic yeast. The oxidation state of the yeast affected 4NQO toxicity by altering the reactive oxygen species (ROS) generated by cellular metabolism. The integration of RNA-Seq and ChIP-Seq elucidated how Yrr1 regulates global gene transcription in response to 4NQO and how various Yrr1 alleles confer differential resistance to 4NQO. This study provides guidance for further investigation into how Yrr1 regulates cellular responses to 4NQO, as well as transcriptomic resources for further analysis of the effects of transcription factor variation on carbon source utilization. PMID:29208650
Impacts of the Detection of Cassiopeia A Point Source.
Umeda; Nomoto; Tsuruta; Mineshige
2000-05-10
Very recently, the Chandra first-light observation revealed a point-like source in the Cassiopeia A supernova remnant. This detection was subsequently confirmed by analyses of archival data from both ROSAT and Einstein observations. Here we compare the results from these observations with scenarios involving both black holes (BHs) and neutron stars (NSs). If this point source is a BH, a promising model is a disk-corona model with a low accretion rate, in which a soft photon source at approximately 0.1 keV is Comptonized by higher-energy electrons in the corona. If it is an NS, the dominant radiation observed by Chandra most likely originates from smaller, hotter regions of the stellar surface, but we argue that it is still worthwhile to compare the cooler component from the rest of the surface with cooling theories. We emphasize that the detection of this point source itself could have a major impact on theories of supernova explosions, progenitor scenarios, compact remnant formation, accretion onto compact objects, and NS thermal evolution.
Overview of on-farm bioremediation systems to reduce the occurrence of point source contamination.
De Wilde, Tineke; Spanoghe, Pieter; Debaer, Christof; Ryckeboer, Jaak; Springael, Dirk; Jaeken, Peter
2007-02-01
Contamination of ground and surface water puts pressure on the use of pesticides. Pesticide contamination of water can often be linked to point sources rather than to diffuse sources. Examples of such point sources are areas on farms where pesticides are handled and filled into sprayers, and where sprayers are cleaned. To reduce contamination from these point sources, different kinds of bioremediation systems are being researched in various member states of the EU. Bioremediation is the use of living organisms, primarily microorganisms, to degrade environmental contaminants into less toxic forms. The systems available for biocleaning of pesticides vary according to their shape and design. To date, three systems have been described and reported in detail: the biobed, the Phytobac and the biofilter. Most of these constructions are excavations or containers of various sizes filled with biological material. Typical overall clean-up efficiency exceeds 95%, and in many cases exceeds 99%. This paper provides an overview of the state of the art of these bioremediation systems and discusses their construction, efficiency and drawbacks.
Code of Federal Regulations, 2012 CFR
2012-07-01
... POINT SOURCE CATEGORY Gum Rosin and Turpentine Subcategory § 454.22 Effluent limitations and guidelines... turpentine by a point source subject to the provisions of this paragraph after application of the best...
Code of Federal Regulations, 2013 CFR
2013-07-01
... POINT SOURCE CATEGORY Gum Rosin and Turpentine Subcategory § 454.22 Effluent limitations and guidelines... turpentine by a point source subject to the provisions of this paragraph after application of the best...
Code of Federal Regulations, 2014 CFR
2014-07-01
... POINT SOURCE CATEGORY Gum Rosin and Turpentine Subcategory § 454.22 Effluent limitations and guidelines... turpentine by a point source subject to the provisions of this paragraph after application of the best...
Re, V; Sacchi, E; Mas-Pla, J; Menció, A; El Amrani, N
2014-12-01
Groundwater pollution from anthropogenic sources is a serious concern affecting several coastal aquifers worldwide. Increasing groundwater exploitation, coupled with point and non-point pollution sources, is the main anthropogenic impact on coastal environments and is responsible for severe health and food security issues. Adequate management strategies to protect groundwater from contamination and overexploitation are of paramount importance, especially in arid-prone regions, where coastal aquifers often represent the main freshwater resource to sustain human needs. The Bou-Areg Aquifer (Morocco) is a perfect example of a coastal aquifer constantly exposed to all the negative externalities associated with groundwater use for agricultural purposes, which lead to a general increase in aquifer salinization. In this study data on 61 water samples, collected in June and November 2010, were used to: (i) track groundwater composition changes related to the use of irrigation water from different sources, (ii) highlight seasonal variations to assess aquifer vulnerability, and (iii) present a reproducible example of a multi-tracer approach for groundwater management in rural coastal areas. Hydrogeochemical results show that Bou-Areg groundwater is characterized by high salinity, associated with a remarkable increase in bicarbonate content in the crop growing season, due to more intense biological activity in irrigated soils. The coupled multi-tracer and statistical analysis confirms the strong dependence on irrigation activities and allows a clear identification of the processes governing the aquifer's hydrochemistry in the different seasons. Water Rock Interaction (WRI) dominates the composition of most groundwater samples in the Low Irrigation season (L-IR), and Agricultural Return Flow (ARF) mainly affects groundwater salinization in the High Irrigation season (H-IR) in the same areas naturally affected by WRI. In the central part of the plain, River Recharge (RR) from the Selouane River is responsible for the high groundwater salinity, whilst Mixing Processes (MIX) occur in the absence of irrigation activities. Copyright © 2014 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Abbasi, R. U.; Abu-Zayyad, T.; Amann, J. F.; Archbold, G.; Atkins, R.; Bellido, J. A.; Belov, K.; Belz, J. W.; Ben-Zvi, S. Y.; Bergman, D. R.; Boyer, J. H.; Burt, G. W.; Cao, Z.; Clay, R. W.; Connolly, B. M.; Dawson, B. R.; Deng, W.; Farrar, G. R.; Fedorova, Y.; Findlay, J.; Finley, C. B.; Hanlon, W. F.; Hoffman, C. M.; Holzscheiter, M. H.; Hughes, G. A.; Hüntemeyer, P.; Jui, C. C. H.; Kim, K.; Kirn, M. A.; Knapp, B. C.; Loh, E. C.; Maestas, M. M.; Manago, N.; Mannel, E. J.; Marek, L. J.; Martens, K.; Matthews, J. A. J.; Matthews, J. N.; O'Neill, A.; Painter, C. A.; Perera, L.; Reil, K.; Riehle, R.; Roberts, M. D.; Sasaki, M.; Schnetzer, S. R.; Seman, M.; Simpson, K. M.; Sinnis, G.; Smith, J. D.; Snow, R.; Sokolsky, P.; Song, C.; Springer, R. W.; Stokes, B. T.; Thomas, J. R.; Thomas, S. B.; Thomson, G. B.; Tupa, D.; Westerhoff, S.; Wiencke, L. R.; Zech, A.
2005-04-01
We present the results of a search for cosmic-ray point sources at energies in excess of 4.0×10^19 eV in the combined data sets recorded by the Akeno Giant Air Shower Array and High Resolution Fly's Eye stereo experiments. The analysis is based on a maximum likelihood ratio test using the probability density function for each event rather than requiring an a priori choice of a fixed angular bin size. No statistically significant clustering of events consistent with a point source is found.
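As an illustration of the unbinned approach described above, the following sketch evaluates a likelihood-ratio test statistic in which each event contributes a Gaussian signal PDF set by its angular distance from a candidate source plus a uniform background PDF. The source position, angular resolution, and event separations are hypothetical; this is not the collaborations' analysis code.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def ts_point_source(event_seps_rad, sigma_rad, solid_angle_sr):
    """Unbinned likelihood-ratio test statistic for a candidate point source.

    event_seps_rad : angular separations of events from the candidate source
    sigma_rad      : per-event angular resolution (assumed Gaussian)
    solid_angle_sr : solid angle of the search region (uniform background)
    """
    n = len(event_seps_rad)
    # Signal PDF: 2D Gaussian in angular distance (small-angle approximation)
    sig = np.exp(-0.5 * (event_seps_rad / sigma_rad) ** 2) / (2 * np.pi * sigma_rad ** 2)
    bkg = np.full(n, 1.0 / solid_angle_sr)            # uniform background PDF

    def neg_log_lr(ns):
        # negative of log[L(ns)/L(0)], summed over events
        return -np.sum(np.log(ns / n * sig + (1 - ns / n) * bkg) - np.log(bkg))

    res = minimize_scalar(neg_log_lr, bounds=(0.0, n - 1e-9), method="bounded")
    ts = -2.0 * res.fun                               # 2 * log-likelihood ratio at best-fit ns
    return res.x, max(ts, 0.0)

# Example with simulated separations (radians) around a hypothetical source
rng = np.random.default_rng(0)
seps = np.abs(rng.normal(0.0, 0.02, size=50))
print(ts_point_source(seps, sigma_rad=0.01, solid_angle_sr=2 * np.pi))
```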
Taşeli, B K
2009-10-01
Köyceğiz Lake is located in the south-western part of Turkey. The area between Köyceğiz Lake and the Mediterranean Sea is covered with four small lakes and several canals. The surroundings of the lake, canals and forests have great potential as reproduction areas for Mediterranean sea turtles (Caretta caretta) and as a sheltering place for various animals. In the vicinity of this system there are agricultural areas and small settlements. In this region the most important economic activities are tourism and fisheries. However, the lake is currently threatened by pollution because of (1) non-point source pollution (agriculture); (2) point sources (land-based fish farms); (3) inefficient sewerage systems; (4) uncontrolled soil erosion in its drainage basin; (5) inappropriate flood control measures; and (6) channel traffic. This study evaluates the influence of its influent creeks, namely Namnam Creek and Yuvarlakçay Creek, on the water quality of Köyceğiz Lake, mainly because these creeks are believed to be responsible for the major pollutant load reaching the lake. Accordingly, this study demonstrates (1) the change in the water quality of Köyceğiz Lake from 2006 to 2007; (2) the water quality classification of the major influent creeks feeding Köyceğiz Lake; and (3) how a land-based fish farm influences Yuvarlakçay Creek water quality in the Köyceğiz-Dalyan Specially Protected Area.
Sanz, M; Lopez-Bote, C J; Flores, A; Carmona, J M
2000-09-01
The aim of this experiment was to assess the effects of four feeding programs, designed to include tallow (a saturated fat) during the final 0, 8, 12, or 28 d prior to slaughter, on female broiler performance and on the deposition, fatty acid profile, and melting point of abdominal fat. The following treatment groups were established according to dietary inclusion, from 21 to 49 d of age, of: sunflower oil (SUN), sunflower oil followed by tallow during the last 8 d (SUN + 8TALL), sunflower oil followed by tallow during the last 12 d (SUN + 12TALL), and tallow (TALL). The diets were designed to be isoenergetic and isonitrogenous. Abdominal fat deposition increased linearly with the number of days in which birds were fed the tallow-enriched diet. However, both linear and quadratic response patterns were found between the number of days before slaughter in which the birds were fed the tallow-enriched diet and the abdominal fat melting point. This result suggests an exponential response in which 85% of the maximum level was already attained when the dietary fat changed from an unsaturated to a saturated source during the last 8 d of the feeding period. The use of an unsaturated fat source during the first stages of growth, and the substitution of a saturated fat for a few days before slaughter, may offer the advantage of lower abdominal fat deposition and an acceptable fat fluidity compared with the use of a saturated fat source during the whole growing and finishing period.
Performance Analysis of a Cost-Effective Electret Condenser Microphone Directional Array
NASA Technical Reports Server (NTRS)
Humphreys, William M., Jr.; Gerhold, Carl H.; Zuckerwar, Allan J.; Herring, Gregory C.; Bartram, Scott M.
2003-01-01
Microphone directional array technology continues to be a critical part of the overall instrumentation suite for experimental aeroacoustics. Unfortunately, high sensor cost remains one of the limiting factors in the construction of very high-density arrays (i.e., arrays containing several hundred channels or more) which could be used to implement advanced beamforming algorithms. In an effort to reduce the implementation cost of such arrays, the authors have undertaken a systematic performance analysis of a prototype 35-microphone array populated with commercial electret condenser microphones. An ensemble of microphones coupling commercially available electret cartridges with passive signal conditioning circuitry was fabricated for use with the Langley Large Aperture Directional Array (LADA). A performance analysis consisting of three phases was then performed: (1) characterize the acoustic response of the microphones via laboratory testing and calibration, (2) evaluate the beamforming capability of the electret-based LADA using a series of independently controlled point sources in an anechoic environment, and (3) demonstrate the utility of an electret-based directional array in a real-world application, in this case a cold flow jet operating at high subsonic velocities. The results of the investigation revealed a microphone frequency response suitable for directional array use over a range of 250 Hz - 40 kHz, a successful beamforming evaluation using the electret-populated LADA to measure simple point sources at frequencies up to 20 kHz, and a successful demonstration using the array to measure noise generated by the cold flow jet. This paper presents an overview of the tests conducted along with sample data obtained from those tests.
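For readers unfamiliar with how such an array maps point sources, here is a minimal frequency-domain delay-and-sum beamforming sketch. The microphone layout, scan grid, and cross-spectral matrix are placeholders, not the LADA geometry or processing chain.

```python
import numpy as np

def delay_and_sum_map(csm, mic_xyz, grid_xyz, freq, c=343.0):
    """Conventional frequency-domain beamforming from a cross-spectral matrix.

    csm      : (M, M) cross-spectral matrix of the microphone signals at `freq`
    mic_xyz  : (M, 3) microphone positions [m]
    grid_xyz : (G, 3) scan-grid points [m]
    """
    k = 2.0 * np.pi * freq / c
    power = np.empty(len(grid_xyz))
    for g, pt in enumerate(grid_xyz):
        r = np.linalg.norm(mic_xyz - pt, axis=1)          # mic-to-point distances
        steer = np.exp(-1j * k * r) / r                   # monopole steering vector
        steer /= np.linalg.norm(steer)
        power[g] = np.real(steer.conj() @ csm @ steer)    # beamformer output power
    return power

# Toy usage: 4 microphones, one grid point, identity CSM (uncorrelated noise)
mics = np.array([[0, 0, 0], [0.1, 0, 0], [0, 0.1, 0], [0.1, 0.1, 0]], float)
grid = np.array([[0.05, 0.05, 1.0]])
print(delay_and_sum_map(np.eye(4, dtype=complex), mics, grid, freq=5000.0))
```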
NASA Technical Reports Server (NTRS)
Schlegel, E.; Norris, Jay P. (Technical Monitor)
2002-01-01
This project was awarded funding from the CGRO program to support ROSAT and ground-based observations of unidentified sources from data obtained by the EGRET instrument on the Compton Gamma-Ray Observatory. The critical items in the project are the individual ROSAT observations that are used to cover the 99% error circle of each unidentified EGRET source. Each error circle is a degree or larger in diameter. Each ROSAT field is about 30 arcmin in diameter. Hence, a number (>4) of ROSAT pointings must be obtained for each EGRET source to cover the field. The scheduling of ROSAT observations is carried out to maximize the efficiency of the total schedule. As a result, each pointing is broken into one or more sub-pointings of various exposure times. This project was awarded ROSAT observing time for four unidentified EGRET sources, summarized in the table. The column headings are defined as follows: 'Coverings' = number of observations to cover the error circle; 'SubPtg' = total number of sub-pointings to observe all of the coverings; 'Rec'd' = number of individual sub-pointings received to date; 'CompFlds' = number of individual coverings for which the requested complete exposure has been received. Processing of the data cannot occur until a complete exposure has been accumulated for each covering.
An infrared sky model based on the IRAS point source data
NASA Technical Reports Server (NTRS)
Cohen, Martin; Walker, Russell; Wainscoat, Richard; Volk, Kevin; Walker, Helen; Schwartz, Deborah
1990-01-01
A detailed model for the infrared point-source sky is presented that comprises geometrically and physically realistic representations of the galactic disk, bulge, spheroid, spiral arms, and molecular ring, with the source types characterized by their absolute magnitudes. The model was guided by a parallel Monte Carlo simulation of the Galaxy. The content of the galactic source table constitutes an excellent match to the 12 micrometer luminosity function in the simulation, as well as to the luminosity functions at V and K. Models are also given for predicting the density of asteroids to be observed and the diffuse background radiance of the zodiacal cloud. The model can be used to predict the character of the point-source sky expected for observations from future infrared space experiments.
June 13, 2013 U.S. East Coast Meteotsunami: Comparing a Numerical Model With Observations
NASA Astrophysics Data System (ADS)
Wang, D.; Becker, N. C.; Weinstein, S.; Whitmore, P.; Knight, W.; Kim, Y.; Bouchard, R. H.; Grissom, K.
2013-12-01
On June 13, 2013, a tsunami struck the U.S. East Coast and caused several reported injuries. This tsunami occurred after a derecho moved offshore from North America into the Atlantic Ocean. The presence of this storm, the lack of a seismic source, and the fact that tsunami arrival times at tide stations and deep ocean-bottom pressure sensors cannot be attributed to a 'point source' suggest this tsunami was caused by atmospheric forcing, i.e., a meteotsunami. In this study we attempt to reproduce the observed phenomenon using a numerical model with idealized atmospheric pressure forcing resembling the propagation of the observed barometric anomaly. The numerical model was able to capture some observed features of the tsunami at some tide stations, including the time lag between the pressure jump and the tsunami arrival. The model also captures the response at a deep ocean-bottom pressure gauge (DART 44402), including the primary wave and the reflected wave. There are two components of the oceanic response to the propagating pressure anomaly: the inverted barometer response and the dynamic response. We find the dynamic response over the deep ocean to be much smaller than the inverted barometer response. The time lag between the pressure jump and tsunami arrival at tide stations is due to the dynamic response: waves generated and/or reflected at the shelf break propagate shoreward and amplify due to the shoaling effect. The evolution of the derecho over the deep ocean (propagation direction and intensity) is not well defined, however, because of the lack of data, so the forcing used for this study is somewhat speculative. Better definition of the pressure anomaly through increased observation or high-resolution atmospheric models would improve meteotsunami forecast capabilities.
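The inverted-barometer component mentioned above has a simple closed form, sketched below with an illustrative pressure anomaly (not taken from the event): a 1 hPa rise in surface pressure depresses sea level by roughly 1 cm.

```python
RHO_SEAWATER = 1025.0   # kg/m^3
G = 9.81                # m/s^2

def inverted_barometer(delta_p_pa):
    """Static sea-level change (m) for a surface-pressure anomaly (Pa)."""
    return -delta_p_pa / (RHO_SEAWATER * G)

# A +3 hPa pressure jump (a hypothetical gust-front value) depresses sea level
# by about 3 cm under the pure inverted-barometer assumption.
print(inverted_barometer(300.0))   # ~ -0.0298 m
```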
KM3NeT/ARCA sensitivity to point-like neutrino sources
NASA Astrophysics Data System (ADS)
Trovato, A.;
2017-09-01
KM3NeT is a network of deep-sea neutrino telescopes in the Mediterranean Sea aiming at the discovery of cosmic neutrino sources (ARCA) and the determination of the neutrino mass hierarchy (ORCA). The geographical location of KM3NeT in the Northern hemisphere allows observation of most of the Galactic Plane, including the Galactic Centre. Thanks to its good angular resolution, the prime targets of KM3NeT/ARCA are point-like neutrino sources, in particular Galactic sources.
An efficient method to compute microlensed light curves for point sources
NASA Technical Reports Server (NTRS)
Witt, Hans J.
1993-01-01
We present a method to compute microlensed light curves for point sources. This method has the general advantage that all microimages contributing to the light curve are found. While a source moves along a straight line, all microimages are located either on the primary image track or on the secondary image tracks (loops). The primary image track extends from -infinity to +infinity and is made of many segments which are continuously connected. All the secondary image tracks (loops) begin and end on the lensing point masses. The method can be applied to any microlensing situation with point masses in the deflector plane, even for the overcritical case and for surface densities close to the critical value. Furthermore, we present general rules to evaluate the light curve for a straight track arbitrarily placed in the caustic network of a sample of many point masses.
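For orientation, the sketch below computes a microlensed light curve in the simplest case of a single point-mass lens and a source on a straight track; it is the standard textbook formula, not the image-track algorithm described above, and all parameter values are illustrative.

```python
import numpy as np

def point_lens_magnification(u):
    """Total magnification of a point source by a single point-mass lens,
    with u the source-lens separation in Einstein radii."""
    return (u**2 + 2.0) / (u * np.sqrt(u**2 + 4.0))

def light_curve(t, t0, tE, u0):
    """Magnification along a straight source track: impact parameter u0,
    Einstein-radius crossing time tE, time of closest approach t0."""
    u = np.sqrt(u0**2 + ((t - t0) / tE) ** 2)
    return point_lens_magnification(u)

t = np.linspace(-40.0, 40.0, 9)           # days, illustrative sampling
print(light_curve(t, t0=0.0, tE=20.0, u0=0.1))
```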
Xu, Peng; Tian, Yin; Lei, Xu; Hu, Xiao; Yao, Dezhong
2008-12-01
How to localize the neural electric activities within the brain effectively and precisely from scalp electroencephalogram (EEG) recordings is a critical issue for current studies in clinical neurology and cognitive neuroscience. In this paper, based on the charge source model and the iterative re-weighted strategy, a new maximum-neighbor-weight-based iterative sparse source imaging method is proposed, termed CMOSS (Charge source model based Maximum neighbOr weight Sparse Solution). Unlike the weight used in the focal underdetermined system solver (FOCUSS), where the weight for each point in the discrete solution space is updated independently in each iteration, the newly designed weight for each point in each iteration is determined by the source solution of the previous iteration at both the point and its neighbors. Using such a weight, the next iteration has a better chance of rectifying the local source-location bias present in the previous iteration's solution. Simulation studies with comparisons to FOCUSS and LORETA for various source configurations were conducted on a realistic 3-shell head model, and the results confirmed the validity of CMOSS for sparse EEG source localization. Finally, CMOSS was applied to localize sources elicited in a visual stimulation experiment, and the result was consistent with the source areas involved in visual processing reported in previous studies.
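A minimal sketch of the neighbor-weighted re-weighting idea, assuming a generic linear forward model b = Ax with a hypothetical lead-field matrix and neighbor structure; this is not the authors' CMOSS implementation, only a FOCUSS-style iteration with a maximum-neighbor weight swapped in.

```python
import numpy as np

def neighbor_weighted_sparse_solve(A, b, neighbors, n_iter=20, eps=1e-12):
    """FOCUSS-style iterative re-weighting in which each point's weight is
    taken from the previous solution at the point AND its neighbors.

    A         : (m, n) forward (lead-field) matrix
    b         : (m,) measurement vector
    neighbors : list of index arrays, neighbors[i] = spatial neighbors of point i
    """
    m, n = A.shape
    x = np.ones(n)                       # flat initial solution
    for _ in range(n_iter):
        # Maximum-neighbor weight: largest |x| over the point and its neighbors
        w = np.array([max(abs(x[i]), np.abs(x[neighbors[i]]).max(initial=0.0))
                      for i in range(n)]) + eps
        W = np.diag(w)
        # Weighted minimum-norm update: x = W * pinv(A W) * b
        x = W @ np.linalg.pinv(A @ W) @ b
    return x

# Tiny example: 5 unknowns on a line, 3 sensors, true source at index 2
rng = np.random.default_rng(1)
A = rng.normal(size=(3, 5))
x_true = np.array([0, 0, 1.0, 0, 0])
nbrs = [np.array([1]), np.array([0, 2]), np.array([1, 3]), np.array([2, 4]), np.array([3])]
print(np.round(neighbor_weighted_sparse_solve(A, A @ x_true, nbrs), 3))
```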
Chandra Deep X-ray Observation of a Typical Galactic Plane Region and Near-Infrared Identification
NASA Technical Reports Server (NTRS)
Ebisawa, K.; Tsujimoto, M.; Paizis, A.; Hamaguichi, K.; Bamba, A.; Cutri, R.; Kaneda, H.; Maeda, Y.; Sato, G.; Senda, A.
2004-01-01
Using the Chandra Advanced CCD Imaging Spectrometer imaging array (ACIS-I), we have carried out a deep hard X-ray observation of the Galactic plane region at (l,b) approx. (28.5 deg, 0.0 deg), where no discrete X-ray source has been reported previously. We have detected 274 new point X-ray sources (4 sigma confidence) as well as strong Galactic diffuse emission within two partially overlapping ACIS-I fields (approx. 250 sq arcmin in total). The point-source sensitivity was approx. 3 x 10(exp -15) ergs/s/sq cm in the hard X-ray band (2-10 keV) and approx. 2 x 10(exp -16) ergs/s/sq cm in the soft band (0.5-2 keV). The sum of all the detected point-source fluxes accounts for only approx. 10% of the total X-ray flux in the field of view. In order to explain the total X-ray flux by a superposition of fainter point sources, an extremely rapid increase of the source population is required below our sensitivity limit, which is hard to reconcile with any source distribution in the Galactic plane. Therefore, we conclude that the X-ray emission from the Galactic plane has a truly diffuse origin. Only 26 point sources were detected in both the soft and hard bands, indicating that there are two distinct classes of X-ray sources distinguished by their spectral hardness ratio. The surface number density of the hard sources is only slightly higher than that observed at high Galactic latitudes, strongly suggesting that the majority of the hard X-ray sources are active galaxies seen through the Galactic plane. Following the Chandra observation, we performed a near-infrared (NIR) survey with SOFI at ESO/NTT to identify these new X-ray sources. Since the Galactic plane is opaque in the NIR, we did not see the background extragalactic sources in the NIR. In fact, only 22% of the hard sources had NIR counterparts, which are most likely to be of Galactic origin. The composite X-ray energy spectrum of the hard X-ray sources having NIR counterparts exhibits a narrow approx. 6.7 keV iron emission line, which is a signature of Galactic quiescent cataclysmic variables (CVs).
Two-point coherence of wave packets in turbulent jets
NASA Astrophysics Data System (ADS)
Jaunet, V.; Jordan, P.; Cavalieri, A. V. G.
2017-02-01
An experiment has been performed in order to provide support for wave-packet jet-noise modeling efforts. Recent work has shown that the nonlinear effects responsible for the two-point coherence of wave packets must be correctly accounted for if accurate sound prediction is to be achieved for subsonic turbulent jets. We therefore consider the same Mach 0.4 turbulent jet studied by Cavalieri et al. [Cavalieri et al., J. Fluid Mech. 730, 559 (2013), 10.1017/jfm.2013.346], but this time using two independent but synchronized, time-resolved stereo particle-image velocimetry systems. Each system can be moved independently, allowing simultaneous measurement of velocity in two axially separated crossflow planes and thereby enabling eduction of the two-point coherence of wave packets. This coherence and the associated length scales and phase speeds are studied and compared with those of the energy-containing turbulent eddies. The study illustrates how the two-point behavior of wave packets is fundamentally different from the more usually studied bulk two-point behavior, suggesting that sound-source modeling efforts should be reconsidered in the framework of wave packets. The study furthermore identifies two families of two-point-coherence behavior, respectively upstream and downstream of the end of the potential core, regions where linear theory is, respectively, successful and unsuccessful in predicting the axial evolution of wave-packet fluctuation energy.
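As a reminder of what "two-point coherence" measures here, the sketch below estimates the magnitude-squared coherence between two synthetic, axially separated velocity signals using Welch-type spectral estimates; the signals and sampling rate are invented, not the PIV data.

```python
import numpy as np
from scipy.signal import coherence

# Two synthetic "axially separated" velocity signals sharing a convecting
# wave-packet component plus independent turbulence-like noise.
fs = 2000.0                                           # Hz, hypothetical sampling rate
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(2)
wave = np.sin(2 * np.pi * 120.0 * t)                  # common wave-packet tone
u1 = wave + 0.8 * rng.normal(size=t.size)
u2 = 0.7 * np.roll(wave, 40) + 0.8 * rng.normal(size=t.size)   # delayed, attenuated copy

# Magnitude-squared coherence between the two measurement planes
f, gamma2 = coherence(u1, u2, fs=fs, nperseg=1024)
print(f[np.argmax(gamma2)], gamma2.max())             # coherence peaks near 120 Hz
```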
Improving Photometric Calibration of Meteor Video Camera Systems.
Ehlert, Steven; Kingery, Aaron; Suggs, Robert
2017-09-01
We present the results of new calibration tests performed by the NASA Meteoroid Environment Office (MEO) designed to help quantify and minimize systematic uncertainties in meteor photometry from video camera observations. These systematic uncertainties can be categorized by two main sources: an imperfect understanding of the linearity correction for the MEO's Watec 902H2 Ultimate video cameras and uncertainties in meteor magnitudes arising from transformations between the Watec camera's Sony EX-View HAD bandpass and the bandpasses used to determine reference star magnitudes. To address the first point, we have measured the linearity response of the MEO's standard meteor video cameras using two independent laboratory tests on eight cameras. Our empirically determined linearity correction is critical for performing accurate photometry at low camera intensity levels. With regards to the second point, we have calculated synthetic magnitudes in the EX bandpass for reference stars. These synthetic magnitudes enable direct calculations of the meteor's photometric flux within the camera band pass without requiring any assumptions of its spectral energy distribution. Systematic uncertainties in the synthetic magnitudes of individual reference stars are estimated at ∼ 0.20 mag, and are limited by the available spectral information in the reference catalogs. These two improvements allow for zero-points accurate to ∼ 0.05 - 0.10 mag in both filtered and unfiltered camera observations with no evidence for lingering systematics. These improvements are essential to accurately measuring photometric masses of individual meteors and source mass indexes.
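The synthetic-magnitude step can be viewed as a bandpass-weighted flux ratio against a reference spectrum. The sketch below uses a made-up filter response and flat spectra purely for illustration; it is not the MEO pipeline.

```python
import numpy as np

def synthetic_magnitude(flux, response, flux_ref):
    """Synthetic magnitude of a source spectrum through a camera bandpass,
    relative to a reference spectrum defining magnitude zero. All arrays are
    sampled on the same uniform wavelength grid, so the grid spacing cancels."""
    num = np.sum(flux * response)
    den = np.sum(flux_ref * response)
    return -2.5 * np.log10(num / den)

# Toy example: a flat source spectrum half as bright as the flat reference
wl = np.linspace(400.0, 900.0, 501)                    # nm
resp = np.exp(-0.5 * ((wl - 650.0) / 120.0) ** 2)      # stand-in bandpass (not the EX curve)
print(synthetic_magnitude(np.full_like(wl, 0.5), resp, np.ones_like(wl)))   # ~0.75 mag
```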
Improving Photometric Calibration of Meteor Video Camera Systems
NASA Technical Reports Server (NTRS)
Ehlert, Steven; Kingery, Aaron; Suggs, Robert
2017-01-01
We present the results of new calibration tests performed by the NASA Meteoroid Environment Office (MEO) designed to help quantify and minimize systematic uncertainties in meteor photometry from video camera observations. These systematic uncertainties can be categorized by two main sources: an imperfect understanding of the linearity correction for the MEO's Watec 902H2 Ultimate video cameras and uncertainties in meteor magnitudes arising from transformations between the Watec camera's Sony EX-View HAD bandpass and the bandpasses used to determine reference star magnitudes. To address the first point, we have measured the linearity response of the MEO's standard meteor video cameras using two independent laboratory tests on eight cameras. Our empirically determined linearity correction is critical for performing accurate photometry at low camera intensity levels. With regards to the second point, we have calculated synthetic magnitudes in the EX bandpass for reference stars. These synthetic magnitudes enable direct calculations of the meteor's photometric flux within the camera bandpass without requiring any assumptions of its spectral energy distribution. Systematic uncertainties in the synthetic magnitudes of individual reference stars are estimated at approx. 0.20 mag, and are limited by the available spectral information in the reference catalogs. These two improvements allow for zero-points accurate to 0.05 - 0.10 mag in both filtered and unfiltered camera observations with no evidence for lingering systematics. These improvements are essential to accurately measuring photometric masses of individual meteors and source mass indexes.
Generation Mechanisms of UV and X-ray Emissions During the SL9 Impact
NASA Technical Reports Server (NTRS)
Waite, J. Hunter, Jr.
1997-01-01
The purpose of this grant was to study the ultraviolet and X-ray emissions associated with the impact of comet Shoemaker-Levy 9 with Jupiter. The University of Michigan task was primarily focused on theoretical calculations. The NAGW-4788 subtask was to be largely devoted to determining the constraints placed by the X-ray observations on the physical mechanisms responsible for the generation of the X-rays. The author summarizes below the ROSAT observations and suggests a physical mechanism that can plausibly account for the observed emissions. It is hoped that the full set of activities can be completed at a later date. Further analysis of the ROSAT data acquired at the time of the impact was necessary to define the observational constraints on the magnetospheric-ionospheric processes involved in the excitation of the X-ray emissions associated with the fragment impacts. This analysis centered on improvements in the pointing accuracy and in the timing information. Additional pointing information was made possible by the identification of the optical counterparts to the X-ray sources in the ROSAT field of view. Owing to the large number of worldwide observers of the impacts, a serendipitous visible plate image from an observer in Venezuela provided a very accurate location of the position of the X-ray source, virtually eliminating pointing errors in the data. Once refined, the pointing indicated that the two observed X-ray brightenings that were highly correlated in time with the K and P2 events were brightenings of the X-ray aurora (as identified in images prior to the impact). Appendix A, 'ROSAT observations of X-ray emissions from Jupiter during the impact of comet Shoemaker-Levy 9', is also included.
NASA Astrophysics Data System (ADS)
Gilles, Antonin; Gioia, Patrick; Cozot, Rémi; Morin, Luce
2015-09-01
The hybrid point-source/wave-field method is a newly proposed approach for Computer-Generated Hologram (CGH) calculation, based on the slicing of the scene into several depth layers parallel to the hologram plane. The complex wave scattered by each depth layer is then computed using either a wave-field or a point-source approach according to a threshold criterion on the number of points within the layer. Finally, the complex waves scattered by all the depth layers are summed up in order to obtain the final CGH. Although outperforming both point-source and wave-field methods without producing any visible artifact, this approach has not yet been used for animated holograms, and the possible exploitation of temporal redundancies has not been studied. In this paper, we propose a fast computation of video holograms by taking into account those redundancies. Our algorithm consists of three steps. First, intensity and depth data of the current 3D video frame are extracted and compared with those of the previous frame in order to remove temporally redundant data. Then the CGH pattern for this compressed frame is generated using the hybrid point-source/wave-field approach. The resulting CGH pattern is finally transmitted to the video output and stored in the previous frame buffer. Experimental results reveal that our proposed method is able to produce video holograms at interactive rates without producing any visible artifact.
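A schematic of the per-layer decision described above, under simplifying assumptions (square hologram, monochromatic light, a fixed point-count threshold); the point-source term sums spherical waves and the wave-field term uses angular-spectrum propagation. Parameter values and the threshold are hypothetical, not those of the cited method.

```python
import numpy as np

WAVELENGTH = 532e-9                 # m, illustrative
K = 2 * np.pi / WAVELENGTH
PITCH = 8e-6                        # hologram pixel pitch, m
N = 256                             # hologram is N x N pixels
THRESH = 500                        # switch point-source -> wave-field above this

x = (np.arange(N) - N / 2) * PITCH
X, Y = np.meshgrid(x, x)

def point_source_wave(points, z):
    """Sum of spherical waves from a sparse layer at depth z; points: (n, 3) of (x, y, amplitude)."""
    field = np.zeros((N, N), dtype=complex)
    for px, py, amp in points:
        r = np.sqrt((X - px) ** 2 + (Y - py) ** 2 + z ** 2)
        field += amp * np.exp(1j * K * r) / r
    return field

def wave_field_wave(layer_field, z):
    """Angular-spectrum propagation of a dense layer's complex field over distance z."""
    fx = np.fft.fftfreq(N, d=PITCH)
    FX, FY = np.meshgrid(fx, fx)
    kz = 2 * np.pi * np.sqrt(np.maximum(0.0, 1 / WAVELENGTH**2 - FX**2 - FY**2))
    return np.fft.ifft2(np.fft.fft2(layer_field) * np.exp(1j * kz * z))

def hybrid_cgh(layers):
    """layers: list of (points, layer_field, z); the method is chosen per layer by point count."""
    hologram = np.zeros((N, N), dtype=complex)
    for points, layer_field, z in layers:
        if len(points) < THRESH:
            hologram += point_source_wave(points, z)
        else:
            hologram += wave_field_wave(layer_field, z)
    return hologram

# Tiny usage: one sparse layer with two points at 0.2 m depth
sparse_pts = np.array([[0.0, 0.0, 1.0], [1e-3, 0.0, 0.5]])
H = hybrid_cgh([(sparse_pts, np.zeros((N, N), complex), 0.2)])
```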
Application of whole-body personal TL dosemeters in mixed field beta-gamma radiation.
Ciupek, K; Aksamit, D; Wołoszczuk, K
2014-11-01
The application of whole-body personal TL dosemeters based on high-sensitivity LiF:Mg,Cu,P (MCP-N) detectors in mixed beta-gamma radiation fields has been characterised. The measurements were carried out with (90)Sr/(90)Y, (85)Kr and (137)Cs point sources to calculate the energy response and linearity of the TLD response over a dose range of 0.1-30 mSv. From the results, calibration curves were obtained, enabling readout of the individual dose equivalents Hp(10) from gamma radiation and Hp(0.07) from beta radiation in mixed beta-gamma fields. Limitations of the methodology and its applications are presented and discussed.
Systematic uncertainties in long-baseline neutrino-oscillation experiments
NASA Astrophysics Data System (ADS)
Ankowski, Artur M.; Mariani, Camillo
2017-05-01
Future neutrino-oscillation experiments are expected to bring definite answers to the questions of the neutrino-mass hierarchy and the violation of charge-parity symmetry in the lepton sector. To realize this ambitious program, it is necessary to ensure a significant reduction of uncertainties, particularly those related to neutrino-energy reconstruction. In this paper, we discuss different sources of systematic uncertainties, paying special attention to those arising from nuclear effects and detector response. By analyzing nuclear effects we show the importance of developing accurate theoretical models, capable of providing a quantitative description of neutrino cross sections, together with the relevance of their implementation in Monte Carlo generators and extensive testing against lepton-scattering data. We also point out the fundamental role of efforts aiming to determine detector responses in test-beam exposures.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1979-12-01
The document contains a discussion of the technical rationale for effluent limitations guidelines for the Shipbuilding and Repair Point Source Category, and the control and treatment technologies which form the basis for these guidelines.
VizieR Online Data Catalog: First Fermi-LAT Inner Galaxy point source catalog (Ajello+, 2016)
NASA Astrophysics Data System (ADS)
Ajello, M.; Albert, A.; Atwood, W. B.; Barbiellini, G.; Bastieri, D.; Bechtol, K.; Bellazzini, R.; Bissaldi, E.; Blandford, R. D.; Bloom, E. D.; Bonino, R.; Bottacini, E.; Brandt, T. J.; Bregeon, J.; Bruel, P.; Buehler, R.; Buson, S.; Caliandro, G. A.; Cameron, R. A.; Caputo, R.; Caragiulo, M.; Caraveo, P. A.; Cecchi, C.; Chekhtman, A.; Chiang, J.; Chiaro, G.; Ciprini, S.; Cohen-Tanugi, J.; Cominsky, L. R.; Conrad, J.; Cutini, S.; D'Ammando, F.; de Angelis, A.; de Palma, F.; Desiante, R.; di Venere, L.; Drell, P. S.; Favuzzi, C.; Ferrara, E. C.; Fusco, P.; Gargano, F.; Gasparrini, D.; Giglietto, N.; Giommi, P.; Giordano, F.; Giroletti, M.; Glanzman, T.; Godfrey, G.; Gomez-Vargas, G. A.; Grenier, I. A.; Guiriec, S.; Gustafsson, M.; Harding, A. K.; Hewitt, J. W.; Hill, A. B.; Horan, D.; Jogler, T.; Johannesson, G.; Johnson, A. S.; Kamae, T.; Karwin, C.; Knodlseder, J.; Kuss, M.; Larsson, S.; Latronico, L.; Li, J.; Li, L.; Longo, F.; Loparco, F.; Lovellette, M. N.; Lubrano, P.; Magill, J.; Maldera, S.; Malyshev, D.; Manfreda, A.; Mayer, M.; Mazziotta, M. N.; Michelson, P. F.; Mitthumsiri, W.; Mizuno, T.; Moiseev, A. A.; Monzani, M. E.; Morselli, A.; Moskalenko, I. V.; Murgia, S.; Nuss, E.; Ohno, M.; Ohsugi, T.; Omodei, N.; Orlando, E.; Ormes, J. F.; Paneque, D.; Pesce-Rollins, M.; Piron, F.; Pivato, G.; Porter, T. A.; Raino, S.; Rando, R.; Razzano, M.; Reimer, A.; Reimer, O.; Ritz, S.; Sanchez-Conde, M.; Parkinson, P. M. S.; Sgro, C.; Siskind, E. J.; Smith, D. A.; Spada, F.; Spandre, G.; Spinelli, P.; Suson, D. J.; Tajima, H.; Takahashi, H.; Thayer, J. B.; Torres, D. F.; Tosti, G.; Troja, E.; Uchiyama, Y.; Vianello, G.; Winer, B. L.; Wood, K. S.; Zaharijas, G.; Zimmer, S.
2018-01-01
The Fermi Large Area Telescope (LAT) has provided the most detailed view to date of the emission toward the Galactic center (GC) in high-energy γ-rays. This paper describes the analysis of data taken during the first 62 months of the mission in the energy range 1-100GeV from a 15°x15° region about the direction of the GC. Specialized interstellar emission models (IEMs) are constructed to enable the separation of the γ-ray emissions produced by cosmic ray particles interacting with the interstellar gas and radiation fields in the Milky Way into that from the inner ~1kpc surrounding the GC, and that from the rest of the Galaxy. A catalog of point sources for the 15°x15° region is self-consistently constructed using these IEMs: the First Fermi-LAT Inner Galaxy Point Source Catalog (1FIG). The spatial locations, fluxes, and spectral properties of the 1FIG sources are presented, and compared with γ-ray point sources over the same region taken from existing catalogs. After subtracting the interstellar emission and point-source contributions a residual is found. If templates that peak toward the GC are used to model the positive residual the agreement with the data improves, but none of the additional templates tried account for all of its spatial structure. The spectrum of the positive residual modeled with these templates has a strong dependence on the choice of IEM. (2 data files).
The feasibility of effluent trading in the oil and gas industry
DOE Office of Scientific and Technical Information (OSTI.GOV)
Veil, J.A.
1997-09-01
In January 1996, the U.S. Environmental Protection Agency (EPA) released a policy statement endorsing wastewater effluent trading in watersheds, hoping to promote additional interest in the subject. The policy describes five types of effluent trades: point source/point source, point source/nonpoint source, pretreatment, intraplant, and nonpoint source/nonpoint source. This paper evaluates the feasibility of effluent trading for facilities in the oil and gas industry. The evaluation leads to the conclusion that the potential for effluent trading is very low in the exploration and production and distribution and marketing sectors; trading potential is moderate for the refining sector, except for intraplant trades, for which the potential is high. Good potential also exists for other types of water-related trades that do not directly involve effluents (e.g., wetlands mitigation banking). The potential for effluent trading in the energy industries and in other sectors would be enhanced if Congress amended the Clean Water Act (CWA) to formally authorize such trading.
The potential for effluent trading in the energy industries.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Veil, J. A.; Environmental Assessment
1998-01-01
In January 1996, the US Environmental Protection Agency (EPA) released a policy statement endorsing wastewater effluent trading in watersheds, hoping to promote additional interest in the subject. The policy describes five types of effluent trades: point source/point source, point source/nonpoint source, pretreatment, intraplant and nonpoint source/nonpoint source. This paper evaluates the feasibility of implementing these types of effluent trading for facilities in the oil and gas, electric power and coal industries. This paper finds that the potential for effluent trading in these industries is limited because trades would generally need to involve toxic pollutants, which can only be traded under a narrow range of circumstances. However, good potential exists for other types of water-related trades that do not directly involve effluents (e.g. wetlands mitigation banking and voluntary environmental projects). The potential for effluent trading in the energy industries and in other sectors would be enhanced if Congress amended the Clean Water Act (CWA) to formally authorize such trading.
Determining the Intensity of a Point-Like Source Observed on the Background of an Extended Source
NASA Astrophysics Data System (ADS)
Kornienko, Y. V.; Skuratovskiy, S. I.
2014-12-01
The problem of determining the time dependence of the intensity of a point-like source in the case of atmospheric blur is formulated and solved using the Bayesian statistical approach. The point-like source is assumed to be observed against the background of an extended source whose brightness is constant in time but unknown. The system of equations for the optimal statistical estimation of the sequence of intensity values at the observation moments is obtained. The problem is particularly relevant for studying gravitational mirages, which appear when a quasar is observed through the gravitational field of a distant galaxy.
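A simplified, non-Bayesian analogue of this estimation problem can be written as linear least squares for the source intensity and the constant background, given a known blur kernel; the sketch below uses simulated data and a Gaussian PSF purely for illustration.

```python
import numpy as np

def estimate_point_source(image, psf):
    """Least-squares estimate of (point-source intensity s, constant background b)
    for an observed frame modeled as image = s * psf + b + noise."""
    A = np.column_stack([psf.ravel(), np.ones(psf.size)])   # design matrix [psf, 1]
    coeffs, *_ = np.linalg.lstsq(A, image.ravel(), rcond=None)
    return coeffs                                           # [s_hat, b_hat]

# Simulate a blurred point source on a constant-brightness extended background
n = 65
yy, xx = np.mgrid[0:n, 0:n]
psf = np.exp(-((xx - 32) ** 2 + (yy - 32) ** 2) / (2 * 4.0 ** 2))
psf /= psf.sum()                                            # normalized blur kernel
rng = np.random.default_rng(3)
frame = 250.0 * psf + 5.0 + 0.2 * rng.normal(size=psf.shape)
print(np.round(estimate_point_source(frame, psf), 2))       # ~ [250, 5]
```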
Monitor-based evaluation of pollutant load from urban stormwater runoff in Beijing.
Liu, Y; Che, W; Li, J
2005-01-01
As a major pollutant source for urban receiving waters, non-point source pollution from urban runoff needs to be well studied and effectively controlled. Based on monitoring data from urban runoff pollutant sources, this article describes a systematic estimation of total pollutant loads from the urban areas of Beijing. A numerical model was developed to quantify the main pollutant loads of urban runoff in Beijing. The method includes a sub-procedure in which the flush process influences both the quantity and quality of stormwater runoff. A statistics-based method was applied to compute the annual pollutant load carried by the runoff. The proportions of pollutants from point and non-point sources were compared. This provides a scientific basis for proper assessment of urban stormwater pollution inputs to receiving waters, improvement of infrastructure performance, implementation of urban stormwater management, and utilization of stormwater.
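The core of such a statistics-based load estimate is a sum over rainfall events of runoff volume times event mean concentration; the sketch below uses invented event data, not the Beijing monitoring results.

```python
# Minimal sketch of a statistics-based annual runoff load estimate:
# load = sum over rainfall events of (runoff volume x event mean concentration).
# The events and concentrations below are hypothetical, not Beijing data.
events = [
    # (runoff volume, m^3), (event mean concentration of COD, mg/L)
    (120_000, 180.0),
    (85_000, 240.0),
    (40_000, 95.0),
]

# mg/L * m^3 = g, so divide by 1000 to express the load in kg
annual_load_kg = sum(vol * emc / 1000.0 for vol, emc in events)
print(f"Annual COD load: {annual_load_kg:,.0f} kg")
```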
NASA Astrophysics Data System (ADS)
Golubev, S.; Skalyga, V.; Izotov, I.; Sidorov, A.
2017-02-01
The possibility of creating a compact, powerful, point-like neutron source is discussed. The neutron yield of a source based on the deuterium-deuterium (D-D) reaction is estimated at the level of 10^11 s^-1 (10^13 s^-1 for the deuterium-tritium reaction). The fusion takes place due to bombardment of a deuterium- (or tritium-) loaded target by a high-current focused deuterium ion beam with an energy of 100 keV. The ion beam is formed by means of a high-current quasi-gasdynamic ion source of a new generation, based on an electron cyclotron resonance (ECR) discharge in an open magnetic trap sustained by powerful microwave radiation. The prospects of the proposed generator for neutron tomography are discussed. The suggested method is compared to point-like neutron sources based on a spark produced by powerful femtosecond laser pulses.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Adrián-Martínez, S.; Ardid, M.; Albert, A.
2016-05-20
We present the results of searches for point-like sources of neutrinos based on the first combined analysis of data from both the ANTARES and IceCube neutrino telescopes. The combination of both detectors, which differ in size and location, forms a window in the southern sky where the sensitivity to point sources improves by up to a factor of 2 compared with individual analyses. Using data recorded by ANTARES from 2007 to 2012, and by IceCube from 2008 to 2011, we search for sources of neutrino emission both across the southern sky and from a preselected list of candidate objects. No significant excess over background has been found in these searches, and flux upper limits for the candidate sources are presented for E^-2.5 and E^-2 power-law spectra with different energy cut-offs.
Risk and responsibility in a manufactured world.
Pellizzoni, Luigi
2010-09-01
Recent criticisms of traditional understandings of risk, responsibility and the division of labour between science and politics build on the idea of the co-produced character of the natural and social orders, making a case for less ambitious and more inclusive policy processes, where questions of values and goals may be addressed together with questions of facts and means, causal liabilities and principled responsibilities. Within the neo-liberal political economy, however, the contingency of the world is depicted as a source of unprecedented opportunities for human craftsmanship, rather than of possibly unmanageable surprises. Gene technologies offer a vantage point for reflecting on the consequences of the drift from discovery to invention as a master frame in the appraisal of human intermingling with the world. Biotech patenting regulations carve out a sovereign agency which, by crafting nature, also crafts the distinction between the manufactured and non-manufactured world. Difficulties in ascribing responsibility stem as a consequence. It is likely that politics and economy can be democratized and responsibilities rearranged not by 'democratizing' knowledge production, but rather the reverse.
Shock response of poly[methyl methacrylate] (PMMA) measured with embedded electromagnetic gauges
NASA Astrophysics Data System (ADS)
Lacina, David; Neel, Christopher; Dattelbaum, Dana
2018-05-01
The shock response of poly[methyl methacrylate] (PMMA) acquired from two providers, Spartech and Rohm & Haas, has been measured to investigate the shock response variations related to material pedigree. These measurements have also been used to examine the effects of viscoelasticity on Spartech PMMA. Measurements of the Hugoniot curves, release wave speeds, and index of refraction have been acquired up to previously unexplored stresses, ˜10.7 GPa, for Spartech PMMA. In-situ, time-resolved particle velocity wave profiles, as a function of time and depth, were obtained using twelve separate electromagnetic gauge elements embedded at different depths in the PMMA. A comparison of the new data to the shock response data for Rohm and Haas PMMA, used as a "standard" material in shock compression studies, shows that there are no significant differences in shock response for the two materials. From the index of refraction measurements, the apparent particle velocity correction for a PMMA window exhibits an interesting oscillation, increasing at up = 0.3 km/s after decreasing up to that point. The results are generalized into guidelines for sourcing PMMA for use in shock studies.
Controllability of semi-infinite rod heating by a point source
NASA Astrophysics Data System (ADS)
Khurshudyan, A.
2018-04-01
The possibility of controlling the heating of a semi-infinite thin rod by a point source concentrated at an inner point of the rod is studied. Quadratic and piecewise constant solutions of the problem are derived, and the possibilities of solving the corresponding optimal control problems are indicated. Determination of the parameters of the piecewise constant solution is reduced to a nonlinear programming problem. Numerical examples are considered.
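For intuition, a truncated version of the heating problem can be integrated numerically with an explicit finite-difference scheme and a source term concentrated at one interior node; the sketch below is illustrative only (hypothetical diffusivity, source strength, and boundary treatment), not the paper's analytical control solutions.

```python
import numpy as np

# Explicit finite-difference sketch of heating a (truncated) semi-infinite rod
# by a point source at an interior node; all parameters are illustrative only.
L, nx = 1.0, 201                    # truncated rod length [m], grid points
alpha = 1e-4                        # thermal diffusivity [m^2/s]
dx = L / (nx - 1)
dt = 0.4 * dx**2 / alpha            # stable for the explicit scheme (<= 0.5 dx^2/alpha)
src_node = int(0.2 / dx)            # point source located at x = 0.2 m
q = 50.0                            # source strength [K/s] deposited at one node

u = np.zeros(nx)                    # initial temperature (relative to ambient)
for _ in range(2000):
    lap = np.zeros(nx)
    lap[1:-1] = (u[2:] - 2 * u[1:-1] + u[:-2]) / dx**2
    u += dt * alpha * lap
    u[src_node] += dt * q           # point-source heating term
    u[0] = 0.0                      # fixed temperature at the rod end
    u[-1] = u[-2]                   # crude zero-gradient stand-in for the "infinite" side

print(round(u[src_node], 2), round(u[src_node + 20], 2))
```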
Kim, Marlene Thai; Huang, Ruili; Sedykh, Alexander; Wang, Wenyi; Xia, Menghang; Zhu, Hao
2016-05-01
Hepatotoxicity accounts for a substantial number of drugs being withdrawn from the market. Using traditional animal models to detect hepatotoxicity is expensive and time-consuming. Alternative in vitro methods, in particular cell-based high-throughput screening (HTS) studies, have provided the research community with a large amount of data from toxicity assays. Among the various assays used to screen potential toxicants is the antioxidant response element beta lactamase reporter gene assay (ARE-bla), which identifies chemicals that have the potential to induce oxidative stress and was used to test > 10,000 compounds from the Tox21 program. The ARE-bla computational model and HTS data from a big data source (PubChem) were used to profile environmental and pharmaceutical compounds with hepatotoxicity data. Quantitative structure-activity relationship (QSAR) models were developed based on ARE-bla data. The models predicted the potential oxidative stress response for known liver toxicants when no ARE-bla data were available. Liver toxicants were used as probe compounds to search PubChem Bioassay and generate a response profile, which contained thousands of bioassays (> 10 million data points). By ranking the in vitro-in vivo correlations (IVIVCs), the most relevant bioassay(s) related to hepatotoxicity were identified. The liver toxicants profile contained the ARE-bla and relevant PubChem assays. Potential toxicophores for well-known toxicants were created by identifying chemical features that existed only in compounds with high IVIVCs. Profiling chemical IVIVCs created an opportunity to fully explore the source-to-outcome continuum of modern experimental toxicology using cheminformatics approaches and big data sources. Kim MT, Huang R, Sedykh A, Wang W, Xia M, Zhu H. 2016. Mechanism profiling of hepatotoxicity caused by oxidative stress using antioxidant response element reporter gene assay models and big data. Environ Health Perspect 124:634-641; http://dx.doi.org/10.1289/ehp.1509763.
Kim, Marlene Thai; Huang, Ruili; Sedykh, Alexander; Wang, Wenyi; Xia, Menghang; Zhu, Hao
2015-01-01
Background: Hepatotoxicity accounts for a substantial number of drugs being withdrawn from the market. Using traditional animal models to detect hepatotoxicity is expensive and time-consuming. Alternative in vitro methods, in particular cell-based high-throughput screening (HTS) studies, have provided the research community with a large amount of data from toxicity assays. Among the various assays used to screen potential toxicants is the antioxidant response element beta lactamase reporter gene assay (ARE-bla), which identifies chemicals that have the potential to induce oxidative stress and was used to test > 10,000 compounds from the Tox21 program. Objective: The ARE-bla computational model and HTS data from a big data source (PubChem) were used to profile environmental and pharmaceutical compounds with hepatotoxicity data. Methods: Quantitative structure–activity relationship (QSAR) models were developed based on ARE-bla data. The models predicted the potential oxidative stress response for known liver toxicants when no ARE-bla data were available. Liver toxicants were used as probe compounds to search PubChem Bioassay and generate a response profile, which contained thousands of bioassays (> 10 million data points). By ranking the in vitro–in vivo correlations (IVIVCs), the most relevant bioassay(s) related to hepatotoxicity were identified. Results: The liver toxicants profile contained the ARE-bla and relevant PubChem assays. Potential toxicophores for well-known toxicants were created by identifying chemical features that existed only in compounds with high IVIVCs. Conclusion: Profiling chemical IVIVCs created an opportunity to fully explore the source-to-outcome continuum of modern experimental toxicology using cheminformatics approaches and big data sources. Citation: Kim MT, Huang R, Sedykh A, Wang W, Xia M, Zhu H. 2016. Mechanism profiling of hepatotoxicity caused by oxidative stress using antioxidant response element reporter gene assay models and big data. Environ Health Perspect 124:634–641; http://dx.doi.org/10.1289/ehp.1509763 PMID:26383846
GOES-R active vibration damping controller design, implementation, and on-orbit performance
NASA Astrophysics Data System (ADS)
Clapp, Brian R.; Weigl, Harald J.; Goodzeit, Neil E.; Carter, Delano R.; Rood, Timothy J.
2018-01-01
GOES-R series spacecraft feature a number of flexible appendages with modal frequencies below 3.0 Hz which, if excited by spacecraft disturbances, can be sources of undesirable jitter perturbing spacecraft pointing. To meet GOES-R pointing stability requirements, the spacecraft flight software implements an Active Vibration Damping (AVD) rate control law which acts in parallel with the nadir point attitude control law. The AVD controller commands spacecraft reaction wheel actuators based upon Inertial Measurement Unit (IMU) inputs to provide additional damping for spacecraft structural modes below 3.0 Hz which vary with solar wing angle. A GOES-R spacecraft dynamics and attitude control system identified model is constructed from pseudo-random reaction wheel torque commands and IMU angular rate response measurements occurring over a single orbit during spacecraft post-deployment activities. The identified Fourier model is computed on the ground, uplinked to the spacecraft flight computer, and the AVD controller filter coefficients are periodically computed on-board from the Fourier model. Consequently, the AVD controller formulation is based not upon pre-launch simulation model estimates but upon on-orbit nadir point attitude control and time-varying spacecraft dynamics. GOES-R high-fidelity time domain simulation results herein demonstrate the accuracy of the AVD identified Fourier model relative to the pre-launch spacecraft dynamics and control truth model. The AVD controller on-board the GOES-16 spacecraft achieves more than a ten-fold increase in structural mode damping for the fundamental solar wing mode while maintaining controller stability margins and ensuring that the nadir point attitude control bandwidth does not fall below 0.02 Hz. On-orbit GOES-16 spacecraft appendage modal frequencies and damping ratios are quantified based upon the AVD system identification, and the increase in modal damping provided by the AVD controller for each structural mode is presented. The GOES-16 spacecraft AVD controller frequency domain stability margins and nadir point attitude control bandwidth are presented along with on-orbit time domain disturbance response performance.
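The identification step described above can be illustrated with a standard H1 frequency-response estimate, H(f) = P_xy(f)/P_xx(f), formed from a pseudo-random command and the measured rate response; the second-order mode simulated below is a stand-in, not GOES-R dynamics or the flight identification algorithm.

```python
import numpy as np
from scipy.signal import csd, welch

# Estimate a frequency-response (Fourier) model from a pseudo-random torque
# command u and the measured rate response y, using H(f) = P_xy(f) / P_xx(f).
# The 0.5 Hz, 1%-damped mode below is hypothetical.
fs = 50.0
t = np.arange(0, 400, 1 / fs)
rng = np.random.default_rng(4)
u = rng.normal(size=t.size)                      # pseudo-random wheel torque command

# Simulate a lightly damped structural mode driven by u (semi-implicit Euler)
wn, zeta = 2 * np.pi * 0.5, 0.01
y = np.zeros_like(u)
v = 0.0
for i in range(1, t.size):
    a = u[i - 1] - 2 * zeta * wn * v - wn**2 * y[i - 1]
    v += a / fs
    y[i] = y[i - 1] + v / fs

f, Pxy = csd(u, y, fs=fs, nperseg=4096)
_, Pxx = welch(u, fs=fs, nperseg=4096)
H = Pxy / Pxx                                    # identified frequency response
print(f[np.argmax(np.abs(H))])                   # resonance shows up near 0.5 Hz
```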
GOES-R Active Vibration Damping Controller Design, Implementation, and On-Orbit Performance
NASA Technical Reports Server (NTRS)
Clapp, Brian R.; Weigl, Harald J.; Goodzeit, Neil E.; Carter, Delano R.; Rood, Timothy J.
2017-01-01
GOES-R series spacecraft feature a number of flexible appendages with modal frequencies below 3.0 Hz which, if excited by spacecraft disturbances, can be sources of undesirable jitter perturbing spacecraft pointing. In order to meet GOES-R pointing stability requirements, the spacecraft flight software implements an Active Vibration Damping (AVD) rate control law which acts in parallel with the nadir point attitude control law. The AVD controller commands spacecraft reaction wheel actuators based upon Inertial Measurement Unit (IMU) inputs to provide additional damping for spacecraft structural modes below 3.0 Hz which vary with solar wing angle. A GOES-R spacecraft dynamics and attitude control system identified model is constructed from pseudo-random reaction wheel torque commands and IMU angular rate response measurements occurring over a single orbit during spacecraft post-deployment activities. The identified Fourier model is computed on the ground, uplinked to the spacecraft flight computer, and the AVD controller filter coefficients are periodically computed on-board from the Fourier model. Consequently, the AVD controller formulation is based not upon pre-launch simulation model estimates but upon on-orbit nadir point attitude control and time-varying spacecraft dynamics. GOES-R high-fidelity time domain simulation results herein demonstrate the accuracy of the AVD identified Fourier model relative to the pre-launch spacecraft dynamics and control truth model. The AVD controller on-board the GOES-16 spacecraft achieves more than a ten-fold increase in structural mode damping of the fundamental solar wing mode while maintaining controller stability margins and ensuring that the nadir point attitude control bandwidth does not fall below 0.02 Hz. On-orbit GOES-16 spacecraft appendage modal frequencies and damping ratios are quantified based upon the AVD system identification, and the increase in modal damping provided by the AVD controller for each structural mode is presented. The GOES-16 spacecraft AVD controller frequency domain stability margins and nadir point attitude control bandwidth are presented along with on-orbit time domain disturbance response performance.
Tong, Yindong; Bu, Xiaoge; Chen, Junyue; Zhou, Feng; Chen, Long; Liu, Maodian; Tan, Xin; Yu, Tao; Zhang, Wei; Mi, Zhaorong; Ma, Lekuan; Wang, Xuejun; Ni, Jing
2017-01-05
Based on a time-series dataset and the mass balance method, the contributions of various sources to the nutrient discharges from the Yangtze River to the East China Sea are identified. The results indicate that the nutrient concentrations vary considerably among different sections of the Yangtze River. Non-point sources are an important source of nutrients to the Yangtze River, contributing about 36% and 63% of the nitrogen and phosphorus discharged into the East China Sea, respectively. Nutrient inputs from non-point sources vary among the sections of the Yangtze River, and the contributions of non-point sources increase from upstream to downstream. Considering the rice growing patterns in the Yangtze River Basin, the synchrony of rice tillering and the wet seasons might be an important cause of the high nutrient discharge from the non-point sources. Based on our calculations, a reduction of 0.99 Tg per year in total nitrogen discharges from the Yangtze River would be needed to limit the occurrences of harmful algal blooms in the East China Sea to 15 times per year. The extensive construction of sewage treatment plants in urban areas may have only a limited effect on reducing the occurrences of harmful algal blooms in the future.
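The mass-balance attribution used above boils down to subtracting known point-source loads from the river's total export and expressing the remainder as the non-point share; the sketch below uses hypothetical loads, not the study's estimates.

```python
# Mass-balance sketch of apportioning a river's nutrient export between point
# and non-point sources; the loads below are hypothetical, not Yangtze values.
def apportion(total_export, point_source_load):
    non_point = total_export - point_source_load
    return {
        "point_%": 100.0 * point_source_load / total_export,
        "non_point_%": 100.0 * non_point / total_export,
    }

print(apportion(total_export=2.5, point_source_load=1.6))   # loads in Tg N / yr
```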
A model of the 8-25 micron point source infrared sky
NASA Technical Reports Server (NTRS)
Wainscoat, Richard J.; Cohen, Martin; Volk, Kevin; Walker, Helen J.; Schwartz, Deborah E.
1992-01-01
We present a detailed model for the IR point-source sky that comprises geometrically and physically realistic representations of the Galactic disk, bulge, stellar halo, spiral arms (including the 'local arm'), molecular ring, and the extragalactic sky. We represent each of the distinct Galactic components by up to 87 types of Galactic source, each fully characterized by scale heights, space densities, and absolute magnitudes at BVJHK, 12, and 25 microns. The model is guided by a parallel Monte Carlo simulation of the Galaxy at 12 microns. The content of our Galactic source table constitutes a good match to the 12 micron luminosity function in the simulation, as well as to the luminosity functions at V and K. We are able to produce differential and cumulative IR source counts for any bandpass lying fully within the IRAS Low-Resolution Spectrometer's range (7.7-22.7 microns), as well as for the IRAS 12 and 25 micron bands. These source counts match the IRAS observations well. The model can be used to predict the character of the point source sky expected for observations from IR space experiments.
Double point source W-phase inversion: Real-time implementation and automated model selection
Nealy, Jennifer; Hayes, Gavin
2015-01-01
Rapid and accurate characterization of an earthquake source is an extremely important and ever evolving field of research. Within this field, source inversion of the W-phase has recently been shown to be an effective technique, which can be efficiently implemented in real-time. An extension to the W-phase source inversion is presented in which two point sources are derived to better characterize complex earthquakes. A single source inversion followed by a double point source inversion with centroid locations fixed at the single source solution location can be efficiently run as part of earthquake monitoring network operational procedures. In order to determine the most appropriate solution, i.e., whether an earthquake is most appropriately described by a single source or a double source, an Akaike information criterion (AIC) test is performed. Analyses of all earthquakes of magnitude 7.5 and greater occurring since January 2000 were performed with extended analyses of the September 29, 2009 magnitude 8.1 Samoa earthquake and the April 19, 2014 magnitude 7.5 Papua New Guinea earthquake. The AIC test is shown to be able to accurately select the most appropriate model and the selected W-phase inversion is shown to yield reliable solutions that match published analyses of the same events.
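The AIC comparison can be sketched directly from the waveform residuals of the two inversions: AIC = 2k + n ln(RSS/n), and the double-source model is accepted only if its AIC is lower. The parameter counts and residuals below are hypothetical, not the W-phase implementation.

```python
import numpy as np

def aic(residuals, n_params):
    """Akaike information criterion for a least-squares fit (Gaussian errors)."""
    n = residuals.size
    rss = float(np.sum(residuals**2))
    return 2 * n_params + n * np.log(rss / n)

def select_source_model(resid_single, resid_double, k_single=6, k_double=12):
    """Prefer the double point source only if it lowers the AIC.
    k counts the free source parameters of each model (illustrative values)."""
    aic1, aic2 = aic(resid_single, k_single), aic(resid_double, k_double)
    return ("double" if aic2 < aic1 else "single"), aic1, aic2

# Toy example: the double-source fit roughly halves the residual variance
rng = np.random.default_rng(5)
r1 = rng.normal(scale=1.0, size=500)
r2 = rng.normal(scale=0.7, size=500)
print(select_source_model(r1, r2))
```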
Differentiating Impacts of Watershed Development from Superfund Sites on Stream Macroinvertebrates
Urbanization effect models were developed and verified at whole watershed scales to predict and differentiate between effects on aquatic life from diffuse, non-point source (NPS) urbanization in the watershed and effects of known local, site-specific origin point sources, contami...
Better Assessment Science Integrating Point and Nonpoint Sources
Better Assessment Science Integrating Point and Nonpoint Sources (BASINS) is not a model per se, but is a multipurpose environmental decision support system for use by regional, state, and local agencies in performing watershed- and water-quality-based studies. BASI...
Background/Question/Methods Bacterial pathogens in surface water present disease risks to aquatic communities and for human recreational activities. Sources of these pathogens include runoff from urban, suburban, and agricultural point and non-point sources, but hazardous micr...
Self-force on a point charge and linear source in the space of a screw dislocation
NASA Astrophysics Data System (ADS)
Azevedo, Sérgio; Moraes, Fernando
2000-03-01
Using a description of defects in solids in terms of three-dimensional gravity, we determine the electrostatic self-force acting on a point test charge and on a linear source in the presence of a screw dislocation.
TMDLS: AFTER POINT SOURCES, WHAT CAN WE DO NEXT?
Section 303(d) of the Clean Water Act required TMDLs (total maximum daily loads) for all waters for which effluent or point source limitations are insufficient to meet water quality standards. Concerns may arise regarding the manner by which TMDLs are established, the corrective ...
NASA Astrophysics Data System (ADS)
Wang, H.; Chen, K.; Wu, Z.; Guan, X.
2017-12-01
In recent years, with the growing prominence of water environment problems and the relative progress in point source pollution control, agricultural non-point source pollution caused by the extensive use of fertilizers and pesticides has drawn increasing public concern and attention. In order to reveal the quantitative relationship between agricultural water and fertilizer use and non-point source pollution, field experiments were combined with an agricultural drainage-irrigation model, and the relationships among irrigation water, fertilization schemes and non-point source pollution were analyzed and calculated using a field emission (load) intensity index. The results show that the field displacement varies greatly under different irrigation conditions. When irrigation water increased from 22 cm to 42 cm (an increase of 20 cm), the field displacement increased by 11.92 cm, about 66.22% of the added irrigation water. When irrigation water then increased from 42 cm to 68 cm (an increase of 26 cm), the field displacement increased by 22.48 cm, accounting for 86.46% of the added irrigation water. There is therefore an "inflection point" in the relationship between irrigation water amount and field displacement. The load intensity increases with increasing irrigation water and shows a significant power-law correlation. Under different irrigation conditions, the increase in load intensity with increasing irrigation water differs: when the irrigation water is smaller, the load intensity increases relatively little, and when the irrigation water increases to about 42 cm, the load intensity increases considerably. In addition, there was a positive correlation between fertilization and load intensity. The load intensity differed markedly among fertilization modes even at the same fertilization level, with the fertilized field unit load intensity increasing the most in July. The results provide a basis for the field-scale control and management of agricultural non-point source pollution.
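The reported power-law relation between irrigation water and load intensity can be fitted by linear regression in log-log space, as sketched below with invented data points (not the experimental values).

```python
import numpy as np

# Fit the power-law relation  load = a * (irrigation water)^b  by linear
# regression in log-log space; the data points are illustrative only.
irrigation_cm = np.array([22.0, 30.0, 42.0, 55.0, 68.0])
load_kg_per_ha = np.array([1.8, 3.1, 6.0, 10.5, 16.4])       # hypothetical load intensity

b, log_a = np.polyfit(np.log(irrigation_cm), np.log(load_kg_per_ha), 1)
a = np.exp(log_a)
print(f"load ≈ {a:.3f} * W^{b:.2f}")                          # exponent b > 1: rapid growth
```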
An X-ray investigation of the unusual supernova remnant CTB 80
NASA Technical Reports Server (NTRS)
Wang, Z. R.; Seward, F. D.
1984-01-01
The X-ray properties of SNR CTB 80 (G68.8 + 2.8) are discussed based on both low- and high-resolution images from the Einstein satellite. The X-ray maps show a point source coinciding with the region of maximum radio emission. Diffuse X-ray emission is evident mainly along the radio lobe extending about 8 arcmin east of the point source and aligned with the projected magnetic field lines. The observed X-ray luminosity is 3.2 x 10 to the 34th ergs/s with 1.0 x 10 to the 3th ergs/s from the point source (assuming a distance of 3 kpc). There is also faint, diffuse, X-ray emission south of the point source, where radio emission is absent. The unusual radio and X-ray morphologies are interpreted as a result of relativistic jets energized by the central object, and the possible association of CTB 80 with SN 1408 as recorded by Chinese observers is discussed.
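The quoted luminosity depends directly on the assumed 3 kpc distance via L = 4πd²F. The sketch below illustrates that conversion; the flux value is a back-of-envelope figure chosen to roughly reproduce the quoted total luminosity, not a number taken from the paper.

```python
# Back-of-envelope sketch of the flux-to-luminosity conversion implied by
# "assuming a distance of 3 kpc": L = 4*pi*d^2 * F. The flux below is an
# illustrative value, not one reported in the paper.
import math

CM_PER_KPC = 3.086e21
flux = 3.0e-11                  # assumed observed flux, erg cm^-2 s^-1
distance_cm = 3.0 * CM_PER_KPC  # 3 kpc in centimetres

luminosity = 4.0 * math.pi * distance_cm**2 * flux   # erg s^-1
print(f"L ≈ {luminosity:.1e} erg/s")                 # ~3e34 erg/s at 3 kpc
```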
First Neutrino Point-Source Results from the 22 String IceCube Detector
NASA Astrophysics Data System (ADS)
Abbasi, R.; Abdou, Y.; Ackermann, M.; Adams, J.; Aguilar, J.; Ahlers, M.; Andeen, K.; Auffenberg, J.; Bai, X.; Baker, M.; Barwick, S. W.; Bay, R.; Bazo Alba, J. L.; Beattie, K.; Beatty, J. J.; Bechet, S.; Becker, J. K.; Becker, K.-H.; Benabderrahmane, M. L.; Berdermann, J.; Berghaus, P.; Berley, D.; Bernardini, E.; Bertrand, D.; Besson, D. Z.; Bissok, M.; Blaufuss, E.; Boersma, D. J.; Bohm, C.; Bolmont, J.; Böser, S.; Botner, O.; Bradley, L.; Braun, J.; Breder, D.; Castermans, T.; Chirkin, D.; Christy, B.; Clem, J.; Cohen, S.; Cowen, D. F.; D'Agostino, M. V.; Danninger, M.; Day, C. T.; De Clercq, C.; Demirörs, L.; Depaepe, O.; Descamps, F.; Desiati, P.; de Vries-Uiterweerd, G.; De Young, T.; Diaz-Velez, J. C.; Dreyer, J.; Dumm, J. P.; Duvoort, M. R.; Edwards, W. R.; Ehrlich, R.; Eisch, J.; Ellsworth, R. W.; Engdegård, O.; Euler, S.; Evenson, P. A.; Fadiran, O.; Fazely, A. R.; Feusels, T.; Filimonov, K.; Finley, C.; Foerster, M. M.; Fox, B. D.; Franckowiak, A.; Franke, R.; Gaisser, T. K.; Gallagher, J.; Ganugapati, R.; Gerhardt, L.; Gladstone, L.; Goldschmidt, A.; Goodman, J. A.; Gozzini, R.; Grant, D.; Griesel, T.; Groß, A.; Grullon, S.; Gunasingha, R. M.; Gurtner, M.; Ha, C.; Hallgren, A.; Halzen, F.; Han, K.; Hanson, K.; Hasegawa, Y.; Heise, J.; Helbing, K.; Herquet, P.; Hickford, S.; Hill, G. C.; Hoffman, K. D.; Hoshina, K.; Hubert, D.; Huelsnitz, W.; Hülß, J.-P.; Hulth, P. O.; Hultqvist, K.; Hussain, S.; Imlay, R. L.; Inaba, M.; Ishihara, A.; Jacobsen, J.; Japaridze, G. S.; Johansson, H.; Joseph, J. M.; Kampert, K.-H.; Kappes, A.; Karg, T.; Karle, A.; Kelley, J. L.; Kenny, P.; Kiryluk, J.; Kislat, F.; Klein, S. R.; Klepser, S.; Knops, S.; Kohnen, G.; Kolanoski, H.; Köpke, L.; Kowalski, M.; Kowarik, T.; Krasberg, M.; Kuehn, K.; Kuwabara, T.; Labare, M.; Lafebre, S.; Laihem, K.; Landsman, H.; Lauer, R.; Leich, H.; Lennarz, D.; Lucke, A.; Lundberg, J.; Lünemann, J.; Madsen, J.; Majumdar, P.; Maruyama, R.; Mase, K.; Matis, H. S.; McParland, C. P.; Meagher, K.; Merck, M.; Mészáros, P.; Middell, E.; Milke, N.; Miyamoto, H.; Mohr, A.; Montaruli, T.; Morse, R.; Movit, S. M.; Münich, K.; Nahnhauer, R.; Nam, J. W.; Nießen, P.; Nygren, D. R.; Odrowski, S.; Olivas, A.; Olivo, M.; Ono, M.; Panknin, S.; Patton, S.; Pérez de los Heros, C.; Petrovic, J.; Piegsa, A.; Pieloth, D.; Pohl, A. C.; Porrata, R.; Potthoff, N.; Price, P. B.; Prikockis, M.; Przybylski, G. T.; Rawlins, K.; Redl, P.; Resconi, E.; Rhode, W.; Ribordy, M.; Rizzo, A.; Rodrigues, J. P.; Roth, P.; Rothmaier, F.; Rott, C.; Roucelle, C.; Rutledge, D.; Ryckbosch, D.; Sander, H.-G.; Sarkar, S.; Satalecka, K.; Schlenstedt, S.; Schmidt, T.; Schneider, D.; Schukraft, A.; Schulz, O.; Schunck, M.; Seckel, D.; Semburg, B.; Seo, S. H.; Sestayo, Y.; Seunarine, S.; Silvestri, A.; Slipak, A.; Spiczak, G. M.; Spiering, C.; Stamatikos, M.; Stanev, T.; Stephens, G.; Stezelberger, T.; Stokstad, R. G.; Stoufer, M. C.; Stoyanov, S.; Strahler, E. A.; Straszheim, T.; Sulanke, K.-H.; Sullivan, G. W.; Swillens, Q.; Taboada, I.; Tarasova, O.; Tepe, A.; Ter-Antonyan, S.; Terranova, C.; Tilav, S.; Tluczykont, M.; Toale, P. A.; Tosi, D.; Turčan, D.; van Eijndhoven, N.; Vandenbroucke, J.; Van Overloop, A.; Voigt, B.; Walck, C.; Waldenmaier, T.; Walter, M.; Wendt, C.; Westerhoff, S.; Whitehorn, N.; Wiebusch, C. H.; Wiedemann, A.; Wikström, G.; Williams, D. R.; Wischnewski, R.; Wissing, H.; Woschnagg, K.; Xu, X. W.; Yodh, G.; Ice Cube Collaboration
2009-08-01
We present new results of searches for neutrino point sources in the northern sky, using data recorded in 2007-2008 with 22 strings of the IceCube detector (approximately one-fourth of the planned total) and 275.7 days of live time. The final sample of 5114 neutrino candidate events agrees well with the expected background of atmospheric muon neutrinos and a small component of atmospheric muons. No evidence of a point source is found, with the most significant excess of events in the sky at 2.2σ after accounting for all trials. The average upper limit over the northern sky for point sources of muon neutrinos with an E⁻² spectrum is E² Φ(ν_μ) < 1.4 × 10⁻¹¹ TeV cm⁻² s⁻¹, in the energy range from 3 TeV to 3 PeV, improving the previous best average upper limit by the AMANDA-II detector by a factor of 2.
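For orientation, the quoted limit can be turned into an integral flux: for an E⁻² spectrum, E²Φ is constant, so the differential flux is Φ(E) = K/E² with K = 1.4 × 10⁻¹¹ TeV cm⁻² s⁻¹. A minimal sketch of that arithmetic (not part of the original analysis) follows.

```python
# Minimal sketch: integrate an E^-2 point-source spectrum between the quoted
# energy bounds to get the corresponding integral flux limit. This is simple
# arithmetic on the published numbers, not a reproduction of the analysis.
K = 1.4e-11                 # TeV cm^-2 s^-1, quoted average upper limit on E^2 * Phi
e_min, e_max = 3.0, 3.0e6   # 3 TeV to 3 PeV, expressed in TeV

# integral of (K / E^2) dE from e_min to e_max = K * (1/e_min - 1/e_max)
integral_flux = K * (1.0 / e_min - 1.0 / e_max)   # cm^-2 s^-1
print(f"integral nu_mu flux limit ≈ {integral_flux:.2e} cm^-2 s^-1")
```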
Microbial Source Module (MSM): Documenting the Science ...
The Microbial Source Module (MSM) estimates microbial loading rates to land surfaces from non-point sources, and to streams from point sources, for each subwatershed within a watershed. A subwatershed, the smallest modeling unit, represents the common basis for information consumed and produced by the MSM, which is based on the HSPF (Bicknell et al., 1997) Bacterial Indicator Tool (EPA, 2013b, 2013c). Non-point source inputs include the numbers, locations, and shedding rates of domestic agricultural animals (dairy and beef cows, swine, poultry, etc.) and wildlife (deer, duck, raccoon, etc.). Monthly maximum microbial storage and accumulation rates on the land surface, adjusted for die-off, are computed over an entire season for four land-use types (cropland, pasture, forest, and urbanized/mixed-use) for each subwatershed. Monthly point source microbial loadings to instream locations (i.e., stream segments that drain individual subwatersheds) are determined and combined for septic systems, direct instream shedding by cattle, and POTWs/WWTPs (Publicly Owned Treatment Works/Wastewater Treatment Plants). The MSM functions within a larger modeling system that characterizes human-health risk resulting from ingestion of water contaminated with pathogens. The loading estimates produced by the MSM are input to the HSPF model, which simulates flow and microbial fate/transport within the watershed. Microbial counts within recreational waters are then input to the MRA-IT model (Soller et
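A rough sketch of the kind of build-up/die-off bookkeeping described above is given below. The first-order die-off form, the daily time step, and all parameter values are assumptions for illustration; they are not the MSM's or the Bacterial Indicator Tool's actual equations.

```python
# Hypothetical sketch of land-surface microbial accumulation with first-order
# die-off and a storage cap, in the spirit of the monthly build-up described
# for the MSM. Equation form, rates, and names are illustrative assumptions.
import math

def monthly_surface_storage(accum_rate, dieoff_rate, max_storage,
                            days=30, initial=0.0):
    """Daily accumulation (counts/ha/day) with exponential die-off and a cap."""
    storage = initial
    for _ in range(days):
        storage = storage * math.exp(-dieoff_rate) + accum_rate
        storage = min(storage, max_storage)   # monthly maximum storage limit
    return storage

# Example: an assumed pasture subwatershed
print(f"{monthly_surface_storage(1.0e9, 0.2, 1.5e10):.3e} counts/ha")
```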
Fuerhacker, M
2003-01-01
Bisphenol A (BPA) is widely used for the production of epoxy resins and polycarbonate plastics and is considered an endocrine disruptor. Special in vitro test systems and animal experiments showed a weak estrogenic activity. Aquatic wildlife in particular could be endangered by waste water discharges. To manage possible risks arising from BPA emissions, the major fluxes need to be investigated and the sources of the contamination of municipal treatment plants need to be determined. In this study, five major industrial point sources, two different household areas, and the influent and effluent of the corresponding wastewater treatment plant (WWTP) were monitored simultaneously at a plant serving 120,000 population equivalents. A paper-producing plant was the major BPA contributor to the influent load of the wastewater treatment plant; all other emissions from point sources, including the two household areas, were considerably lower. The minimum elimination rate in the WWTP was determined to be 78%, with an average of 89% of the total BPA load removed. For a possible pollution forecast, or for comparison between different point sources, emission factors based on COD emissions were calculated, giving BPA/COD ratios of 1.4 × 10⁻⁸ - 125 × 10⁻⁸ for industrial point sources and 1.3 × 10⁻⁶ - 6.3 × 10⁻⁶ for household point sources.
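The COD-based emission factor used for comparing point sources is simply the ratio of BPA load to COD load at the same sampling point. A minimal sketch follows, with placeholder loads rather than measured values.

```python
# Minimal sketch of the COD-based emission factor: BPA load divided by COD
# load from the same sampling point. The example loads are placeholders, not
# measurements from the study.
def bpa_cod_ratio(bpa_load_g_per_day: float, cod_load_g_per_day: float) -> float:
    """Dimensionless BPA/COD emission factor."""
    return bpa_load_g_per_day / cod_load_g_per_day

# e.g. an assumed household catchment: 5 g/d BPA against 2.5e6 g/d COD
print(f"{bpa_cod_ratio(5.0, 2.5e6):.1e}")   # 2.0e-06, within the household range quoted
```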
Sadeghi, Mohammad Hosein; Mehdizadeh, Amir; Faghihi, Reza; Moharramzadeh, Vahed; Meigooni, Ali Soleimani
2018-01-01
Purpose: The dosimetry procedure based on simple superposition accounts only for the self-shielding of the source and does not take into account the attenuation of photons by the applicator. The purpose of this investigation is to estimate the effect of the tandem-and-ovoid applicator on the dose distribution inside the phantom using MCNP5 Monte Carlo simulations. Material and methods: In this study, the superposition method is used to obtain the dose distribution in the phantom without the applicator for a typical gynecological brachytherapy treatment (superposition-1). The sources are then simulated inside the tandem-and-ovoid applicator to identify the effect of applicator attenuation (superposition-2), and the doses at points A and B and at the bladder and rectum were compared with the results of superposition. The exact dwell positions and times of the source, and the positions of the dosimetry points, were determined from patient images and the treatment data of an adult woman treated at a cancer center. The MCNP5 Monte Carlo (MC) code was used to simulate the phantoms, applicators, and sources. Results: The results of this study showed no significant differences between the superposition method and the MC simulations at the different dosimetry points; the difference at all important dosimetry points was less than 5%. Conclusions: According to the results, applicator attenuation has no significant effect on the calculated point doses, and the superposition method, which adds the dose of each source obtained by MC simulation, can estimate the dose to points A and B and to the bladder and rectum with good accuracy. PMID:29619061
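The superposition step amounts to a dwell-time-weighted sum of single-source dose rates at each point of interest. The sketch below illustrates that bookkeeping; the function, names, and values are illustrative and are not the authors' code.

```python
# Illustrative sketch of the superposition bookkeeping: the dose at a point
# of interest is the dwell-time-weighted sum of the single-source dose rates
# obtained from the Monte Carlo runs. Names and values are assumptions.
from typing import Sequence

def superpose_dose(dose_rates_cGy_per_s: Sequence[float],
                   dwell_times_s: Sequence[float]) -> float:
    """Sum per-dwell-position dose rates weighted by their dwell times."""
    assert len(dose_rates_cGy_per_s) == len(dwell_times_s)
    return sum(rate * t for rate, t in zip(dose_rates_cGy_per_s, dwell_times_s))

# e.g. dose to point A from three assumed tandem dwell positions
print(f"{superpose_dose([0.012, 0.015, 0.010], [180.0, 240.0, 200.0]):.2f} cGy")
```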
Vieno, M; Dore, A J; Bealey, W J; Stevenson, D S; Sutton, M A
2010-01-15
An atmospheric transport-chemistry model is applied to investigate the effects of source configuration in simulating regional sulphur deposition footprints from elevated point sources. Dry and wet deposition of sulphur is calculated for each of the 69 largest point sources in the UK. Deposition contributions for each point source are calculated for 2003, as well as for a 2010 emissions scenario chosen to represent the Gothenburg Protocol. Point source location is found to be a major driver of the dry/wet deposition ratio of each deposition footprint, with increased precipitation scavenging of SOx in hill areas resulting in a larger fraction of the emitted sulphur being deposited within the UK for sources located near these areas. This reduces exported transboundary pollution but, because sensitive soils also occur in hill areas, increases the domestic threat of soil acidification. Simulating plume rise using individual stack parameters for each point source demonstrates a high sensitivity of SO2 surface concentration to effective source height. This emphasises the importance of using site-specific information for each major stack, which is rarely included in regional atmospheric pollution models because of the difficulty in obtaining the required input data. The simulations quantify how the fraction of emitted SOx exported from the UK increases with source magnitude, effective source height, and easterly location. The modelled reduction in SOx emissions between 2003 and 2010 results in a smaller fraction being exported, with the result that the reductions in SOx deposition to the UK are less than proportionate to the emission reduction. This non-linearity is associated with a relatively larger fraction of the SO2 being converted to sulphate aerosol in the 2010 scenario, in the presence of ammonia. The effect results in less-than-proportional UK benefits of reducing SO2 emissions, together with greater-than-proportional benefits in reducing the export of UK SO2 emissions. Copyright 2009 Elsevier B.V. All rights reserved.
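The sensitivity to effective source height can be illustrated with a generic plume-rise parameterization. The paper does not state which scheme its model uses, so the Briggs-type formulas and the stack parameters below are illustrative assumptions only.

```python
# Generic sketch of effective source height = stack height + buoyant plume
# rise, using Briggs-type final-rise formulas as found in ISC-type Gaussian
# models. Not necessarily the scheme used in the study; parameters are
# placeholders.
G = 9.81  # gravitational acceleration, m s^-2

def effective_height(stack_h_m, exit_vel_m_s, stack_diam_m,
                     stack_temp_k, air_temp_k, wind_m_s):
    """Stack height plus Briggs final buoyant plume rise (neutral conditions)."""
    # Buoyancy flux F_b in m^4 s^-3
    f_b = G * exit_vel_m_s * stack_diam_m**2 * (stack_temp_k - air_temp_k) / (4.0 * stack_temp_k)
    rise = 38.71 * f_b**0.6 / wind_m_s if f_b >= 55.0 else 21.425 * f_b**0.75 / wind_m_s
    return stack_h_m + rise

# e.g. a 200 m stack, 20 m/s exit velocity, 6 m diameter, 420 K flue gas,
# 288 K ambient air, 6 m/s wind
print(f"effective source height ≈ {effective_height(200, 20, 6, 420, 288, 6):.0f} m")
```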
NASA Technical Reports Server (NTRS)
Boggs, S. E.; Lin, R. P.; Coburn, W.; Feffer, P.; Pelling, R. M.; Schroeder, P.; Slassi-Sennou, S.
1997-01-01
The balloon-borne high-resolution gamma-ray and X-ray germanium spectrometer (HIREGS) was used to observe the Galactic center and two positions along the Galactic plane from Antarctica in January 1995. For this flight, the collimators were configured to measure the Galactic diffuse hard X-ray continuum between 20 and 200 keV, with the point source contributions to the wide field-of-view flux measured directly for subtraction. The hard X-ray spectra of GX 1+4 and GRO J1655-40 were measured with the diffuse continuum subtracted. The analysis technique for source separation is discussed, and preliminary separated spectra for these point sources and for the Galactic diffuse emission are presented.
Fermi-LAT Observations of High-Energy Gamma-Ray Emission Toward the Galactic Center
Ajello, M.
2016-02-26
The Fermi Large Area Telescope (LAT) has provided the most detailed view to date of the emission towards the Galactic centre (GC) in high-energy γ-rays. This paper describes the analysis of data taken during the first 62 months of the mission in the energy range 1-100 GeV from a 15° × 15° region about the direction of the GC, and the implications for the interstellar emission produced by cosmic-ray (CR) particles interacting with the gas and radiation fields in the inner Galaxy and for the point sources detected. Specialised interstellar emission models (IEMs) are constructed that enable separation of the γ-ray emission of the inner ~1 kpc about the GC from the fore- and background emission of the Galaxy. Based on these models, the interstellar emission from CR electrons interacting with the interstellar radiation field via the inverse Compton (IC) process and from CR nuclei inelastically scattering off the gas, producing γ-rays via π⁰ decays, is determined for the inner ~1 kpc. The IC contribution is found to be dominant in the region and strongly enhanced compared to previous studies. A catalog of point sources for the 15° × 15° region is self-consistently constructed using these IEMs: the First Fermi-LAT Inner Galaxy point source Catalog (1FIG). The spatial locations, fluxes, and spectral properties of the 1FIG sources are presented and compared with γ-ray point sources over the same region taken from existing catalogs, including the Third Fermi-LAT Source Catalog (3FGL). In general, the spatial density of 1FIG sources differs from that of the 3FGL, which is attributed to the different treatments of the interstellar emission and the different energy ranges used by the respective analyses. Three 1FIG sources are found to spatially overlap with supernova remnants (SNRs) listed in Green's SNR catalog; these SNRs have not previously been associated with high-energy γ-ray sources. Most 3FGL sources with known multi-wavelength counterparts are also found, but the majority of 1FIG point sources are unassociated. After subtracting the interstellar emission and point-source contributions from the data, a residual is found that is a sub-dominant fraction of the total flux, yet it is brighter than the γ-ray emission associated with interstellar gas in the inner ~1 kpc derived for the IEMs used in this paper, and comparable to the integrated brightness of the point sources in the region for energies ≳3 GeV. If spatial templates that peak toward the GC are used to model the positive residual and are included in the total model for the 15° × 15° region, the agreement with the data improves, but they do not account for all of the residual structure. The spectrum of the positive residual modelled with these templates has a strong dependence on the choice of IEM.
Siemann, Julia; Herrmann, Manfred; Galashan, Daniela
2016-08-01
Usually, incongruent flanker stimuli provoke conflict processing, whereas congruent flankers should facilitate task performance. Various behavioral studies have reported improved or even absent conflict processing when selective attention is correctly oriented. In the present study we attempted to reinvestigate these behavioral effects and to disentangle the neuronal activity patterns underlying the attentional cueing effect, taking advantage of the combination of the high temporal resolution of electroencephalography (EEG) and the spatial resolution of functional magnetic resonance imaging (fMRI). Data from 20 participants were acquired in separate sessions per method. We expected the conflict-related N200 event-related potential (ERP) component and areas associated with flanker processing to show validity-specific modulations. Additionally, the spatio-temporal dynamics during cued flanker processing were examined using an fMRI-constrained source analysis approach. In the ERP data we found early differences in flanker processing between validity levels. An early centro-parietal relative positivity for incongruent stimuli occurred only with valid cueing during the N200 time window, while a subsequent fronto-central negativity was specific to invalidly cued interference processing. The source analysis additionally pointed to separate neural generators of these effects. Regional sources in visual areas were involved in conflict processing with valid cueing, while a regional source in the anterior cingulate cortex (ACC) seemed to contribute to the ERP differences with invalid cueing. Moreover, the ACC and precentral gyrus demonstrated an early and a late phase of congruency-related activity differences with invalid cueing. We interpret the first effect as reflecting conflict detection and response activation, while the latter more likely originated from conflict monitoring and control processes during response competition. Copyright © 2016 Elsevier Inc. All rights reserved.
40 CFR 429.145 - Pretreatment standards for existing sources (PSES).
Code of Federal Regulations, 2010 CFR
2010-07-01
... (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS TIMBER PRODUCTS PROCESSING POINT SOURCE CATEGORY Particleboard Manufacturing Subcategory § 429.145 Pretreatment standards for existing sources (PSES). Any existing source...