NASA Astrophysics Data System (ADS)
Zhang, Tianhe C.; Grill, Warren M.
2010-12-01
Deep brain stimulation (DBS) has emerged as an effective treatment for movement disorders; however, the fundamental mechanisms by which DBS works are not well understood. Computational models of DBS can provide insights into these fundamental mechanisms and typically require two steps: calculation of the electrical potentials generated by DBS and, subsequently, determination of the effects of the extracellular potentials on neurons. The objective of this study was to assess the validity of using a point source electrode to approximate the DBS electrode when calculating the thresholds and spatial distribution of activation of a surrounding population of model neurons in response to monopolar DBS. Extracellular potentials in a homogeneous isotropic volume conductor were calculated using either a point current source or a geometrically accurate finite element model of the Medtronic DBS 3389 lead. These extracellular potentials were coupled to populations of model axons, and thresholds and spatial distributions were determined for different electrode geometries and axon orientations. Median threshold differences between DBS and point source electrodes for individual axons varied between -20.5% and 9.5% across all orientations, monopolar polarities, and electrode geometries utilizing the DBS 3389 electrode. Differences in the percentage of axons activated at a given amplitude by the point source electrode and the DBS electrode were between -9.0% and 12.6% across all monopolar configurations tested. The differences in activation between the DBS and point source electrodes occurred primarily in regions close to conductor-insulator interfaces and around the insulating tip of the DBS electrode. The robustness of the point source approximation was also examined in several special cases: tissue anisotropy, a long active electrode, and bipolar stimulation.
Under the conditions considered, the point source was shown to be a valid approximation for predicting excitation of populations of neurons in response to DBS.
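The first modeling step described above, computing the extracellular potential of a point current source in a homogeneous isotropic medium, reduces to the closed-form expression V = I/(4πσr). A minimal sketch of that step (the conductivity value and distances are illustrative assumptions, not values taken from the study):

```python
import numpy as np

def point_source_potential(I, sigma, r):
    """Extracellular potential (V) of a point current source of strength
    I (A) in a homogeneous isotropic medium of conductivity sigma (S/m),
    evaluated at distance r (m): V = I / (4*pi*sigma*r)."""
    return I / (4.0 * np.pi * sigma * r)

# Example: -1 mA cathodic stimulus, gray-matter-like conductivity 0.2 S/m
r = np.array([0.5e-3, 1e-3, 2e-3, 4e-3])   # 0.5-4 mm from the source
v = point_source_potential(-1e-3, 0.2, r)
```

These potentials would then be interpolated onto model axon compartments, the second step described in the abstract.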
Spitzer Photometry of Approximately 1 Million Stars in M31 and 15 Other Galaxies
NASA Technical Reports Server (NTRS)
Khan, Rubab
2017-01-01
We present Spitzer IRAC 3.6-8 micrometer and Multiband Imaging Photometer (MIPS) 24 micrometer point-source catalogs for M31 and 15 other mostly large, star-forming galaxies at distances of approximately 3.5-14 Mpc, including M51, M83, M101, and NGC 6946. These catalogs contain approximately 1 million sources, including approximately 859,000 in M31 and approximately 116,000 in the other galaxies. They were created following the procedures described in Khan et al. through a combination of point-spread function (PSF) fitting and aperture photometry. These data products constitute a resource for improving our understanding of the IR-bright (3.6-24 micrometer) point-source populations in crowded extragalactic stellar fields and for planning observations with the James Webb Space Telescope.
1SXPS: A Deep Swift X-Ray Telescope Point Source Catalog with Light Curves and Spectra
NASA Technical Reports Server (NTRS)
Evans, P. A.; Osborne, J. P.; Beardmore, A. P.; Page, K. L.; Willingale, R.; Mountford, C. J.; Pagani, C.; Burrows, D. N.; Kennea, J. A.; Perri, M.;
2013-01-01
We present the 1SXPS (Swift-XRT point source) catalog of 151,524 X-ray point sources detected by the Swift-XRT in 8 yr of operation. The catalog covers 1905 sq deg distributed approximately uniformly on the sky. We analyze the data in two ways. First we consider all observations individually, for which we have a typical sensitivity of approximately 3 × 10(exp -13) erg cm(exp -2) s(exp -1) (0.3-10 keV). Then we co-add all data covering the same location on the sky: these images have a typical sensitivity of approximately 9 × 10(exp -14) erg cm(exp -2) s(exp -1) (0.3-10 keV). Our sky coverage is nearly 2.5 times that of 3XMM-DR4, although the catalog is a factor of approximately 1.5 less sensitive. The median position error is 5.5 arcsec (90% confidence), including systematics. Our source detection method improves on that used in previous X-ray Telescope (XRT) catalogs, and we report more than 68,000 new X-ray sources. The goals and observing strategy of the Swift satellite allow us to probe source variability on multiple timescales, and we find approximately 30,000 variable objects in our catalog. For every source we give positions, fluxes, time series (in four energy bands and two hardness ratios), estimates of the spectral properties, spectra and spectral fits for the brightest sources, and variability probabilities in multiple energy bands and timescales.
A very deep IRAS survey at the north ecliptic pole
NASA Technical Reports Server (NTRS)
Houck, J. R.; Hacking, P. B.; Condon, J. J.
1987-01-01
The data from approximately 20 hours observation of the 4- to 6-square degree field surrounding the north ecliptic pole have been combined to produce a very deep IR survey at the four IRAS bands. Scans from both pointed and survey observations were included in the data analysis. At 12 and 25 microns the deep survey is limited by detector noise and is approximately 50 times deeper than the IRAS Point Source Catalog (PSC). At 60 microns the problems of source confusion and Galactic cirrus combine to limit the deep survey to approximately 12 times deeper than the PSC. These problems are so severe at 100 microns that flux values are only given for locations corresponding to sources selected at 60 microns. In all, 47 sources were detected at 12 microns, 37 at 25 microns, and 99 at 60 microns. The data-analysis procedures and the significance of the 12- and 60-micron source-count results are discussed.
Time delay of critical images in the vicinity of cusp point of gravitational-lens systems
NASA Astrophysics Data System (ADS)
Alexandrov, A.; Zhdanov, V.
2016-12-01
We consider approximate analytical formulas for the time delays of critical images of a point source in the neighborhood of a cusp caustic. We discuss the zero-, first-, and second-order approximations in powers of a parameter that defines the proximity of the source to the cusp. These formulas link the time delay with characteristics of the lens potential. The zero-order formula was obtained by Congdon, Keeton & Nordgren (MNRAS, 2008). For a general lens potential, we derive the first-order correction to it. If the potential is symmetric with respect to the cusp axis, this correction is identically zero, and for that case we obtain the second-order correction. The relations found are illustrated by a simple model example.
Nustar and Chandra Insight into the Nature of the 3-40 Kev Nuclear Emission in Ngc 253
NASA Technical Reports Server (NTRS)
Lehmer, Bret D.; Wik, Daniel R.; Hornschemeier, Ann E.; Ptak, Andrew; Antoniu, V.; Argo, M.K.; Bechtol, K.; Boggs, S.; Christensen, F.E.; Craig, W.W.;
2013-01-01
We present results from three nearly simultaneous Nuclear Spectroscopic Telescope Array (NuSTAR) and Chandra monitoring observations between 2012 September 2 and 2012 November 16 of the local star-forming galaxy NGC 253. The 3-40 kiloelectron volt intensity of the inner approximately 20 arcsec (approximately 400 parsec) nuclear region, as measured by NuSTAR, varied by a factor of approximately 2 across the three monitoring observations. The Chandra data reveal that the nuclear region contains three bright X-ray sources, including a luminous (L(sub 2-10 kiloelectron volt) approximately a few × 10(exp 39) erg per s) point source located approximately 1 arcsec from the dynamical center of the galaxy (within the 3-sigma positional uncertainty of the dynamical center); this source drives the overall variability of the nuclear region at energies greater than or approximately equal to 3 kiloelectron volts. We make use of the variability to measure the spectra of this single hard X-ray source when it was in bright states. The spectra are well described by an absorbed (N(sub H) approximately equal to 1.6 × 10(exp 23) per square centimeter) broken power-law model with spectral slopes and break energies that are typical of ultraluminous X-ray sources (ULXs), but not active galactic nuclei (AGNs). A previous Chandra observation in 2003 showed a hard X-ray point source of similar luminosity to the 2012 source that was also near the dynamical center (offset approximately 0.4 arcsec); however, this source was offset from the 2012 source position by approximately 1 arcsec. We show that the probability of the 2003 and 2012 hard X-ray sources being unrelated is greater than 99.99%, based on the Chandra spatial localizations. Interestingly, the Chandra spectrum of the 2003 source (3-8 kiloelectron volts) is shallower in slope than that of the 2012 hard X-ray source.
Its proximity to the dynamical center and harder Chandra spectrum indicate that the 2003 source is a better AGN candidate than any of the sources detected in our 2012 campaign; however, we were unable to rule out a ULX nature for this source. Future NuSTAR and Chandra monitoring would be well equipped to break the degeneracy between the AGN and ULX nature of the 2003 source, if again caught in a high state.
Improved response functions for gamma-ray skyshine analyses
NASA Astrophysics Data System (ADS)
Shultis, J. K.; Faw, R. E.; Deng, X.
1992-09-01
A computationally simple method, based on line-beam response functions, is refined for estimating gamma skyshine dose rates. Critical to this method is the availability of an accurate approximation for the line-beam response function (LBRF). In this study, the LBRF is evaluated accurately with the point-kernel technique using recent photon interaction data. Various approximations to the LBRF are considered, and a three parameter formula is selected as the most practical approximation. By fitting the approximating formula to point-kernel results, a set of parameters is obtained that allows the LBRF to be quickly and accurately evaluated for energies between 0.01 and 15 MeV, for source-to-detector distances from 1 to 3000 m, and for beam angles from 0 to 180 degrees. This re-evaluation of the approximate LBRF gives better accuracy, especially at low energies, over a greater source-to-detector range than do previous LBRF approximations. A conical beam response function is also introduced for application to skyshine sources that are azimuthally symmetric about a vertical axis. The new response functions are then applied to three simple skyshine geometries (an open silo geometry, an infinite wall, and a rectangular four-wall building) and the results are compared to previous calculations and benchmark data.
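The abstract does not reproduce the three-parameter formula itself, so the sketch below fits a generic three-parameter form R(x) = k·x^a·exp(-b·x) to stand-in point-kernel data by linear least squares in log space; the functional form, parameter values, and synthetic data are assumptions for illustration, not the authors' actual LBRF parameterization:

```python
import numpy as np

# Hypothetical three-parameter response form; the paper's actual
# formula is not given in the abstract.
def lbrf(x, k, a, b):
    return k * x**a * np.exp(-b * x)

# Synthetic "point-kernel" values standing in for the tabulated results
x = np.linspace(1.0, 3000.0, 200)        # source-to-detector distance (m)
true = (2.0e-3, -1.2, 1.5e-3)
r = lbrf(x, *true)

# The form is linear in log space: ln R = ln k + a ln x - b x,
# so ordinary least squares recovers all three parameters at once.
A = np.column_stack([np.ones_like(x), np.log(x), -x])
coef, *_ = np.linalg.lstsq(A, np.log(r), rcond=None)
k_fit, a_fit, b_fit = np.exp(coef[0]), coef[1], coef[2]
```

In practice one such fit would be performed per source energy, giving a table of (k, a, b) that makes the LBRF cheap to evaluate inside a skyshine dose integral.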
Outdoor air pollution in close proximity to a continuous point source
NASA Astrophysics Data System (ADS)
Klepeis, Neil E.; Gabel, Etienne B.; Ott, Wayne R.; Switzer, Paul
Data are lacking on human exposure to air pollutants occurring in ground-level outdoor environments within a few meters of point sources. To better understand outdoor exposure to tobacco smoke from cigarettes or cigars, and exposure to other types of outdoor point sources, we performed more than 100 controlled outdoor monitoring experiments on a backyard residential patio in which we released pure carbon monoxide (CO) as a tracer gas for continuous time periods lasting 0.5-2 h. The CO was emitted from a single outlet at a fixed per-experiment rate of 120-400 cc min(exp -1) (approximately 140-450 mg min(exp -1)). We measured CO concentrations every 15 s at up to 36 points around the source along orthogonal axes. The CO sensors were positioned at standing or sitting breathing heights of 2-5 ft (up to 1.5 ft above and below the source) and at horizontal distances of 0.25-2 m. We simultaneously measured real-time air speed, wind direction, relative humidity, and temperature at single points on the patio. The ground-level air speeds on the patio were similar to those we measured during a survey of 26 outdoor patio locations in 5 nearby towns. The CO data exhibited a well-defined proximity effect similar to the indoor proximity effect reported in the literature. Average concentrations were approximately inversely proportional to distance. Average CO levels were approximately proportional to source strength, supporting generalization of our results to different source strengths. For example, we predict a cigarette smoker would cause average fine particle levels of approximately 70-110 μg m(exp -3) at horizontal distances of 0.25-0.5 m. We also found that average CO concentrations rose significantly as average air speed decreased. We fit a multiplicative regression model to the empirical data that predicts outdoor concentrations as a function of source emission rate, source-receptor distance, air speed, and wind direction.
The model described the data reasonably well, accounting for approximately 50% of the log-CO variability in 5-min CO concentrations.
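A multiplicative model of this kind is typically fit by ordinary least squares after taking logarithms. The sketch below does exactly that on synthetic data; the coefficient values, variable ranges, and noise level are invented for illustration, since the paper's fitted model is not reproduced in the abstract:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic experiment records standing in for the field data
n = 200
E = rng.uniform(140, 450, n)      # source strength, mg/min
d = rng.uniform(0.25, 2.0, n)     # source-receptor distance, m
s = rng.uniform(0.05, 1.0, n)     # air speed, m/s

# Hypothetical multiplicative law C = b0 * E * d**-1 * s**-g with
# log-normal error, mimicking the proportionalities in the abstract
b0, g = 0.05, 0.6
C = b0 * E / d * s**-g * np.exp(rng.normal(0.0, 0.3, n))

# Fit ln C = ln b0 + c1 ln E + c2 ln d + c3 ln s by least squares;
# the E and d exponents are left free as a check on proportionality
A = np.column_stack([np.ones(n), np.log(E), np.log(d), np.log(s)])
coef, *_ = np.linalg.lstsq(A, np.log(C), rcond=None)
```

Recovering exponents near +1 for source strength and -1 for distance is the log-space analog of the proportionality and inverse-proportionality findings quoted above.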
NASA Astrophysics Data System (ADS)
Kucherov, A. N.; Makashev, N. K.; Ustinov, E. V.
1994-02-01
A procedure is proposed for numerical modeling of instantaneous and averaged (over various time intervals) distant-point-source images perturbed by a turbulent atmosphere that moves relative to the radiation receiver. Examples of image calculations under conditions of the significant effect of atmospheric turbulence in an approximation of geometrical optics are presented and analyzed.
NASA Technical Reports Server (NTRS)
Ciardi, David R.; Woodward, Charles E.; Clemens, Dan P.; Harker, David E.; Rudy, Richard J.
1998-01-01
We have performed a near-infrared JHK survey of a dense core and a diffuse filament region within the filamentary dark cloud GF 9 (LDN 1082). The core region is associated with the IRAS point source PSC 20503+6006 and is suspected of being a site of star formation. The diffuse filament region has no associated IRAS point sources and is likely quiescent. We find that neither the core nor the filament region appears to contain a Class I or Class II young stellar object. As traced by the dust extinction, the core and filament regions contain 26 and 22 solar masses, respectively, with an average H2 volume density for both regions of approximately 2500/cu cm. The core region contains a centrally condensed extinction maximum with a peak extinction of A(sub V) greater than or approximately equal to 10 mag that appears to be associated with the IRAS point source. The average H2 volume density of the extinction core is approximately 8000/cu cm. The dust within the filament, however, shows no sign of a central condensation and is consistent with a uniform-density cylindrical distribution.
NASA Technical Reports Server (NTRS)
Helou, George (Editor); Walker, D. W. (Editor)
1988-01-01
The Infrared Astronomical Satellite (IRAS) was launched January 26, 1983. During its 300-day mission, it surveyed over 96 pct of the celestial sphere at four infrared wavelengths, centered approximately at 12, 25, 60, and 100 microns. Volume 1 describes the instrument, the mission, and the data reduction process. Volumes 2 through 6 present the observations of the approximately 245,000 individual point sources detected by IRAS; each volume gives sources within a specified range of declination. Volume 7 gives the observations of the approximately 16,000 sources spatially resolved by IRAS and smaller than 8'. This is Volume 7, The Small Scale Structure Catalog.
NASA Technical Reports Server (NTRS)
Schlegel, E.; Swank, Jean (Technical Monitor)
2001-01-01
Analysis of 80 ks ASCA (Advanced Satellite for Cosmology and Astrophysics) and 60 ks ROSAT HRI (High Resolution Imager) observations of the face-on spiral galaxy NGC 6946 is presented. The ASCA image is the first observation of this galaxy above approximately 2 keV. Diffuse emission may be present in the inner approximately 4', extending to energies above approximately 2-3 keV. In the HRI data, 14 pointlike sources are detected, the brightest two being a source very close to the nucleus and a source to the northeast that corresponds to a luminous complex of interacting supernova remnants (SNRs). We detect a point source that lies approximately 30" west of the SNR complex but with a luminosity approximately 1/15 that of the SNR complex. None of the point sources show evidence of strong variability; weak variability would escape our detection. The ASCA spectrum of the SNR complex shows evidence for an emission line at approximately 0.9 keV that could be either Ne IX at approximately 0.915 keV or a blend of ion stages of Fe L-shell emission if the continuum is fitted with a power law. However, a two-component Raymond-Smith thermal spectrum with no lines gives an equally valid continuum fit and may be more physically plausible given the observed spectrum below 3 keV. Adopting this latter model, we derive a density for the SNR complex of 10-35 cm(exp -3), consistent with estimates inferred from optical emission-line ratios. The complex's extraordinary X-ray luminosity may be related more to the high density of the surrounding medium than to a small but intense interaction region where two of the complex's SNRs are apparently colliding.
NASA Technical Reports Server (NTRS)
Pearson, T. J.; Mason, B. S.; Readhead, A. C. S.; Shepherd, M. C.; Sievers, J. L.; Udomprasert, P. S.; Cartwright, J. K.; Farmer, A. J.; Padin, S.; Myers, S. T.;
2002-01-01
Using the Cosmic Background Imager, a 13-element interferometer array operating in the 26-36 GHz frequency band, we have observed 40 deg(sup 2) of sky in three pairs of fields, each approximately 145 × 165 arcmin, using overlapping pointings ("mosaicing"). We present images and power spectra of the cosmic microwave background radiation in these mosaic fields. We remove ground radiation and other low-level contaminating signals by differencing matched observations of the fields in each pair. The primary foreground contamination is due to point sources (radio galaxies and quasars). We have subtracted the strongest sources from the data using higher-resolution measurements, and we have projected out the response to other sources of known position in the power-spectrum analysis. The images show features on scales of approximately 6-15 arcmin, corresponding to masses of approximately 5-80 × 10(exp 14) solar masses at the surface of last scattering, which are likely to be the seeds of clusters of galaxies. The power spectrum estimates have a resolution of delta l approximately 200 and are consistent with earlier results in the multipole range l less than approximately 1000. The power spectrum is detected with a high signal-to-noise ratio in the range 300 less than approximately l less than approximately 1700. For 1700 less than approximately l less than approximately 3000, the observations are consistent with the results from more sensitive CBI deep-field observations. The results agree with the extrapolation of cosmological models fitted to observations at lower l, and show the predicted drop at high l (the "damping tail").
S. Scesa; F. M. Sauer
1954-01-01
Transfer theory is applied to the problem of atmospheric diffusion of momentum and heat induced by line and point sources of heat on the surface of the earth. For the approximations of boundary-layer theory to remain valid, the thickness of the layer in which the temperatures and velocities differ appreciably from the values at...
NASA Technical Reports Server (NTRS)
1988-01-01
The Infrared Astronomical Satellite (IRAS) was launched January 26, 1983. During its 300-day mission, IRAS surveyed 96 pct of the celestial sphere at four infrared wavelengths, centered approximately at 12, 25, 60, and 100 microns. This is Volume 2, The Point Source Catalog Declination Range 90 deg greater than delta greater than 30 deg.
Mellow, Tim; Kärkkäinen, Leo
2014-03-01
An acoustic curtain is an array of microphones used for recording sound which is subsequently reproduced through an array of loudspeakers in which each loudspeaker reproduces the signal from its corresponding microphone. Here the sound originates from a point source on the axis of symmetry of the circular array. The Kirchhoff-Helmholtz integral for a plane circular curtain is solved analytically as fast-converging expansions, assuming an ideal continuous array, to speed up computations and provide insight. By reversing the time sequence of the recording (or reversing the direction of propagation of the incident wave so that the point source becomes an "ideal" point sink), the curtain becomes a time reversal mirror and the analytical solution for this is given simultaneously. In the case of an infinite planar array, it is demonstrated that either a monopole or dipole curtain will reproduce the diverging sound field of the point source on the far side. However, although the real part of the sound field of the infinite time-reversal mirror is reproduced, the imaginary part is an approximation due to the missing singularity. It is shown that the approximation may be improved by using the appropriate combination of monopole and dipole sources in the mirror.
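The time-reversal-mirror behavior described here can be illustrated numerically: record the field of the point source at the array microphones, re-emit the phase-conjugated signals, and verify that the re-radiated field peaks back at the source position. A minimal free-field sketch with discrete monopole elements only (the geometry, wavelength, and element count are illustrative assumptions, not the paper's continuous analytical solution):

```python
import numpy as np

k = 2 * np.pi / 0.1          # wavenumber for a 0.1 m wavelength

def green(a, b):
    """Free-space Green's function between points a and b."""
    r = np.linalg.norm(a - b)
    return np.exp(1j * k * r) / (4 * np.pi * r)

# Circular "curtain" of 200 microphones, radius 1 m, in the z = 0 plane
n = 200
phi = np.linspace(0, 2 * np.pi, n, endpoint=False)
mics = np.stack([np.cos(phi), np.sin(phi), np.zeros(n)], axis=1)

src = np.array([0.0, 0.0, -0.5])       # point source on the symmetry axis

def refocused(pt):
    """Magnitude of the time-reversed field at a trial point: each
    element re-emits the phase conjugate of its recording."""
    return abs(sum(np.conj(green(m, src)) * green(m, pt) for m in mics))

peak = refocused(src)                            # at the source
off = refocused(src + np.array([0.3, 0.0, 0.0]))  # 3 wavelengths away
```

At the source all conjugated contributions add in phase, while at the offset point the phases scramble, which is the focusing property the analytical mirror solution captures exactly.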
MICROBIAL SOURCE TRACKING GUIDE DOCUMENT
Approximately 13% of surface waters in the United States do not meet designated use criteria as determined by high densities of fecal indicator bacteria. Although some of the contamination is attributed to point sources such as confined animal feeding operation (CAFO) and wastew...
Scott, Jill R.; Tremblay, Paul L.
2008-08-19
A laser device includes a virtual source configured to aim laser energy that originates from a true source. The virtual source has a vertical rotational axis during vertical motion of the virtual source and the vertical axis passes through an exit point from which the laser energy emanates independent of virtual source position. The emanating laser energy is collinear with an orientation line. The laser device includes a virtual source manipulation mechanism that positions the virtual source. The manipulation mechanism has a center of lateral pivot approximately coincident with a lateral index and a center of vertical pivot approximately coincident with a vertical index. The vertical index and lateral index intersect at an index origin. The virtual source and manipulation mechanism auto align the orientation line through the index origin during virtual source motion.
Spherical-earth Gravity and Magnetic Anomaly Modeling by Gauss-legendre Quadrature Integration
NASA Technical Reports Server (NTRS)
Vonfrese, R. R. B.; Hinze, W. J.; Braile, L. W.; Luca, A. J. (Principal Investigator)
1981-01-01
The anomalous potential of gravity and magnetic fields, and their spatial derivatives, on a spherical Earth were calculated for an arbitrary body represented by an equivalent point-source distribution of gravity poles or magnetic dipoles. The distribution of equivalent point sources was determined directly from the coordinate limits of the source volume. Variable integration limits for an arbitrarily shaped body are derived from interpolation of points which approximate the body's surface envelope. The versatility of the method is enhanced by the ability to treat physical property variations within the source volume and to consider variable magnetic fields over the source and observation surface. A number of examples verify and illustrate the capabilities of the technique, including preliminary modeling of potential field signatures for Mississippi embayment crustal structure at satellite elevations.
MacBurn's cylinder test problem
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shestakov, Aleksei I.
2016-02-29
This note describes a test problem for MacBurn that illustrates its performance. The source is centered inside a cylinder whose axial-extent-to-radius ratio is such that each end receives 1/4 of the thermal energy. The source (fireball) is modeled either as a point or as a disk of finite radius, as described by Marrs et al. For the latter, the disk is divided into 13 equal-area segments, each approximated as a point source, modeling a partially occluded fireball. If the source is modeled as a single point, one obtains very nearly the expected deposition, e.g., 1/4 of the flux on each end, and energy is conserved. If the source is modeled as a disk, both conservation and the energy fraction degrade. However, the errors decrease as the ratio of source radius to domain size decreases. Modeling the source as a disk increases run times.
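The note does not state the cylinder's aspect ratio, but for an isotropic point source at the center it follows from solid-angle arithmetic: the fraction of energy reaching one end cap is (1 - cos θ)/2 with θ = arctan(R/h), which equals 1/4 when the half-length is h = R/√3. A quick check of that arithmetic:

```python
import math

def endcap_fraction(h, R):
    """Fraction of an isotropic point source's energy intercepted by one
    end cap of a cylinder of radius R, for a source at the cylinder
    center with half-length h: solid-angle fraction (1 - cos(theta))/2,
    where theta = atan(R / h) is the half-angle subtended by the cap."""
    return 0.5 * (1.0 - h / math.hypot(h, R))

R = 1.0
h = R / math.sqrt(3.0)       # half-length giving exactly 1/4 per end cap
f_end = endcap_fraction(h, R)
f_wall = 1.0 - 2.0 * f_end   # remainder strikes the lateral wall
```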
NASA Technical Reports Server (NTRS)
1988-01-01
The Infrared Astronomical Satellite (IRAS) was launched January 26, 1983. During its 300-day mission, it surveyed over 96 pct of the celestial sphere at four infrared wavelengths, centered approximately at 12, 25, 60, and 100 microns. This is Volume 4, The Point Source Catalog Declination Range 0 deg greater than delta greater than -30 deg.
NASA Technical Reports Server (NTRS)
1988-01-01
The Infrared Astronomical Satellite (IRAS) was launched January 26, 1983. During its 300-day mission, IRAS surveyed over 96 pct of the celestial sphere at four infrared wavelengths, centered approximately at 12, 25, 60, and 100 microns. This is Volume 3, The Point Source Catalog Declination Range 30 deg greater than delta greater than 0 deg.
NASA Technical Reports Server (NTRS)
1988-01-01
The Infrared Astronomical Satellite (IRAS) was launched January 26, 1983. During its 300-day mission, it surveyed over 96 pct of the celestial sphere at four infrared wavelengths, centered approximately at 12, 25, 60, and 100 microns. This is Volume 6, The Point Source Catalog Declination Range -50 deg greater than delta greater than -90 deg.
NASA Technical Reports Server (NTRS)
1988-01-01
The Infrared Astronomical Satellite (IRAS) was launched January 26, 1983. During its 300-day mission, IRAS surveyed over 96 pct of the celestial sphere at four infrared wavelengths, centered approximately at 12, 25, 60, and 100 microns. This is Volume 5, The Point Source Catalog Declination Range -30 deg greater than delta greater than -50 deg.
NASA Astrophysics Data System (ADS)
Cassan, Arnaud
2017-07-01
The exoplanet detection rate from gravitational microlensing has grown significantly in recent years thanks to greatly enhanced resources and improved observational strategies. Current observatories include ground-based wide-field and/or robotic worldwide networks of telescopes, as well as space-based observatories such as the Spitzer and Kepler/K2 satellites. This results in a large quantity of data to be processed and analysed, which is a challenge for modelling codes because of the complexity of the parameter space to be explored and the intensive computations required to evaluate the models. In this work, I present a method to compute the quadrupole and hexadecapole approximations of the finite-source magnification more efficiently than previously available codes, with routines about six times and four times faster, respectively. The quadrupole takes just about twice the time of a point-source evaluation, which argues for generalizing its use to large portions of the light curves. The corresponding routines are available as open-source python codes.
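To illustrate what a quadrupole-order finite-source approximation buys (a generic sketch of the idea, not the author's routines): averaging the point-source point-lens magnification over a uniform source disk of radius ρ gives ⟨A⟩ ≈ A + (ρ²/8)∇²A to second order in ρ, which can be checked against brute-force disk integration:

```python
import numpy as np

def A(ux, uy):
    """Point-source point-lens magnification at lens-plane offset u."""
    u = np.hypot(ux, uy)
    return (u**2 + 2) / (u * np.sqrt(u**2 + 4))

def disk_average(ux, uy, rho, n=400):
    """Brute-force uniform-disk finite-source magnification."""
    r = (np.arange(n) + 0.5) / n * rho
    t = 2 * np.pi * (np.arange(n) + 0.5) / n
    rr, tt = np.meshgrid(r, t)
    vals = A(ux + rr * np.cos(tt), uy + rr * np.sin(tt))
    return np.sum(vals * rr) / np.sum(rr)      # area-weighted mean

def quadrupole(ux, uy, rho, h=1e-4):
    """Quadrupole-order value: <A> ~ A + (rho**2 / 8) * Laplacian(A),
    with the Laplacian taken by central finite differences."""
    lap = (A(ux + h, uy) + A(ux - h, uy) + A(ux, uy + h)
           + A(ux, uy - h) - 4 * A(ux, uy)) / h**2
    return A(ux, uy) + rho**2 / 8 * lap

u0, rho = 0.3, 0.05
exact = disk_average(u0, 0.0, rho)
quad = quadrupole(u0, 0.0, rho)
point = A(u0, 0.0)
```

The quadrupole value needs only a handful of point-source evaluations yet lands far closer to the disk-integrated magnification than the bare point-source value, which is why the abstract advocates using it over large portions of the light curve.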
The gamma ray continuum spectrum from the galactic center disk and point sources
NASA Technical Reports Server (NTRS)
Gehrels, Neil; Tueller, Jack
1992-01-01
A light curve of gamma-ray continuum emission from point sources in the galactic center region is generated from balloon and satellite observations made over the past 25 years. The emphasis is on the wide field-of-view instruments, which measure the combined flux from all sources within approximately 20 degrees of the center. These data have not been previously used for point-source analyses because of the unknown contribution from diffuse disk emission. In this study, the galactic disk component is estimated from observations made by the Gamma Ray Imaging Spectrometer (GRIS) instrument in October 1988. Surprisingly, there are several times during the past 25 years when all gamma-ray sources (at 100 keV) within about 20 degrees of the galactic center are turned off or are in low emission states. This implies that the sources are all variable and few in number. The continuum gamma-ray emission below approximately 150 keV from the black hole candidate 1E1740.7-2942 is seen to turn off in May 1989 on a time scale of less than two weeks, significantly shorter than ever seen before. With the continuum below 150 keV turned off, the spectral shape derived from the HEXAGONE observation on 22 May 1989 is very peculiar, with a peak near 200 keV. This source was probably in its normal state for more than half of all observations since the mid-1960s. There are only two observations (in 1977 and 1979) for which the summed flux from the point sources in the region significantly exceeds that from 1E1740.7-2942 in its normal state.
Approximation for the Rayleigh Resolution of a Circular Aperture
ERIC Educational Resources Information Center
Mungan, Carl E.
2009-01-01
Rayleigh's criterion states that a pair of point sources are barely resolved by an optical instrument when the central maximum of the diffraction pattern due to one source coincides with the first minimum of the pattern of the other source. As derived in standard introductory physics textbooks, the first minimum for a rectangular slit of width "a"…
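The excerpt breaks off before the circular-aperture case; the standard result there is θ_min = 1.22 λ/D, where the factor 1.22 comes from the first zero of the Airy pattern. A worked numerical example (the eye-like aperture and viewing distance are illustrative values):

```python
import math

def rayleigh_angle(wavelength, aperture_d):
    """Minimum resolvable angular separation (rad) for a circular
    aperture: theta = 1.22 * lambda / D (first Airy-pattern zero)."""
    return 1.22 * wavelength / aperture_d

# Example: a 5 mm pupil at 550 nm (green light)
theta = rayleigh_angle(550e-9, 5e-3)   # minimum resolvable angle, rad
sep = theta * 25.0                     # resolvable separation at 25 m, m
```

So two point sources about 3 mm apart are at the resolution limit of a 5 mm aperture viewed from 25 m.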
Point and Compact Hα Sources in the Interior of M33
NASA Astrophysics Data System (ADS)
Moody, J. Ward; Hintz, Eric G.; Joner, Michael D.; Roming, Peter W. A.; Hintz, Maureen L.
2017-12-01
A variety of interesting objects, such as Wolf-Rayet stars, tight OB associations, planetary nebulae, and X-ray binaries, can be discovered as point or compact sources in Hα surveys. How these objects are distributed through a galaxy sheds light on the galaxy's star formation rate and history, mass distribution, and dynamics. The nearby galaxy M33 is an excellent place to study the distribution of Hα-bright point sources in a flocculent spiral galaxy. We have reprocessed an archived WIYN continuum-subtracted Hα image of the inner 6.5 × 6.5 arcmin of M33 and, employing both eye and machine searches, have tabulated sources with a flux greater than approximately 10(exp -15) erg cm(exp -2) s(exp -1). We have effectively recovered previously mapped H II regions and have identified 152 unresolved point sources and 122 marginally resolved compact sources, of which 39 have not been previously identified in any archive. An additional 99 Hα sources were found to have sufficient archival flux values to generate a spectral energy distribution (SED). Using the SED, flux values, Hα flux value, and compactness, we classified 67 of these sources.
Spherical-earth gravity and magnetic anomaly modeling by Gauss-Legendre quadrature integration
NASA Technical Reports Server (NTRS)
Von Frese, R. R. B.; Hinze, W. J.; Braile, L. W.; Luca, A. J.
1981-01-01
Gauss-Legendre quadrature integration is used to calculate the anomalous potential of gravity and magnetic fields and their spatial derivatives on a spherical earth. The procedure involves representation of the anomalous source as a distribution of equivalent point gravity poles or point magnetic dipoles. The distribution of equivalent point sources is determined directly from the volume limits of the anomalous body. The variable limits of integration for an arbitrarily shaped body are obtained from interpolations performed on a set of body points which approximate the body's surface envelope. The versatility of the method is shown by its ability to treat physical property variations within the source volume as well as variable magnetic fields over the source and observation surface. Examples are provided which illustrate the capabilities of the technique, including a preliminary modeling of potential field signatures for the Mississippi embayment crustal structure at 450 km.
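The equivalent-point-source quadrature scheme can be illustrated in a simplified flat-earth setting (the rectangular-prism geometry, function name, and values below are assumptions for illustration; the paper works with point poles and dipoles on a spherical earth and variable integration limits):

```python
import numpy as np

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def prism_gz(obs, bounds, rho, n=8):
    """Vertical gravity (m/s^2) at point `obs` due to a uniform rectangular
    prism, by Gauss-Legendre quadrature: the volume integral is replaced by
    a 3-D tensor product of equivalent point masses placed at the GL nodes.
    """
    nodes, weights = np.polynomial.legendre.leggauss(n)
    (x0, x1), (y0, y1), (z0, z1) = bounds
    jac = 0.125 * (x1 - x0) * (y1 - y0) * (z1 - z0)  # volume Jacobian
    gz = 0.0
    for xi, wx in zip(nodes, weights):
        x = 0.5 * ((x1 - x0) * xi + x1 + x0)  # map node from [-1, 1]
        for yi, wy in zip(nodes, weights):
            y = 0.5 * ((y1 - y0) * yi + y1 + y0)
            for zi, wz in zip(nodes, weights):
                z = 0.5 * ((z1 - z0) * zi + z1 + z0)
                dm = rho * wx * wy * wz * jac  # equivalent point mass
                dx, dy, dz = obs[0] - x, obs[1] - y, obs[2] - z
                r3 = (dx * dx + dy * dy + dz * dz) ** 1.5
                gz += G * dm * dz / r3  # vertical component toward the mass
    return gz
```

For an observer far above the prism the result converges to the single point-mass value, which is exactly the equivalent-source idea the paper generalizes to arbitrary bodies and magnetic dipoles.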
Jia, Mengyu; Chen, Xueying; Zhao, Huijuan; Cui, Shanshan; Liu, Ming; Liu, Lingling; Gao, Feng
2015-01-26
Most analytical methods for describing light propagation in turbid media exhibit low effectiveness in the near field of a collimated source. Motivated by the Charge Simulation Method in electromagnetic theory, as well as established discrete-source-based modeling, we herein report an improved explicit model for a semi-infinite geometry, referred to as the "Virtual Source" (VS) diffusion approximation (DA), suited to low-albedo media and short source-detector separations. In this model, the collimated light in the standard DA is approximated as multiple isotropic point sources (VSs) distributed along the incident direction. For performance enhancement, a fitting procedure between the calculated and realistic reflectances is adopted in the near field to optimize the VS parameters (intensities and locations). To be practically applicable, an explicit 2VS-DA model is established based on closed-form derivations of the VS parameters for typical ranges of the optical parameters. This parameterized scheme is shown to inherit the mathematical simplicity of the DA while considerably extending its validity in modeling near-field photon migration in low-albedo media. The superiority of the proposed VS-DA method over established ones is demonstrated by comparison with Monte Carlo simulations over wide ranges of source-detector separation and medium optical properties.
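The idea of replacing the collimated beam by isotropic point sources can be sketched with standard diffusion-theory Green's functions (a minimal illustration; the depths, weights, and boundary treatment below are generic assumptions, not the paper's fitted 2VS-DA parameters):

```python
import math

def point_green(r, mua, musp):
    """Infinite-medium diffusion Green's function for an isotropic point
    source: phi(r) = exp(-mueff * r) / (4 pi D r), with D = 1/(3(mua+musp))."""
    D = 1.0 / (3.0 * (mua + musp))
    mueff = math.sqrt(mua / D)
    return math.exp(-mueff * r) / (4.0 * math.pi * D * r)

def vs_fluence(rho, vs_depths, vs_weights, mua, musp):
    """Fluence at radial surface distance rho from several virtual point
    sources along the incident axis, each paired with a negative image
    source across the extrapolated boundary (index-matched case assumed)."""
    D = 1.0 / (3.0 * (mua + musp))
    zb = 2.0 * D  # extrapolated-boundary distance, refractive-index matched
    phi = 0.0
    for z, w in zip(vs_depths, vs_weights):
        r_real = math.hypot(rho, z)
        r_img = math.hypot(rho, z + 2.0 * zb)
        phi += w * (point_green(r_real, mua, musp) - point_green(r_img, mua, musp))
    return phi
```

In the paper the source depths and intensities are not fixed a priori as here but are optimized against realistic near-field reflectance, which is what extends the DA's validity close to the source.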
VizieR Online Data Catalog: ALMA 106GHz continuum observations in Chamaeleon I (Dunham+, 2016)
NASA Astrophysics Data System (ADS)
Dunham, M. M.; Offner, S. S. R.; Pineda, J. E.; Bourke, T. L.; Tobin, J. J.; Arce, H. G.; Chen, X.; Di Francesco, J.; Johnstone, D.; Lee, K. I.; Myers, P. C.; Price, D.; Sadavoy, S. I.; Schnee, S.
2018-02-01
We obtained ALMA observations of every source in Chamaeleon I detected in the single-dish 870 μm LABOCA survey by Belloche et al. (2011, J/A+A/527/A145), except for those listed as likely artifacts (1 source), residuals from bright sources (7 sources), or detections tentatively associated with YSOs (3 sources). We observed 73 sources from the initial list of 84 objects identified by Belloche et al. (2011, J/A+A/527/A145). We observed the 73 pointings using the ALMA Band 3 receivers during the Cycle 1 campaign between 2013 November 29 and 2014 March 08. Between 25 and 27 antennas were available for our observations, with the array in a relatively compact configuration providing a resolution of approximately 2" FWHM (300 AU at the distance of Chamaeleon I). Each target was observed in a single pointing with approximately 1 minute of on-source integration time. Three of the four available spectral windows were configured to measure the continuum at 101, 103, and 114 GHz, each with a bandwidth of 2 GHz, for a total continuum bandwidth of 6 GHz (2.8 mm) at a central frequency of 106 GHz. (2 data files).
Beaucamp, Sylvain; Mathieu, Didier; Agafonov, Viatcheslav
2005-09-01
A method to estimate the lattice energies E(latt) of nitrate salts is put forward. First, E(latt) is approximated by its electrostatic component E(elec). Then, E(elec) is correlated with Mulliken atomic charges calculated for the species that make up the crystal, using a simple equation involving two empirical parameters. The latter are fitted against point-charge estimates of E(elec) computed for available X-ray structures of nitrate crystals. The correlation thus obtained yields lattice energies within 0.5 kJ/g of the point-charge values. A further assessment of the method against experimental data suggests that the main source of error arises from the point-charge approximation.
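The point-charge estimate of E(elec) that anchors the fit can be sketched as a pairwise Coulomb sum over atomic charges (a minimal illustration; the function name and the kJ/mol-Å units convention are assumptions):

```python
import math

# e^2 / (4 pi eps0) expressed in kJ mol^-1 Å e^-2
COULOMB_KJ = 1389.35

def electrostatic_energy(charges, coords):
    """Pairwise point-charge Coulomb energy (kJ/mol) of a set of atoms.

    charges: partial charges in units of e (e.g. Mulliken charges);
    coords: matching (x, y, z) positions in Å. A real lattice-energy
    calculation would sum over the periodic crystal (e.g. by Ewald
    summation); this finite cluster sum only illustrates the idea.
    """
    e = 0.0
    n = len(charges)
    for i in range(n):
        for j in range(i + 1, n):
            r = math.dist(coords[i], coords[j])
            e += COULOMB_KJ * charges[i] * charges[j] / r
    return e
```

A unit +1/-1 charge pair at 1 Å separation gives -1389.35 kJ/mol, the textbook Coulomb energy scale against which such parameterizations are checked.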
Direct Measurement of Wave Kernels in Time-Distance Helioseismology
NASA Technical Reports Server (NTRS)
Duvall, T. L., Jr.
2006-01-01
Solar f-mode waves are surface gravity waves that propagate horizontally in a thin layer near the photosphere, with a dispersion relation approximately that of deep water waves. At the power maximum near 3 mHz, the wavelength of 5 Mm is large enough for various wave scattering properties to be observable. Gizon and Birch (2002, ApJ, 571, 966) have calculated kernels, in the Born approximation, for the sensitivity of wave travel times to local changes in damping rate and source strength. In this work, using isolated small magnetic features as approximate point-source scatterers, such a kernel has been measured. The observed kernel contains features similar to the theoretical damping kernel but not to the source kernel. A full understanding of the effect of small magnetic features on the waves will require more detailed modeling.
NASA Astrophysics Data System (ADS)
Chhetri, R.; Ekers, R. D.; Morgan, J.; Macquart, J.-P.; Franzen, T. M. O.
2018-06-01
We use Murchison Widefield Array observations of interplanetary scintillation (IPS) to determine the source counts of point (<0.3 arcsecond extent) sources and of all sources with some subarcsecond structure, at 162 MHz. We have developed the methodology to derive these counts directly from the IPS observables, while taking into account changes in sensitivity across the survey area. The counts of sources with compact structure follow the behaviour of the dominant source population above ~3 Jy, but below this they show Euclidean behaviour. We compare our counts to those predicted by simulations and find good agreement for our counts of sources with compact structure, but significant disagreement for point-source counts. Using low-radio-frequency SEDs from the GLEAM survey, we classify point sources as compact steep-spectrum (CSS), flat-spectrum, or peaked. If we consider the CSS sources to be the more evolved counterparts of the peaked sources, the two categories combined comprise approximately 80% of the point-source population. We calculate densities of potential calibrators brighter than 0.4 Jy at low frequencies and find 0.2 sources per square degree for point sources, rising to 0.7 sources per square degree if sources with more complex arcsecond structure are included. We extrapolate to estimate 4.6 sources per square degree at 0.04 Jy. We find that a peaked spectrum is an excellent predictor of compactness at low frequencies, increasing the number of good calibrators by a factor of three compared with the usual flat-spectrum criterion.
First Neutrino Point-Source Results from the 22 String Icecube Detector
NASA Astrophysics Data System (ADS)
Abbasi, R.; Abdou, Y.; Ackermann, M.; Adams, J.; Aguilar, J.; Ahlers, M.; Andeen, K.; Auffenberg, J.; Bai, X.; Baker, M.; Barwick, S. W.; Bay, R.; Bazo Alba, J. L.; Beattie, K.; Beatty, J. J.; Bechet, S.; Becker, J. K.; Becker, K.-H.; Benabderrahmane, M. L.; Berdermann, J.; Berghaus, P.; Berley, D.; Bernardini, E.; Bertrand, D.; Besson, D. Z.; Bissok, M.; Blaufuss, E.; Boersma, D. J.; Bohm, C.; Bolmont, J.; Böser, S.; Botner, O.; Bradley, L.; Braun, J.; Breder, D.; Castermans, T.; Chirkin, D.; Christy, B.; Clem, J.; Cohen, S.; Cowen, D. F.; D'Agostino, M. V.; Danninger, M.; Day, C. T.; De Clercq, C.; Demirörs, L.; Depaepe, O.; Descamps, F.; Desiati, P.; de Vries-Uiterweerd, G.; De Young, T.; Diaz-Velez, J. C.; Dreyer, J.; Dumm, J. P.; Duvoort, M. R.; Edwards, W. R.; Ehrlich, R.; Eisch, J.; Ellsworth, R. W.; Engdegård, O.; Euler, S.; Evenson, P. A.; Fadiran, O.; Fazely, A. R.; Feusels, T.; Filimonov, K.; Finley, C.; Foerster, M. M.; Fox, B. D.; Franckowiak, A.; Franke, R.; Gaisser, T. K.; Gallagher, J.; Ganugapati, R.; Gerhardt, L.; Gladstone, L.; Goldschmidt, A.; Goodman, J. A.; Gozzini, R.; Grant, D.; Griesel, T.; Groß, A.; Grullon, S.; Gunasingha, R. M.; Gurtner, M.; Ha, C.; Hallgren, A.; Halzen, F.; Han, K.; Hanson, K.; Hasegawa, Y.; Heise, J.; Helbing, K.; Herquet, P.; Hickford, S.; Hill, G. C.; Hoffman, K. D.; Hoshina, K.; Hubert, D.; Huelsnitz, W.; Hülß, J.-P.; Hulth, P. O.; Hultqvist, K.; Hussain, S.; Imlay, R. L.; Inaba, M.; Ishihara, A.; Jacobsen, J.; Japaridze, G. S.; Johansson, H.; Joseph, J. M.; Kampert, K.-H.; Kappes, A.; Karg, T.; Karle, A.; Kelley, J. L.; Kenny, P.; Kiryluk, J.; Kislat, F.; Klein, S. R.; Klepser, S.; Knops, S.; Kohnen, G.; Kolanoski, H.; Köpke, L.; Kowalski, M.; Kowarik, T.; Krasberg, M.; Kuehn, K.; Kuwabara, T.; Labare, M.; Lafebre, S.; Laihem, K.; Landsman, H.; Lauer, R.; Leich, H.; Lennarz, D.; Lucke, A.; Lundberg, J.; Lünemann, J.; Madsen, J.; Majumdar, P.; Maruyama, R.; Mase, K.; Matis, H. S.; McParland, C. 
P.; Meagher, K.; Merck, M.; Mészáros, P.; Middell, E.; Milke, N.; Miyamoto, H.; Mohr, A.; Montaruli, T.; Morse, R.; Movit, S. M.; Münich, K.; Nahnhauer, R.; Nam, J. W.; Nießen, P.; Nygren, D. R.; Odrowski, S.; Olivas, A.; Olivo, M.; Ono, M.; Panknin, S.; Patton, S.; Pérez de los Heros, C.; Petrovic, J.; Piegsa, A.; Pieloth, D.; Pohl, A. C.; Porrata, R.; Potthoff, N.; Price, P. B.; Prikockis, M.; Przybylski, G. T.; Rawlins, K.; Redl, P.; Resconi, E.; Rhode, W.; Ribordy, M.; Rizzo, A.; Rodrigues, J. P.; Roth, P.; Rothmaier, F.; Rott, C.; Roucelle, C.; Rutledge, D.; Ryckbosch, D.; Sander, H.-G.; Sarkar, S.; Satalecka, K.; Schlenstedt, S.; Schmidt, T.; Schneider, D.; Schukraft, A.; Schulz, O.; Schunck, M.; Seckel, D.; Semburg, B.; Seo, S. H.; Sestayo, Y.; Seunarine, S.; Silvestri, A.; Slipak, A.; Spiczak, G. M.; Spiering, C.; Stamatikos, M.; Stanev, T.; Stephens, G.; Stezelberger, T.; Stokstad, R. G.; Stoufer, M. C.; Stoyanov, S.; Strahler, E. A.; Straszheim, T.; Sulanke, K.-H.; Sullivan, G. W.; Swillens, Q.; Taboada, I.; Tarasova, O.; Tepe, A.; Ter-Antonyan, S.; Terranova, C.; Tilav, S.; Tluczykont, M.; Toale, P. A.; Tosi, D.; Turčan, D.; van Eijndhoven, N.; Vandenbroucke, J.; Van Overloop, A.; Voigt, B.; Walck, C.; Waldenmaier, T.; Walter, M.; Wendt, C.; Westerhoff, S.; Whitehorn, N.; Wiebusch, C. H.; Wiedemann, A.; Wikström, G.; Williams, D. R.; Wischnewski, R.; Wissing, H.; Woschnagg, K.; Xu, X. W.; Yodh, G.; Ice Cube Collaboration
2009-08-01
We present new results of searches for neutrino point sources in the northern sky, using data recorded in 2007-2008 with 22 strings of the IceCube detector (approximately one-fourth of the planned total) and 275.7 days of live time. The final sample of 5114 neutrino candidate events agrees well with the expected background of atmospheric muon neutrinos and a small component of atmospheric muons. No evidence of a point source is found, with the most significant excess of events in the sky at 2.2σ after accounting for all trials. The average upper limit over the northern sky for point sources of muon neutrinos with an E^-2 spectrum is E^2 Φ_{ν_μ} < 1.4 × 10^-11 TeV cm^-2 s^-1, in the energy range from 3 TeV to 3 PeV, improving the previous best average upper limit, set by the AMANDA-II detector, by a factor of 2.
Accelerator test of the coded aperture mask technique for gamma-ray astronomy
NASA Technical Reports Server (NTRS)
Jenkins, T. L.; Frye, G. M., Jr.; Owens, A.; Carter, J. N.; Ramsden, D.
1982-01-01
A prototype gamma-ray telescope employing the coded aperture mask technique has been constructed and its response to a point source of 20 MeV gamma-rays has been measured. The point spread function is approximately a Gaussian with a standard deviation of 12 arc minutes. This resolution is consistent with the cell size of the mask used and the spatial resolution of the detector. In the context of the present experiment, the error radius of the source position (90 percent confidence level) is 6.1 arc minutes.
Impacts of the Detection of Cassiopeia A Point Source.
Umeda; Nomoto; Tsuruta; Mineshige
2000-05-10
Very recently, the Chandra first-light observation discovered a point-like source in the Cassiopeia A supernova remnant. This detection was subsequently confirmed by analyses of the archival data from both ROSAT and Einstein observations. Here we compare the results from these observations with scenarios involving both black holes (BHs) and neutron stars (NSs). If this point source is a BH, we offer as a promising model a disk-corona-type model with a low accretion rate, in which a soft photon source at approximately 0.1 keV is Comptonized by higher-energy electrons in the corona. If it is an NS, the dominant radiation observed by Chandra most likely originates from smaller, hotter regions of the stellar surface, but we argue that it is still worthwhile to compare the cooler component from the rest of the surface with cooling theories. We emphasize that the detection of this point source could have a profound impact on theories of supernova explosions, progenitor scenarios, compact remnant formation, accretion onto compact objects, and NS thermal evolution.
An Exact Algebraic Evaluation of Path-Length Difference for Two-Source Interference
ERIC Educational Resources Information Center
Hopper, Seth; Howell, John
2006-01-01
When studying wave interference, one often wants to know the difference in path length for two waves arriving at a common point P but coming from adjacent sources. For example, in many contexts interference maxima occur where this path-length difference is an integer multiple of the wavelength. The standard approximation for the path-length…
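The exact path-length difference and the standard far-field approximation can be compared directly (a minimal sketch under an assumed two-source geometry: source separation d, screen distance L, transverse offset y of the observation point):

```python
import math

def exact_path_difference(d, L, y):
    """Exact path-length difference for two sources a distance d apart,
    observed at transverse offset y on a screen a distance L away."""
    r1 = math.hypot(L, y - d / 2.0)
    r2 = math.hypot(L, y + d / 2.0)
    return r2 - r1

def approx_path_difference(d, L, y):
    """Standard far-field approximation: delta ~ d * sin(theta),
    with sin(theta) = y / sqrt(L**2 + y**2)."""
    return d * y / math.hypot(L, y)
```

For d much smaller than L the two agree to high precision; the discrepancy grows as the sources move apart or the observation point moves close, which is the regime an exact algebraic evaluation addresses.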
NASA Astrophysics Data System (ADS)
Rinzema, Kees; ten Bosch, Jaap J.; Ferwerda, Hedzer A.; Hoenders, Bernhard J.
1995-01-01
The diffusion approximation, which is often used to describe the propagation of light in biological tissues, is valid only at a sufficient distance from sources and boundaries. Light-tissue interaction is, however, most intense in the region close to the source, so it is worthwhile to study this region more closely. Although scattering in biological tissues is predominantly forward-peaked, explicit solutions to the transport equation have been obtained only for the case of isotropic scattering. In particular, for an isotropic point source in an unbounded, isotropically scattering medium the solution is well known. We show that this problem can also be solved analytically when the scattering is no longer isotropic, while everything else remains the same.
Signal-to-noise ratio for the wide field-planetary camera of the Space Telescope
NASA Technical Reports Server (NTRS)
Zissa, D. E.
1984-01-01
Signal-to-noise ratios for the Wide Field Camera and Planetary Camera of the Space Telescope were calculated as a function of integration time. Models of the optical systems and CCD detector arrays were used with a 27th-visual-magnitude point source and a 25th-visual-magnitude-per-square-arcsecond extended source. A background of 23rd visual magnitude per square arcsecond was assumed. The models predicted a signal-to-noise ratio of 10 within 4 hours for the point source centered on a single pixel. Signal-to-noise ratios approaching 10 are estimated for approximately 0.25 × 0.25 arcsecond areas within the extended source after 10 hours of integration.
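The kind of estimate described can be sketched with the standard CCD signal-to-noise equation (a generic illustration; the rate and noise terms are assumptions, not the study's detailed camera models):

```python
import math

def ccd_snr(source_rate, sky_rate, dark_rate, read_noise, npix, t):
    """Standard CCD signal-to-noise estimate.

    source_rate is the detected source rate in e-/s; sky_rate and dark_rate
    are per-pixel rates in e-/s; read_noise is e- RMS per pixel; npix is the
    number of pixels summed over; t is the integration time in seconds.
    """
    signal = source_rate * t
    variance = signal + (sky_rate + dark_rate) * npix * t + npix * read_noise ** 2
    return signal / math.sqrt(variance)
```

In the photon-limited regime the SNR grows as the square root of integration time, which is why reaching SNR 10 on faint sources requires hours rather than minutes.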
Search for Gamma-Ray Emission from Local Primordial Black Holes with the Fermi Large Area Telescope
NASA Astrophysics Data System (ADS)
Ackermann, M.; Atwood, W. B.; Baldini, L.; Ballet, J.; Barbiellini, G.; Bastieri, D.; Bellazzini, R.; Berenji, B.; Bissaldi, E.; Blandford, R. D.; Bloom, E. D.; Bonino, R.; Bottacini, E.; Bregeon, J.; Bruel, P.; Buehler, R.; Cameron, R. A.; Caputo, R.; Caraveo, P. A.; Cavazzuti, E.; Charles, E.; Chekhtman, A.; Cheung, C. C.; Chiaro, G.; Ciprini, S.; Cohen-Tanugi, J.; Conrad, J.; Costantin, D.; D’Ammando, F.; de Palma, F.; Digel, S. W.; Di Lalla, N.; Di Mauro, M.; Di Venere, L.; Favuzzi, C.; Fegan, S. J.; Focke, W. B.; Franckowiak, A.; Fukazawa, Y.; Funk, S.; Fusco, P.; Gargano, F.; Gasparrini, D.; Giglietto, N.; Giordano, F.; Giroletti, M.; Green, D.; Grenier, I. A.; Guillemot, L.; Guiriec, S.; Horan, D.; Jóhannesson, G.; Johnson, C.; Kensei, S.; Kocevski, D.; Kuss, M.; Larsson, S.; Latronico, L.; Li, J.; Longo, F.; Loparco, F.; Lovellette, M. N.; Lubrano, P.; Magill, J. D.; Maldera, S.; Malyshev, D.; Manfreda, A.; Mazziotta, M. N.; McEnery, J. E.; Meyer, M.; Michelson, P. F.; Mitthumsiri, W.; Mizuno, T.; Monzani, M. E.; Moretti, E.; Morselli, A.; Moskalenko, I. V.; Negro, M.; Nuss, E.; Ojha, R.; Omodei, N.; Orienti, M.; Orlando, E.; Ormes, J. F.; Palatiello, M.; Paliya, V. S.; Paneque, D.; Persic, M.; Pesce-Rollins, M.; Piron, F.; Principe, G.; Rainò, S.; Rando, R.; Razzano, M.; Razzaque, S.; Reimer, A.; Reimer, O.; Ritz, S.; Sánchez-Conde, M.; Sgrò, C.; Siskind, E. J.; Spada, F.; Spandre, G.; Spinelli, P.; Suson, D. J.; Tajima, H.; Thayer, J. G.; Thayer, J. B.; Torres, D. F.; Tosti, G.; Troja, E.; Valverde, J.; Vianello, G.; Wood, K.; Wood, M.; Zaharijas, G.
2018-04-01
Black holes with masses below approximately 10^15 g are expected to emit gamma-rays with energies above a few tens of MeV, which can be detected by the Fermi Large Area Telescope (LAT). Although black holes with these masses cannot be formed as a result of stellar evolution, they may have formed in the early universe and are therefore called primordial black holes (PBHs). Previous searches for PBHs have focused on either short-timescale bursts or the contribution of PBHs to the isotropic gamma-ray emission. We show that, in the case of individual PBHs, the Fermi-LAT is most sensitive to PBHs with temperatures above approximately 16 GeV and masses of approximately 6 × 10^11 g, which it can detect out to a distance of about 0.03 pc. These PBHs have a remaining lifetime of months to years at the start of the Fermi mission. They would appear as potentially moving point sources with gamma-ray emission that becomes spectrally harder and brighter with time until the PBH completely evaporates. In this paper, we develop a new algorithm to detect the proper motion of gamma-ray point sources, and apply it to 318 unassociated point sources at high galactic latitude in the third Fermi-LAT source catalog. None of the unassociated point sources with spectra consistent with PBH evaporation show significant proper motion. Using the nondetection of PBH candidates, we derive a 99% confidence limit on the PBH evaporation rate in the vicinity of Earth, ρ̇_PBH < 7.2 × 10^3 pc^-3 yr^-1. This limit is similar to the limits obtained with ground-based gamma-ray observatories.
Polarization from Thomson scattering of the light of a spherical, limb-darkened star
NASA Technical Reports Server (NTRS)
Rudy, R. J.
1979-01-01
The polarized flux produced by the Thomson scattering of the light of a spherical, limb-darkened star by optically thin, extrastellar regions of electrons is calculated and contrasted to previous models which treated the star as a point source. The point-source approximation is found to be valid for scattering by particles more than a stellar radius from the surface of the star but is inappropriate for those lying closer. The specific effect of limb darkening on the fractional polarization of the total light of a system is explored. If the principal source of light is the unpolarized flux of the star, the polarization is nearly independent of limb darkening.
Methane bubbling from northern lakes: present and future contributions to the global methane budget.
Walter, Katey M; Smith, Laurence C; Chapin, F Stuart
2007-07-15
Large uncertainties in the budget of atmospheric methane (CH4) limit the accuracy of climate change projections. Here we describe and quantify an important source of CH4 -- point-source ebullition (bubbling) from northern lakes -- that has not been incorporated in previous regional or global methane budgets. Employing a method recently introduced to measure ebullition more accurately by taking into account its spatial patchiness in lakes, we estimate point-source ebullition for 16 lakes in Alaska and Siberia that represent several common northern lake types: glacial, alluvial floodplain, peatland, and thermokarst (thaw) lakes. Extrapolation of the measured fluxes from these 16 sites to all lakes north of 45 degrees N, using circumpolar databases of lake and permafrost distributions, suggests that northern lakes are a globally significant source of atmospheric CH4, emitting approximately 24.2 ± 10.5 Tg CH4 yr^-1. Thermokarst lakes have particularly high emissions because they release CH4 produced from organic matter previously sequestered in permafrost. A carbon mass balance calculation of CH4 release from thermokarst lakes on the Siberian yedoma ice complex suggests that these lakes alone would emit as much as approximately 49,000 Tg CH4 if this ice complex were to thaw completely. Using a space-for-time substitution based on the current lake distributions in permafrost-dominated and permafrost-free terrains, we estimate that lake emissions would be reduced by approximately 12% in a more probable transitional permafrost scenario and by approximately 53% in a 'permafrost-free' Northern Hemisphere. A long-term decline in CH4 ebullition from lakes due to lake area loss and permafrost thaw would occur only after the large release of CH4 associated with thermokarst lake development in the zone of continuous permafrost.
FIRST-ORDER COSMOLOGICAL PERTURBATIONS ENGENDERED BY POINT-LIKE MASSES
DOE Office of Scientific and Technical Information (OSTI.GOV)
Eingorn, Maxim, E-mail: maxim.eingorn@gmail.com
2016-07-10
In the framework of the concordance cosmological model, the first-order scalar and vector perturbations of the homogeneous background are derived in the weak gravitational field limit without any supplementary approximations. The sources of these perturbations (inhomogeneities) are presented in the discrete form of a system of separate point-like gravitating masses. The expressions found for the metric corrections are valid at all (sub-horizon and super-horizon) scales and converge at all points except at the locations of the sources. The average values of these metric corrections are zero (thus, first-order backreaction effects are absent). Both the Minkowski background limit and the Newtonian cosmological approximation are reached under certain well-defined conditions. An important feature of the velocity-independent part of the scalar perturbation is revealed: up to an additive constant, this part represents a sum of Yukawa potentials produced by inhomogeneities with the same finite time-dependent Yukawa interaction range. The suggested connection between this range and the homogeneity scale is briefly discussed along with other possible physical implications.
Response functions for neutron skyshine analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gui, A.A.; Shultis, J.K.; Faw, R.E.
1997-02-01
Neutron and associated secondary photon line-beam response functions (LBRFs) for point monodirectional neutron sources are generated using the MCNP Monte Carlo code for use in neutron skyshine analysis employing the integral line-beam method. The LBRFs are evaluated at 14 neutron source energies ranging from 0.01 to 14 MeV and at 18 emission angles from 1 to 170 deg, as measured from the source-to-detector axis. The neutron and associated secondary photon conical-beam response functions (CBRFs) for azimuthally symmetric neutron sources are also evaluated, at 13 neutron source energies in the same energy range and at 13 polar angles of source collimation from 1 to 89 deg. The response functions are approximated by an empirical three-parameter function of the source-to-detector distance. These response function approximations are available for source-to-detector distances up to 2,500 m and, for the first time, give dose equivalent responses that are required for modern radiological assessments. For the CBRFs, ground correction factors for neutrons and secondary photons are calculated and also approximated by empirical formulas for use in air-over-ground neutron skyshine problems with azimuthal symmetry. In addition, simple procedures are proposed for humidity and atmospheric density corrections.
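The role of a distance-dependent empirical fit can be sketched generically (the a·exp(-d/b)·d^(-c) shape below is an assumed stand-in with the same parameter count; the report's actual three-parameter form is not reproduced here):

```python
import math

def lbrf_approx(d, a, b, c):
    """Generic three-parameter distance fit, R(d) = a * exp(-d / b) / d**c.

    Illustrates how a response function tabulated by Monte Carlo at discrete
    source energies and emission angles can be condensed into a smooth
    function of source-to-detector distance d for fast skyshine estimates.
    The functional form and parameter values are illustrative assumptions.
    """
    return a * math.exp(-d / b) / d ** c
```

In practice one such (a, b, c) triplet would be fitted per source energy and emission angle, and the skyshine dose obtained by integrating the fitted responses over the source spectrum and collimation geometry.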
NASA Astrophysics Data System (ADS)
Fu, Shihang; Zhang, Li; Hu, Yao; Ding, Xiang
2018-01-01
Confocal Raman microscopy (CRM) has matured to become one of the most powerful instruments in analytical science because of its molecular sensitivity and high spatial resolution. Compared with conventional Raman microscopy, CRM can perform three-dimensional mapping of tiny samples and achieves high spatial resolution thanks to its unique pinhole. With the wide application of the instrument, there is a growing requirement for evaluating the imaging performance of the system. The point-spread function (PSF) is an important approach to evaluating the imaging capability of an optical instrument. Among the various methods for measuring the PSF, the point-source method has been widely used because it is easy to operate and its results closely approximate the true PSF. In the point-source method, the size of the point source has a significant impact on the final measurement accuracy. In this paper, the influence of point source size on the measurement accuracy of the PSF is analyzed and verified experimentally. A theoretical model of the lateral PSF for CRM is established, and the effect of point source size on the full width at half maximum of the lateral PSF is simulated. For long-term preservation and measurement convenience, a PSF measurement phantom of polydimethylsiloxane resin doped with different sizes of polystyrene microspheres is designed. The PSFs of the CRM with different sizes of microspheres are measured, and the results are compared with the simulation results. The results provide a guide for measuring the PSF of a CRM.
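The broadening effect of a finite point-source size can be sketched with a Gaussian rule of thumb (an assumed simplification, not the paper's full theoretical model of the lateral PSF):

```python
import math

def measured_fwhm(psf_fwhm, source_fwhm):
    """Rule-of-thumb broadening: if both the instrument PSF and the finite
    source profile are treated as Gaussians, the measured profile is their
    convolution and its FWHM adds in quadrature."""
    return math.sqrt(psf_fwhm ** 2 + source_fwhm ** 2)

# e.g. a 200 nm microsphere imaged with a 300 nm-FWHM lateral PSF broadens
# the apparent PSF by roughly 20% in this approximation, which is why the
# microsphere size matters for the accuracy of the point-source method.
```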
NASA Astrophysics Data System (ADS)
Nagasaka, Yosuke; Nozu, Atsushi
2017-02-01
The pseudo point-source model approximates the rupture process on faults with multiple point sources for simulating strong ground motions. A simulation with this point-source model is conducted by combining a simple source spectrum following the omega-square model with a path spectrum, an empirical site amplification factor, and phase characteristics. Realistic waveforms can be synthesized using the empirical site amplification factor and phase models even though the source model is simple. The Kumamoto earthquake occurred on April 16, 2016, with M_JMA 7.3. Many strong motions were recorded at stations around the source region, and some records were considered to be affected by the rupture directivity effect. This earthquake was therefore suitable for investigating the applicability of the pseudo point-source model, the current version of which does not consider the rupture directivity effect. Three subevents (point sources) were located on the fault plane, and the parameters of the simulation were determined. The simulated results were compared with the observed records at K-NET and KiK-net stations. It was found that the synthetic Fourier spectra and velocity waveforms generally explained the characteristics of the observed records, except for underestimation in the low-frequency range. Troughs in the observed Fourier spectra were also well reproduced by placing multiple subevents near the hypocenter. The underestimation is presumably due to two reasons. The first is that the pseudo point-source model targets subevents that generate strong ground motions and does not consider the shallow large slip. The second is that the current version of the pseudo point-source model does not consider the rupture directivity effect. Consequently, strong pulses were not fully reproduced at stations northeast of Subevent 3, such as KMM004, where the effect of rupture directivity was significant, while the amplitude was well reproduced at most other stations.
This result indicates the necessity of improving the pseudo point-source model, for example by introducing an azimuth-dependent corner frequency, so that it can incorporate the effect of rupture directivity.
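As an illustration of the omega-square scaling this model builds on, the sketch below evaluates the standard omega-square displacement spectrum (flat at the seismic moment below the corner frequency, falling as f^-2 above it). The moment and corner frequency values are illustrative, not parameters from the study.

```python
import math

# Hedged sketch of the omega-square source spectrum underlying the pseudo
# point-source model. All numerical values are illustrative assumptions.

def omega_square_spectrum(f, m0, fc):
    """Displacement amplitude spectrum: ~m0 for f << fc, ~f^-2 for f >> fc."""
    return m0 / (1.0 + (f / fc) ** 2)

# The acceleration spectrum (2*pi*f)^2 * S(f) flattens above fc, which is
# why each subevent radiates band-limited strong ground motion.
m0, fc = 1e19, 0.5  # seismic moment (N*m) and corner frequency (Hz), illustrative
acc = [(2 * math.pi * f) ** 2 * omega_square_spectrum(f, m0, fc)
       for f in (0.1, 1.0, 10.0)]
```

In the full simulation this source spectrum is multiplied by path and empirical site terms before being combined with a phase model.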
XMM-Newton Archival Study of the ULX Population in Nearby Galaxies
NASA Technical Reports Server (NTRS)
Winter, Lisa M.; Mushotzky, Richard F.; Reynolds, Christopher S.
2006-01-01
We present the results of an archival XMM-Newton study of the bright X-ray point sources (L_X > 10^38 erg s^-1) in 32 nearby galaxies. From our list of approximately 100 point sources, we attempt to determine whether there is a low-state counterpart to the ultraluminous X-ray (ULX) population, searching for a soft-hard state dichotomy similar to that known for Galactic X-ray binaries and testing the specific predictions of the IMBH hypothesis. To this end, we searched for low-state objects, which we defined as objects within our sample whose spectra were well fit by a simple absorbed power law, and high-state objects, which we defined as objects better fit by a combined blackbody and power law. Assuming that low-state objects accrete at approximately 10% of the Eddington luminosity (Done & Gierlinski 2003) and that high-state objects accrete near the Eddington luminosity, we further divided our sample into low- and high-state ULX sources. We classify 16 sources as low-state ULXs and 26 objects as high-state ULXs. As in Galactic black hole systems, the spectral indices, Γ, of the low-state objects, as well as their luminosities, tend to be lower than those of the high-state objects. The observed range of blackbody temperatures for the high state is 0.1-1 keV, with the most luminous systems tending toward the lowest temperatures. We therefore divide our high-state ULXs into candidate IMBHs (with blackbody temperatures of approximately 0.1 keV) and candidate stellar-mass BHs (with blackbody temperatures of approximately 1.0 keV). A subset of the candidate stellar-mass BHs have spectra that are well fit by a Comptonization model, a property similar to that of Galactic BHs radiating in the very high state near the Eddington limit.
Development of Parameters for the Collection and Analysis of Lidar at Military Munitions Sites
2010-01-01
and inertial measurement unit (IMU) equipment is used to locate the sensor in the air. The time of return of the laser signal allows for the...approximately 15 centimeters (cm) on soft ground surfaces and a horizontal accuracy of approximately 60 cm, both compared to surveyed control points...provide more accurate topographic data than other sources, at a reasonable cost compared to alternatives such as ground survey or photogrammetry
2008-08-01
Mason et al. (1998) survey. 3.3. 2MASS Data Mining Confirmations Searches were made for Two Micron All Sky Survey (2MASS) (Cutri et al. 2003...the separation/m limits of 2MASS, the point-source catalog was searched for sources in the magnitude range 5.5 ≤ J ≤ 8.0, corresponding to the...approximate 2MASS J-magnitude range for the AO targets in this project. This yielded 99,656 sources. All sources within 10′′ of these “primaries” were then
An extension of the Lighthill theory of jet noise to encompass refraction and shielding
NASA Technical Reports Server (NTRS)
Ribner, Herbert S.
1995-01-01
A formalism for jet noise prediction is derived that includes the refractive 'cone of silence' and other effects; outside the cone it approximates the simple Lighthill format. A key step is deferral of the simplifying assumption of uniform density in the dominant 'source' term. The result is conversion to a convected wave equation retaining the basic Lighthill source term. The main effect is to amend the Lighthill solution to allow for refraction by mean flow gradients, achieved via a frequency-dependent directional factor. A general formula for power spectral density emitted from unit volume is developed as the Lighthill-based value multiplied by a squared 'normalized' Green's function (the directional factor), referred to a stationary point source. The convective motion of the sources, with its powerful amplifying effect, also directional, is already accounted for in the Lighthill format: wave convection and source convection are decoupled. The normalized Green's function appears to be near unity outside the refraction-dominated 'cone of silence'; this validates our long-term practice of using Lighthill-based approaches outside the cone, with extension inside via the Green's function. The function is obtained either experimentally (injected 'point' source) or numerically (computational aeroacoustics). Approximation by unity seems adequate except near the cone and except when there are shrouding jets: in that case the difference from unity quantifies the shielding effect. Further extension yields dipole and monopole source terms (cf. Morfey, Mani, and others) when the mean flow possesses density gradients (e.g., hot jets).
Distributed-parameter watershed models are often utilized for evaluating the effectiveness of sediment and nutrient abatement strategies through the traditional {calibrate→ validate→ predict} approach. The applicability of the method is limited due to modeling approximations. In ...
NASA Astrophysics Data System (ADS)
Jia, Mengyu; Wang, Shuang; Chen, Xueying; Gao, Feng; Zhao, Huijuan
2016-03-01
Most analytical methods for describing light propagation in turbid media exhibit low effectiveness in the near-field of a collimated source. Motivated by the Charge Simulation Method in electromagnetic theory as well as established discrete-source-based modeling, we have reported on an improved explicit model, referred to as the "Virtual Source" (VS) diffuse approximation (DA), which inherits the mathematical simplicity of the DA while considerably extending its validity in modeling near-field photon migration in low-albedo media. In this model, the collimated light in the standard DA is analogously approximated as multiple isotropic point sources (VS) distributed along the incident direction. For performance enhancement, a fitting procedure between the calculated and realistic reflectances is adopted in the near-field to optimize the VS parameters (intensities and locations). To be practically applicable, an explicit 2VS-DA model is established based on closed-form derivations of the VS parameters for the typical ranges of the optical parameters. The proposed VS-DA model is validated by comparison with Monte Carlo simulations, and further introduced in the image reconstruction of the Laminar Optical Tomography system.
First Near-infrared Imaging Polarimetry of Young Stellar Objects in the Circinus Molecular Cloud
NASA Astrophysics Data System (ADS)
Kwon, Jungmi; Nakagawa, Takao; Tamura, Motohide; Hough, James H.; Choi, Minho; Kandori, Ryo; Nagata, Tetsuya; Kang, Miju
2018-02-01
We present the results of near-infrared (NIR) linear imaging polarimetry in the J, H, and Ks bands of the low-mass star cluster-forming region in the Circinus Molecular Cloud Complex. Using aperture polarimetry of point-like sources, positive detections of 314, 421, and 164 sources in the J, H, and Ks bands, respectively, were obtained from among 749 sources whose photometric magnitudes were measured. For the source classification of the 133 point-like sources whose polarization could be measured in all three bands, a color-color diagram was used. While most of the NIR polarizations of point-like sources are well aligned and can be explained by dichroic polarization produced by aligned interstellar dust grains in the cloud, 123 highly polarized sources were also identified on the basis of several criteria. The projected direction on the sky of the magnetic field in the Cir-MMS region is indicated by the mean polarization position angle (70°) of the point-like sources in the observed region, corresponding to approximately 1.6 × 1.6 pc^2. In addition, the magnetic field direction is compared with the outflow orientations associated with Infrared Astronomical Satellite sources, of which two were found to be aligned with each other and one was not. We also show prominent polarization nebulosities over the Cir-MMS region for the first time. Our polarization data have revealed one clear infrared reflection nebula (IRN) and several candidate IRNe in the Cir-MMS field. In addition, the illuminating sources of the IRNe are identified with near- and mid-infrared sources.
Comparison of methods for assessing photoprotection against ultraviolet A in vivo
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kaidbey, K.; Gange, R.W.
Photoprotection against ultraviolet A (UVA) by three sunscreens was evaluated in humans, with erythema and pigmentation used as end points in normal skin and in skin sensitized with 8-methoxypsoralen and anthracene. The test sunscreens were Parsol 1789 (2%), Eusolex 8020 (2%), and oxybenzone (3%). UVA was obtained from two filtered xenon-arc sources. UVA protection factors were found to be significantly higher in sensitized skin compared with normal skin. Both Parsol and Eusolex provided better and comparable photoprotection (approximately 3.0) than oxybenzone (approximately 2.0) in sensitized skin, regardless of whether 8-methoxypsoralen or anthracene was used. In normal unsensitized skin, Parsol 1789 and Eusolex 8020 were also comparable and provided slightly better photoprotection (approximately 1.8) than oxybenzone (approximately 1.4) when pigmentation was used as an end point. The three sunscreens, however, were similar in providing photoprotection against UVA-induced erythema. Protection factors obtained in artificially sensitized skin are probably not relevant to normal skin. It is concluded that pigmentation, either immediate or delayed, is a reproducible and useful end point for the routine assessment of photoprotection of normal skin against UVA.
DEVELOPING SEASONAL AMMONIA EMISSION ESTIMATES WITH AN INVERSE MODELING TECHNIQUE
Significant uncertainty exists in magnitude and variability of ammonia (NH3) emissions, which are needed for air quality modeling of aerosols and deposition of nitrogen compounds. Approximately 85% of NH3 emissions are estimated to come from agricultural non-point sources. We sus...
Solving Laplace equation to investigate the volcanic ground deformation pattern
NASA Astrophysics Data System (ADS)
Brahmi, Mouna; Castaldo, Raffaele; Barone, Andrea; Fedi, Maurizio; Tizzani, Pietro
2017-04-01
Volcanic eruptions are generally preceded by unrest phenomena, which are characterized by variations in the geophysical and geochemical state of the system. The most evident unrest parameters are the spatial and temporal topographic changes, which typically result in uplift or subsidence of the volcano edifice, usually caused by magma accumulation or hot fluid concentration in shallow reservoirs (Denasoquo et al., 2009). If the observed ground deformation phenomenon is very rapid and the time evolution of the process shows a linear tendency, we can approximate the problem by using an elastic rheology model of the crust beneath the volcano. In this scenario, by considering elastic field theory under the Boussinesq (1885) and Love (1892) approximations, we can evaluate the displacement field induced by a generic source in a homogeneous, elastic half-space at an arbitrary point. For this purpose, we use the depth to extreme points (DEXP) method. By using this approach, we are able to estimate the depth and the geometry of the active source responsible for the observed ground deformation.
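For context on the kind of elastic half-space point-source forward model such an analysis targets, the sketch below evaluates the classical Mogi point-pressure source for vertical surface displacement. This is a standard textbook closed form, not the paper's own formulation, and all numbers are illustrative.

```python
import math

# Hedged sketch: the classical Mogi point source in an elastic half-space.
# Vertical surface uplift above a small spherical reservoir of volume
# change dvol (m^3) buried at the given depth (m). Illustrative only.

def mogi_uplift(r, depth, dvol, nu=0.25):
    """Vertical displacement (m) at radial distance r from the source axis."""
    return (1.0 - nu) * dvol / math.pi * depth / (depth**2 + r**2) ** 1.5

# Peak uplift occurs directly above the source (r = 0); the signal decays
# with radial distance, which is what depth-estimation methods exploit.
peak = mogi_uplift(0.0, 3000.0, 1e6)  # ~centimeter-level for these values
```

The radial decay of this field is precisely the sort of scaling behavior a depth-to-source method can invert for source depth.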
Hubble Space Telescope NICMOS Polarization Measurements of OMC-1
NASA Technical Reports Server (NTRS)
Simpson, Janet P.; Colgan, Sean W. J.; Erickson, Edwin F.; Burton, Michael G.; Schultz, A. S. B.
2006-01-01
We present 2 micrometer polarization measurements of positions in the BN region of the Orion Molecular Cloud (OMC-1) made with NICMOS Camera 2 (0.2" resolution) on Hubble Space Telescope. Our goals are to seek the sources of heating for IRc2, 3, 4, and 7, identify possible young stellar objects (YSOs), and characterize the grain alignment in the dust clouds along the lines-of-sight to the stars. Our results are as follows: BN is approximately 29% polarized by dichroic absorption and appears to be the illuminating source for most of the nebulosity to its north and up to approximately 5" to its south. Although the stars are probably all polarized by dichroic absorption, there are a number of compact, but non-point-source, objects that could be polarized by a combination of both dichroic absorption and local scattering of star light. We identify several candidate YSOs, including an approximately edge-on bipolar YSO 8.7" east of BN, and a deeply-embedded IRc7, all of which are obviously self-luminous at mid-infrared wavelengths and may be YSOs. None of these is a reflection nebula illuminated by a star located near radio source I, as was previously suggested. Other IRc sources are clearly reflection nebulae: IRc3 appears to be illuminated by IRc2-B or a combination of the IRc2 sources, and IRc4 and IRc5 appear to be illuminated by an unseen star in the vicinity of radio source I, or by Star n or IRc2-A. Trends in the magnetic field direction are inferred from the polarization of the 26 stars that are bright enough to be seen as NICMOS point sources. Their polarization ranges from N less than or equal to 1% (all stars with this low polarization are optically visible) to greater than 40%. The most polarized star has a polarization position angle different from its neighbors by approximately 40 degrees, but in agreement with the grain alignment inferred from millimeter polarization measurements of the cold dust cloud in the southern part of OMC-1. 
The polarization position angle of another highly-polarized, probable star also requires a grain alignment and magnetic field orientation substantially different from the general magnetic field orientation of OMC-1.
The excitation of long period seismic waves by a source spanning a structural discontinuity
NASA Astrophysics Data System (ADS)
Woodhouse, J. H.
Simple theoretical results are obtained for the excitation of seismic waves by an indigenous seismic source in the case that the source volume is intersected by a structural discontinuity. In the long wavelength approximation the seismic radiation is identical to that of a point source placed on one side of the discontinuity or of a different point source placed on the other side. The moment tensors of these two equivalent sources are related by a specific linear transformation and may differ appreciably both in magnitude and geometry. Either of these sources could be obtained by linear inversion of seismic data but the physical interpretation is more complicated than in the usual case. A source which involved no volume change would, for example, yield an isotropic component if, during inversion, it were assumed to lie on the wrong side of the discontinuity. The problem of determining the true moment tensor of the source is indeterminate unless further assumptions are made about the stress glut distribution; one way to resolve this indeterminacy is to assume proportionality between the integrated stress glut on each side of the discontinuity.
An analysis of lamp irradiation in ellipsoidal mirror furnaces
NASA Astrophysics Data System (ADS)
Rivas, Damián; Vázquez-Espí, Carlos
2001-03-01
The irradiation generated by halogen lamps in ellipsoidal mirror furnaces is analyzed, in configurations suited to the study of the floating-zone technique for crystal growth in microgravity conditions. A line-source model for the lamp (instead of a point source) is developed, so that the longitudinal extent of the filament is taken into account. With this model the case of defocussed lamps can be handled analytically. In the model the lamp is formed by an aggregate of point-source elements, placed along the axis of the ellipsoid. For these point sources (which, in general, are defocussed) an irradiation model is formulated, within the approximation of geometrical optics. The irradiation profiles obtained (both on the lateral surface and on the inner base of the cylindrical sample) are analyzed. They present singularities related to the caustics formed by the family of reflected rays; these caustics are also analyzed. The lamp model is combined with a conduction-radiation model to study the temperature field in the sample. The effects of defocussing the lamp (common practice in crystal growth) are studied; advantages and also some drawbacks are pointed out. Comparison with experimental results is made.
New theory on the reverberation of rooms. [considering sound wave travel time
NASA Technical Reports Server (NTRS)
Pujolle, J.
1974-01-01
The inadequacy of the various theories which have been proposed for finding the reverberation time of rooms can be explained by an attempt to examine what might occur at a listening point when image sources of determined acoustic power are added to the actual source. The number and locations of the image sources are stipulated. The intensity of sound at the listening point can be calculated by means of approximations whose conditions for validity are given. This leads to the proposal of a new expression for the reverberation time, yielding results which fall between those obtained through use of the Eyring and Millington formulae; these results are made to depend on the shape of the room by means of a new definition of the mean free path.
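The image-source idea described in this abstract can be sketched briefly: wall reflections are replaced by mirror-image copies of the source, and their attenuated contributions are summed at the listening point. The code below computes only first-order images of a shoebox room with a single absorption coefficient; the room dimensions, positions, and absorption value are illustrative assumptions, not values from the paper.

```python
import math

# Hedged sketch of the image-source method for room acoustics: first-order
# images of a rectangular ("shoebox") room only. All values illustrative.

def first_order_images(src, room):
    """Mirror the source across each of the six walls of a shoebox room
    with one corner at the origin and dimensions given by `room`."""
    images = []
    for axis in range(3):
        for wall in (0.0, room[axis]):
            img = list(src)
            img[axis] = 2 * wall - src[axis]
            images.append(tuple(img))
    return images

def intensity_at(listener, src, room, absorption=0.3):
    """Direct sound plus first-order reflections, 1/r^2 spreading,
    each reflection attenuated by the wall absorption coefficient."""
    total = 1.0 / math.dist(listener, src) ** 2
    for img in first_order_images(src, room):
        total += (1.0 - absorption) / math.dist(listener, img) ** 2
    return total
```

Higher-order images (reflections of reflections) extend the same recursion, and summing their decaying contributions over time is what yields a reverberation-time estimate.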
New developments in ALFT's soft x-ray point sources
NASA Astrophysics Data System (ADS)
Cintron, Dario F.; Guo, Xiaoming; Xu, Meisheng; Ye, Rubin; Antoshko, Yuriy; Drew, Steve; Philippe, Albert; Panarella, Emilio
2002-07-01
The new development in the ALFT soft X-ray point source VSX-400 consists mainly of an improvement of the nozzle design to reduce the source size, as well as the introduction of a novel trigger system, capable of triggering the discharge hundreds of millions of times without failure, and a debris removal system. Continuous operation for 8 hours at 20 kHz allows us to achieve 400 mW of useful soft X-ray radiation around 1 nm wavelength. In another regime of operation with a high-energy machine, the VSX-Z, we have been able to achieve consistently 10 J of X-rays per pulse at a repetition rate that can reach 1 Hz with an input electrical energy of approximately 3 kJ and an efficiency in excess of 10^-3.
Response Functions for Neutron Skyshine Analyses
NASA Astrophysics Data System (ADS)
Gui, Ah Auu
Neutron and associated secondary photon line-beam response functions (LBRFs) for point monodirectional neutron sources and related conical line-beam response functions (CBRFs) for azimuthally symmetric neutron sources are generated using the MCNP Monte Carlo code for use in neutron skyshine analyses employing the internal line-beam and integral conical-beam methods. The LBRFs are evaluated at 14 neutron source energies ranging from 0.01 to 14 MeV and at 18 emission angles from 1 to 170 degrees. The CBRFs are evaluated at 13 neutron source energies in the same energy range and at 13 source polar angles (1 to 89 degrees). The response functions are approximated by a three parameter formula that is continuous in source energy and angle using a double linear interpolation scheme. These response function approximations are available for a source-to-detector range up to 2450 m and for the first time, give dose equivalent responses which are required for modern radiological assessments. For the CBRF, ground correction factors for neutrons and photons are calculated and approximated by empirical formulas for use in air-over-ground neutron skyshine problems with azimuthal symmetry. In addition, a simple correction procedure for humidity effects on the neutron skyshine dose is also proposed. The approximate LBRFs are used with the integral line-beam method to analyze four neutron skyshine problems with simple geometries: (1) an open silo, (2) an infinite wall, (3) a roofless rectangular building, and (4) an infinite air medium. In addition, two simple neutron skyshine problems involving an open source silo are analyzed using the integral conical-beam method. The results obtained using the LBRFs and the CBRFs are then compared with MCNP results and results of previous studies.
Distribution and sources of polyfluoroalkyl substances (PFAS) in the River Rhine watershed.
Möller, Axel; Ahrens, Lutz; Surm, Renate; Westerveld, Joke; van der Wielen, Frans; Ebinghaus, Ralf; de Voogt, Pim
2010-10-01
The concentration profile of 40 polyfluoroalkyl substances (PFAS) in surface water along the River Rhine watershed from Lake Constance to the North Sea was investigated. The aim of the study was to investigate the influence of point as well as diffuse sources, to estimate fluxes of PFAS into the North Sea and to identify replacement compounds of perfluorooctane sulfonate (PFOS) and perfluorooctanoic acid (PFOA). In addition, an interlaboratory comparison of the method performance was conducted. The PFAS pattern was dominated by perfluorobutane sulfonate (PFBS) and perfluorobutanoic acid (PFBA) with concentrations up to 181 ng/L and 335 ng/L, respectively, which originated from industrial point sources. Fluxes of ΣPFAS were estimated to be approximately 6 tonnes/year, which is much higher than previous estimates. Both the River Rhine and the River Scheldt seem to act as important sources of PFAS into the North Sea. Copyright (c) 2010 Elsevier Ltd. All rights reserved.
On the Motion of Agents across Terrain with Obstacles
NASA Astrophysics Data System (ADS)
Kuznetsov, A. V.
2018-01-01
The paper is devoted to finding the time-optimal route of an agent travelling across a region from a given source point to a given target point. At each point of this region, a maximum allowed speed is specified. This speed limit may vary in time. The continuous statement of this problem and the case when the agent travels on a grid with square cells are considered. In the latter case, the time is also discrete, and the number of admissible directions of motion at each point in time is eight. The existence of an optimal solution of this problem is proved, and accuracy estimates for the approximate solution obtained on the grid are derived. It is found that decreasing the size of cells below a certain limit does not further improve the approximation. These results can be used to estimate the quasi-optimal trajectory of the agent motion across rugged terrain produced by an algorithm based on a cellular automaton that was earlier developed by the author.
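The discrete version of this problem (eight admissible directions, per-cell speed limits) can be sketched with a standard shortest-path search, where the traversal time of a step is its geometric length divided by the allowed speed. The sketch below uses static speed limits for simplicity, whereas the paper also treats time-varying ones; the grid values are illustrative.

```python
import heapq
import math

# Hedged sketch: time-optimal travel on an 8-connected grid with a static
# maximum-speed field. Zero speed marks an impassable obstacle cell.

def fastest_time(speed, start, goal):
    """Dijkstra over grid cells; a step costs length / speed of the
    destination cell. Returns math.inf if the goal is unreachable."""
    rows, cols = len(speed), len(speed[0])
    moves = [(dr, dc) for dr in (-1, 0, 1) for dc in (-1, 0, 1)
             if (dr, dc) != (0, 0)]
    best = {start: 0.0}
    pq = [(0.0, start)]
    while pq:
        t, cell = heapq.heappop(pq)
        if cell == goal:
            return t
        if t > best.get(cell, math.inf):
            continue  # stale queue entry
        r, c = cell
        for dr, dc in moves:
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and speed[nr][nc] > 0:
                step = math.hypot(dr, dc) / speed[nr][nc]
                if t + step < best.get((nr, nc), math.inf):
                    best[(nr, nc)] = t + step
                    heapq.heappush(pq, (t + step, (nr, nc)))
    return math.inf
```

The diagonal steps cost sqrt(2) times the orthogonal ones, which is the discretization whose approximation error the paper bounds against the continuous optimum.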
A new continuous light source for high-speed imaging
NASA Astrophysics Data System (ADS)
Paton, R. T.; Hall, R. E.; Skews, B. W.
2017-02-01
Xenon arc lamps have been identified as a suitable continuous light source for high-speed imaging, specifically high-speed schlieren imaging and shadowgraphy. One issue when setting up such systems is the time it takes to reduce a finite source to the approximation of a point source for z-type schlieren. A preliminary design of a compact compound lens for use with a commercial xenon arc lamp was tested for suitability. While it was found that there is some dimming of the illumination at the spot periphery, the overall spectral and luminance distribution of the compact source is quite acceptable, especially considering the time benefit that it represents.
NASA Technical Reports Server (NTRS)
Woronowicz, Michael S.
2016-01-01
Analytical expressions for column number density (CND) are developed for optical line-of-sight paths through a variety of steady free molecule point source models including directionally-constrained effusion (Mach number M = 0) and flow from a sonic orifice (M = 1). Sonic orifice solutions are approximate, developed using a fair simulacrum fitted to the free molecule solution. Expressions are also developed for a spherically-symmetric thermal expansion (M = 0). CND solutions are found for the most general paths relative to these sources and briefly explored. It is determined that the maximum CND from a distant location through directed effusion and sonic orifice cases occurs along the path parallel to the source plane that intersects the plume axis. For the effusive case this value is exactly twice the CND found along the ray originating from that point of intersection and extending to infinity along the plume's axis. For sonic plumes this ratio is reduced to about 43. For high Mach number cases the maximum CND will be found along the axial centerline path.
Utilizing water treatment residuals to reduce phosphorus runoff from biosolids
USDA-ARS?s Scientific Manuscript database
Approximately 40% of biosolids (sewage sludge) produced in the U.S. are incinerated or landfilled rather than land applied due to concern over non-point source phosphorus (P) runoff. The objective of this study was to determine the impact of chemical amendments on water-extractable P (WEP) in appli...
This paper addresses the general problem of estimating at arbitrary locations the value of an unobserved quantity that varies over space, such as ozone concentration in air or nitrate concentrations in surface groundwater, on the basis of approximate measurements of the quantity ...
NASA Astrophysics Data System (ADS)
Turcu, I. C. E.; Ross, I. N.; Schulz, M. S.; Daido, H.; Tallents, G. J.; Krishnan, J.; Dwivedi, L.; Hening, A.
1993-06-01
The properties of a coherent x-ray point source in the water window spectral region generated using a small commercially available KrF laser system focused onto a Mylar (essentially carbon) target have been measured. By operating the source in a low-pressure (approximately 20 Torr) nitrogen environment, the degree of monochromaticity was improved due to the nitrogen acting as an x-ray filter and relatively enhancing the radiation at a wavelength of 3.37 nm (C VI 1s-2p). X-ray pinhole camera images show a minimum source size of 12 μm. A Young's double slit coherence measurement gave fringe visibilities of approximately 62% for a slit separation of 10.5 μm at a distance of 31.7 cm from the source. To demonstrate the viability of the laser plasma as a source for coherent imaging applications a Gabor (in-line) hologram of two carbon fibers, of different sizes, was produced. The exposure time and the repetition rate were 2 min and 10 Hz, respectively.
NASA Astrophysics Data System (ADS)
Barrett, Steven R. H.; Britter, Rex E.
Predicting long-term mean pollutant concentrations in the vicinity of airports, roads and other industrial sources is frequently of concern in regulatory and public health contexts. Many emissions are represented geometrically as ground-level line or area sources. Well developed modelling tools such as AERMOD and ADMS are able to model dispersion from finite (i.e. non-point) sources with considerable accuracy, drawing upon an up-to-date understanding of boundary layer behaviour. Due to mathematical difficulties associated with line and area sources, computationally expensive numerical integration schemes have been developed. For example, some models decompose area sources into a large number of line sources orthogonal to the mean wind direction, for which an analytical (Gaussian) solution exists. Models also employ a time-series approach, which involves computing mean pollutant concentrations for every hour over one or more years of meteorological data. This can give rise to computer runtimes of several days for the assessment of a site. While this may be acceptable for assessment of a single industrial complex, airport, etc., this level of computational cost precludes national or international policy assessments at the level of detail available with dispersion modelling. In this paper, we extend previous work [S.R.H. Barrett, R.E. Britter, 2008. Development of algorithms and approximations for rapid operational air quality modelling. Atmospheric Environment 42 (2008) 8105-8111] to line and area sources. We introduce approximations which allow for the development of new analytical solutions for long-term mean dispersion from line and area sources, based on hypergeometric functions. We describe how these solutions can be parameterized from a single point source run from an existing advanced dispersion model, thereby accounting for all processes modelled in the more costly algorithms. 
The parameterization method combined with the analytical solutions for long-term mean dispersion are shown to produce results several orders of magnitude more efficiently with a loss of accuracy small compared to the absolute accuracy of advanced dispersion models near sources. The method can be readily incorporated into existing dispersion models, and may allow for additional computation time to be expended on modelling dispersion processes more accurately in future, rather than on accounting for source geometry.
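The point-source decomposition this abstract describes can be illustrated with a minimal numerical sketch: a finite crosswind line source approximated as a sum of Gaussian point-source plumes. Constant dispersion coefficients are assumed purely for illustration; operational models like AERMOD compute these from boundary-layer meteorology.

```python
import math

# Hedged sketch: summing Gaussian point-source plumes to approximate a
# finite crosswind line source. Constant sigma_y, sigma_z are assumed
# for illustration only; all numbers are illustrative.

def point_plume(q, x, y, u, sigma_y, sigma_z):
    """Ground-level concentration from a ground-level point source of
    strength q (g/s) at downwind distance x, crosswind offset y."""
    if x <= 0:
        return 0.0
    return (q / (math.pi * u * sigma_y * sigma_z)) * \
        math.exp(-0.5 * (y / sigma_y) ** 2)

def line_source(q_per_m, length, x, y, u, sigma_y, sigma_z, n=200):
    """Approximate a crosswind line source of strength q_per_m (g/s/m)
    by n point sources spread uniformly along its length."""
    dq = q_per_m * length / n
    total = 0.0
    for i in range(n):
        ys = -length / 2 + (i + 0.5) * length / n
        total += point_plume(dq, x, y - ys, u, sigma_y, sigma_z)
    return total
```

For a long line this sum converges to the infinite-crosswind-line result q*sqrt(2/pi)/(u*sigma_z), which is the kind of analytical limit the paper's closed-form solutions generalize; the cost of the brute-force sum is what motivates replacing it with those solutions.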
A spatial and seasonal assessment of river water chemistry across North West England.
Rothwell, J J; Dise, N B; Taylor, K G; Allott, T E H; Scholefield, P; Davies, H; Neal, C
2010-01-15
This paper presents information on the spatial and seasonal patterns of river water chemistry at approximately 800 sites in North West England based on data from the Environment Agency regional monitoring programme. Within a GIS framework, the linkages between average water chemistry (pH, sulphate, base cations, nutrients and metals), catchment characteristics (topography, land cover, soil hydrology, base flow index and geology), rainfall, deposition chemistry and geo-spatial information on discharge consents (point sources) are examined. Water quality maps reveal that there is a clear distinction between the uplands and lowlands. Upland waters are acidic and have low concentrations of base cations, explained by background geological sources and land cover. Localised high concentrations of metals occur in areas of the Cumbrian Fells which are subjected to mining effluent inputs. Nutrient concentrations are low in the uplands with the exception of sites receiving effluent inputs from rural point sources. In the lowlands, both past and present human activities have a major impact on river water chemistry, especially in the urban and industrial heartlands of Greater Manchester, south Lancashire and Merseyside. Over 40% of the sites have average orthophosphate concentrations >0.1 mg P l(-1). Results suggest that the dominant control on orthophosphate concentrations is point source contributions from sewage effluent inputs. Diffuse agricultural sources are also important, although this influence is masked by the impact of point sources. Average nitrate concentrations are linked to the coverage of arable land, although sewage effluent inputs have a significant effect on nitrate concentrations. Metal concentrations in the lowlands are linked to diffuse and point sources. The study demonstrates that point sources, as well as diffuse sources, need to be considered when targeting measures for the effective reduction in river nutrient concentrations. 
This issue is clearly important with regards to the European Union Water Framework Directive, eutrophication and river water quality. Copyright 2009 Elsevier B.V. All rights reserved.
Wang, Chong
2018-03-01
In the case of a point source in front of a panel, the wavefront of the incident wave is spherical. This paper discusses spherical sound waves transmitting through a finite sized panel. The forced sound transmission performance that predominates in the frequency range below the coincidence frequency is the focus. Given a point source located along the centerline of the panel, the forced sound transmission coefficient is derived by introducing the sound radiation impedance for spherical incident waves. It is found that in addition to the panel mass, forced sound transmission loss also depends on the distance from the source to the panel as determined by the radiation impedance. Unlike the case of plane incident waves, sound transmission performance of a finite sized panel does not necessarily converge to that of an infinite panel, especially when the source is away from the panel. For practical applications, the normal incidence sound transmission loss expression of plane incident waves can be used if the distance between the source and panel d and the panel surface area S satisfy d/S > 0.5. When d/S ≈ 0.1, the diffuse field sound transmission loss expression may be a good approximation. An empirical expression for d/S = 0 is also given.
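The plane-wave limiting case the abstract refers to is the classical normal-incidence mass law for a limp panel, sketched below. This is the standard textbook expression, not the paper's spherical-wave derivation, and the panel mass and frequencies are illustrative.

```python
import math

# Hedged sketch: normal-incidence mass-law transmission loss of a limp
# panel of surface mass m (kg/m^2) in air (rho*c ~ 415 rayl). This is the
# classical plane-wave limit, not the paper's spherical-wave result.

def mass_law_tl_normal(f, m, rho_c=415.0):
    """Transmission loss in dB: TL = 10*log10(1 + (omega*m / (2*rho*c))^2)."""
    z = math.pi * f * m / rho_c  # omega*m / (2*rho*c)
    return 10.0 * math.log10(1.0 + z * z)

# Well above the stiffness-controlled range, TL rises ~6 dB per octave
# and per doubling of surface mass.
tl_1k = mass_law_tl_normal(1000.0, 10.0)  # a 10 kg/m^2 panel at 1 kHz
```

Per the abstract, this plane-wave expression is adequate once the source is far enough from the panel (d/S > 0.5); closer in, the spherical-wave radiation impedance modifies the result.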
Photon migration in non-scattering tissue and the effects on image reconstruction
NASA Astrophysics Data System (ADS)
Dehghani, H.; Delpy, D. T.; Arridge, S. R.
1999-12-01
Photon propagation in tissue can be calculated using the relationship described by the transport equation. For scattering tissue this relationship is often simplified and expressed in terms of the diffusion approximation. This approximation, however, is not valid for non-scattering regions, for example cerebrospinal fluid (CSF) below the skull. This study looks at the effects of a thin clear layer in a simple model representing the head and examines its effect on image reconstruction. Specifically, boundary photon intensities (total number of photons exiting at a point on the boundary due to a source input at another point on the boundary) are calculated using the transport equation and compared with data calculated using the diffusion approximation for both non-scattering and scattering regions. The effect of non-scattering regions on the calculated boundary photon intensities is presented together with the advantages and restrictions of the transport code used. Reconstructed images are then presented where the forward problem is solved using the transport equation for a simple two-dimensional system containing a non-scattering ring and the inverse problem is solved using the diffusion approximation to the transport equation.
The first Extreme Ultraviolet Explorer source catalog
NASA Technical Reports Server (NTRS)
Bowyer, S.; Lieu, R.; Lampton, M.; Lewis, J.; Wu, X.; Drake, J. J.; Malina, R. F.
1994-01-01
The Extreme Ultraviolet Explorer (EUVE) has conducted an all-sky survey to locate and identify point sources of emission in four extreme ultraviolet wavelength bands centered at approximately 100, 200, 400, and 600 A. A companion deep survey of a strip along half the ecliptic plane was conducted simultaneously. In this catalog we report the sources found in these surveys using rigorously defined criteria uniformly applied to the data set. These are the first surveys to be made in the three longer wavelength bands, and a substantial number of sources were detected in these bands. We present a number of statistical diagnostics of the surveys, including their source counts, their sensitivities, and their positional error distributions. We provide a separate list of those sources reported in the EUVE Bright Source List which did not meet our criteria for inclusion in our primary list. We also provide improved count rate and position estimates for a majority of these sources, based on the improved methodology used in this paper. In total, this catalog lists 410 point sources, of which 372 have plausible optical, ultraviolet, or X-ray identifications, which are also listed.
Optimized Reduction of Unsteady Radial Forces in a Single-Channel Pump for Wastewater Treatment
NASA Astrophysics Data System (ADS)
Kim, Jin-Hyuk; Cho, Bo-Min; Choi, Young-Seok; Lee, Kyoung-Yong; Peck, Jong-Hyeon; Kim, Seon-Chang
2016-11-01
A single-channel pump for wastewater treatment was optimized to reduce unsteady radial force sources caused by impeller-volute interactions. The steady and unsteady Reynolds-averaged Navier-Stokes equations with the shear-stress transport turbulence model were discretized by finite volume approximations and solved on tetrahedral grids to analyze the flow in the single-channel pump. The sweep area of the radial force during one revolution and the distance of the sweep-area center of mass from the origin were selected as the objective functions; the two design variables were related to the internal flow cross-sectional area of the volute. These objective functions were combined into a single objective function by applying a weighting factor for optimization. Latin hypercube sampling was employed to generate twelve design points within the design space. A response-surface approximation model was constructed as a surrogate for the objectives, based on the objective function values at the generated design points. The optimized results showed a considerable reduction in the unsteady radial force sources of the optimum design relative to those of the reference design.
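The sampling-plus-surrogate workflow described above (twelve Latin hypercube design points, then a response-surface fit) can be sketched in a few lines. Everything here is a hedged stand-in: the toy objective, the weighting, and the quadratic surface form are assumptions; the study used CFD results for the two radial-force metrics.

```python
import numpy as np

rng = np.random.default_rng(0)

def latin_hypercube(n, dims, rng):
    """n stratified samples in [0,1]^dims: one sample per equal-width
    bin along each axis, with the bin order shuffled per axis."""
    u = (np.arange(n)[:, None] + rng.random((n, dims))) / n
    for j in range(dims):
        rng.shuffle(u[:, j])
    return u

def fit_quadratic_surface(X, y):
    """Least-squares fit of a full quadratic response surface in two
    design variables: y ~ b0 + b1 x1 + b2 x2 + b3 x1^2 + b4 x2^2 + b5 x1 x2."""
    x1, x2 = X[:, 0], X[:, 1]
    A = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    return beta

def surface(beta, x1, x2):
    return (beta[0] + beta[1] * x1 + beta[2] * x2
            + beta[3] * x1**2 + beta[4] * x2**2 + beta[5] * x1 * x2)

# Hypothetical weighted objective: stand-ins for the normalized sweep
# area and center-of-mass distance combined with weight w.
def objective(x1, x2, w=0.5):
    return w * (x1 - 0.3)**2 + (1 - w) * (x2 - 0.7)**2

X = latin_hypercube(12, 2, rng)      # twelve design points, as in the study
y = objective(X[:, 0], X[:, 1])
beta = fit_quadratic_surface(X, y)   # surrogate for the combined objective
```

Because the toy objective is itself quadratic, the surrogate reproduces it exactly; with CFD data the fit would only approximate the response, which is the point of the surrogate.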
Ihme, Matthias; Marsden, Alison L; Pitsch, Heinz
2008-02-01
A pattern search optimization method is applied to the generation of optimal artificial neural networks (ANNs). Optimization is performed using a mixed-variable extension to the generalized pattern search method. This method offers the advantage that categorical variables, such as neural transfer functions and nodal connectivities, can be used as parameters in optimization. When used together with a surrogate, the resulting algorithm is highly efficient for expensive objective functions. Results demonstrate the effectiveness of this method in optimizing an ANN for the number of neurons, the type of transfer function, and the connectivity among neurons. The optimization method is applied to a chemistry approximation of practical relevance. In this application, temperature and a chemical source term are approximated as functions of two independent parameters using optimal ANNs. Comparison of the performance of optimal ANNs with conventional tabulation methods demonstrates equivalent accuracy with considerable savings in memory storage. The architecture of the optimal ANN for the approximation of the chemical source term consists of a fully connected feedforward network having four nonlinear hidden layers and 117 synaptic weights. An equivalent representation of the chemical source term using tabulation techniques would require a 500 x 500 grid point discretization of the parameter space.
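The quoted memory comparison is easy to check with back-of-the-envelope arithmetic (illustrative only; equal per-entry storage for weights and table entries is assumed):

```python
# Storage comparison quoted in the abstract: 117 ANN weights versus a
# 500 x 500 tabulation of the same chemical source term.
ann_weights = 117            # synaptic weights in the optimal ANN
table_entries = 500 * 500    # grid points in the equivalent tabulation
ratio = table_entries / ann_weights  # ~2000x fewer stored values
```

At 8 bytes per double, that is roughly 1 kB for the network weights against 2 MB for the table, before any index overhead.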
Limits on Arcminute-Scale Cosmic Microwave Background Anisotropy at 28.5 GHz
NASA Technical Reports Server (NTRS)
Holzapfel, W. L.; Carlstrom, J. E.; Grego, L.; Holder, G.; Joy, M.; Reese, E. D.
2000-01-01
We have used the Berkeley-Illinois-Maryland Association (BIMA) millimeter array outfitted with sensitive centimeter-wave receivers to search for cosmic microwave background (CMB) anisotropies on arcminute scales. The interferometer was placed in a compact configuration that produces high brightness sensitivity while providing discrimination against point sources. Operating at a frequency of 28.5 GHz, the FWHM primary beam of the instrument is approximately 6'.6. We have made sensitive images of seven fields, four of which were chosen specifically to have low infrared dust contrast and to be free of bright radio sources. Additional observations with the Owens Valley Radio Observatory (OVRO) millimeter array were used to assist in the location and removal of radio point sources. Applying a Bayesian analysis to the raw visibility data, we place limits on CMB anisotropy flat-band power of Q_flat = 5.6 (+3.0, -5.6) μK and Q_flat < 14.1 μK at 68% and 95% confidence, respectively. The sensitivity of this experiment to flat-band power peaks at a multipole of l = 5470, which corresponds to an angular scale of approximately 2'. The most likely value of Q_flat is similar to the level of the expected secondary anisotropies.
The HST/ACS Coma Cluster Survey. II. Data Description and Source Catalogs
NASA Technical Reports Server (NTRS)
Hammer, Derek; Kleijn, Gijs Verdoes; Hoyos, Carlos; Den Brok, Mark; Balcells, Marc; Ferguson, Henry C.; Goudfrooij, Paul; Carter, David; Guzman, Rafael; Peletier, Reynier F.;
2010-01-01
The Coma cluster, Abell 1656, was the target of an HST-ACS Treasury program designed for deep imaging in the F475W and F814W passbands. Although our survey was interrupted by the ACS instrument failure in early 2007, the partially completed survey still covers approximately 50% of the core high-density region in Coma. Observations were performed for twenty-five fields with a total coverage area of 274 arcmin², extending over a wide range of cluster-centric radii (approximately 1.75 Mpc or 1 deg). The majority of the fields are located near the core region of Coma (19/25 pointings), with six additional fields in the south-west region of the cluster. In this paper we present SExtractor source catalogs generated from the processed images, including a detailed description of the methodology used for object detection and photometry, the subtraction of bright galaxies to measure faint underlying objects, and the use of simulations to assess the photometric accuracy and completeness of our catalogs. We also use simulations to perform aperture corrections for the SExtractor Kron magnitudes based only on the measured source flux and its half-light radius. We have performed photometry for 76,000 objects that consist of roughly equal numbers of extended galaxies and unresolved objects. Approximately two-thirds of all detections are brighter than F814W = 26.5 mag (AB), which corresponds to the 10σ point-source detection limit. We estimate that Coma members make up 5-10% of the source detections, including a large population of compact objects (primarily GCs, but also cEs and UCDs) and a wide variety of extended galaxies, from cD galaxies to dwarf low surface brightness galaxies. The initial data release for the HST-ACS Coma Treasury program was made available to the public in August 2008. The images and catalogs described in this study relate to our second data release.
Possible Very Distant or Optically Dark Cluster of Galaxies
NASA Technical Reports Server (NTRS)
Vikhlinin, Alexey; Mushotzky, Richard (Technical Monitor)
2003-01-01
The goal of this proposal was an XMM follow-up observation of the extended X-ray source detected in our ROSAT PSPC cluster survey. Approximately 95% of extended X-ray sources found in the ROSAT data were optically identified as clusters of galaxies. However, we failed to find any optical counterparts for C10952-0148. Two possibilities remained prior to the XMM observation: (1) this was a very distant or optically dark cluster of galaxies, too faint in the optical, in which case XMM would easily detect extended X-ray emission; or (2) this was a group of point-like sources, blurred into a single extended source in the ROSAT data but easily resolvable by XMM owing to its better angular resolution. The XMM data have settled the case: C10952-0148 is a group of 7 relatively bright point sources located within 1 square arcmin. All but one source have no optical counterparts down to I = 22. Potentially, this can be an interesting group of quasars at high redshift. We are planning further optical and infrared follow-up of this system.
A Tidal Disruption Event in a Nearby Galaxy Hosting an Intermediate Mass Black Hole
NASA Technical Reports Server (NTRS)
Donato, D; Cenko, S. B.; Covino, S.; Troja, E.; Pursimo, T.; Cheung, C. C.; Fox, O.; Kutyrev, A.; Campana, S.; Fugazza, D.;
2014-01-01
We report the serendipitous discovery of a bright point source flare in the Abell cluster A1795 with archival EUVE and Chandra observations. Assuming the EUVE emission is associated with the Chandra source, the X-ray 0.5-7 keV flux declined by a factor of approximately 2300 over a time span of 6 years, following a power-law decay with index ≈ 2.44 ± 0.40. The Chandra data alone vary by a factor of approximately 20. The spectrum is well fit by a blackbody with a constant temperature of kT ≈ 0.09 keV (≈10⁶ K). The flare is spatially coincident with the nuclear region of a faint, inactive galaxy with a photometric redshift consistent at the 1 sigma level with the cluster (z = 0.062476). We argue that these properties are indicative of a tidal disruption of a star by a black hole (BH) with log(M_BH/M_⊙) ≈ 5.5 ± 0.5. If so, such a discovery indicates that tidal disruption flares may be used to probe BHs in the intermediate mass range, which are very difficult to study by other means.
Non-contact local temperature measurement inside an object using an infrared point detector
NASA Astrophysics Data System (ADS)
Hisaka, Masaki
2017-04-01
Local temperature measurement in deep areas of objects is an important technique in biomedical measurement. We have investigated a non-contact method for measuring temperature inside an object using a point detector for infrared (IR) light. An IR point detector with a pinhole was constructed, and the radiant IR light emitted from the local interior of the object is photodetected only at the position of the pinhole, which is located in an imaging relation with the measurement point. We measured the thermal structure of the filament inside a miniature bulb using the IR point detector, and investigated the temperature dependence at approximately human body temperature using a glass plate positioned in front of the heat source.
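For context on why the long-wave infrared is the relevant band near body temperature, Wien's displacement law gives the peak emission wavelength. This calculation is illustrative background, not taken from the paper:

```python
def wien_peak_um(T_kelvin):
    """Peak wavelength [micrometres] of blackbody emission at
    temperature T, via Wien's displacement law (b = 2897.8 um*K)."""
    return 2897.8 / T_kelvin

# Near human body temperature (~310 K) thermal emission peaks in the
# long-wave infrared, around 9-10 um.
peak = wien_peak_um(310.0)
```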
Discretized energy minimization in a wave guide with point sources
NASA Technical Reports Server (NTRS)
Propst, G.
1994-01-01
An anti-noise problem on a finite time interval is solved by minimization of a quadratic functional on the Hilbert space of square integrable controls. To this end, the one-dimensional wave equation with point sources and pointwise reflecting boundary conditions is decomposed into a system for the two propagating components of waves. Wellposedness of this system is proved for a class of data that includes piecewise linear initial conditions and piecewise constant forcing functions. It is shown that for such data the optimal piecewise constant control is the solution of a sparse linear system. Methods for its computational treatment are presented as well as examples of their applicability. The convergence of discrete approximations to the general optimization problem is demonstrated by finite element methods.
Probabilities for gravitational lensing by point masses in a locally inhomogeneous universe
NASA Technical Reports Server (NTRS)
Isaacson, Jeffrey A.; Canizares, Claude R.
1989-01-01
Probability functions for gravitational lensing by point masses that incorporate Poisson statistics and flux conservation are formulated in the Dyer-Roeder construction. Optical depths to lensing for distant sources are calculated using both the method of Press and Gunn (1973) which counts lenses in an otherwise empty cone, and the method of Ehlers and Schneider (1986) which projects lensing cross sections onto the source sphere. These are then used as parameters of the probability density for lensing in the case of a critical (q0 = 1/2) Friedmann universe. A comparison of the probability functions indicates that the effects of angle-averaging can be well approximated by adjusting the average magnification along a random line of sight so as to conserve flux.
NASA Technical Reports Server (NTRS)
Woronowicz, Michael
2016-01-01
Analytical expressions for column number density (CND) are developed for optical line of sight paths through a variety of steady free molecule point source models including directionally-constrained effusion (Mach number M = 0) and flow from a sonic orifice (M = 1). Sonic orifice solutions are approximate, developed using a fair simulacrum fitted to the free molecule solution. Expressions are also developed for a spherically-symmetric thermal expansion (M = 0). CND solutions are found for the most general paths relative to these sources and briefly explored. It is determined that the maximum CND from a distant location through directed effusion and sonic orifice cases occurs along the path parallel to the source plane that intersects the plume axis. For the effusive case this value is exactly twice the CND found along the ray originating from that point of intersection and extending to infinity along the plume's axis. For sonic plumes this ratio is reduced to about 4/3. For high Mach number cases the maximum CND will be found along the axial centerline path. Keywords: column number density, plume flows, outgassing, free molecule flow.
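The factor-of-two result for the effusive case can be checked numerically, assuming the standard free-molecule effusive density n ∝ cos θ / r² (an assumed form consistent with the directed-effusion, M = 0, model named above):

```python
import numpy as np

d = 1.0  # distance from the source along the plume axis (arbitrary units)

# Axial path: from the point at distance d out to infinity along the
# plume axis (cos(theta) = 1 there):
#   CND_axis = integral_d^inf dr / r^2 = 1/d.
cnd_axis = 1.0 / d

# Path parallel to the source plane through the same axial point:
# r = sqrt(d^2 + s^2), cos(theta) = d/r, so the integrand is
# d / (d^2 + s^2)^(3/2). Integrate by the trapezoidal rule over a
# span wide enough that the truncated tails are negligible.
s = np.linspace(-200.0, 200.0, 400001)
f = d / (d**2 + s**2) ** 1.5
cnd_parallel = float(np.sum((f[1:] + f[:-1]) * np.diff(s)) / 2.0)

ratio = cnd_parallel / cnd_axis   # analytic value: exactly 2
```

The analytic integral is 2/d, so the ratio of the parallel-path CND to the axial CND is exactly the factor of two quoted in the abstract.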
NASA Astrophysics Data System (ADS)
Schäfer, M.; Groos, L.; Forbriger, T.; Bohlen, T.
2014-09-01
Full-waveform inversion (FWI) of shallow-seismic surface waves is able to reconstruct lateral variations of subsurface elastic properties. Line-source simulation for point-source data is required when applying algorithms of 2-D adjoint FWI to recorded shallow-seismic field data. The equivalent line-source response for point-source data can be obtained by convolving the waveforms with √(1/t) (t: traveltime), which produces a phase shift of π/4. Subsequently an amplitude correction must be applied. In this work we recommend scaling the seismograms with √(2 r v_ph) at small receiver offsets r, where v_ph is the phase velocity, and gradually shifting to applying a √(1/t) time-domain taper and scaling the waveforms with r√2 at larger receiver offsets. We call this the hybrid transformation; it is adapted for direct body and Rayleigh waves, and we demonstrate its outstanding performance on a 2-D heterogeneous structure. The fit of the phases as well as the amplitudes for all shot locations and components (vertical and radial) is excellent with respect to the reference line-source data. An approach for 1-D media based on Fourier-Bessel integral transformation generates strong artefacts for waves produced by 2-D structures. The theoretical background for both approaches is presented in a companion contribution. In the current contribution we study their performance when applied to waves propagating in a significantly 2-D-heterogeneous structure. We calculate synthetic seismograms for the 2-D structure for line sources as well as point sources. Line-source simulations obtained from the point-source seismograms through the different approaches are then compared to the corresponding line-source reference waveforms. Although derived by approximation, the hybrid transformation performs excellently except for explicitly back-scattered waves.
In reconstruction tests we further invert point-source synthetic seismograms by a 2-D FWI to subsurface structure and evaluate its ability to reproduce the original structural model in comparison to the inversion of line-source synthetic data. Even when applying no explicit correction to the point-source waveforms prior to inversion only moderate artefacts appear in the results. However, the overall performance is best in terms of model reproduction and ability to reproduce the original data in a 3-D simulation if inverted waveforms are obtained by the hybrid transformation.
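A minimal sketch of the near-offset branch of the hybrid transformation described above: convolve each trace with √(1/t) and scale by √(2 r v_ph). The discretization, time-sampling choices, and filter normalization below are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def point_to_line_near_offset(trace, dt, r, v_ph):
    """Approximate single-trace point-to-line-source conversion for
    small receiver offsets: convolve with 1/sqrt(t), then scale by
    sqrt(2*r*v_ph). trace: seismogram samples; dt: sample interval [s];
    r: receiver offset [m]; v_ph: phase velocity [m/s]. The discrete
    kernel normalization (multiplying by dt) is an assumption here."""
    t = np.arange(1, len(trace) + 1) * dt        # avoid the t = 0 singularity
    kernel = 1.0 / np.sqrt(t)
    convolved = np.convolve(trace, kernel)[: len(trace)] * dt
    return np.sqrt(2.0 * r * v_ph) * convolved

# Demo on a hypothetical 25 Hz wavelet (all parameter values invented).
dt = 1e-3
t_axis = np.arange(400) * dt
trace = np.sin(2 * np.pi * 25 * t_axis) * np.exp(-((t_axis - 0.1) ** 2) / 1e-3)
line_trace = point_to_line_near_offset(trace, dt, r=5.0, v_ph=200.0)
```

Note that the amplitude correction is linear in √r, so quadrupling the offset doubles the corrected amplitude, as the test below exercises.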
First Year Wilkinson Microwave Anisotropy Probe(WMAP)Observations: The Angular Power Spectrum
NASA Technical Reports Server (NTRS)
Hinshaw, G.; Spergel, D. N.; Verde, L.; Hill, R. S.; Meyer, S. S.; Barnes, C.; Bennett, C. L.; Halpern, M.; Jarosik, N.; Kogut, A.
2003-01-01
We present the angular power spectrum derived from the first-year Wilkinson Microwave Anisotropy Probe (WMAP) sky maps. We study a variety of power spectrum estimation methods and data combinations and demonstrate that the results are robust. The data are modestly contaminated by diffuse Galactic foreground emission, but we show that a simple Galactic template model is sufficient to remove the signal. Point sources produce a modest contamination in the low frequency data. After masking approximately 700 known bright sources from the maps, we estimate that residual sources contribute approximately 3500 μK² at 41 GHz, and approximately 130 μK² at 94 GHz, to the power spectrum [l(l+1)C_l/2π] at l = 1000. Systematic errors are negligible compared to the (modest) level of foreground emission. Our best estimate of the power spectrum is derived from 28 cross-power spectra of statistically independent channels. The final spectrum is essentially independent of the noise properties of an individual radiometer. The resulting spectrum provides a definitive measurement of the CMB power spectrum, with uncertainties limited by cosmic variance, up to l ≈ 350. The spectrum clearly exhibits a first acoustic peak at l = 220 and a second acoustic peak at l ≈ 540, and it provides strong support for adiabatic initial conditions. Researchers have analyzed the TE power spectrum and present evidence for a relatively high optical depth and an early period of cosmic reionization. Among other things, this implies that the temperature power spectrum has been suppressed by approximately 30% on degree angular scales, due to secondary scattering.
The Spitzer-IRAC Point-source Catalog of the Vela-D Cloud
NASA Astrophysics Data System (ADS)
Strafella, F.; Elia, D.; Campeggio, L.; Giannini, T.; Lorenzetti, D.; Marengo, M.; Smith, H. A.; Fazio, G.; De Luca, M.; Massi, F.
2010-08-01
This paper presents the observations of Cloud D in the Vela Molecular Ridge, obtained with the Infrared Array Camera (IRAC) on board the Spitzer Space Telescope at the wavelengths λ = 3.6, 4.5, 5.8, and 8.0 μm. A photometric catalog of point sources, covering a field of approximately 1.2 deg2, has been extracted and complemented with additional available observational data in the millimeter region. Previous observations of the same region, obtained with the Spitzer MIPS camera in the photometric bands at 24 μm and 70 μm, have also been reconsidered to allow an estimate of the spectral slope of the sources in a wider spectral range. A total of 170,299 point sources, detected at the 5σ sensitivity level in at least one of the IRAC bands, have been reported in the catalog. There were 8796 sources for which good quality photometry was obtained in all four IRAC bands. For this sample, a preliminary characterization of the young stellar population based on the determination of the spectral slope is discussed; combining this with diagnostics in the color-magnitude and color-color diagrams, the relative population of young stellar objects (YSOs) in different evolutionary classes has been estimated and a total of 637 candidate YSOs have been selected. The main differences in their relative abundances have been highlighted and a brief account of their spatial distribution is given. The star formation rate has also been estimated and compared with the values derived for other star-forming regions. Finally, an analysis of the spatial distribution of the sources by means of the two-point correlation function shows that the younger population, constituted by the Class I and flat-spectrum sources, is significantly more clustered than the Class II and III sources.
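The spectral-slope classification mentioned above, α = d log(λF_λ)/d log λ fitted across the IRAC bands, can be sketched as follows. The class boundaries used here are the commonly quoted literature values, not necessarily those adopted in this paper:

```python
import numpy as np

IRAC_WAVELENGTHS_UM = np.array([3.6, 4.5, 5.8, 8.0])

def spectral_slope(flux_jy, wavelengths_um=IRAC_WAVELENGTHS_UM):
    """Least-squares slope alpha = d log(lambda*F_lambda) / d log(lambda).
    With fluxes in Jy (F_nu), lambda*F_lambda is proportional to
    F_nu / lambda, so log(lambda*F_lambda) = log(F_nu) - log(lambda) + const."""
    x = np.log10(wavelengths_um)
    y = np.log10(flux_jy) - x
    alpha, _ = np.polyfit(x, y, 1)
    return alpha

def yso_class(alpha):
    """Conventional slope classes (boundaries as commonly used in the
    literature; the paper may adopt slightly different cuts)."""
    if alpha >= 0.3:
        return "Class I"
    if alpha >= -0.3:
        return "flat-spectrum"
    if alpha >= -1.6:
        return "Class II"
    return "Class III"
```

A source whose F_ν rises linearly with wavelength has flat λF_λ, giving α = 0 and a flat-spectrum classification.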
Zhang, Jie; Wang, Peng; Li, Jingyi; Mendola, Pauline; Sherman, Seth; Ying, Qi
2016-12-01
A revised Community Multiscale Air Quality (CMAQ) model was developed to simulate the emission, reactions, transport, deposition and gas-to-particle partitioning processes of 16 priority polycyclic aromatic hydrocarbons (PAHs), as described in Part I of this two-part series. The updated CMAQ model was applied in this study to quantify the contributions of different emission sources to the predicted PAH concentrations and excess cancer risk in the United States (US) in 2011. The cancer risk in the continental US due to inhalation exposure to outdoor naphthalene (NAPH) and seven larger carcinogenic PAHs (cPAHs) was predicted to be significant. The incremental lifetime cancer risk (ILCR) exceeds 1×10⁻⁵ in many urban and industrial areas. Exposure to PAHs was estimated to result in 5704 (608-10,800) excess lifetime cancer cases. Point sources not related to energy generation or oil and gas processes account for approximately 31% of the excess cancer cases, followed by non-road engines with 18.6%. Contributions of residential wood combustion (16.2%) are similar to those of transportation-related sources (mostly motor vehicles, with small contributions from railway and marine vessels; 13.4%). Oil and gas industry emissions, although large contributors to high regional concentrations of cPAHs, are responsible for only 4.3% of the excess cancer cases, similar to the contributions of non-US sources (6.8%) and non-point sources (7.2%). Power generation units have the smallest impact on excess cancer risk, contributing approximately 2.3%. Copyright © 2016 Elsevier Ltd. All rights reserved.
The Ionization Source in the Nucleus of M84
NASA Technical Reports Server (NTRS)
Bower, G. A.; Green, R. F.; Quillen, A. C.; Danks, A.; Malumuth, E. M.; Gull, T.; Woodgate, B.; Hutchings, J.; Joseph, C.; Kaiser, M. E.
2000-01-01
We have obtained new Hubble Space Telescope (HST) observations of M84, a nearby massive elliptical galaxy whose nucleus contains an approximately 1.5 × 10⁹ solar mass dark compact object, which presumably is a supermassive black hole. Our Space Telescope Imaging Spectrograph (STIS) spectrum provides the first clear detection of emission lines in the blue (e.g., [O II] λ3727, Hβ and [O III] λλ4959, 5007), which arise from a compact region approximately 0.28 arcsec across centered on the nucleus. Our Near Infrared Camera and Multi-Object Spectrometer (NICMOS) images provide the best view through the prominent dust lanes evident at optical wavelengths and a more accurate correction for the internal extinction. The relative fluxes of the emission lines we have detected in the blue, together with those detected in the wavelength range 6295-6867 A by Bower et al., indicate that the gas at the nucleus is photoionized by a nonstellar process, rather than by hot stars. Stellar absorption features from cool stars at the nucleus are very weak. We update the spectral energy distribution of the nuclear point source and find that although it is roughly flat in most bands, the optical to UV continuum is very red, similar to the spectral energy distribution of BL Lac. Thus, the nuclear point source seen in high-resolution optical images is not a star cluster but is instead a nonstellar source. Assuming isotropic emission from this source, we estimate that the ratio of bolometric luminosity to Eddington luminosity is about 5 × 10⁻⁷. However, this could be underestimated if this source is a misaligned BL Lac object, a possibility suggested by the spectral energy distribution and the evidence of optical variability we describe.
Extended source effect on microlensing light curves by an Ellis wormhole
NASA Astrophysics Data System (ADS)
Tsukamoto, Naoki; Gong, Yungui
2018-04-01
We can survey an Ellis wormhole, the simplest Morris-Thorne wormhole, in our Galaxy with microlensing. The light curve of a point source microlensed by the Ellis wormhole shows approximately 4% demagnification, whereas the total magnification of images lensed by a Schwarzschild lens is always larger than unity. We investigate the effect of an extended source on the light curves microlensed by the Ellis wormhole. We show that the depth of the gutter of the light curves of an extended source is smaller than that of a point source, since the magnified part of the extended source cancels out the demagnified part. We can, however, distinguish between the light curves of the extended source microlensed by the Ellis wormhole and those by the Schwarzschild lens through their shapes, even if the size of the source is a few times larger than the size of the Einstein ring on the source plane. If the relative velocity on the source plane of a star with a radius of 10⁶ km at 8 kpc in the bulge of our Galaxy against an observer-lens system is smaller than 10 km/s, we can detect microlensing of the star lensed by an Ellis wormhole with a throat radius of 1 km at 4 kpc.
Computational techniques in gamma-ray skyshine analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
George, D.L.
1988-12-01
Two computer codes were developed to analyze gamma-ray skyshine, the scattering of gamma photons by air molecules. A review of previous gamma-ray skyshine studies discusses several Monte Carlo codes, programs using a single-scatter model, and the MicroSkyshine program for microcomputers. A benchmark gamma-ray skyshine experiment performed at Kansas State University is also described. A single-scatter numerical model is presented which traces photons from the source to their first scatter, then applies a buildup factor along a direct path from the scattering point to a detector. The FORTRAN code SKY, developed with this model before the present study, was modified to use Gauss quadrature, recent photon attenuation data, and a more accurate buildup approximation. The resulting code, SILOGP, computes the response from a point photon source on the axis of a silo, with and without concrete shielding over the opening. Another program, WALLGP, was developed using the same model to compute the response from a point gamma source behind a perfectly absorbing wall, with and without shielding overhead. 29 refs., 48 figs., 13 tabs.
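The single-scatter model's basic building block, attenuation with a buildup factor along a direct path, can be sketched as a point-kernel evaluation. The linear buildup form B = 1 + μd below is a crude assumption for illustration; the codes described use more accurate fitted buildup approximations.

```python
import math

def point_kernel_flux(S, mu, d, buildup=True):
    """Photon flux [photons/cm^2/s] at distance d [cm] from an isotropic
    point source of strength S [photons/s] in a medium with attenuation
    coefficient mu [1/cm]. B = 1 + mu*d is a crude linear buildup
    approximation standing in for the fitted forms used in the codes."""
    B = 1.0 + mu * d if buildup else 1.0
    return S * B * math.exp(-mu * d) / (4.0 * math.pi * d * d)
```

With mu = 0 the expression reduces to the bare inverse-square law, and the buildup factor always increases the flux relative to the uncollided value.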
Dust Storm over the Middle East: Retrieval Approach, Source Identification, and Trend Analysis
NASA Astrophysics Data System (ADS)
Moridnejad, A.; Karimi, N.; Ariya, P. A.
2014-12-01
The Middle East region has been considered responsible for approximately 25% of the Earth's global emissions of dust particles. By developing the Middle East Dust Index (MEDI) and applying it to 70 dust storms characterized on MODIS images during the period between 2001 and 2012, we herein present a new high-resolution mapping of the major atmospheric dust source points in this region. To assist environmental managers and decision makers in taking proper and prioritized measures, we then categorize the identified sources in terms of intensity, based on indices extracted with the Deep Blue algorithm, and also utilize a frequency-of-occurrence approach to find the sensitive sources. In the next step, by implementing spectral mixture analysis on Landsat TM images (1984 and 2012), a novel desertification map is presented. The aim is to understand how human perturbations and land-use change have influenced the dust storm points in the region. Preliminary results of this study indicate for the first time that ca. 39% of all detected source points are located in this newly anthropogenically desertified area. A large number of low-frequency sources are located within or close to the newly desertified areas. These severely desertified regions require immediate concern at a global scale. During the next 6 months, further research will be performed to confirm these preliminary results.
A Clustered Extragalactic Foreground Model for the EoR
NASA Astrophysics Data System (ADS)
Murray, S. G.; Trott, C. M.; Jordan, C. H.
2018-05-01
We review an improved statistical model of extragalactic point-source foregrounds first introduced in Murray et al. (2017), in the context of the Epoch of Reionization. This model extends the instrumentally convolved foreground covariance used in inverse-covariance foreground mitigation schemes by considering the cosmological clustering of the sources. In this short work, we show that over scales of k ~ 0.6-40 h Mpc⁻¹, ignoring source clustering is a valid approximation. This is in contrast to Murray et al. (2017), who found a possibility of false detection if the clustering was ignored. The dominant cause for this change is the introduction of a Galactic synchrotron component which shadows the clustering of sources.
NASA Technical Reports Server (NTRS)
Mcdonald, K.; Craig, N.; Sirk, M. M.; Drake, J. J.; Fruscione, A.; Vallerga, J. V.; Malina, R. F.
1994-01-01
We report the detection of 114 extreme ultraviolet (EUV; 58-740 A) sources, of which 99 are new serendipitous sources, based on observations made with the imaging telescopes on board the Extreme Ultraviolet Explorer (EUVE) during the Right Angle Program (RAP). These data were obtained using the survey scanners and the Deep Survey instrument during the first year of the spectroscopic guest observer phase of the mission, from January 1993 to January 1994. The data set consists of 162 discrete pointings whose exposure times are typically two orders of magnitude longer than the average exposure times during the EUVE all-sky survey. Based on these results, we can expect that EUVE will serendipitously detect approximately 100 new EUV sources per year, or about one new EUV source per 10 sq deg, during the guest observer phase of the EUVE mission. New EUVE sources of note include one B star and three extragalactic objects. The B star (HR 2875, EUVE J0729 - 38.7) is detected in both the Lexan/B (approximately 100 A) and Al/Ti/C (approximately 200 A) bandpasses, and the detection is shown not to be a result of UV leaks. We suggest that we are detecting EUV and/or soft X-rays from a companion to the B star. Three sources, EUVE J2132+10.1, EUVE J2343-14.9, and EUVE J2359-30.6, are identified as the active galactic nuclei MKN 1513, MS2340.9-1511, and 1H2354-315, respectively.
Wave Field Synthesis of moving sources with arbitrary trajectory and velocity profile.
Firtha, Gergely; Fiala, Péter
2017-08-01
The sound field synthesis of moving sound sources is of great importance when dynamic virtual sound scenes are to be reconstructed. Previous solutions considered only virtual sources moving uniformly along a straight trajectory, synthesized employing a linear loudspeaker array. This article presents the synthesis of point sources following an arbitrary trajectory. Under high-frequency assumptions 2.5D Wave Field Synthesis driving functions are derived for arbitrary shaped secondary source contours by adapting the stationary phase approximation to the dynamic description of sources in motion. It is explained how a referencing function should be chosen in order to optimize the amplitude of synthesis on an arbitrary receiver curve. Finally, a finite difference implementation scheme is considered, making the presented approach suitable for real-time applications.
Point defects in ZnO: an approach from first principles
Oba, Fumiyasu; Choi, Minseok; Togo, Atsushi; Tanaka, Isao
2011-01-01
Recent first-principles studies of point defects in ZnO are reviewed with a focus on native defects. Key properties of defects, such as formation energies, donor and acceptor levels, optical transition energies, migration energies and atomic and electronic structure, have been evaluated using various approaches including the local density approximation (LDA) and generalized gradient approximation (GGA) to DFT, LDA+U/GGA+U, hybrid Hartree–Fock density functionals, and the sX and GW approximations. Results depend significantly on the approximation to exchange-correlation, the simulation models for defects and the post-processes used to correct shortcomings of the approximation and models. The choice of a proper approach is, therefore, crucial for reliable theoretical predictions. First-principles studies have provided insight into the energetics and atomic and electronic structures of native point defects and impurities and defect-induced properties of ZnO. Native defects that are relevant to the n-type conductivity and the non-stoichiometry toward the O-deficient side in reduced ZnO have been debated. It is suggested that the O vacancy is responsible for the non-stoichiometry because of its low formation energy under O-poor chemical potential conditions. However, the O vacancy is a very deep donor and cannot be a major source of carrier electrons. The Zn interstitial and anti-site are shallow donors, but these defects are unlikely to form at a high concentration in n-type ZnO under thermal equilibrium. Therefore, the n-type conductivity is attributed to other sources such as residual impurities including H impurities with several atomic configurations, a metastable shallow donor state of the O vacancy, and defect complexes involving the Zn interstitial. Among the native acceptor-type defects, the Zn vacancy is dominant. It is a deep acceptor and cannot produce a high concentration of holes. 
The O interstitial and anti-site are high in formation energy and/or are electrically inactive and, hence, are unlikely to play essential roles in electrical properties. Overall defect energetics suggests a preference for the native donor-type defects over acceptor-type defects in ZnO. The O vacancy, Zn interstitial and Zn anti-site have very low formation energies when the Fermi level is low. Therefore, these defects are expected to be sources of a strong hole compensation in p-type ZnO. For the n-type doping, the compensation of carrier electrons by the native acceptor-type defects can be mostly suppressed when O-poor chemical potential conditions, i.e. low O partial pressure conditions, are chosen during crystal growth and/or doping. PMID:27877390
Deterministic seismic hazard macrozonation of India
NASA Astrophysics Data System (ADS)
Kolathayar, Sreevalsa; Sitharam, T. G.; Vipin, K. S.
2012-10-01
Earthquakes are known to have occurred in the Indian subcontinent from ancient times. This paper presents the results of seismic hazard analysis of India (6°-38°N and 68°-98°E) based on the deterministic approach using the latest seismicity data (up to 2010). The hazard analysis was done using two different source models (linear sources and point sources) and 12 well-recognized attenuation relations covering the varied tectonic provinces in the region. The earthquake data obtained from different sources were homogenized and declustered, and a total of 27,146 earthquakes of moment magnitude 4 and above were listed in the study area. The seismotectonic map of the study area was prepared by considering the faults, lineaments and shear zones which are associated with earthquakes of magnitude 4 and above. A new program was developed in MATLAB for smoothing of the point sources. For assessing the seismic hazard, the study area was divided into small grids of size 0.1° × 0.1° (approximately 10 × 10 km), and the hazard parameters were calculated at the center of each of these grid cells by considering all the seismic sources within a radius of 300 to 400 km. Rock-level peak horizontal acceleration (PHA) and spectral accelerations for periods of 0.1 and 1 s have been calculated for all the grid points with a deterministic approach using a code written in MATLAB. Epistemic uncertainty in hazard definition has been tackled within a logic-tree framework considering two types of sources and three attenuation models for each grid point. Hazard evaluation without the logic-tree approach was also carried out for comparison. The contour maps showing the spatial variation of hazard values are presented in the paper.
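The grid-based deterministic calculation described above can be sketched as follows. The attenuation relation `pha` below is a generic hypothetical form (not one of the 12 relations used in the study), and the source list, coefficients, and cutoff distance are all illustrative:

```python
import numpy as np

# Deterministic hazard at one grid point: take the maximum ground motion
# over all seismic sources within a cutoff distance.
def pha(magnitude, dist_km, a=-3.5, b=1.0, c=1.5):
    """Hypothetical attenuation relation returning PHA (in g)."""
    return np.exp(a + b * magnitude - c * np.log(dist_km + 10.0))

def hazard_at_point(grid_xy, sources, cutoff_km=300.0):
    """Max PHA at grid_xy from sources given as (x_km, y_km, magnitude)."""
    best = 0.0
    for sx, sy, mag in sources:
        d = np.hypot(grid_xy[0] - sx, grid_xy[1] - sy)
        if d <= cutoff_km:
            best = max(best, pha(mag, d))
    return best

# Illustrative sources; the distant M8.0 lies outside the cutoff.
sources = [(0.0, 0.0, 7.5), (250.0, 0.0, 6.0), (500.0, 0.0, 8.0)]
print(hazard_at_point((50.0, 0.0), sources))
```

In the study this maximum is evaluated at the center of every 0.1° × 0.1° cell, and repeated per source model and attenuation relation within the logic tree.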
Overview of environmental and hydrogeologic conditions at Barrow, Alaska
McCarthy, K.A.
1994-01-01
To assist the Federal Aviation Administration (FAA) in evaluating the potential effects of environmental contamination at their facility in Barrow, Alaska, a general assessment was made of the hydrologic system in the vicinity of the installation. The City of Barrow is located approximately 16 kilometers southwest of Point Barrow, the northernmost point in Alaska, and therefore lies within the region of continuous permafrost. Migration of surface or shallow-subsurface chemical releases in this environment would be largely restricted by near-surface permafrost to surface water and the upper, suprapermafrost zone of the subsurface. In the arctic climate and tundra terrain of the Barrow area, this shallow environment has a limited capacity to attenuate the effects of either physical disturbances or chemical contamination and is therefore highly susceptible to degradation. Esatkuat Lagoon, the present drinking water supply for the City of Barrow, is located approximately 2 kilometers from the FAA facility. This lagoon is the only practical source of drinking water available to the City of Barrow because alternative sources of water in the area are (1) frozen throughout most of the year, (2) insufficient in volume, (3) of poor quality, or (4) too costly to develop and distribute.
Perturbations of the seismic reflectivity of a fluid-saturated depth-dependent poroelastic medium.
de Barros, Louis; Dietrich, Michel
2008-03-01
Analytical formulas are derived to compute the first-order effects produced by plane inhomogeneities on the point source seismic response of a fluid-filled stratified porous medium. The derivation is achieved by a perturbation analysis of the poroelastic wave equations in the plane-wave domain using the Born approximation. This approach yields the Fréchet derivatives of the P-SV- and SH-wave responses in terms of the Green's functions of the unperturbed medium. The accuracy and stability of the derived operators are checked by comparing, in the time-distance domain, differential seismograms computed from these analytical expressions with complete solutions obtained by introducing discrete perturbations into the model properties. For vertical and horizontal point forces, it is found that the Fréchet derivative approach is remarkably accurate for small and localized perturbations of the medium properties which are consistent with the Born approximation requirements. Furthermore, the first-order formulation appears to be stable at all source-receiver offsets. The porosity, consolidation parameter, solid density, and mineral shear modulus emerge as the most sensitive parameters in forward and inverse modeling problems. Finally, the amplitude-versus-angle response of a thin layer shows strong coupling effects between several model parameters.
Kirchofer, Abby; Becker, Austin; Brandt, Adam; Wilcox, Jennifer
2013-07-02
The availability of industrial alkalinity sources is investigated to determine their potential for the simultaneous capture and sequestration of CO2 from point-source emissions in the United States. Industrial alkalinity sources investigated include fly ash, cement kiln dust, and iron and steel slag. Their feasibility for mineral carbonation is determined by their relative abundance for CO2 reactivity and their proximity to point-source CO2 emissions. In addition, the available aggregate markets are investigated as possible sinks for mineral carbonation products. We show that in the U.S., industrial alkaline byproducts have the potential to mitigate approximately 7.6 Mt CO2/yr, of which 7.0 Mt CO2/yr are CO2 captured through mineral carbonation and 0.6 Mt CO2/yr are CO2 emissions avoided through reuse as synthetic aggregate (replacing sand and gravel). The emission reductions represent a small share (i.e., 0.1%) of total U.S. CO2 emissions; however, industrial byproducts may represent comparatively low-cost methods for the advancement of mineral carbonation technologies, which may be extended to more abundant yet expensive natural alkalinity sources.
Theory of two-point correlations of jet noise
NASA Technical Reports Server (NTRS)
Ribner, H. S.
1976-01-01
A large body of careful experimental measurements of two-point correlations of far field jet noise was carried out. The model of jet-noise generation is an approximate version of an earlier work of Ribner, based on the foundations of Lighthill. The model incorporates isotropic turbulence superimposed on a specified mean shear flow, with assumed space-time velocity correlations, but with source convection neglected. The particular vehicle is the Proudman format, and the previous work (mean-square pressure) is extended to display the two-point space-time correlations of pressure. The shape of polar plots of correlation is found to derive from two main factors: (1) the noncompactness of the source region, which allows differences in travel times to the two microphones - the dominant effect; (2) the directivities of the constituent quadrupoles - a weak effect. The noncompactness effect causes the directional lobes in a polar plot to have pointed tips (cusps) and to be especially narrow in the plane of the jet axis. In these respects, and in the quantitative shapes of the normalized correlation curves, results of the theory show generally good agreement with Maestrello's experimental measurements.
Comparative Studies for the Sodium and Potassium Atmospheres of the Moon and Mercury
NASA Technical Reports Server (NTRS)
Smyth, William H.
1999-01-01
A summary discussion of recent sodium and potassium observations for the atmospheres of the Moon and Mercury is presented with primary emphasis on new full-disk images that have become available for sodium. For the sodium atmosphere, image observations for both the Moon and Mercury are fitted with model calculations (1) that have the same source speed distribution, one recently measured for electron-stimulated desorption and thought to apply equally well to photon-stimulated desorption, (2) that have similar average surface sodium fluxes, about 2.8 × 10^5 to 8.9 × 10^5 atoms cm^-2 s^-1 for the Moon and approximately 3.5 × 10^5 to 1.4 × 10^6 atoms cm^-2 s^-1 for Mercury, but (3) that have very different distributions for the source surface area. For the Moon, a sunlit hemispherical surface source of between approximately 5.3 × 10^22 and 1.2 × 10^23 atoms/s is required, with a spatial dependence at least as sharp as the square of the cosine of the solar zenith angle. For Mercury, a time-dependent source that varies from 1.5 × 10^22 to 5.8 × 10^22 atoms/s is required, which is confined to a small surface area located at, but asymmetrically distributed about, the subsolar point. The nature of the Mercury source suggests that the planetary magnetopause near the subsolar point acts as a time-varying and partially protective shield through which charged particles may pass to interact with and liberate gas from the planetary surface. Suggested directions for future research activities are discussed.
VizieR Online Data Catalog: Radio continuum survey of Kepler K2 mission Field 1 (Tingay+, 2016)
NASA Astrophysics Data System (ADS)
Tingay, S. J.; Hancock, P. J.; Wayth, R. B.; Intema, H.; Jagannathan, P.; Mooley, K.
2016-10-01
We describe contemporaneous observations of K2 Field 1 with the Murchison Widefield Array (MWA) and historical (from 2010-2012) observations from the Tata Institute of Fundamental Research (TIFR) Giant Metrewave Radio Telescope (GMRT) Sky Survey (TGSS; http://tgss.ncra.tifr.res.in/), via the TGSS Alternative Data Release 1 (ADR1; Intema et al. 2016, in prep.). The MWA and GMRT are radio telescopes operating at low radio frequencies (approximately 140-200MHz for the work described here). K2 mission Campaign 1 was conducted on Field 1 (center at R.A.=11:35:45.51; decl.=+01:25:02.28; J2000), covering the North Galactic Cap, between 2014 May 30 and August 21. The parameters of MWA observations are described in Table1, showing the 15 observations conducted over a period of approximately one month in 2014 June and July. All observations were made in a standard MWA imaging mode with a 30.72MHz bandwidth consisting of 24 contiguous 1.28MHz "coarse channels", each divided into 32 "fine channels" each of 40kHz bandwidth (total of 768 fine channels across 30.72MHz). The temporal resolution of the MWA correlator output was set to 0.5s. All observations were made in full polarimetric mode, with all Stokes parameters formed from the orthogonal linearly polarized feeds. Observations were made at two center frequencies, 154.88 and 185.60MHz, with two 296s observations of the K2 field at each frequency on each night of observation, accompanied by observations of one of three calibrators (Centaurus A, Virgo A, or Hydra A) at each frequency, with 112s observations. The observed fields were tracked, and thus, due to the fixed delay settings available to point the MWA primary beam, the tracked R.A. and decl. changes slightly between different observations (always a very small change compared to the MWA field of view). The total volume of MWA visibility data processed was approximately 2.2TB. 
A full survey of the radio sky at 150 MHz as visible from the Giant Metrewave Radio Telescope (GMRT) was performed within the scope of the PI-driven TGSS project between 2010 and early 2012, covering the declination range -55° to +90°. Summarizing the observational parameters as given on the TGSS project website (http://tgss.ncra.tifr.res.in/150MHz/obsstrategy.html), the survey consists of more than 5000 pointings on an approximate hexagonal grid. Data were recorded in full polarization (RR, LL, RL, LR) every 2 s, in 256 frequency channels across 16 MHz of bandwidth (140-156 MHz). Each pointing was observed for about 15 minutes, split over three or more scans spaced in time to improve UV coverage. Typically, 20-40 pointings were grouped together into single night-time observing sessions, bracketed and interleaved by primary (flux density and bandpass) calibrator scans on 3C48, 3C147, and/or 3C286. Interleaved secondary (phase) calibrator scans on a variety of standard phase calibrators were also included, but were typically too faint to be of significant benefit at these frequencies. A source catalog was produced from each of the two frequencies of MWA data (see table 2) and the single TGSS image (see table 3). The final set of MWA images after source finding yields a total of 1085 radio sources at 154 MHz and 1468 at 185 MHz over 314 square degrees, at angular resolutions of ~4'. The GMRT images, after source finding, yield a total of 7445 radio sources over the same field, at an angular resolution of ~0.3'. Thus, the overall survey covers multiple epochs of observation, spans approximately 140-200 MHz, is sensitive to structures on angular scales from arcseconds to degrees, and is contemporaneous with the K2 observations of the field over a period of approximately one month. (4 data files).
Improved selection criteria for H II regions, based on IRAS sources
NASA Astrophysics Data System (ADS)
Yan, Qing-Zeng; Xu, Ye; Walsh, A. J.; Macquart, J. P.; MacLeod, G. C.; Zhang, Bo; Hancock, P. J.; Chen, Xi; Tang, Zheng-Hong
2018-05-01
We present new criteria for selecting H II regions from the Infrared Astronomical Satellite (IRAS) Point Source Catalogue (PSC), based on an H II region catalogue derived manually from the all-sky Wide-field Infrared Survey Explorer (WISE). The criteria are used to augment the number of H II region candidates in the Milky Way. The criteria are defined by the linear decision boundary of two samples: IRAS point sources associated with known H II regions, which serve as the H II region sample, and IRAS point sources at high Galactic latitudes, which serve as the non-H II region sample. A machine learning classifier, specifically a support vector machine, is used to determine the decision boundary. We investigate all combinations of four IRAS bands and suggest that the optimal criterion is log(F60/F12) ≥ -0.19 × log(F100/F25) + 1.52, with detections required at 60 and 100 μm. This selects 3041 H II region candidates from the IRAS PSC. We find that IRAS H II region candidates show evidence of evolution on the two-colour diagram. Merging the WISE H II catalogue with the IRAS H II region candidates, we estimate a lower limit of approximately 10 200 for the number of H II regions in the Milky Way.
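The published criterion can be applied directly as a flux-ratio filter on catalogue entries. A minimal sketch (the example fluxes are hypothetical, chosen only to exercise both sides of the boundary):

```python
import math

# Selection criterion from the text:
# log10(F60/F12) >= -0.19 * log10(F100/F25) + 1.52,
# with detections required at 60 and 100 micron.
def is_hii_candidate(f12, f25, f60, f100, detected_60=True, detected_100=True):
    if not (detected_60 and detected_100):
        return False
    lhs = math.log10(f60 / f12)
    rhs = -0.19 * math.log10(f100 / f25) + 1.52
    return lhs >= rhs

# Hypothetical fluxes (Jy) for illustration only.
print(is_hii_candidate(f12=1.0, f25=5.0, f60=80.0, f100=200.0))
```

A cold, 60/100-μm-bright source passes the cut, while a source with a weak 60 μm excess over 12 μm falls below the decision boundary.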
NASA Astrophysics Data System (ADS)
Olsen, Nils; Ravat, Dhananjay; Finlay, Christopher C.; Kother, Livia K.
2017-12-01
We derive a new model, named LCS-1, of Earth's lithospheric field based on four years (2006 September-2010 September) of magnetic observations taken by the CHAMP satellite at altitudes lower than 350 km, as well as almost three years (2014 April-2016 December) of measurements taken by the two lower Swarm satellites Alpha and Charlie. The model is determined entirely from magnetic 'gradient' data (approximated by finite differences): the north-south gradient is approximated by first differences of 15 s along-track data (for CHAMP and each of the two Swarm satellites), while the east-west gradient is approximated by the difference between observations taken by Swarm Alpha and Charlie. In total, we used 6.2 million data points. The model is parametrized by 35 000 equivalent point sources located on an almost equal-area grid at a depth of 100 km below the surface (WGS84 ellipsoid). The amplitudes of these point sources are determined by minimizing the misfit to the magnetic satellite 'gradient' data together with the global average of |Br| at the ellipsoid surface (i.e. applying an L1 model regularization of Br). In a final step, we transform the point-source representation to a spherical harmonic expansion. The model shows very good agreement with previous satellite-derived lithospheric field models at low degree (degree correlation above 0.8 for degrees n ≤ 133). Comparison with independent near-surface aeromagnetic data from Australia yields good agreement (coherence >0.55) at horizontal wavelengths down to at least 250 km, corresponding to spherical harmonic degree n ≈ 160. The LCS-1 vertical component and field intensity anomaly maps at Earth's surface show similar features to those exhibited by the WDMAM2 and EMM2015 lithospheric field models truncated at degree 185 in regions where they include near-surface data and provide unprecedented detail where they do not. 
Example regions of improvement include the Bangui anomaly region in central Africa, the west African cratons, the East African Rift region, the Bay of Bengal, the southern 90°E ridge, the Cretaceous quiet zone south of the Walvis Ridge and the younger parts of the South Atlantic.
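The 'gradient' data construction above, approximating a derivative by first differences of consecutive along-track samples, can be sketched on a synthetic track. The field, amplitude, and sample spacing below are illustrative, not CHAMP/Swarm data:

```python
import numpy as np

# Along-track derivative approximated by first differences of samples.
s = np.linspace(0.0, 100.0, 201)          # along-track coordinate (km)
B = 50.0 * np.sin(2 * np.pi * s / 40.0)   # synthetic field anomaly (nT)

dB_ds = np.diff(B) / np.diff(s)           # first-difference 'gradient' (nT/km)
s_mid = 0.5 * (s[:-1] + s[1:])            # midpoints where the estimate applies

# Compare against the analytic derivative at the midpoints.
exact = 50.0 * (2 * np.pi / 40.0) * np.cos(2 * np.pi * s_mid / 40.0)
print(np.max(np.abs(dB_ds - exact)))
```

Centered first differences are second-order accurate at the sample midpoints, which is one reason such 'gradient' data suppress large-scale (core-field and external) signals while retaining short-wavelength lithospheric structure.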
NASA Technical Reports Server (NTRS)
Byrne, K. P.; Marshall, S. E.
1983-01-01
A procedure for experimentally determining, in terms of the particle motions, the shapes of the low order acoustic modes in enclosures is described. The procedure is based on finding differentiable functions which approximate the shape functions of the low order acoustic modes when these modes are defined in terms of the acoustic pressure. The differentiable approximating functions are formed from polynomials which are fitted by a least squares procedure to experimentally determined values which define the shapes of the low order acoustic modes in terms of the acoustic pressure. These experimentally determined values are found by a conventional technique in which the transfer functions, which relate the acoustic pressures at an array of points in the enclosure to the volume velocity of a fixed point source, are measured. The gradient of the function which approximates the shape of a particular mode in terms of the acoustic pressure is evaluated to give the mode shape in terms of the particle motion. The procedure was tested by using it to experimentally determine the shapes of the low order acoustic modes in a small rectangular enclosure.
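The fit-then-differentiate procedure can be sketched in one dimension: fit a least-squares polynomial to sampled modal pressures, then evaluate its derivative to recover the particle-motion shape. The duct mode below is a synthetic stand-in for the measured transfer-function data, and the polynomial degree is an arbitrary choice:

```python
import numpy as np

# Fit a differentiable function to "measured" modal pressures, then take its
# gradient: particle motion is proportional to the pressure gradient.
L_duct = 1.0
x = np.linspace(0.0, L_duct, 11)           # measurement points
p = np.cos(np.pi * x / L_duct)             # synthetic modal pressures (first duct mode)

coeffs = np.polyfit(x, p, deg=6)           # least-squares polynomial fit
dp_dx = np.polyval(np.polyder(coeffs), x)  # derivative -> particle-motion shape

# Exact pressure gradient of the assumed mode, for comparison.
exact = -np.pi / L_duct * np.sin(np.pi * x / L_duct)
print(np.max(np.abs(dp_dx - exact)))
```

Note the physical point the procedure exploits: pressure antinodes (where microphones measure well) are particle-velocity nodes, so differentiating the fitted pressure shape converts one description of the mode into the other.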
Not just a drop in the bucket: expanding access to point-of-use water treatment systems.
Mintz, E; Bartram, J; Lochery, P; Wegelin, M
2001-10-01
Since 1990, the number of people without access to safe water sources has remained constant at approximately 1.1 billion, of whom approximately 2.2 million die of waterborne disease each year. In developing countries, population growth and migrations strain existing water and sanitary infrastructure and complicate planning and construction of new infrastructure. Providing safe water for all is a long-term goal; however, relying only on time- and resource-intensive centralized solutions such as piped, treated water will leave hundreds of millions of people without safe water far into the future. Self-sustaining, decentralized approaches to making drinking water safe, including point-of-use chemical and solar disinfection, safe water storage, and behavioral change, have been widely field-tested. These options target the most affected, enhance health, contribute to development and productivity, and merit far greater priority for rapid implementation.
An Optimal Design for Placements of Tsunami Observing Systems Around the Nankai Trough, Japan
NASA Astrophysics Data System (ADS)
Mulia, I. E.; Gusman, A. R.; Satake, K.
2017-12-01
Presently, there are numerous tsunami observing systems deployed in several major tsunamigenic regions throughout the world. However, documentation on how and where to optimally place such measurement devices is limited. This study presents a methodological approach to select the best and fewest observation points for the purpose of tsunami source characterization, particularly in the form of fault slip distributions. We apply the method to design a new tsunami observation network around the Nankai Trough, Japan. In brief, our method can be divided into two stages: initialization and optimization. The initialization stage aims to identify favorable locations of observation points, as well as to determine the initial number of observations. These points are generated from the extrema of empirical orthogonal function (EOF) spatial modes derived from 11 hypothetical tsunami events in the region. In order to further improve the accuracy, we apply an optimization algorithm called mesh adaptive direct search (MADS) to remove redundant measurements from the points initially generated in the first stage. A combinatorial search by MADS improves the accuracy and reduces the number of observations simultaneously. The EOF analysis of the hypothetical tsunamis, using the first 2 leading modes with 4 extrema on each mode, results in 30 observation points spread along the trench. This is obtained after replacing clustered points within a radius of 30 km with a single representative. Furthermore, the MADS optimization can improve the accuracy of the EOF-generated points by approximately 10-20% with fewer observations (23 points). Finally, we compare our result with the existing observation points (68 stations) in the region. The result shows that the optimized design with fewer observations can produce better source characterizations, with approximately 20-60% improvement in accuracy for all 11 hypothetical cases. 
It should be noted, however, that our design is a tsunami-based approach; some of the existing observing systems are equipped with additional devices to measure other parameters of interest, e.g., for monitoring seismic activity.
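The initialization stage, EOF spatial modes with candidate gauges placed at their extrema, can be sketched on a synthetic scenario ensemble. The two spatial patterns, noise level, and grid below are invented for illustration and are not the Nankai scenarios:

```python
import numpy as np

# EOFs of an ensemble of scenario wavefields; candidate observation points
# are placed at the extrema of the leading spatial modes.
rng = np.random.default_rng(0)
n_points, n_scenarios = 200, 11
x = np.linspace(0.0, 1.0, n_points)

# Each hypothetical scenario mixes two fixed spatial patterns plus weak noise.
patterns = np.stack([np.sin(np.pi * x), np.sin(2 * np.pi * x)])
amps = rng.normal(size=(n_scenarios, 2))
fields = amps @ patterns + 0.01 * rng.normal(size=(n_scenarios, n_points))

# EOFs are the right singular vectors of the centered scenario matrix.
_, s, vt = np.linalg.svd(fields - fields.mean(axis=0), full_matrices=False)
leading_modes = vt[:2]

# Candidate observation points: absolute extrema of the leading modes.
candidates = [int(np.argmax(np.abs(mode))) for mode in leading_modes]
print(sorted(x[i] for i in candidates))
```

In the study these EOF-derived candidates are then thinned (clustered points merged, then a MADS combinatorial search) to reach the final 23-point design.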
The effect of barriers on wave propagation phenomena: With application for aircraft noise shielding
NASA Technical Reports Server (NTRS)
Mgana, C. V. M.; Chang, I. D.
1982-01-01
The frequency spectrum was divided into high and low frequency regimes, and two separate methods were developed and applied to account for physical factors associated with flight conditions. For long-wave propagation, the acoustic field due to a point source near a solid obstacle was treated in terms of an inner region, where the fluid motion is essentially incompressible, and an outer region, which is a linear acoustic field generated by hydrodynamic disturbances in the inner region. This method was applied to the case of a finite slotted plate modelled to represent a wing with extended flap, for both stationary and moving media. Ray acoustics, the Kirchhoff integral formulation, and the stationary phase approximation were combined to study short-wavelength propagation in many limiting cases, as well as in the case of a semi-infinite plate in a uniform flow, with a point source above the plate embedded in a different flow velocity to simulate an engine exhaust jet stream surrounding the source.
NASA Astrophysics Data System (ADS)
Bambi, Cosimo; Modesto, Leonardo; Wang, Yixu
2017-01-01
We derive and study an approximate static vacuum solution generated by a point-like source in a higher derivative gravitational theory with a pair of complex conjugate ghosts. The gravitational theory is local and characterized by a high derivative operator compatible with Lee-Wick unitarity. In particular, the tree-level two-point function only shows a pair of complex conjugate poles besides the massless spin two graviton. We show that singularity-free black holes exist when the mass of the source M exceeds a critical value Mcrit. For M >Mcrit the spacetime structure is characterized by an outer event horizon and an inner Cauchy horizon, while for M =Mcrit we have an extremal black hole with vanishing Hawking temperature. The evaporation process leads to a remnant that approaches the zero-temperature extremal black hole state in an infinite amount of time.
Design and evaluation of an imaging spectrophotometer incorporating a uniform light source.
Noble, S D; Brown, R B; Crowe, T G
2012-03-01
Accounting for light that is diffusely scattered from a surface is one of the practical challenges in reflectance measurement. Integrating spheres are commonly used for this purpose in point measurements of reflectance and transmittance. This solution is not directly applicable to a spectral imaging application for which diffuse reflectance measurements are desired. In this paper, an imaging spectrophotometer design is presented that employs a uniform light source to provide diffuse illumination. This creates the inverse measurement geometry to the directional illumination/diffuse reflectance mode typically used for point measurements. The final system had a spectral range between 400 and 1000 nm with a 5.2 nm resolution, a field of view of approximately 0.5 m by 0.5 m, and millimeter spatial resolution. Testing results indicate illumination uniformity typically exceeding 95% and reflectance precision better than 1.7%.
NASA Astrophysics Data System (ADS)
Alexandrov, A. N.; Zhdanov, V. I.; Koval, S. M.
We derive approximate formulas for the coordinates and magnification of critical images of a point source in the vicinity of a cusp caustic arising in the gravitational lens mapping. In the lowest (zero-order) approximation, these formulas were obtained in the classical work of Schneider & Weiss (1992) and then studied by a number of authors; first-order corrections in powers of the proximity parameter were treated by Congdon, Keeton and Nordgren. We have shown that the first-order corrections are due solely to the asymmetry of the cusp. We found expressions for the second-order corrections in the case of a general lens potential and for an arbitrary position of the source near a symmetric cusp. Applications to a lensing galaxy model represented by a singular isothermal sphere with an external shear γ are studied, and the role of the second-order corrections is demonstrated.
Accuracy of the Kirchhoff formula in determining acoustic shielding with the use of a flat plate
NASA Technical Reports Server (NTRS)
Gabrielsen, R. E.; Davis, J. E.
1977-01-01
It has been suggested that if the jet engines of an aircraft were placed above the wing instead of below it, the wing would provide partial shielding of the noise generated by the engines relative to observers on the ground. The shielding effect of an idealized three-dimensional barrier in the presence of an idealized engine noise source was predicted by the Kirchhoff formula. Based on the good agreement between experimental measurements and the numerical results of the current study, it was concluded that the Kirchhoff approximation provides a good qualitative estimate of the acoustic shielding of a point source by a rectangular flat plate for measurements taken in the far field of the flat plate at frequencies ranging from 1 kHz to 20 kHz. At frequencies greater than 4 kHz the Kirchhoff approximation provides accurate quantitative predictions of acoustic shielding.
Effects of agriculture upon the air quality and climate: research, policy, and regulations.
Aneja, Viney P; Schlesinger, William H; Erisman, Jan Willem
2009-06-15
Scientific assessments of agricultural air quality, including estimates of emissions and potential sequestration of greenhouse gases, are an important emerging area of environmental science that offers significant challenges to policy and regulatory authorities. Improvements are needed in measurements, modeling, emission controls, and farm operation management. Controlling emissions of gases and particulate matter from agriculture is notoriously difficult, as this sector affects the most basic need of humans, i.e., food. Current policies combine an inadequate science covering a very disparate range of activities in a complex industry with social and political overlays. Moreover, agricultural emissions derive from both area and point sources. In the United States, agricultural emissions play an important role in several atmospherically mediated processes of environmental and public health concern. These atmospheric processes affect local and regional environmental quality, including odor, particulate matter (PM) exposure, eutrophication, acidification, exposure to toxics, climate, and pathogens. Agricultural emissions also contribute to the global problems caused by greenhouse gas emissions. Agricultural emissions are variable in space and time and in how they interact within the various processes and media affected. Most important in the U.S. are ammonia (where agriculture accounts for approximately 90% of total emissions), reduced sulfur (unquantified), PM2.5 (approximately 16%), PM10 (approximately 18%), methane (approximately 29%), nitrous oxide (approximately 72%), and odor and emissions of pathogens (both unquantified). Agriculture also consumes fossil fuels for fertilizer production and farm operations, thus emitting carbon dioxide (CO2), oxides of nitrogen (NO(x)), sulfur oxides (SO(x)), and particulates. 
Current research priorities include the quantification of point and nonpoint sources; the biosphere-atmosphere exchange of ammonia, reduced sulfur compounds, volatile organic compounds, greenhouse gases, odor, and pathogens; the quantification of landscape processes; and the primary and secondary emissions of PM. Given the serious concerns raised regarding the amount and the impacts of agricultural air emissions, policies must be pursued and regulations must be enacted in order to make real progress in reducing these emissions and their associated environmental impacts.
Point Charges Optimally Placed to Represent the Multipole Expansion of Charge Distributions
Onufriev, Alexey V.
2013-01-01
We propose an approach for approximating electrostatic charge distributions with a small number of point charges to optimally represent the original charge distribution. By construction, the proposed optimal point charge approximation (OPCA) retains many of the useful properties of the point multipole expansion, including the same far-field asymptotic behavior of the approximate potential. A general framework for numerically computing the OPCA, for any given number of approximating charges, is described. We then derive a 2-charge practical point charge approximation, PPCA, which approximates the 2-charge OPCA via closed-form analytical expressions, and test the PPCA on a set of charge distributions relevant to biomolecular modeling. We measure the accuracy of the new approximations as the RMS error in the electrostatic potential, relative to that produced by the original charge distribution, at a distance comparable to the extent of the charge distribution (the mid-field). The error for the 2-charge PPCA is found to be on average 23% smaller than that of the optimally placed point dipole approximation, and comparable to that of the point quadrupole approximation. The standard deviation in RMS error for the 2-charge PPCA is 53% lower than that of the optimal point dipole approximation, and comparable to that of the point quadrupole approximation. We also calculate the 3-charge OPCA for representing the gas-phase quantum mechanical charge distribution of a water molecule. The electrostatic potential calculated by the 3-charge OPCA for water, in the mid-field (2.8 Å from the oxygen atom), is on average 33.3% more accurate than the potential due to the point multipole expansion up to the octupole order. Compared to a 3-point-charge approximation in which the charges are placed on the atom centers, the 3-charge OPCA is seven times more accurate, by RMS error. The maximum error at the oxygen-Na distance (2.23 Å) is half that of the point multipole expansion up to the octupole order. PMID:23861790
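To make the idea concrete, a two-point-charge representation can be constructed by matching the monopole and dipole moments of the original distribution. The sketch below is not the paper's OPCA (which also optimizes charge placement); it is a hypothetical moment-matching construction along the dipole axis with an arbitrarily assumed separation d:

```python
import numpy as np

def two_charge_moment_match(charges, positions, d=1.0):
    """Approximate a charge distribution with two point charges that
    reproduce its total charge (monopole) and dipole moment.

    charges   : (N,) point charges
    positions : (N, 3) their coordinates
    d         : assumed separation of the two approximating charges
    """
    charges = np.asarray(charges, dtype=float)
    positions = np.asarray(positions, dtype=float)
    Q = charges.sum()            # monopole moment
    p = charges @ positions      # dipole moment (about the origin)
    p_mag = np.linalg.norm(p)
    u = p / p_mag if p_mag > 0 else np.array([0.0, 0.0, 1.0])
    # Place q1 at +d/2 and q2 at -d/2 along the dipole axis, solving
    #   q1 + q2 = Q   and   (q1 - q2) * d/2 = |p|
    q1 = Q / 2.0 + p_mag / d
    q2 = Q / 2.0 - p_mag / d
    return (q1, +0.5 * d * u), (q2, -0.5 * d * u)
```

By construction the pair reproduces Q and p exactly; higher multipoles (quadrupole and up) generally differ, which is what the full OPCA optimization addresses.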
Lessons Learned from OMI Observations of Point Source SO2 Pollution
NASA Technical Reports Server (NTRS)
Krotkov, N.; Fioletov, V.; McLinden, Chris
2011-01-01
The Ozone Monitoring Instrument (OMI) on NASA's Aura satellite makes global daily measurements of the total column of sulfur dioxide (SO2), a short-lived trace gas produced by fossil fuel combustion, smelting, and volcanoes. Although anthropogenic SO2 signals may not be detectable in a single OMI pixel, it is possible to see a source and determine its exact location by averaging a large number of individual measurements. We describe new techniques for spatial and temporal averaging that have been applied to the OMI SO2 data to determine the spatial distributions, or "fingerprints", of SO2 burdens from the top 100 pollution sources in North America. The technique requires averaging several years of OMI daily measurements to observe SO2 pollution from typical anthropogenic sources. We found that the largest point sources of SO2 in the U.S. produce elevated SO2 values over a relatively small area, within a 20-30 km radius. Therefore, spatial resolution higher than OMI's is needed to monitor typical SO2 sources. The TROPOMI instrument on the ESA Sentinel-5 Precursor mission will have improved ground resolution (approximately 7 km at nadir), but is limited to one measurement per day. A pointable geostationary UVB spectrometer with variable spatial resolution and flexible sampling frequency could potentially achieve the goal of daily monitoring of SO2 point sources and resolve downwind plumes. This concept of taking measurements at high frequency to enhance weak signals needs to be demonstrated with a GEOCAPE precursor mission before 2020, which will help formulate GEOCAPE measurement requirements.
Status Of The Swift Burst Alert Telescope Hard X-ray Transient Monitor
NASA Astrophysics Data System (ADS)
Krimm, Hans A.; Barthelmy, S. D.; Baumgartner, W. H.; Cummings, J.; Fenimore, E.; Gehrels, N.; Markwardt, C. B.; Palmer, D.; Sakamoto, T.; Skinner, G. K.; Stamatikos, M.; Tueller, J.
2010-01-01
The Swift Burst Alert Telescope hard X-ray transient monitor has been operating since October 1, 2006. More than 700 sources are tracked on a daily basis, and light curves are produced and made available to the public on two time scales: a single Swift pointing (approximately 20 minutes) and the weighted average for each day. Of the monitored sources, approximately 33 are detected daily and another 100 have had one or more outbursts during the Swift mission. The monitor is also sensitive to the detection of previously undiscovered sources, and we have reported the discovery of four galactic sources and one source in the Large Magellanic Cloud. Follow-up target-of-opportunity observations with Swift and the Rossi X-ray Timing Explorer have revealed that three of these new sources are pulsars and two are black hole candidates. In addition, the monitor has led to the announcement of significant outbursts from 24 different galactic and extragalactic sources, many of which have had follow-up Swift XRT, UVOT, and ground-based multi-wavelength observations. The transient monitor web pages currently receive an average of 21 visits per day. We will report on the most important results from the transient monitor and on detection and exposure statistics, and outline recent and planned improvements to the monitor. The transient monitor web page is http://swift.gsfc.nasa.gov/docs/swift/results/transients/.
THE SPITZER-IRAC POINT-SOURCE CATALOG OF THE VELA-D CLOUD
DOE Office of Scientific and Technical Information (OSTI.GOV)
Strafella, F.; Elia, D.; Campeggio, L., E-mail: francesco.strafella@le.infn.i, E-mail: loretta.campeggio@le.infn.i, E-mail: eliad@oal.ul.p
2010-08-10
This paper presents the observations of Cloud D in the Vela Molecular Ridge, obtained with the Infrared Array Camera (IRAC) on board the Spitzer Space Telescope at the wavelengths λ = 3.6, 4.5, 5.8, and 8.0 μm. A photometric catalog of point sources, covering a field of approximately 1.2 deg², has been extracted and complemented with additional available observational data in the millimeter region. Previous observations of the same region, obtained with the Spitzer MIPS camera in the photometric bands at 24 μm and 70 μm, have also been reconsidered to allow an estimate of the spectral slope of the sources in a wider spectral range. A total of 170,299 point sources, detected at the 5σ sensitivity level in at least one of the IRAC bands, have been reported in the catalog. There were 8796 sources for which good-quality photometry was obtained in all four IRAC bands. For this sample, a preliminary characterization of the young stellar population based on the determination of the spectral slope is discussed; combining this with diagnostics in the color-magnitude and color-color diagrams, the relative population of young stellar objects (YSOs) in different evolutionary classes has been estimated and a total of 637 candidate YSOs have been selected. The main differences in their relative abundances have been highlighted and a brief account of their spatial distribution is given. The star formation rate has also been estimated and compared with the values derived for other star-forming regions. Finally, an analysis of the spatial distribution of the sources by means of the two-point correlation function shows that the younger population, constituted by the Class I and flat-spectrum sources, is significantly more clustered than the Class II and III sources.
NASA Astrophysics Data System (ADS)
Smith, David R.; Gowda, Vinay R.; Yurduseven, Okan; Larouche, Stéphane; Lipworth, Guy; Urzhumov, Yaroslav; Reynolds, Matthew S.
2017-01-01
Wireless power transfer (WPT) has been an active topic of research, with a number of WPT schemes implemented in the near-field (coupling) and far-field (radiation) regimes. Here, we consider a beamed WPT scheme based on a dynamically reconfigurable source aperture transferring power to receiving devices within the Fresnel region. In this context, the dynamic aperture resembles a reconfigurable lens capable of focusing power to a well-defined spot, whose dimension can be related to a point spread function. The necessary amplitude and phase distribution of the field imposed over the aperture can be determined in a holographic sense, by interfering a hypothetical point source located at the receiver location with a plane wave at the aperture location. While conventional technologies, such as phased arrays, can achieve the required control over phase and amplitude, they typically do so at a high cost; alternatively, metasurface apertures can achieve dynamic focusing with potentially lower cost. We present an initial tradeoff analysis of the Fresnel region WPT concept assuming a metasurface aperture, relating the key parameters such as spot size, aperture size, wavelength, and focal distance, as well as reviewing system considerations such as the availability of sources and power transfer efficiency. We find that approximate design formulas derived from the Gaussian optics approximation provide useful estimates of system performance, including transfer efficiency and coverage volume. The accuracy of these formulas is confirmed through numerical studies.
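For intuition about the tradeoffs discussed above, the textbook Gaussian-optics scalings relate the focal spot size to wavelength, aperture size, and focal distance roughly as λF/D, with tight focusing only possible inside the radiating near-field (out to roughly 2D²/λ). The helper below is a back-of-the-envelope sketch under those standard scalings, with illustrative numbers chosen here, not the authors' tradeoff model:

```python
def focal_spot_size(wavelength, aperture, focal_distance):
    """Rough diffraction-limited spot size at the focus: ~ lambda * F / D."""
    return wavelength * focal_distance / aperture

def fresnel_region_extent(wavelength, aperture):
    """Approximate outer edge of the Fresnel (radiating near-field) region,
    ~ 2 * D**2 / lambda; beyond this, a tight focal spot cannot form."""
    return 2.0 * aperture**2 / wavelength

# Illustrative numbers: a 1 m aperture at 10 GHz (lambda = 3 cm)
# focusing on a receiver 5 m away.
spot = focal_spot_size(0.03, 1.0, 5.0)        # ~0.15 m spot
extent = fresnel_region_extent(0.03, 1.0)     # ~66.7 m Fresnel region
```

With these numbers the receiver at 5 m sits comfortably inside the Fresnel region, so a well-defined focal spot of order 15 cm is plausible.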
NASA Astrophysics Data System (ADS)
Yang, Yang; Chu, Zhigang; Shen, Linbang; Ping, Guoli; Xu, Zhongming
2018-07-01
Fourier-based deconvolution, which can rapidly clarify acoustic source identification results, has been widely studied and applied to delay-and-sum (DAS) beamforming with two-dimensional (2D) planar arrays. No counterpart, however, has yet been developed for spherical harmonics beamforming (SHB) with three-dimensional (3D) solid spherical arrays; this paper is motivated to fill that gap. Firstly, for the purpose of determining the effective identification region, the premise of deconvolution, a shift-invariant point spread function (PSF), is analyzed with simulations. For the premise to be approximately satisfied, the opening angle in the elevation dimension of the surface of interest should be small, while no restriction is imposed on the azimuth dimension. Then, two deconvolution theories are built for SHB, using the zero and the periodic boundary conditions respectively. Both simulations and experiments demonstrate that the periodic boundary condition is superior to the zero one and better fits 3D acoustic source identification with solid spherical arrays. Finally, four deconvolution methods based on the periodic boundary condition are formulated, and their performance is assessed both in simulations and experimentally. All four methods offer enhanced spatial resolution and reduced sidelobe contamination relative to SHB. The recovered source strength approximates the exact value multiplied by a coefficient equal to the square of the focus distance divided by the distance from the source to the array center, while the recovered pressure contribution is scarcely affected by the focus distance, always approximating the exact value.
AEGIS-X: Deep Chandra Imaging of the Central Groth Strip
NASA Astrophysics Data System (ADS)
Nandra, K.; Laird, E. S.; Aird, J. A.; Salvato, M.; Georgakakis, A.; Barro, G.; Perez-Gonzalez, P. G.; Barmby, P.; Chary, R.-R.; Coil, A.; Cooper, M. C.; Davis, M.; Dickinson, M.; Faber, S. M.; Fazio, G. G.; Guhathakurta, P.; Gwyn, S.; Hsu, L.-T.; Huang, J.-S.; Ivison, R. J.; Koo, D. C.; Newman, J. A.; Rangel, C.; Yamada, T.; Willmer, C.
2015-09-01
We present the results of deep Chandra imaging of the central region of the Extended Groth Strip, the AEGIS-X Deep (AEGIS-XD) survey. When combined with previous Chandra observations of a wider area of the strip, AEGIS-X Wide (AEGIS-XW), these provide data to a nominal exposure depth of 800 ks in the three central ACIS-I fields, a region of approximately 0.29 deg². This is currently the third deepest X-ray survey in existence: a factor of ∼2-3 shallower than the Chandra Deep Fields (CDFs), but over an area ∼3 times greater than each CDF. We present a catalog of 937 point sources detected in the deep Chandra observations, along with identifications of our X-ray sources from deep ground-based, Spitzer, GALEX, and Hubble Space Telescope imaging. Using a likelihood ratio analysis, we associate multiband counterparts for 929/937 of our X-ray sources, with an estimated 95% reliability, making the identification completeness approximately 94% in a statistical sense. Reliable spectroscopic redshifts for 353 of our X-ray sources are available, predominantly from Keck (DEEP2/3) and MMT Hectospec, so the current spectroscopic completeness is ∼38%. For the remainder of the X-ray sources, we compute photometric redshifts based on multiband photometry in up to 35 bands from the UV to mid-IR. Particular attention is given to the fact that the vast majority of the X-ray sources are active galactic nuclei and require hybrid templates. Our photometric redshifts have a mean accuracy of σ = 0.04 and an outlier fraction of approximately 5%, reaching σ = 0.03 with less than 4% outliers in the area covered by CANDELS. The X-ray, multiwavelength photometry, and redshift catalogs are made publicly available.
Wiley, Joshua S; Shelley, Jacob T; Cooks, R Graham
2013-07-16
We describe a handheld, wireless low-temperature plasma (LTP) ambient ionization source and its performance on a benchtop and a miniature mass spectrometer. The source, which is inexpensive to build and operate, is battery-powered and utilizes miniature helium cylinders or air as the discharge gas. Comparison of a conventional, large-scale LTP source against the handheld LTP source, which uses less helium and power than the large-scale version, revealed that the handheld source had similar or slightly better analytical performance. Another advantage of the handheld LTP source is the ability to quickly interrogate a gaseous, liquid, or solid sample without requiring any setup time. A small, 7.4-V Li-polymer battery is able to sustain plasma for 2 h continuously, while the miniature helium cylinder supplies gas flow for approximately 8 continuous hours. Long-distance ion transfer was achieved for distances up to 1 m.
NASA Astrophysics Data System (ADS)
Kingston, Andrew M.; Myers, Glenn R.; Latham, Shane J.; Li, Heyang; Veldkamp, Jan P.; Sheppard, Adrian P.
2016-10-01
With GPU computing becoming mainstream, iterative tomographic reconstruction (IR) is becoming a computationally viable alternative to traditional single-shot analytical methods such as filtered back-projection. IR liberates one from the continuous X-ray source trajectories required for analytical reconstruction. We present a family of novel X-ray source trajectories for large-angle CBCT. These discrete (sparsely sampled) trajectories optimally fill the space of possible source locations by maximising the degree of mutually independent information. They satisfy a discrete equivalent of Tuy's sufficiency condition and allow high-cone-angle (high-flux) tomography. The highly isotropic nature of the trajectory has several advantages: (1) the average source distance is approximately constant throughout the reconstruction volume, thus avoiding the differential-magnification artefacts that plague high-cone-angle helical computed tomography; (2) streaking artefacts due to, e.g., X-ray beam-hardening are reduced; (3) misalignment and component motion manifest as blur in the tomogram rather than double edges, which is easier to correct automatically; (4) an approximately shift-invariant point-spread function enables filtering as a pre-conditioner to speed IR convergence. We describe these space-filling trajectories and demonstrate the above-mentioned properties in comparison with traditional helical trajectories.
Relationship between mass-flux reduction and source-zone mass removal: analysis of field data.
Difilippo, Erica L; Brusseau, Mark L
2008-05-26
The magnitude of contaminant mass-flux reduction associated with a specific amount of contaminant mass removed is a key consideration for evaluating the effectiveness of a source-zone remediation effort. Thus, there is great interest in characterizing, estimating, and predicting relationships between mass-flux reduction and mass removal. Published data collected for several field studies were examined to evaluate relationships between mass-flux reduction and source-zone mass removal. The studies analyzed herein represent a variety of source-zone architectures, immiscible-liquid compositions, and implemented remediation technologies. There are two general approaches to characterizing the mass-flux-reduction/mass-removal relationship: end-point analysis and time-continuous analysis. End-point analysis, based on comparing masses and mass fluxes measured before and after a source-zone remediation effort, was conducted for 21 remediation projects. Mass removals were greater than 60% for all but three of the studies. Mass-flux reductions ranging from slightly less than to slightly greater than one-to-one were observed for the majority of the sites. However, these single-snapshot characterizations are limited in that the antecedent behavior is indeterminate. Time-continuous analysis, based on continuous monitoring of mass removal and mass flux, was performed for two sites, for both of which data were obtained under water-flushing conditions. The reductions in mass flux were significantly different for the two sites (90% vs. approximately 8%) for similar mass removals (approximately 40%). These results illustrate the dependence of the mass-flux-reduction/mass-removal relationship on source-zone architecture and associated mass-transfer processes. Minimal mass-flux reduction was observed for a system wherein mass removal was relatively efficient (ideal mass transfer and displacement).
Conversely, a significant degree of mass-flux reduction was observed for a site wherein mass removal was inefficient (non-ideal mass-transfer and displacement). The mass-flux-reduction/mass-removal relationship for the latter site exhibited a multi-step behavior, which cannot be predicted using some of the available simple estimation functions.
NASA Astrophysics Data System (ADS)
Cotté, B.
2018-05-01
This study proposes to couple a source model based on Amiet's theory and a parabolic equation code in order to model wind turbine noise emission and propagation in an inhomogeneous atmosphere. Two broadband noise generation mechanisms are considered, namely trailing edge noise and turbulent inflow noise. The effects of wind shear and atmospheric turbulence are taken into account using the Monin-Obukhov similarity theory. The coupling approach, based on the backpropagation method to preserve the directivity of the aeroacoustic sources, is validated by comparison with an analytical solution for the propagation over a finite impedance ground in a homogeneous atmosphere. The influence of refraction effects is then analyzed for different directions of propagation. The spectrum modification related to the ground effect and the presence of a shadow zone for upwind receivers are emphasized. The validity of the point source approximation that is often used in wind turbine noise propagation models is finally assessed. This approximation exaggerates the interference dips in the spectra, and is not able to correctly predict the amplitude modulation.
The Prediction of Scattered Broadband Shock-Associated Noise
NASA Technical Reports Server (NTRS)
Miller, Steven A. E.
2015-01-01
A mathematical model is developed for the prediction of scattered broadband shock-associated noise. Model arguments are dependent on the vector Green's function of the linearized Euler equations, steady Reynolds-averaged Navier-Stokes solutions, and the two-point cross-correlation of the equivalent source. The equivalent source is dependent on steady Reynolds-averaged Navier-Stokes solutions of the jet flow that capture the nozzle geometry and airframe surface. Contours of the time-averaged streamwise velocity component and turbulent kinetic energy are examined with varying airframe position relative to the nozzle exit. Propagation effects are incorporated by approximating the vector Green's function of the linearized Euler equations. This approximation involves the use of ray theory and an assumption that broadband shock-associated noise is relatively unaffected by refraction at the jet shear layer. A non-dimensional parameter is proposed that quantifies the changes of the broadband shock-associated noise source with varying jet operating condition and airframe position. Scattered broadband shock-associated noise possesses a second set of broadband lobes that are due to the effect of scattering. The presented predictions demonstrate relatively good agreement with a wide variety of measurements.
Lactose intolerance: diagnosis, genetic, and clinical factors
Mattar, Rejane; de Campos Mazo, Daniel Ferraz; Carrilho, Flair José
2012-01-01
Most people are born with the ability to digest lactose, the major carbohydrate in milk and the main source of nutrition until weaning. Approximately 75% of the world’s population loses this ability at some point, while others can digest lactose into adulthood. This review discusses the lactase-persistence alleles that have arisen in different populations around the world, diagnosis of lactose intolerance, and its symptomatology and management. PMID:22826639
Seismic reflection study of Flathead Lake, Montana
Wold, Richard J.
1982-01-01
A seismic reflection survey of Flathead Lake, Montana, was carried out in 1970 to study the geologic structure underlying the lake. Approximately 200 km of track lines were surveyed, resulting in about 140 km of usable data (Fig. 1). A one-cubic-inch air gun was used as the energy source. Navigation was by a series of theodolite sightings of the boat from pairs of shore-based control points.
Project SQUID. Quarterly Progress Report
1949-07-01
the sodium line reversal method for flame temperature determination. Determination of Point Temperatures in Turbulent Flames Using the Sodium Line... taken to determine the approximate position of the line. Then, with the G-M tube in position and using the photograph as an indicator, the region... beams are wide, the latter yielding a greater source of X-rays. Hence, by using that window yielding the broadest beam, greater intensity of X-rays
Baran, Timothy M.; Foster, Thomas H.
2011-01-01
We present a new Monte Carlo model of cylindrical diffusing fibers that is implemented with a graphics processing unit. Unlike previously published models that approximate the diffuser as a linear array of point sources, this model is based on the construction of these fibers. This allows for accurate determination of fluence distributions and modeling of fluorescence generation and collection. We demonstrate that our model generates fluence profiles similar to a linear array of point sources, but reveals axially heterogeneous fluorescence detection. With axially homogeneous excitation fluence, approximately 90% of detected fluorescence is collected by the proximal third of the diffuser for μs'/μa = 8 in the tissue and 70 to 88% is collected in this region for μs'/μa = 80. Increased fluorescence detection by the distal end of the diffuser relative to the center section is also demonstrated. Validation of these results was performed by creating phantoms consisting of layered fluorescent regions. Diffusers were inserted into these layered phantoms and fluorescence spectra were collected. Fits to these spectra show quantitative agreement between simulated fluorescence collection sensitivities and experimental results. These results will be applicable to the use of diffusers as detectors for dosimetry in interstitial photodynamic therapy. PMID:21895311
The Detection of Circumnuclear X-Ray Emission from the Seyfert Galaxy NGC 3516
NASA Technical Reports Server (NTRS)
George, I. M.; Turner, T. J.; Netzer, H.; Kraemer, S. B.; Ruiz, J.; Chelouche, D.; Crenshaw, D. M.; Yaqoob, T.; Nandra, K.; Mushotzky, R. F.;
2001-01-01
We present the first high-resolution X-ray image of the circumnuclear regions of the Seyfert 1 galaxy NGC 3516, using the Chandra X-ray Observatory (CXO). All three of the CXO observations reported were performed with one of the two grating assemblies in place, and here we restrict our analysis to undispersed photons (i.e., those detected in the zeroth order). A previously unknown X-ray source is detected approximately 6 arcsec (1.1 h75^-1 kpc) NNE of the nucleus (position angle approximately 29 degrees), which we designate CXOU 110648.1+723412. Its spectrum can be characterized as a power law with a photon index Γ of approximately 1.8-2.6, or as thermal emission with a temperature kT of approximately 0.7-3 keV. Assuming a location within NGC 3516, isotropic emission implies a luminosity L of approximately 2-8 × 10^39 h75^-2 erg s^-1 in the 0.4-2 keV band. If due to a single point source, the object is super-Eddington for a 1.4 solar mass neutron star. However, multiple sources or a small, extended source cannot be excluded using the current data. Large-scale extended X-ray emission is also detected out to approximately 10 arcsec (approximately 2 h75^-1 kpc) from the nucleus to the NE and SW, and is approximately aligned with the morphologies of the radio emission and extended narrow emission-line region (ENLR). The mean luminosity of this emission is 1-5 × 10^37 h75^-2 erg s^-1 arcsec^-2 in the 0.4-2 keV band. Unfortunately, the current data cannot usefully constrain its spectrum. These results are consistent with earlier suggestions of circumnuclear X-ray emission in NGC 3516 based on ROSAT observations, and thus provide the first clear detection of extended X-ray emission in a Seyfert 1.0 galaxy.
If the extended emission is due to scattering of the nuclear X-ray continuum, then the pressure in the X-ray emitting gas is at least two orders of magnitude too small to provide the confining medium for the ENLR clouds.
User's guide for RAM. Volume II. Data preparation and listings
DOE Office of Scientific and Technical Information (OSTI.GOV)
Turner, D.B.; Novak, J.H.
1978-11-01
The information presented in this user's guide is directed to air pollution scientists having an interest in applying air quality simulation models. RAM is a method of estimating short-term dispersion using the Gaussian steady-state model. These algorithms can be used for estimating air quality concentrations of relatively nonreactive pollutants for averaging times from an hour to a day from point and area sources. The algorithms are applicable for locations with level or gently rolling terrain where a single wind vector for each hour is a good approximation to the flow over the source area considered. Calculations are performed for each hour. Hourly meteorological data required are wind direction, wind speed, temperature, stability class, and mixing height. Emission information required of point sources consists of source coordinates, emission rate, physical height, stack diameter, stack gas exit velocity, and stack gas temperature. Emission information required of area sources consists of southwest corner coordinates, source side length, total area emission rate, and effective area source height. Computation time is kept to a minimum by the manner in which concentrations from area sources are estimated, using a narrow-plume hypothesis and using the area source squares as given rather than breaking down all sources into an area of uniform elements. Options are available to the user to allow use of three different types of receptor locations: (1) those whose coordinates are input by the user, (2) those whose coordinates are determined by the model and are downwind of significant point and area sources where maxima are likely to occur, and (3) those whose coordinates are determined by the model to give good area coverage of a specific portion of the region. Computation time is also decreased by keeping the number of receptors to a minimum. Volume II presents RAM example outputs, typical run streams, variable glossaries, and Fortran source codes.
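The Gaussian steady-state point-source model that RAM implements has a standard closed form. The sketch below is a generic textbook version, with simple linear-growth dispersion coefficients standing in for the stability-class curves RAM actually uses; it is an illustration, not RAM's Fortran:

```python
import math

def gaussian_plume(Q, u, x, y, z, H, a=0.08, b=0.06):
    """Ground-reflected Gaussian plume concentration at a receptor.

    Q : emission rate (g/s)      u : wind speed (m/s)
    x : downwind distance (m)    y : crosswind distance (m)
    z : receptor height (m)      H : effective stack height (m)
    a, b : assumed linear growth rates for sigma_y and sigma_z
           (stand-ins for stability-class dispersion curves).
    """
    sig_y, sig_z = a * x, b * x
    lateral = math.exp(-y**2 / (2 * sig_y**2))
    # Second exponential is the image term reflecting the plume off the ground.
    vertical = (math.exp(-(z - H)**2 / (2 * sig_z**2)) +
                math.exp(-(z + H)**2 / (2 * sig_z**2)))
    return Q / (2 * math.pi * u * sig_y * sig_z) * lateral * vertical
```

The concentration peaks on the plume centerline (y = 0) and falls off as a Gaussian crosswind, which is the behavior the narrow-plume hypothesis above exploits for area sources.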
Investigation of Finite Sources through Time Reversal
NASA Astrophysics Data System (ADS)
Kremers, Simon; Brietzke, Gilbert; Igel, Heiner; Larmat, Carene; Fichtner, Andreas; Johnson, Paul A.; Huang, Lianjie
2010-05-01
Under certain conditions, time reversal is a promising method to determine earthquake source characteristics without any a priori information (except the earth model and the data). It consists of injecting time-reversed records from seismic stations within the model to create an approximate reverse movie of wave propagation, from which the location of the hypocenter and other information might be inferred. In this study, the backward propagation is performed numerically using a parallel cartesian spectral element code. Initial tests using point-source moment tensors serve as a control for the suitability of the wave propagation algorithm. After that, we investigated the potential of time reversal to recover finite source characteristics (e.g., size of ruptured area, rupture velocity, etc.). We used synthetic data from the SPICE kinematic source inversion blind test, initiated to investigate the performance of current kinematic source inversion approaches (http://www.spice-rtn.org/library/valid). The synthetic data set attempts to reproduce the 2000 Tottori earthquake with 33 records close to the fault. We discuss how the results of the time reversal process are influenced by various assumptions made on the source (e.g., origin time, hypocenter, fault location), by adjoint source weighting (e.g., correcting for epicentral distance), and by structure (uncertainty in the velocity model). We give an overview of the quality of focusing of the different wavefield properties (i.e., displacements, strains, rotations, energies). Additionally, the potential to recover source properties of multiple point sources at the same time is discussed.
Radiant Temperature Nulling Radiometer
NASA Technical Reports Server (NTRS)
Ryan, Robert (Inventor)
2003-01-01
A self-calibrating nulling radiometer for non-contact temperature measurement of an object, such as a body of water, employs a black body source as a temperature reference, an optomechanical mechanism, e.g., a chopper, to switch back and forth between measuring the temperature of the black body source and that of a test source, and an infrared detection technique. The radiometer functions by measuring the radiance of both the test and reference black body sources; adjusting the temperature of the reference black body so that its radiance is equivalent to that of the test source; and measuring the temperature of the reference black body at this point using a precision contact-type temperature sensor, to determine the radiative temperature of the test source. The radiation from both sources is detected by an infrared detector that converts the detected radiation to an electrical signal, which is fed, together with a chopper reference signal, to an error signal generator, such as a synchronous detector, that creates a precision rectified signal approximately proportional to the difference between the temperature of the reference black body and that of the test infrared source. This error signal is then used in a feedback loop to adjust the reference black body temperature until it equals that of the test source, at which point the error signal is nulled to zero. The chopper mechanism operates at one or more hertz, allowing minimization of 1/f noise. It also provides pure chopping between the black body and the test source and allows continuous measurements.
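The null-seeking feedback described above can be sketched as a discrete-time loop in which the error signal (proportional to the radiance difference, which scales as T^4 by the Stefan-Boltzmann law) drives the reference black-body temperature toward the unknown source temperature. The gain and stopping threshold below are invented for illustration; this is not the instrument's actual control law:

```python
def null_radiometer(t_source_K, t_ref_K=300.0, gain=1e-8, max_steps=1000):
    """Adjust the reference black body until the detected radiance
    difference (~ T**4) is nulled; the reference temperature then
    equals the source's radiative temperature."""
    for _ in range(max_steps):
        # Synchronous-detector error signal ~ difference in radiances.
        error = t_source_K**4 - t_ref_K**4
        # Stop once the error corresponds to < 0.1 mK of temperature.
        if abs(error) < 4 * t_ref_K**3 * 1e-4:
            break
        t_ref_K += gain * error  # feedback: drive the error toward zero
    return t_ref_K
```

With these illustrative values the loop is a contraction (each step shrinks the temperature error by roughly |1 - 4·gain·T³|), so it settles within a handful of iterations.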
McCarthy, Kathleen A.; Alvarez, David A.
2014-01-01
The Eugene Water & Electric Board (EWEB) supplies drinking water to approximately 200,000 people in Eugene, Oregon. The sole source of this water is the McKenzie River, which has consistently excellent water quality relative to established drinking-water standards. To ensure that this quality is maintained as land use in the source basin changes and water demands increase, EWEB has developed a proactive management strategy that combines conventional point-in-time discrete water sampling and time-integrated passive sampling with chemical analyses and bioassays to explore water quality and identify where vulnerabilities may lie. In this report, we present the results from six passive-sampling deployments at six sites in the basin, including the intake and outflow of the EWEB drinking-water treatment plant (DWTP). This is the first known use of passive samplers to investigate both the source and finished water of a municipal DWTP. Results indicate that low concentrations of several polycyclic aromatic hydrocarbons and organohalogen compounds are consistently present in source waters, and that many of these compounds are also present in finished drinking water. The nature and patterns of the compounds detected suggest that land-surface runoff and atmospheric deposition act as ongoing sources of polycyclic aromatic hydrocarbons, some currently used pesticides, and several legacy organochlorine pesticides. Comparison of results from point-in-time and time-integrated sampling indicates that these two methods are complementary and, when used together, provide a clearer understanding of contaminant sources than either method alone.
NASA Astrophysics Data System (ADS)
Chen, Li-si; Hu, Zhong-wen
2017-10-01
The image quality evaluation of an optical system is at the core of optical design. Based on an analysis and comparison of the PSSN (normalized point source sensitivity) proposed for the image evaluation of the TMT (Thirty Meter Telescope) with common image evaluation methods, the application of the PSSN to the TMT WFOS (Wide Field Optical Spectrometer) is studied. It includes an approximate simulation of the atmospheric seeing, the effect of the displacement of M3 in the TMT on the PSSN of the system, the effect of the displacement of the collimating mirror in the WFOS on the PSSN of the system, the relation between the PSSN and the zenith angle under different conditions of atmospheric turbulence, and the relation between the PSSN and the wavefront aberration. The results show that the PSSN is effective for the image evaluation of the TMT under limited atmospheric seeing.
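The PSSN metric itself is compact enough to sketch. The sketch below assumes the commonly used definition (the ratio of the integrals of the squared point spread functions, aberrated system over error-free reference, evaluated on a discrete grid); the study's exact normalization may differ.

```python
import numpy as np

def pssn(psf_with_error, psf_reference):
    """Normalized point source sensitivity on a discrete grid: the ratio
    of the summed squared PSFs, aberrated system (telescope + atmosphere
    + errors) over the error-free reference."""
    num = np.sum(np.square(psf_with_error))
    den = np.sum(np.square(psf_reference))
    return num / den

# A broader PSF (e.g. from a displaced mirror) lowers the PSSN below 1.
x = np.linspace(-10, 10, 501)
ref = np.exp(-0.5 * (x / 1.0) ** 2)
err = np.exp(-0.5 * (x / 1.2) ** 2)
ref /= ref.sum()                      # normalize each PSF to unit total flux
err /= err.sum()
print(pssn(err, ref))                 # < 1: the blurred PSF spreads its flux
```

Because the metric integrates the squared PSF, it penalizes any error that spreads light out of the core, which is why it tracks point-source sensitivity rather than a single width parameter.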
NASA Astrophysics Data System (ADS)
Jeffery, David J.; Mazzali, Paolo A.
2007-08-01
Giant steps is a technique to accelerate Monte Carlo radiative transfer in optically-thick cells (which are isotropic and homogeneous in matter properties, and into which astrophysical atmospheres are divided) by greatly reducing the number of Monte Carlo steps needed to propagate photon packets through such cells. In an optically-thick cell, packets starting from any point (which can be regarded as a point source) well away from the cell wall act essentially as packets diffusing from the point source in an infinite, isotropic, homogeneous atmosphere. One can replace the many ordinary Monte Carlo steps that a packet diffusing from the point source takes by a randomly directed giant step whose length is slightly less than the distance from the point source to the nearest cell wall point. The giant step is assigned a time duration equal to the time for the RMS radius of a burst of packets diffusing from the point source to reach the giant step length. We call assigning giant-step time durations this way RMS-radius (RMSR) synchronization. Propagating packets by a series of giant steps in giant-steps random walks in the interiors of optically-thick cells constitutes the technique of giant steps. Giant steps effectively replaces the exact diffusion treatment of ordinary Monte Carlo radiative transfer in optically-thick cells by an approximate diffusion treatment. In this paper, we describe the basic idea of giant steps and report demonstration giant-steps flux calculations for the grey atmosphere. Speed-up factors of order 100 are obtained relative to ordinary Monte Carlo radiative transfer. In practical applications, speed-up factors of order ten and perhaps more are possible. The speed-up factor is likely to be significantly application-dependent, and there is a trade-off between speed-up and accuracy.
This paper and past work suggest that giant-steps error can probably be kept to a few percent by using sufficiently large boundary-layer optical depths while still maintaining large speed-up factors. Thus, giant steps can be characterized as a moderate accuracy radiative transfer technique. For many applications, the loss of some accuracy may be a tolerable price to pay for the speed-ups gained by using giant steps.
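A single giant step can be sketched as follows. This is an assumption-laden illustration, not the paper's implementation: it takes the RMS radius of an isotropic random walk with mean free path `mfp` to be `mfp * sqrt(N)` after `N` scatterings, so a step of length `L` stands in for `N = (L/mfp)**2` ordinary steps and RMSR synchronization assigns it the duration `t = N * mfp / c = L**2 / (mfp * c)`; the 0.9 safety factor keeping the step short of the wall is an arbitrary choice.

```python
import math
import random

def giant_step(pos, wall_dist, mfp, c=1.0):
    """One randomly directed giant step inside an optically-thick cell.

    For an isotropic walk with mean free path mfp, the RMS radius after
    N scatterings is mfp * sqrt(N); a step of length L therefore replaces
    N = (L / mfp)**2 ordinary steps and takes t = L**2 / (mfp * c)."""
    L = 0.9 * wall_dist                       # stay short of the nearest wall
    mu = random.uniform(-1.0, 1.0)            # isotropic direction: cos(theta)
    phi = random.uniform(0.0, 2.0 * math.pi)
    s = math.sqrt(1.0 - mu * mu)
    new_pos = (pos[0] + L * s * math.cos(phi),
               pos[1] + L * s * math.sin(phi),
               pos[2] + L * mu)
    t = L * L / (mfp * c)
    return new_pos, t

random.seed(0)
p, t = giant_step((0.0, 0.0, 0.0), wall_dist=100.0, mfp=0.1)
# Here one giant step replaces ~ (90 / 0.1)**2 = 8.1e5 ordinary steps,
# which is the source of the large speed-up factors quoted above.
```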
NASA Astrophysics Data System (ADS)
Li, Zhiyuan; Yuan, Zibing; Li, Ying; Lau, Alexis K. H.; Louie, Peter K. K.
2015-12-01
Atmospheric particulate matter (PM) pollution is a major public health concern in Hong Kong. In this study, the spatiotemporal variations of health risks from ambient PM10 at seven air quality monitoring stations between 2000 and 2011 were analyzed. Positive matrix factorization (PMF) was adopted to identify major source categories of ambient PM10 and quantify their contributions. A point-estimate risk model was then used to quantify the inhalation cancer and non-cancer risks of the PM10 sources. The long-term trends of the health risks from the classified local and non-local sources were explored, and the reasons for the increase of health risks on high-PM10 days were discussed. Results show that the vehicle exhaust source was the dominant inhalation cancer risk (ICR) contributor (72%), whereas trace metals and vehicle exhaust sources contributed approximately 27% and 21% of the PM10 inhalation non-cancer risk (INCR), respectively. The identified local sources accounted for approximately 80% of the ICR in Hong Kong, while the contributions of the non-local and local sources to the INCR were comparable. The clear increase of the ICR on high-PM days was mainly attributed to increased contributions from coal combustion/biomass burning and secondary sulfate, while the increase of the INCR on high-PM days was attributed to increased contributions from coal combustion/biomass burning, secondary nitrate, and trace metals. This study highlights the importance of health risk-based source apportionment in air quality management with protecting human health as the ultimate target.
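A point-estimate risk model of this kind is typically a pair of simple formulas applied to each source-resolved concentration. The sketch below assumes the generic EPA-style forms (cancer risk = concentration x inhalation unit risk; non-cancer hazard quotient = concentration / reference concentration); all numbers are illustrative placeholders, not values from the study.

```python
def inhalation_cancer_risk(conc, unit_risk):
    """Point-estimate cancer risk: concentration (ug/m^3) times the
    inhalation unit risk ((ug/m^3)^-1) for a source contribution."""
    return conc * unit_risk

def hazard_quotient(conc, rfc):
    """Point-estimate non-cancer risk: concentration divided by the
    reference concentration (same units)."""
    return conc / rfc

# Illustrative numbers only: summing source-resolved risks gives each
# source's share of the total, as in the 72% ICR figure quoted above.
sources = {"vehicle exhaust": 0.8, "trace metals": 0.05}      # ug/m^3
unit_risks = {"vehicle exhaust": 3e-4, "trace metals": 1e-3}  # (ug/m^3)^-1
icr = {s: inhalation_cancer_risk(sources[s], unit_risks[s]) for s in sources}
total = sum(icr.values())
shares = {s: icr[s] / total for s in icr}
```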
The Atacama Cosmology Telescope: Development and preliminary results of point source observations
NASA Astrophysics Data System (ADS)
Fisher, Ryan P.
2009-06-01
The Atacama Cosmology Telescope (ACT) is a six-meter-diameter telescope designed to measure the millimeter sky with arcminute angular resolution. The instrument is currently conducting its third season of observations from Cerro Toco in the Chilean Andes. The primary science goal of the experiment is to expand our understanding of cosmology by mapping the temperature fluctuations of the Cosmic Microwave Background (CMB) at angular scales corresponding to multipoles up to ℓ ~ 10,000. The primary receiver for current ACT observations is the Millimeter Bolometer Array Camera (MBAC). The instrument is specially designed to observe simultaneously at 148 GHz, 218 GHz and 277 GHz. To accomplish this, the camera has three separate detector arrays, each containing approximately 1000 detectors. After discussing the ACT experiment in detail, a discussion of the development and testing of the cold readout electronics for the MBAC is presented. Currently, the ACT collaboration is in the process of generating maps of the microwave sky using our first and second season observations. The analysis used to generate these maps requires careful data calibration to produce maps of the arcminute-scale CMB temperature fluctuations. Tests and applications of several elements of the ACT calibrations are presented in the context of the second season observations. Scientific exploration has already begun on preliminary maps made using these calibrations. The final portion of this thesis is dedicated to discussing the point sources observed by the ACT. A discussion of the techniques used for point source detection and photometry is followed by a presentation of our current measurements of point source spectral indices.
NASA Technical Reports Server (NTRS)
Ashour-Abdalla, Maha
1998-01-01
A fundamental goal of magnetospheric physics is to understand the transport of plasma through the solar wind-magnetosphere-ionosphere system. To attain such an understanding, we must determine the sources of the plasma, the trajectories of the particles through the magnetospheric electric and magnetic fields to the point of observation, and the acceleration processes they undergo en route. This study employed plasma distributions observed in the near-Earth plasma sheet by the Interball and Geotail spacecraft, together with theoretical techniques, to investigate the ion sources and the transport of plasma. We used ion trajectory calculations in magnetic and electric fields from a global magnetohydrodynamics (MHD) simulation to investigate the transport and to identify common ion sources for ions observed in the near-Earth magnetotail by the Interball and Geotail spacecraft. Our first step was to examine a number of distribution functions and identify distinct boundaries in both configuration and phase space that are indicative of different plasma sources and transport mechanisms. We examined events from October 26, 1995, November 29-30, 1996, and December 22, 1996. During the first event Interball and Geotail were separated by approximately 10 R(sub E) in z, and during the second event the spacecraft were separated by approximately 4 R(sub E). Both of these events had a strong IMF B(sub Y) component pointing toward the dawnside. On October 26, 1995, the IMF B(sub Z) component was northward, and on November 29-30, 1996, the IMF B(sub Z) component was near zero. During the first event, Geotail was located near the equator on the dawn flank, while Interball was for the most part in the lobe region. The distribution function from the Coral instrument on Interball showed less structure and resembled a drifting Maxwellian. The observed distribution on Geotail, on the other hand, included a great number of structures at both low and high energies.
During the third event (December 22, 1996) both spacecraft were in the plasma sheet and were separated by approximately 20 R(sub E) in the y direction. During this event the IMF was southward.
Herschel Key Program Heritage: a Far-Infrared Source Catalog for the Magellanic Clouds
NASA Astrophysics Data System (ADS)
Seale, Jonathan P.; Meixner, Margaret; Sewiło, Marta; Babler, Brian; Engelbracht, Charles W.; Gordon, Karl; Hony, Sacha; Misselt, Karl; Montiel, Edward; Okumura, Koryo; Panuzzo, Pasquale; Roman-Duval, Julia; Sauvage, Marc; Boyer, Martha L.; Chen, C.-H. Rosie; Indebetouw, Remy; Matsuura, Mikako; Oliveira, Joana M.; Srinivasan, Sundar; van Loon, Jacco Th.; Whitney, Barbara; Woods, Paul M.
2014-12-01
Observations from the HERschel Inventory of the Agents of Galaxy Evolution (HERITAGE) have been used to identify dusty populations of sources in the Large and Small Magellanic Clouds (LMC and SMC). We conducted the study using the HERITAGE catalogs of point sources available from the Herschel Science Center from both the Photodetector Array Camera and Spectrometer (PACS; 100 and 160 μm) and Spectral and Photometric Imaging Receiver (SPIRE; 250, 350, and 500 μm) cameras. These catalogs are matched to each other to create a Herschel band-merged catalog and then further matched to archival Spitzer IRAC and MIPS catalogs from the Spitzer Surveying the Agents of Galaxy Evolution (SAGE) and SAGE-SMC surveys to create single mid- to far-infrared (far-IR) point source catalogs that span the wavelength range from 3.6 to 500 μm. There are 35,322 unique sources in the LMC and 7503 in the SMC. To be bright in the FIR, a source must be very dusty, and so the sources in the HERITAGE catalogs represent the dustiest populations of sources. The brightest HERITAGE sources are dominated by young stellar objects (YSOs), and the dimmest by background galaxies. We identify the sources most likely to be background galaxies by first considering their morphology (distant galaxies are point-like at the resolution of Herschel) and then comparing the flux distribution to that of the Herschel Astrophysical Terahertz Large Area Survey (ATLAS) survey of galaxies. We find a total of 9745 background galaxy candidates in the LMC HERITAGE images and 5111 in the SMC images, in agreement with the number predicted by extrapolating from the ATLAS flux distribution. The majority of the Magellanic Cloud-residing sources are either very young, embedded forming stars or dusty clumps of the interstellar medium. Using the presence of 24 μm emission as a tracer of star formation, we identify 3518 YSO candidates in the LMC and 663 in the SMC. 
There are far fewer far-IR bright YSOs in the SMC than in the LMC, due to both the SMC's smaller size and its lower dust content. The YSO candidate lists may be contaminated at low flux levels by background galaxies, and so we differentiate between sources with a high (“probable”) and moderate (“possible”) likelihood of being a YSO. There are 2493/425 probable YSO candidates in the LMC/SMC. Approximately 73% of the Herschel YSO candidates are newly identified in the LMC, and 35% in the SMC. We further identify a small population of dusty objects in the late stages of stellar evolution, including extreme and post-asymptotic giant branch stars, planetary nebulae, and supernova remnants. These populations are identified by matching the HERITAGE catalogs to lists of previously identified objects in the literature. Approximately half of the LMC sources and one quarter of the SMC sources are too faint to obtain accurate FIR photometry and are unclassified.
Light extraction block with curved surface
Levermore, Peter; Krall, Emory; Silvernail, Jeffrey; Rajan, Kamala; Brown, Julia J.
2016-03-22
Light extraction blocks, and OLED lighting panels using light extraction blocks, are described, in which the light extraction blocks include various curved shapes that provide improved light extraction compared to a parallel emissive surface, and a thinner form factor and better light extraction than a hemisphere. Lighting systems described herein may include a light source with an OLED panel. A light extraction block with a three-dimensional light emitting surface may be optically coupled to the light source. The three-dimensional light emitting surface of the block may include a substantially curved surface, with further characteristics related to the curvature of the surface at given points. A first radius of curvature corresponding to a maximum principal curvature k.sub.1 at a point p on the substantially curved surface may be greater than a maximum height of the light extraction block. A maximum height of the light extraction block may be less than 50% of a maximum width of the light extraction block. Surfaces with cross sections made up of line segments and inflection points may also be fit to approximated curves for calculating the radius of curvature.
Effects of Adaptive Antenna Arrays on Broadband Signals.
1980-06-01
...dimensional array geometry. The signal impinging on the antenna array elements is assumed to have originated from a point source in the far field. ... The assumptions used to identify the far-field region of an array also lead to an approximation for the inter-element time delay: t_i(θ) ≈ x_i sin(θ)/c (5). ... implementing the open-form transfer function and coefficients of Eqs (16) through (21).
Generalised photon skyshine calculations.
Hayes, Robert
2004-01-01
The energy-dependent dose contributions from monoenergetic photon source points located 1.5 m above the ground have been tabulated. These values are intended to be used for regulatory compliance with site boundary dose limitations and as such are all presented in effective dose units. Standard air and soil are modelled, with the air density represented by a vertical density-gradient approximation. Energies from 0.05 up to 10 MeV are evaluated for dose transport up to 40 mean free paths.
Whitehead, P G; Jin, L; Crossman, J; Comber, S; Johnes, P J; Daldorph, P; Flynn, N; Collins, A L; Butterfield, D; Mistry, R; Bardon, R; Pope, L; Willows, R
2014-05-15
The issues of diffuse and point source phosphorus (P) pollution in the Hampshire Avon and Blashford Lakes are explored using a catchment model of the river system. A multibranch, process-based, dynamic water quality model (INCA-P) has been applied to the whole river system to simulate water fluxes, total phosphorus (TP) and soluble reactive phosphorus (SRP) concentrations, and ecology. The model has been used to assess the impacts of both agricultural runoff and point sources from waste water treatment plants (WWTPs) on water quality. The results show that agriculture contributes approximately 40% of the phosphorus load and point sources the other 60% in this catchment. A set of scenarios has been investigated to assess the impacts of alternative phosphorus reduction strategies, and it is shown that a combined strategy of agricultural phosphorus reduction, through either fertiliser reductions or better phosphorus management, together with improved treatment at WWTPs, would reduce SRP concentrations in the river to levels acceptable for meeting the EU Water Framework Directive (WFD) requirements. A seasonal strategy for WWTP phosphorus reductions would achieve significant benefits at reduced cost. Copyright © 2014 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Zurek, Sebastian; Guzik, Przemyslaw; Pawlak, Sebastian; Kosmider, Marcin; Piskorski, Jaroslaw
2012-12-01
We explore the relation between the correlation dimension, approximate entropy and sample entropy parameters, which are commonly used in nonlinear systems analysis. Using theoretical considerations we identify the points which are shared by all these complexity algorithms and show explicitly that the above parameters are intimately connected and mutually interdependent. A new geometrical interpretation of sample entropy and correlation dimension is provided, and the consequences for the interpretation of sample entropy, its relative consistency and some of the algorithms for parameter selection for this quantity are discussed. To obtain an exact algorithmic relation between the three parameters we construct a very fast algorithm for their simultaneous calculation, which uses the full time series as the source of templates, rather than the usual 10%. This algorithm can be used in medical applications of complexity theory, as it can calculate all three parameters for a realistic recording of 10^4 points within minutes on an average notebook computer.
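For reference, the standard sample entropy definition that this family of algorithms builds on can be written in a few lines. This is a plain quadratic-time sketch, not the fast simultaneous algorithm constructed in the paper; it assumes an absolute tolerance r, Chebyshev distance between templates, and exclusion of self-matches.

```python
import numpy as np

def sample_entropy(x, m=2, r=0.2):
    """Plain O(N^2) sample entropy: -log(A / B), where B counts pairs of
    length-m templates and A counts pairs of length-(m+1) templates whose
    Chebyshev distance is below the tolerance r (self-matches excluded)."""
    x = np.asarray(x, dtype=float)
    n = len(x)

    def match_pairs(length):
        templates = np.array([x[i:i + length] for i in range(n - length)])
        pairs = 0
        for i in range(len(templates) - 1):
            dist = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            pairs += int(np.sum(dist < r))
        return pairs

    return -np.log(match_pairs(m + 1) / match_pairs(m))

# A strictly periodic series is highly predictable (SampEn near 0);
# white noise scores much higher.
rng = np.random.default_rng(0)
periodic = np.tile([1.0, 2.0], 50)
noise = rng.random(300)
print(sample_entropy(periodic), sample_entropy(noise))
```

The connection to the correlation dimension comes from the same pair-counting kernel: the fraction of template pairs within r is exactly the correlation sum that the Grassberger-Procaccia estimate is built from.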
Li, Zhouyuan; Liu, Xuehua; Niu, Tianlin; Kejia, De; Zhou, Qingping; Ma, Tianxiao; Gao, Yunyang
2015-05-19
The source region of the Yellow River, China, experienced degradation during the 1980s and 1990s, but effective ecological restoration projects have restored the alpine grassland ecosystem. The local government has taken action to restore the grassland area since 1996. Remote sensing monitoring results show an initial restoration of this alpine grassland ecosystem with the structural transformation of land cover from 2000 to 2009 as low- and high-coverage grassland recovered. From 2000 to 2009, the low-coverage grassland area expanded by over 25% and the bare soil area decreased by approximately 15%. To examine the relationship between ecological structure and function, surface temperature (Ts) and evapotranspiration (ET) levels were estimated to study the dynamics of the hydro-heat pattern. The results show a turning point in approximately the year 2000 from a declining ET to a rising ET, eventually reaching the 1990 level of approximately 1.5 cm/day. We conclude that grassland coverage expansion has improved the regional hydrologic cycle as a consequence of ecological restoration. Thus, we suggest that long-term restoration and monitoring efforts would help maintain the climatic adjustment functions of this alpine grassland ecosystem.
Extrapolation of rotating sound fields.
Carley, Michael
2018-03-01
A method is presented for the computation of the acoustic field around a tonal circular source, such as a rotor or propeller, based on an exact formulation which is valid in the near and far fields. The only input data required are the pressure field sampled on a cylindrical surface surrounding the source, with no requirement for acoustic velocity or pressure gradient information. The formulation is approximated with exponentially small errors and appears to require input data at a theoretically minimal number of points. The approach is tested numerically, with and without added noise, and demonstrates excellent performance, especially when compared to extrapolation using a far-field assumption.
Satellite radar interferometry measures deformation at Okmok Volcano
Lu, Zhong; Mann, Dorte; Freymueller, Jeff
1998-01-01
The center of the Okmok caldera in Alaska subsided 140 cm as a result of its February-April 1997 eruption, according to satellite data from ERS-1 and ERS-2 synthetic aperture radar (SAR) interferometry. The inferred deflationary source was located 2.7 km beneath the approximate center of the caldera using a point source deflation model. Researchers believe this source is a magma chamber about 5 km from the eruptive vent. During the 3 years before the eruption, the center of the caldera uplifted by about 23 cm, which researchers believe was pre-eruptive inflation of the magma chamber. Scientists say such measurements demonstrate that radar interferometry is a promising spaceborne technique for monitoring remote volcanoes. Frequent, routine acquisition of images with SAR interferometry could make near real-time monitoring at such volcanoes the rule, aiding in eruption forecasting.
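A point source deflation model of this kind is commonly the Mogi source. The sketch below assumes the standard Mogi surface-displacement formula with a Poisson ratio of 0.25 and inverts the volume change consistent with the numbers quoted in the abstract; the study's exact model parameters are not given here.

```python
import math

def mogi_uz(r, depth, dV, nu=0.25):
    """Surface vertical displacement (m) above a Mogi point source:
    u_z(r) = (1 - nu) * dV * depth / (pi * (r^2 + depth^2)^(3/2)),
    for a volume change dV (m^3) at the given depth (m) and radial
    distance r (m) from the point above the source."""
    R = math.hypot(r, depth)
    return (1.0 - nu) * dV * depth / (math.pi * R ** 3)

# Volume change consistent with 140 cm of subsidence at the caldera
# center above a source at 2.7 km depth (numbers from the abstract).
depth = 2700.0
uz_center = -1.40
dV = uz_center * math.pi * depth ** 2 / (1.0 - 0.25)
print(dV / 1e9)  # volume change in km^3 (negative: deflation)
```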
Versatile plasma ion source with an internal evaporator
NASA Astrophysics Data System (ADS)
Turek, M.; Prucnal, S.; Drozdziel, A.; Pyszniak, K.
2011-04-01
A novel construction of an ion source with an evaporator placed inside the plasma chamber is presented. The crucible is heated to high temperatures directly by the arc discharge, which makes the ion source suitable for substances with high melting points. The compact ion source enables the production of intense ion beams for a wide spectrum of solid elements, with typical separated beam currents of ~100-150 μA for Al+, Mn+ and As+ (corresponding to emission current densities of 15-25 mA/cm^2) at an extraction voltage of 25 kV. The ion source works for approximately 50-70 h at 100% duty cycle, which enables high-dose ion implantation. The typical power consumption of the ion source is 350-400 W. The paper presents detailed experimental data (e.g. dependences of ion currents and anode voltages on discharge and filament currents and on magnetic flux density) for Cr, Fe, Al, As, Mn and In. The discussion is supported by the results of a Monte Carlo numerical simulation of ionisation in the ion source.
Discovery of an Inner Disk Component Around HD 141569 A
NASA Technical Reports Server (NTRS)
Konishi, Mihoko; Grady, Carol A.; Schneider, Glenn; Shibai, Hiroshi; McElwain, Michael W.; Nesvold, Erika R.; Kuchner, Marc J.; Carson, Joseph; Debes, John H.; Gaspar, Andras;
2016-01-01
We report the discovery of a scattering component around the HD 141569 A circumstellar debris system, interior to the previously known inner ring. The discovered inner disk component, imaged in broadband optical light with Hubble Space Telescope/Space Telescope Imaging Spectrograph coronagraphy, was observed with an inner working angle of 0.25 arcseconds, and can be traced from 0.4 arcseconds (approximately 46 astronomical units) to 1.0 arcseconds (approximately 116 astronomical units) after deprojection using inclination = 55 degrees. The inner disk component is seen to forward scatter in a manner similar to the previously known rings, has a pericenter offset of approximately 6 astronomical units, and break points where the slope of the surface brightness changes. It also has a spiral arm trailing in the same sense as other spiral arms and arcs seen at larger stellocentric distances. The inner disk spatially overlaps with the previously reported warm gas disk seen in thermal emission. We detect no point sources within 2 arcseconds (approximately 232 astronomical units), in particular in the gap between the inner disk component and the inner ring. Our upper limit of 9 plus or minus 3 Jupiter masses (M(sub J)) is augmented by a new dynamical limit on single planetary-mass bodies in the gap between the inner disk component and the inner ring of 1 Jupiter mass, which is broadly consistent with previous estimates.
Initial conditions for critical Higgs inflation
NASA Astrophysics Data System (ADS)
Salvio, Alberto
2018-05-01
It has been pointed out that a large non-minimal coupling ξ between the Higgs and the Ricci scalar can source higher-derivative operators, which may change the predictions of Higgs inflation. A variant, called critical Higgs inflation, employs the near-criticality of the top mass to introduce an inflection point in the potential and drastically lower the value of ξ. Here we study whether critical Higgs inflation can occur even if the pre-inflationary initial conditions do not satisfy the slow-roll behavior (retaining translation and rotation symmetries). A positive answer is found: inflation turns out to be an attractor, and therefore no fine-tuning of the initial conditions is necessary. A very large initial Higgs time derivative (as compared with the potential energy density) is compensated by a moderate increase in the initial field value. These conclusions are reached by solving the exact Higgs equation without using the slow-roll approximation. This also allows us to treat the inflection point consistently, where the standard slow-roll approximation breaks down. Here we make use of an approach that is independent of the UV completion of gravity, by taking initial conditions that always involve sub-Planckian energies.
NASA Astrophysics Data System (ADS)
Chao, Nan; Liu, Yong-kuo; Xia, Hong; Ayodeji, Abiodun; Bai, Lu
2018-03-01
During the decommissioning of nuclear facilities, a large number of cutting and demolition activities are performed, which result in frequent changes to the structure and produce many irregular objects. In order to assess dose rates during the cutting and demolition process, a flexible dose assessment method for arbitrary geometries and radiation sources is proposed, based on virtual reality technology and the Point-Kernel method. The initial geometry is designed with three-dimensional computer-aided design tools. An approximate model is built automatically in the process of geometric modeling via three procedures, namely space division, rough modeling of the body and fine modeling of the surface, in combination with the collision detection of virtual reality technology. Point kernels are then generated by sampling within the approximate model, and once the material and radiometric attributes are inputted, dose rates can be calculated with the Point-Kernel method. To account for radiation scattering effects, buildup factors are calculated with the Geometric-Progression fitting formula. The effectiveness and accuracy of the proposed method were verified by simulations using different geometries, and the dose rate results were compared with those derived from the CIDEC code, the MCNP code and experimental measurements.
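The core Point-Kernel sum can be sketched compactly. This is a simplified illustration, not the proposed code: it sums the attenuated inverse-square kernel over sampled source points, and a crude linear buildup factor B = 1 + mu*d stands in for the Geometric-Progression fit used in real shielding codes; units and conversion factors are omitted.

```python
import math

def dose_rate(kernels, target, mu):
    """Point-Kernel sum over sampled kernels (x, y, z, strength): the
    uncollided term exp(-mu*d) / (4*pi*d^2) per unit strength, times a
    buildup factor accounting for scattered radiation. The linear
    B = 1 + mu*d used here is a placeholder for the GP fitting formula."""
    total = 0.0
    for x, y, z, strength in kernels:
        d = math.dist((x, y, z), target)
        mud = mu * d
        buildup = 1.0 + mud
        total += strength * buildup * math.exp(-mud) / (4.0 * math.pi * d * d)
    return total

# Sanity check: in vacuum (mu = 0) a single kernel follows 1/(4*pi*d^2).
print(dose_rate([(0.0, 0.0, 0.0, 1.0)], (2.0, 0.0, 0.0), mu=0.0))
```

Sampling many kernels inside the approximate geometry and summing them in this way is what lets the method follow arbitrary, changing shapes during demolition.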
The bead on a rotating hoop revisited: an unexpected resonance
NASA Astrophysics Data System (ADS)
Raviola, Lisandro A.; Véliz, Maximiliano E.; Salomone, Horacio D.; Olivieri, Néstor A.; Rodríguez, Eduardo E.
2017-01-01
The bead on a rotating hoop is a typical problem in mechanics, frequently posed to junior science and engineering students in basic physics courses. Although this system has a rich dynamics, it is usually not analysed beyond the point particle approximation in undergraduate textbooks, nor empirically investigated. Advanced textbooks show the existence of bifurcations owing to the system's nonlinear nature, and some papers demonstrate, from a theoretical standpoint, its points of contact with phase transition phenomena. However, scarce experimental research has been conducted to better understand its behaviour. We show in this paper that a minor modification to the problem leads to appealing consequences that can be studied both theoretically and empirically with the basic conceptual tools and experimental skills available to junior students. In particular, we go beyond the point particle approximation by treating the bead as a rigid spherical body, and explore the effect of a slightly non-vertical hoop's rotation axis that gives rise to a resonant behaviour not considered in previous works. This study can be accomplished by means of digital video and open source software. The experience can motivate an engaging laboratory project by integrating standard curriculum topics, data analysis and experimental exploration.
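The textbook bifurcation mentioned above can be computed directly in the point-particle approximation. This sketch assumes a frictionless point bead on a hoop with a perfectly vertical rotation axis; the paper's rigid-body corrections and tilted-axis resonance are not included.

```python
import math

def equilibrium_angle(omega, R, g=9.81):
    """Stable equilibrium angle (rad, measured from the bottom of the hoop)
    of a point-particle bead on a hoop of radius R spinning at omega (rad/s).
    Below the critical speed omega_c = sqrt(g / R) the bottom is the stable
    equilibrium; above it, the bottom destabilizes and the bead settles at
    theta* = arccos(g / (omega^2 * R)) -- a pitchfork bifurcation."""
    omega_c = math.sqrt(g / R)
    if omega <= omega_c:
        return 0.0
    return math.acos(g / (omega ** 2 * R))

# Below omega_c the bead sits at the bottom; above it, it climbs the hoop.
print(equilibrium_angle(5.0, 0.2), equilibrium_angle(10.0, 0.2))
```

The bifurcation at omega_c is the point of contact with phase-transition phenomena noted in the literature: theta* plays the role of an order parameter that grows continuously from zero as omega exceeds the critical value.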
Widmer, Jocelyn M.; Weppelmann, Thomas A.; Alam, Meer T.; Morrissey, B. David; Redden, Edsel; Rashid, Mohammed H.; Diamond, Ulrica; Ali, Afsar; De Rochars, Madsen Beau; Blackburn, Jason K.; Johnson, Judith A.; Morris, J. Glenn
2014-01-01
We inventoried non-surface water sources in the Leogane and Gressier region of Haiti (approximately 270 km2) in 2012 and 2013 and screened water from 345 sites for fecal coliforms and Vibrio cholerae. An international organization/non-governmental organization responsible for construction could be identified for only 56% of water points evaluated. Sixteen percent of water points were non-functional at any given time; 37% had evidence of fecal contamination, with spatial clustering of contaminated sites. Among improved water sources (76% of sites), 24.6% had fecal coliforms versus 80.9% in unimproved sources. Fecal contamination levels increased significantly from 36% to 51% immediately after the passage of Tropical Storm Sandy in October of 2012, with a return to 34% contamination in March of 2013. Long-term sustainability of potable water delivery at a regional scale requires ongoing assessment of water quality, functionality, and development of community-based management schemes supported by a national plan for the management of potable water. PMID:25071005
Method of forming pointed structures
NASA Technical Reports Server (NTRS)
Pugel, Diane E. (Inventor)
2011-01-01
A method of forming an array of pointed structures comprises depositing a ferrofluid on a substrate, applying a magnetic field to the ferrofluid to generate an array of surface protrusions, and solidifying the surface protrusions to form the array of pointed structures. The pointed structures may have a tip radius ranging from approximately 10 nm to approximately 25 microns. Solidifying the surface protrusions may be carried out at a temperature ranging from approximately 10 degrees C to approximately 30 degrees C.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Seale, Jonathan P.; Meixner, Margaret; Sewiło, Marta
Observations from the HERschel Inventory of the Agents of Galaxy Evolution (HERITAGE) have been used to identify dusty populations of sources in the Large and Small Magellanic Clouds (LMC and SMC). We conducted the study using the HERITAGE catalogs of point sources available from the Herschel Science Center from both the Photodetector Array Camera and Spectrometer (PACS; 100 and 160 μm) and Spectral and Photometric Imaging Receiver (SPIRE; 250, 350, and 500 μm) cameras. These catalogs are matched to each other to create a Herschel band-merged catalog and then further matched to archival Spitzer IRAC and MIPS catalogs from the Spitzer Surveying the Agents of Galaxy Evolution (SAGE) and SAGE-SMC surveys to create single mid- to far-infrared (far-IR) point source catalogs that span the wavelength range from 3.6 to 500 μm. There are 35,322 unique sources in the LMC and 7503 in the SMC. To be bright in the far-IR, a source must be very dusty, and so the sources in the HERITAGE catalogs represent the dustiest populations of sources. The brightest HERITAGE sources are dominated by young stellar objects (YSOs), and the dimmest by background galaxies. We identify the sources most likely to be background galaxies by first considering their morphology (distant galaxies are point-like at the resolution of Herschel) and then comparing the flux distribution to that of the Herschel Astrophysical Terahertz Large Area Survey (ATLAS) survey of galaxies. We find a total of 9745 background galaxy candidates in the LMC HERITAGE images and 5111 in the SMC images, in agreement with the number predicted by extrapolating from the ATLAS flux distribution. The majority of the Magellanic Cloud-residing sources are either very young, embedded forming stars or dusty clumps of the interstellar medium. Using the presence of 24 μm emission as a tracer of star formation, we identify 3518 YSO candidates in the LMC and 663 in the SMC.
There are far fewer far-IR bright YSOs in the SMC than in the LMC due to both the SMC's smaller size and its lower dust content. The YSO candidate lists may be contaminated at low flux levels by background galaxies, and so we differentiate between sources with a high ("probable") and moderate ("possible") likelihood of being a YSO. There are 2493/425 probable YSO candidates in the LMC/SMC. Approximately 73% of the Herschel YSO candidates are newly identified in the LMC, and 35% in the SMC. We further identify a small population of dusty objects in the late stages of stellar evolution, including extreme and post-asymptotic giant branch stars, planetary nebulae, and supernova remnants. These populations are identified by matching the HERITAGE catalogs to lists of previously identified objects in the literature. Approximately half of the LMC sources and one quarter of the SMC sources are too faint to obtain accurate far-IR photometry and are unclassified.
Rice University observations of the galactic center
NASA Technical Reports Server (NTRS)
Meegan, C. A.
1978-01-01
The most sensitive of the four balloon flight observations of the galactic center made by Rice University was conducted in 1974 from Rio Cuarto, Argentina, at a float altitude of 4 mbar. The count rate spectrum of the observed background and the energy spectrum of the galactic center region are discussed. The detector consists of a 6 inch NaI(Tl) central detector collimated to approximately 15 deg FWHM by a NaI(Tl) anticoincidence shield. The shield is at least two interaction mean free paths thick at all gamma ray energies. The instrumental resolution is approximately 11% FWHM at 662 keV. Pulses from the central detector are analyzed by two 256 channel PHAs covering the energy range approximately 20 keV to approximately 12 MeV. The detector is equatorially mounted and pointed by command from the ground. Observations are made by measuring source and background alternately for 10 minute periods. Background is measured by rotating the detector 180 deg about the azimuthal axis.
Galactic Starburst NGC 3603 from X-Rays to Radio
NASA Technical Reports Server (NTRS)
Moffat, A. F. J.; Corcoran, M. F.; Stevens, I. R.; Skalkowski, G.; Marchenko, S. V.; Muecke, A.; Ptak, A.; Koribalski, B. S.; Brenneman, L.; Mushotzky, R.;
2002-01-01
NGC 3603 is the most massive and luminous visible starburst region in the Galaxy. We present the first Chandra/ACIS-I X-ray image and spectra of this dense, exotic object, accompanied by a deep cm-wavelength ATCA radio image at spatial resolution similar to or better than 1 arcsec, and HST/ground-based optical data. At the S/N greater than 3 level, Chandra detects several hundred X-ray point sources (compared to the 3 distinct sources seen by ROSAT). At least 40 of these sources are definitely associated with optically identified cluster O and WR type members, but most are not. A diffuse X-ray component is also seen out to approximately 2 arcmin (4 pc) from the center, probably arising mainly from the large number of merging/colliding hot stellar winds and/or numerous faint cluster sources. The point-source X-ray fluxes generally increase with increasing bolometric brightnesses of the member O/WR stars, but with very large scatter. Some exceptionally bright stellar X-ray sources may be colliding wind binaries. The radio image shows (1) two resolved sources, one definitely non-thermal, in the cluster core near where the X-ray/optically brightest stars with the strongest stellar winds are located, (2) emission from all three known proplyd-like objects (with thermal and non-thermal components), and (3) many thermal sources in the peripheral regions of triggered star formation. Overall, NGC 3603 appears to be a somewhat younger and hotter, scaled-down version of typical starbursts found in other galaxies.
ROSAT X-Ray Observation of the Second Error Box for SGR 1900+14
NASA Technical Reports Server (NTRS)
Li, P.; Hurley, K.; Vrba, F.; Kouveliotou, C.; Meegan, C. A.; Fishman, G. J.; Kulkarni, S.; Frail, D.
1997-01-01
The positions of the two error boxes for the soft gamma repeater (SGR) 1900+14 were determined by the "network synthesis" method, which employs observations by the Ulysses gamma-ray burst and CGRO BATSE instruments. The location of the first error box has been observed at optical, infrared, and X-ray wavelengths, resulting in the discovery of a ROSAT X-ray point source and a curious double infrared source. We have recently used the ROSAT HRI to observe the second error box to complete the counterpart search. A total of six X-ray sources were identified within the field of view. None of them falls within the network synthesis error box, and a 3 sigma upper limit to any X-ray counterpart was estimated to be 6.35 x 10(exp -14) ergs/sq cm/s. The closest source is approximately 3 arcmin away and has an estimated unabsorbed flux of 1.5 x 10(exp -12) ergs/sq cm/s. Unlike the first error box, there is no supernova remnant near the second error box. The closest one, G43.9+1.6, lies approximately 2.6 deg away. For these reasons, we believe that the first error box is more likely to be the correct one.
A method on error analysis for large-aperture optical telescope control system
NASA Astrophysics Data System (ADS)
Su, Yanrui; Wang, Qiang; Yan, Fabao; Liu, Xiang; Huang, Yongmei
2016-10-01
For large-aperture optical telescopes, the elevation axis of the control system exhibits arcsecond-level jitter under different working speeds, especially at low speed during acquisition, tracking and pointing, whereas the azimuth axis does not. The jitter is closely related to the working speed of the elevation and reduces the accuracy and low-speed stability of the telescope. By collecting a large amount of measured elevation data, we analyzed the jitter in the time domain, frequency domain and space domain respectively. The jitter behaves as a periodic disturbance in the space domain, and the period of the corresponding space angle of the jitter points is approximately 79.1". We then simulated, analyzed and compared the influence of candidate disturbance sources (PWM power-stage output disturbance, torque/acceleration disturbance, speed feedback disturbance and position feedback disturbance) on the elevation, and found that the spatially periodic disturbance persists in the elevation response. This led us to infer that the problem lies in the angle measurement unit. The telescope employs a 24-bit photoelectric encoder, and the encoder grating angular period, i.e. the angle corresponding to one period of the subdivision signal in the whole encoder system, is 79.1016", approximately equal to the spatial period of the jitter. The working elevation of the telescope is therefore affected by subdivision errors, whose period is identical to the encoder grating angular period. Through comprehensive consideration and mathematical analysis, the DC component of the subdivision error was determined to cause the jitter, which was verified in practical engineering.
This method of analyzing error sources in the time domain, frequency domain and space domain respectively provides useful guidance for locating disturbance sources in large-aperture optical telescopes.
Equivalent source modeling of the core magnetic field using magsat data
NASA Technical Reports Server (NTRS)
Mayhew, M. A.; Estes, R. H.
1983-01-01
Experiments are carried out on fitting the main field using different numbers of equivalent sources arranged in equal area at fixed radii at and inside the core-mantle boundary. By fixing the radius for a given series of runs, the convergence problems that result from the extreme nonlinearity of the problem when dipole positions are allowed to vary are avoided. Results are presented from a comparison between this approach and the standard spherical harmonic approach for modeling the main field in terms of accuracy and computational efficiency. The modeling of the main field with an equivalent dipole representation is found to be comparable to the standard spherical harmonic approach in accuracy. The 32 deg dipole density (42 dipoles) corresponds approximately to an eleventh degree/order spherical harmonic expansion (143 parameters), whereas the 21 deg dipole density (92 dipoles) corresponds to approximately a seventeenth degree and order expansion (323 parameters). It is pointed out that fixing the dipole positions results in rapid convergence of the dipole solutions for single-epoch models.
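The point made above, that fixing the dipole positions turns a badly nonlinear fit into a rapidly converging one, can be illustrated with a small numerical sketch. Everything below is hypothetical (the dipole count, radii and noise level are illustrative, not the values used with Magsat data): with positions fixed, the dipole moments follow from a single linear least-squares solve.

```python
import numpy as np

# Illustrative sketch, not the authors' code: with dipole POSITIONS fixed,
# fitting dipole MOMENTS to field observations is a linear least-squares
# problem and so converges in a single step, unlike the nonlinear problem
# in which the positions are also free parameters.

rng = np.random.default_rng(0)
n_dipoles, n_obs = 12, 200

# Fixed radial-dipole locations on a sphere of radius 0.55 (scaled core radius).
phi = rng.uniform(0, 2 * np.pi, n_dipoles)
theta = np.arccos(rng.uniform(-1, 1, n_dipoles))
r_d = 0.55 * np.column_stack([np.sin(theta) * np.cos(phi),
                              np.sin(theta) * np.sin(phi),
                              np.cos(theta)])

# Observation points at satellite altitude (radius 1.05).
phi_o = rng.uniform(0, 2 * np.pi, n_obs)
theta_o = np.arccos(rng.uniform(-1, 1, n_obs))
r_o = 1.05 * np.column_stack([np.sin(theta_o) * np.cos(phi_o),
                              np.sin(theta_o) * np.sin(phi_o),
                              np.cos(theta_o)])

def dipole_potential(robs, rdip):
    """Scalar potential of a unit radial dipole at rdip, up to constants."""
    d = robs - rdip
    dist = np.linalg.norm(d, axis=-1)
    m_hat = rdip / np.linalg.norm(rdip)   # radial dipole axis
    return (d @ m_hat) / dist**3

# Design matrix: column j is the potential of unit dipole j at every observation.
G = np.column_stack([dipole_potential(r_o, r_d[j]) for j in range(n_dipoles)])

m_true = rng.normal(size=n_dipoles)        # "true" dipole strengths
data = G @ m_true + 1e-6 * rng.normal(size=n_obs)

# One linear solve recovers the moments.
m_fit, *_ = np.linalg.lstsq(G, data, rcond=None)
print("rms misfit:", np.sqrt(np.mean((G @ m_fit - data) ** 2)))
```

If the positions were also unknown, the same fit would require an iterative nonlinear solver with the convergence difficulties the abstract describes.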
Very high resolution observations of SS433 at 10.65 GHz
NASA Technical Reports Server (NTRS)
Geldzahler, B. J.; Downes, A. J. B.; Shaffer, D. B.
1981-01-01
Observations of SS433 made on June 12, 1979, from West Germany, Massachusetts, and West Virginia are discussed. It is noted that SS433 did not show fringes on any baseline although all the calibration sources were seen at their expected strengths. The measured total flux density of SS433 was found to be approximately 0.5 Jy, consistent with previous observations. The source was observed by on-offs at each telescope, which indicates that they were all pointed properly during the observations. The absence of fringes is not attributed to poor observing conditions or instrumental difficulties. It is concluded that if all the 10.65 GHz radiation emanates from a single component, then that component is at least 0.005 arcsec (approximately 10 to the 14th cm) in size. The measurements made on more sensitive intercontinental baselines indicate that there is no component of SS433 smaller than 0.001 arcsec emitting 10.65 GHz radiation above a level of 50 mJy.
NASA Astrophysics Data System (ADS)
Kuriyama, M.; Kumamoto, T.; Fujita, M.
2005-12-01
The 1995 Hyogo-ken Nambu Earthquake near Kobe, Japan, spurred research on strong motion prediction. To mitigate damage caused by large earthquakes, a highly precise method of predicting future strong motion waveforms is required. In this study, we applied the empirical Green's function method to forward modeling in order to simulate strong ground motion in the Noubi Fault zone and examine issues related to strong motion prediction for large faults. Source models for the scenario earthquakes were constructed using the recipe of strong motion prediction (Irikura and Miyake, 2001; Irikura et al., 2003). To calculate the asperity area ratio of a large fault zone, the results of a scaling model, a scaling model with 22% asperity by area, and a cascade model were compared, and several rupture starting points and segmentation parameters were examined for certain cases. A small earthquake (Mw 4.6) that occurred in northern Fukui Prefecture in 2004 was used as the empirical Green's function, and the source spectrum of this small event was found to agree with the omega-square scaling law. The Nukumi, Neodani, and Umehara segments of the 1891 Noubi Earthquake were targeted in the present study. The positions of the asperity areas and rupture starting points were based on the horizontal displacement distributions reported by Matsuda (1974) and the fault branching pattern and rupture direction model proposed by Nakata and Goto (1998). Asymmetry in the damage maps for the Noubi Earthquake was then examined. We compared the maximum horizontal velocities for cases with different rupture starting points. In one case, rupture started at the center of the Nukumi Fault, while in another, rupture started at the southeastern edge of the Umehara Fault; the scaling model showed an approximately 2.1-fold difference between these cases at observation point FKI005 of K-NET.
This difference is considered to relate to the directivity effect associated with the direction of rupture propagation. Moreover, the horizontal velocities obtained with the cascade model were underestimated by more than one standard deviation relative to the empirical relation of Si and Midorikawa (1999). The scaling and cascade models showed an approximately 6.4-fold difference for the case in which the rupture started along the southeastern edge of the Umehara Fault, at observation point GIF020. This difference is significantly larger than the effect of different rupture starting points, and shows that it is important to base scenario earthquake assumptions on active fault datasets before establishing the source characterization model. The distribution map of seismic intensity for the 1891 Noubi Earthquake also suggests that the synthetic waveforms in the southeastern Noubi Fault zone may be underestimated. Our results indicate that outer fault parameters (e.g., seismic moment) related to the construction of scenario earthquakes influence strong motion prediction more than inner fault parameters such as the rupture starting point. Based on these methods, we will predict strong motion for the approximately 140 to 150 km long Itoigawa-Shizuoka Tectonic Line.
Multivariate Probabilistic Analysis of an Hydrological Model
NASA Astrophysics Data System (ADS)
Franceschini, Samuela; Marani, Marco
2010-05-01
Model predictions based on rainfall measurements and hydrological model results are often limited by the systematic error of measuring instruments, by the intrinsic variability of the natural processes and by the uncertainty of the mathematical representation. We propose a means to identify such sources of uncertainty and to quantify their effects based on point-estimate approaches, as a valid alternative to cumbersome Monte Carlo methods. We present uncertainty analyses of the hydrologic response to selected meteorological events in the mountain streamflow-generating portion of the Brenta basin at Bassano del Grappa, Italy. The Brenta river catchment has a relatively uniform morphology and quite a heterogeneous rainfall pattern. In the present work, we evaluate two sources of uncertainty: data uncertainty (the uncertainty due to data handling and analysis) and model uncertainty (the uncertainty related to the formulation of the model). We thus evaluate the effects of the measurement error of tipping-bucket rain gauges, the uncertainty in estimating spatially distributed rainfall through block kriging, and the uncertainty associated with estimated model parameters. To this end, we coupled a deterministic model based on the geomorphological theory of the hydrologic response with probabilistic methods. In particular, we compare the results of Monte Carlo simulations (MCS) with the results obtained, under the same conditions, using Li's point estimate method (LiM). The LiM is a probabilistic technique that approximates the continuous probability distribution function of the considered stochastic variables by means of discrete points and associated weights. This makes it possible to reproduce results satisfactorily with only a few evaluations of the model function. The comparison between the LiM and MCS results highlights the pros and cons of using an approximating method.
The LiM is less computationally demanding than MCS, but has limited applicability, especially when the model response is highly nonlinear. Higher-order approximations can provide more accurate estimates, but reduce the numerical advantage of the LiM. The results of the uncertainty analysis identify the main sources of uncertainty in the computation of river discharge. In this particular case, the spatial variability of rainfall and the uncertainty of the model parameters are shown to have the greatest impact on discharge evaluation. This, in turn, highlights the need to support any estimated hydrological response with probability information and risk analysis results in order to provide a robust, systematic framework for decision making.
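The point-estimate idea described above can be sketched in a few lines. This is an illustrative stand-in, not the authors' implementation: it uses Rosenblueth's classical two-point scheme (a close relative of Li's method) on a hypothetical two-variable "model", and compares it with a Monte Carlo reference.

```python
import numpy as np

# Sketch under stated assumptions: point-estimate methods replace the
# distribution of each uncertain input by a few discrete points with weights.
# Rosenblueth's two-point scheme evaluates the model at mu +/- sigma for
# every sign combination (2^n points, equal weights for independent inputs).

def model(rain, k):
    # hypothetical nonlinear response: discharge from rainfall and a parameter
    return k * rain**1.5

mu = np.array([50.0, 0.8])   # means of (rain, k)
sd = np.array([10.0, 0.1])   # standard deviations (assumed independent)

signs = np.array([[+1, +1], [+1, -1], [-1, +1], [-1, -1]])
vals = np.array([model(*(mu + s * sd)) for s in signs])
pe_mean = vals.mean()                     # point-estimate mean (4 evaluations)
pe_var = (vals**2).mean() - pe_mean**2    # point-estimate variance

# Monte Carlo reference: 10^5 model evaluations instead of 4.
rng = np.random.default_rng(1)
samples = model(rng.normal(mu[0], sd[0], 100_000),
                rng.normal(mu[1], sd[1], 100_000))
print(pe_mean, samples.mean())
print(pe_var, samples.var())
```

For this mildly nonlinear model the two-point mean matches the Monte Carlo mean closely; for strongly nonlinear responses the low-order approximation degrades, which is exactly the limitation noted in the abstract.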
NASA Technical Reports Server (NTRS)
Lehmer, B. D.; Berkeley, M.; Zezas, A.; Alexander, D. M.; Basu-Zych, A.; Bauer, F. E.; Brandt, W. N.; Fragos, T.; Hornschemeier, A. E.; Kalogera, V.;
2014-01-01
We present direct constraints on how the formation of low-mass X-ray binary (LMXB) populations in galactic fields depends on stellar age. In this pilot study, we utilize Chandra and Hubble Space Telescope (HST) data to detect and characterize the X-ray point source populations of three nearby early-type galaxies: NGC 3115, 3379, and 3384. The luminosity-weighted stellar ages of our sample span approximately 3-10 Gyr. X-ray binary population synthesis models predict that the field LMXBs associated with younger stellar populations should be more numerous and luminous per unit stellar mass than older populations due to the evolution of LMXB donor star masses. Crucially, the combination of deep Chandra and HST observations allows us to test directly this prediction by identifying and removing counterparts to X-ray point sources that are unrelated to the field LMXB populations, including LMXBs that are formed dynamically in globular clusters, Galactic stars, and background AGN/galaxies. We find that the "young" early-type galaxy NGC 3384 (approximately 2-5 Gyr) has an excess of luminous field LMXBs (L(sub X) approximately greater than (5-10) × 10(exp 37) erg s(exp -1)) per unit K-band luminosity (L(sub K); a proxy for stellar mass) compared with the "old" early-type galaxies NGC 3115 and 3379 (approximately 8-10 Gyr), which results in a factor of 2-3 excess of L(sub X)/L(sub K) for NGC 3384. This result is consistent with the X-ray binary population synthesis model predictions; however, our small galaxy sample size does not allow us to draw definitive conclusions on the evolution of field LMXBs in general. We discuss how future surveys of larger galaxy samples that combine deep Chandra and HST data could provide a powerful new benchmark for calibrating X-ray binary population synthesis models.
NASA Astrophysics Data System (ADS)
Käufl, Paul; Valentine, Andrew P.; O'Toole, Thomas B.; Trampert, Jeannot
2014-03-01
The determination of earthquake source parameters is an important task in seismology. For many applications, it is also valuable to understand the uncertainties associated with these determinations, and this is particularly true in the context of earthquake early warning (EEW) and hazard mitigation. In this paper, we develop a framework for probabilistic moment tensor point source inversions in near real time. Our methodology allows us to find an approximation to p(m|d), the conditional probability of source models (m) given observations (d). This is obtained by smoothly interpolating a set of random prior samples using Mixture Density Networks (MDNs), a class of neural networks whose outputs are the parameters of a Gaussian mixture model. By combining multiple networks as 'committees', we are able to obtain a significant improvement in performance over that of a single MDN. Once a committee has been constructed, new observations can be inverted within milliseconds on a standard desktop computer. The method is therefore well suited for use in situations such as EEW, where inversions must be performed routinely and rapidly for a fixed station geometry. To demonstrate the method, we invert regional static GPS displacement data for the 2010 MW 7.2 El Mayor Cucapah earthquake in Baja California to obtain estimates of magnitude, centroid location, depth, and focal mechanism. We investigate the extent to which we can constrain moment tensor point sources with static displacement observations under realistic conditions. Our inversion results agree well with published point source solutions for this event, once the uncertainty bounds of each are taken into account.
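The committee-of-MDNs output described above can be sketched numerically. The mixture parameters below are invented for illustration (they are not taken from the El Mayor Cucapah inversion); the sketch only shows how the Gaussian-mixture outputs of several networks are averaged into a committee posterior and summarized.

```python
import numpy as np

# Sketch, not the authors' code: an MDN maps an observation d to the
# parameters (weights, means, sigmas) of a 1-D Gaussian mixture that
# approximates p(m | d). A committee prediction averages the mixture
# densities of several independently trained networks.

def mixture_pdf(m, weights, means, sigmas):
    """Gaussian-mixture density evaluated at the points m."""
    m = np.atleast_1d(m)[:, None]                       # (n_points, 1)
    comp = np.exp(-0.5 * ((m - means) / sigmas) ** 2) / (
        sigmas * np.sqrt(2.0 * np.pi))
    return comp @ weights                               # weighted component sum

# Hypothetical outputs of three committee members for one observation
# (m here standing in for, e.g., moment magnitude):
members = [
    (np.array([0.7, 0.3]), np.array([7.1, 7.4]), np.array([0.15, 0.30])),
    (np.array([0.5, 0.5]), np.array([7.2, 7.3]), np.array([0.20, 0.25])),
    (np.array([1.0]),      np.array([7.2]),      np.array([0.18])),
]

m_grid = np.linspace(6.0, 8.5, 501)
committee = np.mean([mixture_pdf(m_grid, *p) for p in members], axis=0)

m_map = m_grid[np.argmax(committee)]   # maximum a posteriori summary
print(round(float(m_map), 2))
```

Because evaluating the committee is just arithmetic on network outputs, a new observation can be "inverted" essentially instantly, which is the property the abstract exploits for early warning.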
DOE Office of Scientific and Technical Information (OSTI.GOV)
Campigotto, M.C.; Diaferio, A.; Hernandez, X.
We discuss the phenomenology of gravitational lensing in the purely metric f(χ) gravity, an f(R) gravity where the action of the gravitational field depends on the source mass. We focus on the strong lensing regime in galaxy-galaxy lens systems and in clusters of galaxies. By adopting point-like lenses and using an approximate metric solution accurate to second order of the velocity field v/c, we show how, in the f(χ) = χ{sup 3/2} gravity, the same light deflection can be produced by lenses with masses smaller than in General Relativity (GR); this mass difference increases with increasing impact parameter and decreasing lens mass. However, for sufficiently massive point-like lenses and small impact parameters, f(χ) = χ{sup 3/2} and GR yield indistinguishable light deflection angles: this regime occurs both in observed galaxy-galaxy lens systems and in the central regions of galaxy clusters. In the former systems, the GR and f(χ) masses are compatible with the mass of standard stellar populations and little or no dark matter, whereas, on the scales of the core of galaxy clusters, the presence of substantial dark matter is required by our point-like lenses both in GR and in our approximate f(χ) = χ{sup 3/2} solution. We thus conclude that our approximate metric solution of f(χ) = χ{sup 3/2} is unable to describe the observed phenomenology of the strong lensing regime without the aid of dark matter.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rau, U.; Bhatnagar, S.; Owen, F. N., E-mail: rurvashi@nrao.edu
Many deep wideband wide-field radio interferometric surveys are being designed to accurately measure intensities, spectral indices, and polarization properties of faint source populations. In this paper, we compare various wideband imaging methods to evaluate the accuracy to which intensities and spectral indices of sources close to the confusion limit can be reconstructed. We simulated a wideband single-pointing (C-array, L-Band (1-2 GHz)) and 46-pointing mosaic (D-array, C-Band (4-8 GHz)) JVLA observation using a realistic brightness distribution ranging from 1 μJy to 100 mJy and time-, frequency-, polarization-, and direction-dependent instrumental effects. The main results from these comparisons are (a) errors in the reconstructed intensities and spectral indices are larger for weaker sources even in the absence of simulated noise, (b) errors are systematically lower for joint reconstruction methods (such as Multi-Term Multi-Frequency-Synthesis (MT-MFS)) along with A-Projection for accurate primary beam correction, and (c) use of MT-MFS for image reconstruction eliminates clean bias (which is present otherwise). Auxiliary tests include solutions for deficiencies of data partitioning methods (e.g., the use of masks to remove clean bias and hybrid methods to remove sidelobes from sources left un-deconvolved), the effect of sources not at pixel centers, and the consequences of various other numerical approximations within software implementations. This paper also demonstrates the level of detail at which such simulations must be done in order to reflect reality, enable one to systematically identify specific reasons for every trend that is observed, and to estimate scientifically defensible imaging performance metrics and the associated computational complexity of the algorithms/analysis procedures.
Identification of the Hard X-Ray Source Dominating the E > 25 keV Emission of the Nearby Galaxy M31
NASA Technical Reports Server (NTRS)
Yukita, M.; Ptak, A.; Hornschemeier, A. E.; Wik, D.; Maccarone, T. J.; Pottschmidt, K.; Zezas, A.; Antoniou, V.; Ballhausen, R.; Lehmer, B. D.;
2017-01-01
We report the identification of a bright hard X-ray source dominating the M31 bulge above 25 keV from a simultaneous NuSTAR-Swift observation. We find that this source is the counterpart to Swift J0042.6+4112, which was previously detected in the Swift BAT All-Sky Hard X-Ray Survey. This Swift BAT source had been suggested to be the combined emission from a number of point sources; our new observations have identified a single X-ray source from 0.5 to 50 keV as the counterpart for the first time. In the 0.5-10 keV band, the source had been classified as an X-ray binary candidate in various Chandra and XMM-Newton studies; however, since it was not clearly associated with Swift J0042.6+4112, the previous E less than 10 keV observations did not generate much attention. This source has a spectrum with a soft X-ray excess (kT approximately equal to 0.2 keV) plus a hard spectrum with a power law of gamma approximately equal to 1 and a cutoff around 15-20 keV, typical of the spectral characteristics of accreting pulsars. Unfortunately, no pulsation was detected in the NuSTAR data, possibly due to insufficient photon statistics. The existing deep HST (Hubble Space Telescope) images exclude high-mass (greater than 3 solar mass) donors at the location of this source. The best interpretation for the nature of this source is an X-ray pulsar with an intermediate-mass (less than 3 solar mass) companion or a symbiotic X-ray binary. We discuss other possibilities in more detail.
Design and Performance of the GAMMA-400 Gamma-Ray Telescope for Dark Matter Searches
NASA Technical Reports Server (NTRS)
Galper, A. M.; Adriani, O.; Aptekar, R. L.; Arkhangelskaja, I. V.; Arkhangelskiy, A. I.; Boezio, M.; Bonvicini, V.; Boyarchuk, K. A.; Fradkin, M. I.; Gusakov, Yu V.;
2012-01-01
The GAMMA-400 gamma-ray telescope is designed to measure the fluxes of gamma-rays and cosmic-ray electrons and positrons, which can be produced by annihilation or decay of dark matter particles, as well as to survey the celestial sphere in order to study point and extended sources of gamma-rays, measure energy spectra of Galactic and extragalactic diffuse gamma-ray emission, gamma-ray bursts, and gamma-ray emission from the Sun. GAMMA-400 covers the energy range from 100 MeV to 3000 GeV. Its angular resolution is approximately 0.01 deg (E(sub gamma) greater than 100 GeV), the energy resolution approximately 1% (E(sub gamma) greater than 10 GeV), and the proton rejection factor approximately 10(exp 6). GAMMA-400 will be installed on the Russian space platform Navigator. The beginning of observations is planned for 2018.
Fourier removal of stripe artifacts in IRAS images
NASA Technical Reports Server (NTRS)
Van Buren, Dave
1987-01-01
By working in the Fourier plane, approximate removal of stripe artifacts in IRAS images can be effected. The image of interest is smoothed and subtracted from the original, giving the high-spatial-frequency part. This 'filtered' image is then clipped to remove point sources and then Fourier transformed. Subtracting the Fourier components contributing to the stripes in this image from the Fourier transform of the original and transforming back to the image plane yields substantial removal of the stripes.
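The four steps described above (smooth and subtract, clip point sources, transform, subtract the stripe components) can be sketched as follows. This is a schematic reconstruction, not the author's code: the stripe orientation, filter size, clipping threshold, and frequency band are all illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import uniform_filter

# Sketch of Fourier destriping under stated assumptions: stripes are taken
# to run horizontally, so their power concentrates in a narrow band of
# near-zero horizontal spatial frequencies.
def destripe(image, smooth=15, clip_sigma=3.0, band=2):
    # 1. smooth and subtract -> high-spatial-frequency part
    high = image - uniform_filter(image, smooth)
    # 2. clip bright point sources so they do not leak into the stripe model
    s = high.std()
    clipped = np.clip(high, -clip_sigma * s, clip_sigma * s)
    # 3. stripe Fourier components of the clipped image: kx ~ 0 columns
    F_clip = np.fft.fft2(clipped)
    mask = np.zeros_like(F_clip, dtype=bool)
    mask[:, :band] = mask[:, -band:] = True   # narrow band around kx = 0
    mask[0, 0] = False                        # preserve the mean level
    # 4. subtract those components from the original image's transform
    F = np.fft.fft2(image)
    F[mask] -= F_clip[mask]
    return np.fft.ifft2(F).real

# toy test image: smooth background plus horizontal stripes
y, x = np.mgrid[0:128, 0:128]
truth = np.exp(-((x - 64) ** 2 + (y - 64) ** 2) / 800.0)
stripes = 0.3 * np.sin(2 * np.pi * y / 8.0)
cleaned = destripe(truth + stripes)
print("mean residual:", np.abs(cleaned - truth).mean())
```

On this toy image the mean residual after destriping is well below the mean stripe amplitude; real IRAS reductions would need more care with stripe geometry and edge effects.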
2009-01-01
employs a set of reference targets such as asteroids that are relatively numerous, more or less uniformly distributed around the Sun, and relatively ... point source-like. Just such a population exists: 90 km-class asteroids. There are about 100 of these objects with relatively well-known orbits ... These are main belt objects that are approximately evenly distributed around the Sun. They are large enough to be quasi-spherical in nature, and as a
European Scientific Notes. Volume 35. Number 1.
1981-01-31
thermotropic polymers, primarily with aromatic polyesters. Dr. R.W. ... formed smectic phases. She also studied the orientation of liquid crystal ...booster synchrotron and Linac are switched off and the electrons are allowed to ... studies of crystals where a very good approximation to a point source...compounds in the temperature range of 1 to 25 K and as a function of ... Cr in a MgO host crystal in magnetic fields up to 2*S T at temperatures between
Gas Diffusion in Fluids Containing Bubbles
NASA Technical Reports Server (NTRS)
Zak, M.; Weinberg, M. C.
1982-01-01
Mathematical model describes movement of gases in fluid containing many bubbles. Model makes it possible to predict growth and shrinkage of bubbles as function of time. New model overcomes complexities involved in analysis of varying conditions by making two simplifying assumptions: it treats bubbles as point sources, and it employs approximate expression for gas concentration gradient at liquid/bubble interface. In particular, it is expected to help in developing processes for production of high-quality optical glasses in space.
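The abstract does not give the model's equations. As a hedged illustration, a standard quasi-static, diffusion-limited rate for the growth or shrinkage of a single bubble treated as a point source is the Epstein-Plesset form sketched below; this is not necessarily the authors' exact expression, and all parameter values are illustrative assumptions.

```python
import math

def bubble_radius_rate(R, t, D, c_inf, c_sat, rho_g):
    """Epstein-Plesset rate dR/dt for a dissolved-gas bubble.

    Quasi-static diffusion limit: the bubble grows (c_inf > c_sat)
    or shrinks (c_inf < c_sat) as gas diffuses across the interface.
    """
    return (D * (c_inf - c_sat) / rho_g) * (
        1.0 / R + 1.0 / math.sqrt(math.pi * D * t))

def evolve(R0, D, c_inf, c_sat, rho_g, dt=1e-3, steps=1000):
    """Forward-Euler integration of the radius, starting at t = dt."""
    R, t = R0, dt
    for _ in range(steps):
        R = max(R + dt * bubble_radius_rate(R, t, D, c_inf, c_sat, rho_g),
                1e-9)  # clamp so a fully dissolved bubble stays finite
        t += dt
    return R
```

A supersaturated liquid (`c_inf > c_sat`) grows the bubble; an undersaturated one shrinks it, which is the qualitative behavior the model above is meant to predict.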
NASA Astrophysics Data System (ADS)
Bonhoff, H. A.; Petersson, B. A. T.
2010-08-01
For the characterization of structure-borne sound sources with multi-point or continuous interfaces, substantial simplifications and physical insight can be obtained by incorporating the concept of interface mobilities. The applicability of interface mobilities, however, relies upon the admissibility of neglecting the so-called cross-order terms. Hence, the objective of the present paper is to clarify the importance and significance of cross-order terms for the characterization of vibrational sources. From previous studies, four conditions have been identified under which the cross-order terms can become more influential: non-circular interface geometries, structures with distinctly differing transfer paths, suppression of the zero-order motion, and cases where the contact forces are either in phase or out of phase. In a theoretical study, these four conditions are investigated with regard to the frequency range and magnitude of a possible strengthening of the cross-order terms. For an experimental analysis, two source-receiver installations are selected, suitably designed to obtain strong cross-order terms. The transmitted power and the source descriptors are predicted by the approximations of the interface mobility approach and compared with the complete calculations. Neglecting the cross-order terms can result in large misinterpretations at certain frequencies. On average, however, the cross-order terms are found to be insignificant and can be neglected to a good approximation. The general applicability of interface mobilities for structure-borne sound source characterization, and for the description of the transmission process, is thereby confirmed.
Retrospective cost-effectiveness analyses for polio vaccination in the United States.
Thompson, Kimberly M; Tebbens, Radboud J Duintjer
2006-12-01
The history of polio vaccination in the United States spans 50 years and includes different phases of the disease, multiple vaccines, and a sustained significant commitment of resources. We estimated cost-effectiveness ratios and assessed the net benefits of polio vaccination applicable at various points in time from the societal perspective, and we discounted these back to appropriate points in time. We reconstructed vaccine price data from available sources and used these to retrospectively estimate the total costs of the U.S. historical polio vaccination strategies (all costs reported in year 2002 dollars). We estimate that the United States invested approximately $35 billion (1955 net present value, discount rate of 3%) in polio vaccines between 1955 and 2005 and will invest approximately $1.4 billion (1955 net present value, or $6.3 billion in 2006 net present value) between 2006 and 2015, assuming a policy of continued use of inactivated poliovirus vaccine (IPV) for routine vaccination. The historical and future investments translate into over 1.7 billion vaccinations that prevent approximately 1.1 million cases of paralytic polio and over 160,000 deaths (1955 net present values of approximately 480,000 cases and 73,000 deaths). Due to treatment cost savings, the investment implies net benefits of approximately $180 billion (1955 net present value), even without incorporating the intangible costs of suffering and death and of averted fear. Retrospectively, the U.S. investment in polio vaccination represents a highly valuable, cost-saving public health program. Observed changes in the cost-effectiveness ratio estimates over time suggest the need for living economic models for interventions that appropriately change with time. 
This article also demonstrates that estimates of cost-effectiveness ratios at any single time point may fail to adequately consider the context of the investment made to date and the importance of population and other dynamics, and shows the importance of dynamic modeling.
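The discounting to a common reference year used throughout the analysis above can be illustrated with a minimal present-value sketch at the 3% rate the study quotes; the cash flows below are made up for illustration and are not the study's data.

```python
def net_present_value(cash_flows, rate=0.03, base_year=1955):
    """Discount a {year: net_benefit} stream back to base_year.

    Positive values are benefits, negative values are costs,
    all assumed to be in constant dollars.
    """
    return sum(v / (1.0 + rate) ** (year - base_year)
               for year, v in cash_flows.items())

# Illustrative only: a cost of 100 in 1956 and a benefit of 300 in 1965.
flows = {1956: -100.0, 1965: 300.0}
npv_1955 = net_present_value(flows)
```

Evaluating the same stream at a different `base_year` rescales every term by the same factor, which is why the study can quote both 1955 and 2006 net present values for one investment.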
THE 31 deg² RELEASE OF THE STRIPE 82 X-RAY SURVEY: THE POINT SOURCE CATALOG
DOE Office of Scientific and Technical Information (OSTI.GOV)
LaMassa, Stephanie M.; Urry, C. Megan; Ananna, Tonima
We release the next installment of the Stripe 82 X-ray survey point-source catalog, which currently covers 31.3 deg² of the Sloan Digital Sky Survey (SDSS) Stripe 82 Legacy field. In total, 6181 unique X-ray sources are significantly detected with XMM-Newton (>5σ) and Chandra (>4.5σ). This catalog release includes data from XMM-Newton cycle AO 13, which approximately doubled the Stripe 82X survey area. The flux limits of the Stripe 82X survey are 8.7 × 10⁻¹⁶ erg s⁻¹ cm⁻², 4.7 × 10⁻¹⁵ erg s⁻¹ cm⁻², and 2.1 × 10⁻¹⁵ erg s⁻¹ cm⁻² in the soft (0.5–2 keV), hard (2–10 keV), and full bands (0.5–10 keV), respectively, with approximate half-area survey flux limits of 5.4 × 10⁻¹⁵ erg s⁻¹ cm⁻², 2.9 × 10⁻¹⁴ erg s⁻¹ cm⁻², and 1.7 × 10⁻¹⁴ erg s⁻¹ cm⁻². We matched the X-ray source lists to available multi-wavelength catalogs, including updated matches to the previous release of the Stripe 82X survey; 88% of the sample is matched to a multi-wavelength counterpart. Due to the wide area of Stripe 82X and rich ancillary multi-wavelength data, including coadded SDSS photometry, mid-infrared WISE coverage, near-infrared coverage from UKIDSS and the VISTA Hemisphere Survey, ultraviolet coverage from GALEX, radio coverage from FIRST, and far-infrared coverage from Herschel, as well as existing ∼30% optical spectroscopic completeness, we are beginning to uncover rare objects, such as obscured high-luminosity active galactic nuclei at high redshift. The Stripe 82X point source catalog is a valuable data set for constraining how this population grows and evolves, as well as for studying how they interact with the galaxies in which they live.
On determining dose rate constants spectroscopically.
Rodriguez, M; Rogers, D W O
2013-01-01
To investigate several aspects of the Chen and Nath spectroscopic method of determining the dose rate constants of ¹²⁵I and ¹⁰³Pd seeds [Z. Chen and R. Nath, Phys. Med. Biol. 55, 6089-6104 (2010)], including the accuracy of using a line or dual-point source approximation as done in their method, and the accuracy of ignoring the effects of the scattered photons in the spectra. Additionally, the authors investigate the accuracy of the literature's many different spectra for bare, i.e., unencapsulated ¹²⁵I and ¹⁰³Pd sources. Spectra generated by 14 ¹²⁵I and 6 ¹⁰³Pd seeds were calculated in vacuo at 10 cm from the source in a 2.7 × 2.7 × 0.05 cm³ voxel using the EGSnrc BrachyDose Monte Carlo code. Calculated spectra used the initial photon spectra recommended by AAPM's TG-43U1 and NCRP (National Council on Radiation Protection and Measurements) Report 58 for the ¹²⁵I seeds, or TG-43U1 and NNDC(2000) (National Nuclear Data Center, 2000) for the ¹⁰³Pd seeds. The emitted spectra were treated as coming from a line or dual-point source in a Monte Carlo simulation to calculate the dose rate constant. The TG-43U1 definition of the dose rate constant was used. These calculations were performed using the full spectrum including scattered photons or using only the main peaks in the spectrum as done experimentally. Statistical uncertainties on the air kerma/history and the dose rate/history were ≤0.2%. The dose rate constants were also calculated using Monte Carlo simulations of the full seed model. The ratio of the intensity of the 31 keV line relative to that of the main peak in ¹²⁵I spectra is, on average, 6.8% higher when calculated with the NCRP Report 58 initial spectrum than with the TG-43U1 initial spectrum. The ¹⁰³Pd spectra exhibit an average 6.2% decrease in the 22.9 keV line relative to the main peak when calculated with the TG-43U1 rather than the NNDC(2000) initial spectrum. 
The measured values from three different investigations are in much better agreement with the calculations using the NCRP Report 58 and NNDC(2000) initial spectra, with average discrepancies of 0.9% and 1.7% for the ¹²⁵I and ¹⁰³Pd seeds, respectively. However, there are no differences in the calculated TG-43U1 brachytherapy parameters using either initial spectrum in both cases. Similarly, there were no differences outside the statistical uncertainties of 0.1% or 0.2% in the average energy, air kerma/history, dose rate/history, and dose rate constant when calculated using either the full photon spectrum or the main-peaks-only spectrum. Our calculated dose rate constants based on using the calculated on-axis spectrum and a line or dual-point source model are in excellent agreement (0.5% on average) with the values of Chen and Nath, verifying the accuracy of their more approximate method of going from the spectrum to the dose rate constant. However, the dose rate constants based on full seed models differ by between +4.6% and -1.5% from those based on the line or dual-point source approximations. These results suggest that the main value of spectroscopic measurements is to verify full Monte Carlo models of the seeds by comparison to the calculated spectra.
NASA Astrophysics Data System (ADS)
Karl, S.; Neuberg, J.
2011-12-01
Volcanoes exhibit a variety of seismic signals. One specific type, the so-called long-period (LP) or low-frequency event, has proven to be crucial for understanding the internal dynamics of the volcanic system. LP events have been observed at many volcanoes around the world and are thought to be associated with resonating fluid-filled conduits or fluid movements (Chouet, 1996; Neuberg et al., 2006). While the seismic wavefield is well established, the actual trigger mechanism of these events is still poorly understood. Neuberg et al. (2006) proposed a conceptual model for the trigger of LP events at Montserrat involving the brittle failure of magma in the glass transition in response to the upward movement of magma. In an attempt to gain a better quantitative understanding of the driving forces of LPs, inversions for the physical source mechanisms have become increasingly common. Previous studies have assumed a point source for waveform inversion. Although applying a point source model to synthetic seismograms representing an extended source process does not recover the true source mechanism, it still yields apparent moment tensor elements that can be compared with previous results in the literature. Therefore, this study follows the concepts proposed by Neuberg et al. (2006), modelling the extended LP source as an octagonal arrangement of double couples approximating a circular ringfault bounding the circumference of the volcanic conduit. Synthetic seismograms were inverted for the physical source mechanisms of LPs using the moment tensor inversion code TDMTISO_INVC by Dreger (2003). Here, we will present the effects of changing the source parameters on the apparent moment tensor elements. First results show that, due to negative interference, the amplitude of the seismic signals of a ringfault structure is greatly reduced when compared to a single double couple source. 
Furthermore, best inversion results yield a solution comprised of positive isotropic and compensated linear vector dipole components. Thus, the physical source mechanisms of volcano seismic signals may be misinterpreted as opening shear or tensile cracks when wrongly assuming a point source. In order to approach the real physical sources with our models, inversions based on higher-order tensors might have to be considered in the future. An inversion technique where the point source is replaced by a so-called moment tensor density would allow inversions of volcano seismic signals for sources that can then be temporally and spatially extended.
Effects of volcano topography on seismic broad-band waveforms
NASA Astrophysics Data System (ADS)
Neuberg, Jürgen; Pointer, Tim
2000-10-01
Volcano seismology often deals with rather shallow seismic sources and seismic stations deployed in their near field. The complex stratigraphy on volcanoes and near-field source effects have a strong impact on the seismic wavefield, complicating the interpretation techniques that are usually employed in earthquake seismology. In addition, as most volcanoes have a pronounced topography, the interference of the seismic wavefield with the stress-free surface results in severe waveform perturbations that affect seismic interpretation methods. In this study we deal predominantly with the surface effects, but take into account the impact of a typical volcano stratigraphy as well as near-field source effects. We derive a correction term for plane seismic waves and a plane-free surface such that for smooth topographies the effect of the free surface can be totally removed. Seismo-volcanic sources radiate energy in a broad frequency range with a correspondingly wide range of different Fresnel zones. A 2-D boundary element method is employed to study how the size of the Fresnel zone is dependent on source depth, dominant wavelength and topography in order to estimate the limits of the plane wave approximation. This approximation remains valid if the dominant wavelength does not exceed twice the source depth. Further aspects of this study concern particle motion analysis to locate point sources and the influence of the stratigraphy on particle motions. Furthermore, the deployment strategy of seismic instruments on volcanoes, as well as the direct interpretation of the broad-band waveforms in terms of pressure fluctuations in the volcanic plumbing system, are discussed.
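The validity criterion stated above (the plane-wave correction holds while the dominant wavelength does not exceed twice the source depth) can be operationalized directly; the velocity and frequency values in the sketch below are purely illustrative.

```python
def plane_wave_valid(velocity_m_s, dominant_freq_hz, source_depth_m):
    """Check the plane-wave free-surface correction criterion:
    valid if the dominant wavelength <= 2 * source depth.
    """
    wavelength = velocity_m_s / dominant_freq_hz
    return wavelength <= 2.0 * source_depth_m

# e.g. a 2 Hz low-frequency event in a 1500 m/s medium with the
# source at 500 m depth: wavelength 750 m <= 1000 m, so valid.
```

The same check applied to a 1 Hz event at the same depth fails (1500 m > 1000 m), illustrating why the approximation breaks down for shallow sources radiating long wavelengths.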
Adaptive Neuro-Fuzzy Modeling of UH-60A Pilot Vibration
NASA Technical Reports Server (NTRS)
Kottapalli, Sesi; Malki, Heidar A.; Langari, Reza
2003-01-01
Adaptive neuro-fuzzy relationships have been developed to model the UH-60A Black Hawk pilot floor vertical vibration. A 200-point database that approximates the entire UH-60A helicopter flight envelope is used for training and testing purposes. The NASA/Army Airloads Program flight test database was the source of this 200-point database. The present study is conducted in two parts: the first part involves level flight conditions, and the second part involves the entire (200-point) database including maneuver conditions. The results show that a neuro-fuzzy model can successfully predict the pilot vibration. Also, it is found that the training phase of this neuro-fuzzy model takes only two or three iterations to converge for most cases. Thus, the proposed approach produces a potentially viable model for real-time implementation.
Lavoie, Dawn; Flocks, James G.; Kindinger, Jack G.; Sallenger, A.H.; Twichell, David C.
2010-01-01
The State of Louisiana requested emergency authorization on May 11, 2010, to perform spill mitigation work on the Chandeleur Islands and on all the barrier islands from Grand Terre Island eastward to Sandy Point to enhance the capability of the islands to reduce the movement of oil from the Deepwater Horizon oil spill to the marshes. The proposed action, building a barrier berm (essentially an artificial island fronting the existing barriers and inlets) seaward of the existing barrier islands and inlets, 'restores' the protective function of the islands but does not alter the islands themselves. Building a barrier berm to protect the mainland wetlands from oil is a new strategy and depends on the timeliness of construction to be successful. Prioritizing areas to be bermed, focusing on those areas that are most vulnerable and where construction can be completed most rapidly, may increase chances for success. For example, it may be easier and more efficient to berm the narrow inlets of the coastal section to the west of the Mississippi River Delta rather than the large expanses of open water to the east of the delta in the southern parts of the Breton National Wildlife Refuge (NWR). This document provides information about the potential available sand resources and effects of berm construction on the existing barrier islands. The proposed project originally involved removing sediment from a linear source approximately 1 mile (1.6 km) gulfward of the barrier islands and placing it just seaward of the islands in shallow water (~2-m depth where possible) to form a continuous berm rising approximately 6 feet (~2 m) above sea level (North American Vertical Datum of 1988, NAVD88) with an ~110-yd (~100-m) width at water level and a slope of 25:1 to the seafloor. Discussions within the U.S. Geological Survey (USGS) and with others led to the determination that point-source locations, such as Hewes Point, the St. 
Bernard Shoals, and Ship Shoal, were more suitable 'borrow' locations because sand content is insufficient along a linear track offshore from most of Louisiana's barrier islands. Further, mining sediment near the toe of the barrier island platform or edge of actively eroding barrier islands could create pits in the seafloor that will capture nearshore sand, thereby enhancing island erosion, and focus incoming waves (for example, through refraction processes) that could yield hotspots of erosion. In the Breton NWR, the proposed berm would be continuous from just south of Hewes Point to Breton Island for approximately 100 km, with the exception of several passages for vessel access. Proposed volume estimates by sources outside of the USGS suggest that the structure in the Breton NWR would contain approximately 56 million cubic yards (42.8 million m³) of sandy material. In the west, the berm would require approximately 36 million cubic yards (27.5 million m³) of sandy material because this area has less open water than the area to the east of the delta. The planned berm is intended to protect the islands and inland areas from oil and would be sacrificial; that is, it will rapidly erode through natural processes. It is not part of the coastal restoration plan long discussed in Louisiana to rebuild barrier islands for hurricane protection of mainland infrastructure and habitat.
Yu, Kate; Di, Li; Kerns, Edward; Li, Susan Q; Alden, Peter; Plumb, Robert S
2007-01-01
We report in this paper an ultra-performance liquid chromatography/tandem mass spectrometry (UPLC®/MS/MS) method utilizing an ESI-APCI multimode ionization source to quantify structurally diverse analytes. Eight commercial drugs were used as test compounds. Each LC injection was completed in 1 min using a UPLC system coupled with MS/MS multiple reaction monitoring (MRM) detection. Results from three separate sets of experiments are reported. In the first set of experiments, the eight test compounds were analyzed as a single mixture. The mass spectrometer switched rapidly among four ionization modes (ESI+, ESI−, APCI−, and APCI+) during an LC run. Approximately 8-10 data points were collected across each LC peak, which was insufficient for quantitative analysis. In the second set of experiments, four compounds were analyzed as a single mixture. The mass spectrometer again switched rapidly among the four ionization modes during an LC run. Approximately 15 data points were obtained for each LC peak, and quantification results were obtained with a limit of detection (LOD) as low as 0.01 ng/mL. For the third set of experiments, the eight test compounds were analyzed as a batch: during each LC injection a single compound was analyzed, and the mass spectrometer detected in a single ionization mode. More than 20 data points were obtained for each LC peak, and quantification results were also obtained. This single-compound analytical method was applied to a microsomal stability test. Compared with a typical HPLC method currently used for the microsomal stability test, the injection-to-injection cycle time was reduced from 3.5 min (HPLC method) to 1.5 min (UPLC method). The microsome stability results were comparable with those obtained by traditional HPLC/MS/MS.
Volcanic Surface Deformation in Dominica From GPS Geodesy: Results From the 2007 NSF-REU Site
NASA Astrophysics Data System (ADS)
Murphy, R.; James, S.; Styron, R. H.; Turner, H. L.; Ashlock, A.; Cavness, C.; Collier, X.; Fauria, K.; Feinstein, R.; Staisch, L.; Williams, B.; Mattioli, G. S.; Jansma, P. E.; Cothren, J.
2007-12-01
GPS measurements have been collected on the island of Dominica in the Lesser Antilles between 2001 and 2007, with five month-long campaigns completed in June of each year, supported in part by an NSF REU Site award for the past two years. All GPS data were collected using dual-frequency, code-phase receivers and geodetic-quality antennas, primarily choke rings. Three consecutive 24 hr observation days were normally obtained for each site. Precise station positions were estimated with GIPSY-OASIS II using an absolute point positioning strategy and final, precise orbits, clocks, earth orientation parameters, and x-files. All position estimates were updated to ITRF05, and a revised Caribbean Euler pole was used to place our observations in a CAR-fixed frame. Time series were created to determine the velocity of each station. Forward and inverse elastic half-space models with planar (i.e., dike) and Mogi (i.e., point) sources were investigated. Inverse modeling was completed using a downhill simplex method of function minimization. Selected site velocities were used to create appropriate models for specific regions of Dominica, which correspond to known centers of pre-historic volcanic or recent shallow seismic activity. Because of the current distribution of GPS sites with robust velocity estimates, we limit our models to possible magmatic activity in the northern region of the island, proximal to the volcanic centers of Morne Diablotins and Morne aux Diables, and the southern region, proximal to the volcanic centers of Soufriere and Morne Plat Pays. Surface deformation data from the northernmost sites may be fit with the development of a several-km-long dike trending approximately northeast-southwest. Activity in the southern volcanic centers is best modeled by an expanding point source at approximately 1 km depth.
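The Mogi (point) source used in the inverse modeling above has a standard closed-form prediction for vertical surface displacement in an elastic half-space. A minimal sketch in the volume-change formulation follows; the Poisson's ratio and the depth/volume numbers are assumptions for illustration, not values from the study.

```python
import numpy as np

def mogi_uz(r, depth, dV, nu=0.25):
    """Vertical surface displacement of a Mogi point source.

    r     : horizontal distance from the source axis (m)
    depth : source depth (m), e.g. ~1 km as inferred for the south
    dV    : volume change (m^3); positive = inflation
    nu    : Poisson's ratio (0.25 assumed)
    """
    return (1.0 - nu) * dV / np.pi * depth / (depth**2 + r**2) ** 1.5
```

Uplift peaks directly above the source and falls off with horizontal distance, which is the pattern fit to the southern GPS velocities; a dike source would instead be modeled with a rectangular dislocation.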
Evidence for Intermediate Polars as the Origin of the Galactic Center Hard X-Ray Emission
NASA Technical Reports Server (NTRS)
Hailey, Charles J.; Mori, Kaya; Perez, Kerstin; Canipe, Alicia M.; Hong, Jaesub; Tomsick, John A.; Boggs, Steven E.; Christensen, Finn E.; Craig, William W.; Fornasini, Francesa;
2016-01-01
Recently, unresolved hard (20-40 keV) X-ray emission has been discovered within the central 10 pc of the Galaxy, possibly indicating a large population of intermediate polars (IPs). Chandra and XMM-Newton measurements in the surrounding approximately 50 pc imply a much lighter population of IPs with M_WD approximately 0.5 solar mass. Here we use broadband NuSTAR observations of two IPs: TV Columbae, which has a fairly typical but widely varying reported mass of M_WD approximately 0.5-1.0 solar mass, and IGR J17303-0601, with a heavy reported mass of M_WD approximately 1.0-1.2 solar mass. We investigate how varying spectral models and observed energy ranges influence the estimated white dwarf mass. Observations of the inner 10 pc can be accounted for by IPs with M_WD approximately 0.9 solar mass, consistent with that of the CV population in general and the X-ray observed field IPs in particular. The lower mass derived by Chandra and XMM-Newton appears to be an artifact of narrow energy-band fitting. To explain the (unresolved) central hard X-ray emission (CHXE) by IPs requires an X-ray (2-8 keV) luminosity function (XLF) extending down to at least 5 × 10³¹ erg s⁻¹. The CHXE XLF, if extended to the surrounding approximately 50 pc observed by Chandra and XMM-Newton, requires that at least approximately 20%-40% of the approximately 9000 point sources are IPs. If the XLF extends just a factor of a few lower in luminosity, then the vast majority of these sources are IPs. This is in contrast to recent observations of the Galactic ridge, where the bulk of the 2-8 keV emission is ascribed to non-magnetic CVs.
NASA Technical Reports Server (NTRS)
Hass, Neal; Mizukami, Masashi; Neal, Bradford A.; St. John, Clinton; Beil, Robert J.; Griffin, Timothy P.
1999-01-01
This paper presents pertinent results and assessment of propellant feed system leak detection as applied to the Linear Aerospike SR-71 Experiment (LASRE) program flown at the NASA Dryden Flight Research Center, Edwards, California. The LASRE was a flight test of an aerospike rocket engine using liquid oxygen and high-pressure gaseous hydrogen as propellants. The flight safety of the crew and the experiment demanded proven technologies and techniques that could detect leaks and assess the integrity of hazardous propellant feed systems. Point source detection and systematic detection were used. Point source detection was adequate for catching gross leakage from components of the propellant feed systems, but insufficient for clearing LASRE to levels of acceptability. Systematic detection, which used high-resolution instrumentation to evaluate the health of the system within a closed volume, provided a better means for assessing leak hazards. Oxygen sensors detected a leak rate of approximately 0.04 cubic inches per second of liquid oxygen. Pressure sensor data revealed speculated cryogenic boiloff through the fittings of the oxygen system, but location of the source(s) was indeterminable. Ultimately, LASRE was cancelled because leak detection techniques were unable to verify that oxygen levels could be maintained below flammability limits.
Safety of packaged water distribution limited by household recontamination in rural Cambodia.
Holman, Emily J; Brown, Joe
2014-06-01
Packaged water treatment schemes represent a growing model for providing safer water in low-income settings, yet post-distribution recontamination of treated water may limit this approach. This study evaluates drinking water quality and household water handling practices in a floating village on Tonlé Sap Lake, Cambodia, through a pilot cross-sectional study of 108 households, approximately half of which used packaged water as the main household drinking water source. We hypothesized that households purchasing drinking water from local packaged water treatment plants would have microbiologically improved drinking water at the point of consumption. However, we found no meaningful difference in microbiological drinking water quality between households using packaged, treated water and those collecting water from other sources, including untreated surface water. Households' water storage and handling practices and home hygiene may have contributed to recontamination of drinking water. Further measures to protect water quality at the point of use may be required even if water is treated and packaged in narrow-mouthed containers.
An atlas of H-alpha-emitting regions in M33: A systematic search for SS433 star candidates
NASA Technical Reports Server (NTRS)
Calzetti, Daniela; Kinney, Anne L.; Ford, Holland; Doggett, Jesse; Long, Knox S.
1995-01-01
We report finding charts and accurate positions for 432 compact H-alpha emitting regions in the Local Group galaxy M 33 (NGC 598), in an effort to isolate candidates for an SS433-like stellar system. The objects were extracted from narrow band images, centered in the rest-frame H-alpha (lambda 6563 A) and in the red continuum at 6100 A. The atlas is complete down to V approximately equal to 20 and includes 279 compact HII regions and 153 line emitting point-like sources. The point-like sources undoubtedly include a variety of objects: very small HII regions, early type stars with intense stellar winds, and Wolf-Rayet stars, but should also contain objects with the characteristics of SS433. This extensive survey of compact H-alpha regions in M 33 is a first step towards the identification of peculiar stellar systems like SS433 in external galaxies.
NASA Astrophysics Data System (ADS)
Li, Weiyao; Huang, Guanhua; Xiong, Yunwu
2016-04-01
The complexity of the spatial structure of porous media and the randomness of groundwater recharge and discharge (rainfall, runoff, etc.) make groundwater movement complex, and physical and chemical interactions between groundwater and the porous medium make solute transport in the medium more complicated still. An appropriate method to describe this complexity is essential when studying solute transport and transformation in porous media. Because information entropy can measure uncertainty and disorder, we used information entropy theory to investigate the complexity of solute transport in heterogeneous porous media and to explore the connection between information entropy and that complexity. Based on Markov theory, a two-dimensional stochastic field of hydraulic conductivity (K) was generated by transition probability. Flow and solute transport models were established under four conditions (instantaneous point source, continuous point source, instantaneous line source, and continuous line source). The spatial and temporal complexity of the solute transport process was characterized and evaluated using spatial moments and information entropy. Results indicated that the entropy increased as the complexity of the solute transport process increased. For the point source, the one-dimensional entropy of solute concentration increased at first and then decreased along the X and Y directions. As time increased, the entropy peak value remained basically unchanged, while the peak position migrated along the flow direction (X direction) and approximately coincided with the centroid position. With increasing time, the spatial variability and complexity of the solute concentration increase, which results in increases of the second-order spatial moment and the two-dimensional entropy. The information entropy of the line sources was higher than that of the point sources, and the solute entropy obtained from continuous input was higher than that from instantaneous input. 
As the average lithofacies length increased, the continuity of the medium increased, the complexity of flow and solute transport weakened, and the corresponding information entropy decreased. Longitudinal macrodispersivity declined slightly at early times and then rose. The spatial and temporal distribution of the solute had a significant impact on the information entropy, and the information entropy reflected changes in the solute distribution. Information entropy thus appears to be a useful tool for characterizing the spatial and temporal complexity of solute migration and provides a reference for future research.
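The entropies described above reduce to the Shannon entropy of the normalized concentration field on the model grid. A minimal sketch of that calculation follows; the discretization is assumed for illustration and is not taken from the paper.

```python
import numpy as np

def concentration_entropy(c):
    """Shannon entropy of a non-negative solute concentration field.

    The field is normalized into a discrete probability distribution;
    a more spread-out (disordered) plume yields a higher entropy,
    a concentrated plume a lower one.
    """
    c = np.asarray(c, dtype=float).ravel()
    p = c / c.sum()
    p = p[p > 0]                      # 0 * log 0 -> 0 by convention
    return -np.sum(p * np.log(p))
```

A plume uniformly spread over n cells gives the maximum entropy log(n), while all mass in a single cell gives zero, matching the qualitative trends reported above (entropy growing as the plume spreads).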
X-ray Modeling of η Carinae & WR140 from SPH Simulations
NASA Astrophysics Data System (ADS)
Russell, Christopher M. P.; Corcoran, Michael F.; Okazaki, Atsuo T.; Madura, Thomas I.; Owocki, Stanley P.
2011-01-01
The colliding wind binary (CWB) systems η Carinae and WR140 provide unique laboratories for X-ray astrophysics. Their wind-wind collisions produce hard X-rays that have been monitored extensively by several X-ray telescopes, including RXTE. To interpret these RXTE X-ray light curves, we model the wind-wind collision using 3D smoothed particle hydrodynamics (SPH) simulations. Adiabatic simulations that account for the emission and absorption of X-rays from an assumed point source at the apex of the wind-collision shock cone by the distorted winds can closely match the observed 2-10 keV RXTE light curves of both η Car and WR140. This point-source model can also explain the early recovery of η Car's X-ray light curve from the 2009.0 minimum by a factor of 2-4 reduction in the mass loss rate of η Car. Our more recent models relax the point-source approximation and account for the spatially extended emission along the wind-wind interaction shock front. For WR140, the computed X-ray light curve again matches the RXTE observations quite well. But for η Car, a hot, post-periastron bubble leads to an emission level that does not match the extended X-ray minimum observed by RXTE. Initial results from incorporating radiative cooling and radiatively driven wind acceleration via a new anti-gravity approach into the SPH code are also discussed.
NASA Technical Reports Server (NTRS)
Reese, Erik; Mroczkowski, Tony; Menateau, Felipe; Hilton, Matt; Sievers, Jonathan; Aguirre, Paula; Appel, John William; Baker, Andrew J.; Bond, J. Richard; Das, Sudeep;
2011-01-01
We present follow-up observations with the Sunyaev-Zel'dovich Array (SZA) of optically-confirmed galaxy clusters found in the equatorial survey region of the Atacama Cosmology Telescope (ACT): ACT-CL J0022-0036, ACT-CL J2051+0057, and ACT-CL J2337+0016. ACT-CL J0022-0036 is a newly-discovered, massive (approximately 10^15 M_sun), high-redshift (z = 0.81) cluster revealed by ACT through the Sunyaev-Zel'dovich effect (SZE). Deep, targeted observations with the SZA allow us to probe a broader range of cluster spatial scales, better disentangle cluster decrements from radio point source emission, and derive more robust integrated SZE flux and mass estimates than we can with ACT data alone. For the two clusters we detect with the SZA, we compute the integrated SZE signal and derive masses from the SZA data only. ACT-CL J2337+0016, also known as Abell 2631, has archival Chandra data that allow an additional X-ray-based mass estimate. Optical richness is also used to estimate cluster masses and shows good agreement with the SZE- and X-ray-based estimates. Based on the point sources detected by the SZA in these three cluster fields and an extrapolation to ACT's frequency, we estimate that point sources could be contaminating the SZE decrement at the <~20% level for some fraction of clusters.
A free geometry model-independent neural eye-gaze tracking system
2012-01-01
Background: Eye Gaze Tracking Systems (EGTSs) estimate the Point Of Gaze (POG) of a user. In diagnostic applications, EGTSs are used to study oculomotor characteristics and abnormalities, whereas in interactive applications they are proposed as input devices for human-computer interfaces (HCI), e.g. to move a cursor on the screen when mouse control is not possible, as with assistive devices for people suffering from locked-in syndrome. If the user's head remains still and the cornea rotates around its fixed centre, the pupil follows the eye in the images captured by one or more cameras, whereas the outer corneal reflection generated by an IR light source, i.e. the glint, can be taken as a fixed reference point. According to the so-called pupil centre corneal reflection (PCCR) method, the POG can thus be estimated from the pupil-glint vector. Methods: A new model-independent EGTS based on the PCCR method is proposed. The mapping function, based on artificial neural networks, avoids any specific model assumption or approximation for the user's eye physiology or the initial system setup, admitting free-geometry positioning of the user and the system components. The robustness of the proposed EGTS is proven by assessing its accuracy on real data from: i) different healthy users; ii) different geometric settings of the camera and the light sources; iii) different protocols based on the observation of points on a calibration grid and halfway points of a test grid. Results: The achieved accuracy is approximately 0.49°, 0.41°, and 0.62° for the horizontal, vertical, and radial error of the POG, respectively. Conclusions: The results prove the validity of the proposed approach: the system performs better than EGTSs designed for HCI which, even when equipped with superior hardware, show accuracy values in the range 0.6°-1°. PMID:23158726
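The mapping stage described above can be illustrated with a toy example: a small feed-forward network trained by gradient descent to map pupil-glint vectors to screen coordinates. The synthetic data, network size, and training settings below are illustrative assumptions, not the configuration used in the paper.

```python
import numpy as np

# Toy PCCR mapping: learn POG screen coordinates from pupil-glint vectors
# with a one-hidden-layer network. Data, architecture, and training settings
# are illustrative assumptions, not the paper's configuration.
rng = np.random.default_rng(1)
V = rng.uniform(-1.0, 1.0, (500, 2))            # pupil-glint vectors
P = np.c_[V[:, 0] + 0.1 * V[:, 1] ** 2,         # synthetic, mildly non-linear
          V[:, 1] - 0.1 * V[:, 0] * V[:, 1]]    # vector -> POG mapping

W1 = rng.normal(0.0, 0.5, (2, 16)); b1 = np.zeros(16)
W2 = rng.normal(0.0, 0.5, (16, 2)); b2 = np.zeros(2)
lr = 0.05
for _ in range(2000):                  # full-batch gradient descent
    H = np.tanh(V @ W1 + b1)           # hidden layer
    Y = H @ W2 + b2                    # predicted POG
    E = Y - P
    gW2 = H.T @ E / len(V); gb2 = E.mean(0)
    dH = (E @ W2.T) * (1.0 - H ** 2)   # backprop through tanh
    gW1 = V.T @ dH / len(V); gb1 = dH.mean(0)
    W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2

H = np.tanh(V @ W1 + b1)
mse = float(((H @ W2 + b2 - P) ** 2).mean())    # small after training
```

The point of the model-free approach is visible here: nothing in the network encodes eye geometry or camera placement; the calibration data alone determine the mapping.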
Evaluation of ionic contribution to the toxicity of a coal-mine effluent using Ceriodaphnia dubia
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kennedy, A.J.; Cherry, D.S.; Zipper, C.E.
2005-08-01
The United States Environmental Protection Agency has defined national in-stream water-quality criteria (WQC) for 157 pollutants. No WQC to protect aquatic life exist for total dissolved solids (TDS). Some water-treatment processes (e.g., pH modifications) discharge wastewaters of potentially adverse TDS into freshwater systems. Strong correlations between specific conductivity, a TDS surrogate, and several biotic indices in a previous study suggested that TDS caused by a coal-mine effluent was the primary stressor. Further acute and chronic testing in the current study with Ceriodaphnia dubia in laboratory-manipulated media indicated that the majority of the effluent toxicity could be attributed to the most abundant ions in the discharge, sodium (1952 mg/L) and/or sulfate (3672 mg/L), although the hardness of the effluent (792 ± 43 mg/L as CaCO3) ameliorated some toxicity. Based on laboratory testing of several effluent-mimicking media, sodium- and sulfate-dominated TDS was acutely toxic at approximately 7000 µS/cm (5143 mg TDS/L), and chronic toxicity occurred at approximately 3200 µS/cm (2331 mg TDS/L). At a lower hardness (88 mg/L as CaCO3), acute and chronic toxicity end-points decreased to approximately 5000 µS/cm (3663 mg TDS/L) and approximately 2000 µS/cm (1443 mg TDS/L), respectively. Point-source discharges causing in-stream TDS concentrations to exceed these levels may risk impairment to aquatic life.
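As a simple illustration of how the reported endpoints might be applied in screening, the sketch below compares a measured specific conductivity against the acute and chronic thresholds quoted in the abstract. The table keys and the decision logic are illustrative assumptions, not regulatory criteria, and would need site-specific calibration in practice.

```python
# Screening sketch based on the endpoints quoted above for sodium/sulfate-
# dominated TDS at the two tested hardness levels (mg/L as CaCO3).
THRESHOLDS = {          # hardness: (chronic, acute) endpoints in µS/cm
    792: (3200, 7000),
    88:  (2000, 5000),
}

def classify(cond_uS_cm, hardness=792):
    """Compare a specific conductivity reading against the study endpoints."""
    chronic, acute = THRESHOLDS[hardness]
    if cond_uS_cm >= acute:
        return "acute toxicity expected"
    if cond_uS_cm >= chronic:
        return "chronic toxicity expected"
    return "below reported toxicity endpoints"
```

For example, a 4000 µS/cm reading at the effluent's own hardness falls between the chronic and acute endpoints, consistent with the study's chronic-toxicity range.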
Investigating the generation of Love waves in secondary microseisms using 3D numerical simulations
NASA Astrophysics Data System (ADS)
Wenk, Stefan; Hadziioannou, Celine; Pelties, Christian; Igel, Heiner
2014-05-01
Longuet-Higgins (1950) proposed that secondary microseismic noise can be attributed to oceanic disturbances in which surface gravity wave interference causes non-linear, second-order pressure perturbations at the ocean bottom. As a first approximation, this source mechanism can be considered a force acting normal to the ocean bottom. In an isotropic, layered, elastic Earth model with plane interfaces, vertical forces generate P-SV motions in the vertical plane of source and receiver, so only Rayleigh waves are excited at the free surface. However, several authors report significant Love wave contributions in the secondary microseismic frequency band of real data. The reason is still insufficiently analysed, and several hypotheses are under debate: - The source mechanism has the strongest influence on the excitation of shear motions, whereas the source direction dominates the effect of Love wave generation in the case of point force sources. Darbyshire and Okeke (1969) proposed the topographic coupling effect of pressure loads acting on a sloping sea floor to generate the shear tractions required for Love wave excitation. - Rayleigh waves can be converted into Love waves by scattering; both geometric scattering at topographic features and internal scattering by heterogeneous material distributions can therefore generate Love waves. - Oceanic disturbances act on large regions of the ocean bottom, so extended sources have to be considered. In combination with topographic coupling and internal scattering, the extent of the source region and the timing of an extended source should affect Love wave excitation. We investigate the contributions of different source mechanisms and scattering effects to Love-to-Rayleigh wave energy ratios by 3D numerical simulations. In particular, we estimate the amount of Love wave energy generated by point and extended sources acting on the free surface.
Simulated point forces are modified in their incident angle, whereas extended sources are varied in their spatial extent, magnitude and timing. Further, the effects of variations in the correlation length and perturbation magnitude of a random free-surface topography, as well as of an internal random material distribution, are studied.
Eames, I; Small, I; Frampton, A; Cottenden, A M
2003-01-01
The spread of fluid from a localized source on to a flat fibrous sheet is studied. The sheet is inclined at an angle, alpha, to the horizontal, and the areal flux of the fluid released is Qa. A new experimental study is described where the dimensions of the wetted region are measured as a function of time t, Qa and alpha (>0). The down-slope length, Y, grows according to Y ~ (Qa t)^(2/3) (sin alpha)^(1/3); for high discharge rates and low angles of inclination, the cross-slope width, X, grows as X ~ (Qa t)^(1/2), while for low discharge rates or high angles of inclination, the cross-slope transport is dominated by infiltration and X ≈ 2(2 Ks psi* t)^(1/2), where Ks is the saturated permeability and psi* is the characteristic value of the capillary pressure. A scaling analysis of the underlying non-linear advection-diffusion equation describing the infiltration process confirms many of the salient features of the flow observed. Good agreement is observed between the collapse of the numerical solutions and the experimental results. The broader implications of these results for incontinence bed-pad research are briefly discussed.
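The scaling relations above can be evaluated directly; the sketch below encodes them with order-one prefactors C left as fit parameters, since the abstract gives proportionalities rather than calibrated constants.

```python
import math

# Scaling laws for the wetted region on an inclined fibrous sheet,
# as quoted in the abstract; C is an O(1) prefactor to be fit to data.
def downslope_length(Qa, t, alpha, C=1.0):
    """Y ~ (Qa t)^(2/3) (sin alpha)^(1/3); alpha in radians."""
    return C * (Qa * t) ** (2.0 / 3.0) * math.sin(alpha) ** (1.0 / 3.0)

def crossslope_width(Qa, t, Ks=None, psi_star=None,
                     regime="advective", C=1.0):
    if regime == "advective":      # high discharge rate, low inclination
        return C * (Qa * t) ** 0.5
    # infiltration-dominated: low discharge rate or high inclination
    return 2.0 * (2.0 * Ks * psi_star * t) ** 0.5
```

Doubling the elapsed time, for instance, should grow the down-slope length by a factor of 2^(2/3), independent of the prefactor.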
NASA Astrophysics Data System (ADS)
Laura, Jason; Skinner, James A.; Hunter, Marc A.
2017-08-01
In this paper we present the Large Crater Clustering (LCC) tool set, an ArcGIS plugin that supports the quantitative approximation of a primary impact location from user-identified locations of possible secondary impact craters or the long axes of clustered secondary craters. The identification of primary impact craters directly supports planetary geologic mapping and topical science studies where the chronostratigraphic age of some geologic units may be known, but more distant features have questionable geologic ages. Previous works (e.g., McEwen et al., 2005; Dundas and McEwen, 2007) have shown that the location of a primary impact can be estimated from its secondary impact craters. This work adapts those methods into a statistically robust tool set. We describe the four individual tools within the LCC tool set, which support: (1) processing individually digitized point observations (craters); (2) estimating the directional distribution of a clustered set of craters; (3) back-projecting the potential flight paths (from crater clusters or linearly approximated catenae or lineaments); and (4) intersecting the back-projected trajectories to approximate the location of potential source primary craters. We present two case studies using secondary impact features mapped in two regions of Mars. We demonstrate that the tool is able to quantitatively identify primary impacts and supports the improved qualitative interpretation of potential secondary crater flight trajectories.
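The core geometric step, intersecting back-projected trajectories, can be sketched as a least-squares intersection of rays in the plane. This is an illustrative reconstruction of the idea, not the LCC tool's actual (map-projected, statistically weighted) implementation.

```python
import numpy as np

# Least-squares intersection of back-projected rays: the point minimizing
# the summed squared perpendicular distances to all rays (p_i, d_i).
def least_squares_intersection(points, directions):
    A, b = np.zeros((2, 2)), np.zeros(2)
    for p, d in zip(points, directions):
        d = np.asarray(d, float)
        d = d / np.linalg.norm(d)
        M = np.eye(2) - np.outer(d, d)   # projector orthogonal to the ray
        A += M
        b += M @ np.asarray(p, float)
    return np.linalg.solve(A, b)

# three secondary clusters whose long axes all point back at (10, 5)
src = np.array([10.0, 5.0])
pts = [np.array([0.0, 0.0]), np.array([20.0, 0.0]), np.array([10.0, 20.0])]
est = least_squares_intersection(pts, [src - p for p in pts])
```

With noisy directions the same solve returns the best-fit intersection, which is why a statistical treatment of the directional distributions (step 2 above) matters.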
Investigation of Finite Sources through Time Reversal
NASA Astrophysics Data System (ADS)
Kremers, S.; Brietzke, G.; Igel, H.; Larmat, C.; Fichtner, A.; Johnson, P. A.; Huang, L.
2008-12-01
Under certain conditions time reversal is a promising method to determine earthquake source characteristics without any a-priori information (except the earth model and the data). It consists of injecting flipped-in-time records from seismic stations within the model to create an approximate reverse movie of wave propagation from which the location of the source point and other information might be inferred. In this study, the backward propagation is performed numerically using a spectral element code. We investigate the potential of time reversal to recover finite source characteristics (e.g., size of ruptured area, location of asperities, rupture velocity etc.). We use synthetic data from the SPICE kinematic source inversion blind test initiated to investigate the performance of current kinematic source inversion approaches (http://www.spice- rtn.org/library/valid). The synthetic data set attempts to reproduce the 2000 Tottori earthquake with 33 records close to the fault. We discuss the influence of relaxing the ignorance to prior source information (e.g., origin time, hypocenter, fault location, etc.) on the results of the time reversal process.
A numerical experiment on light pollution from distant sources
NASA Astrophysics Data System (ADS)
Kocifaj, M.
2011-08-01
To predict the light pollution of the night-time sky realistically over any location or measuring point on the ground is quite a difficult computational task. Light pollution of the local atmosphere is caused by stray light from, or reflection off, artificially illuminated ground objects and surfaces such as streets, advertisement boards or building interiors. It thus depends on the size, shape, spatial distribution, radiative pattern and spectral characteristics of many neighbouring light sources. The actual state of the atmospheric environment and the orography of the surrounding terrain are also relevant. All of these factors together influence the spectral sky radiance/luminance in a complex manner. Knowledge of the directional behaviour of light pollution is especially important for the correct interpretation of astronomical observations. From a mathematical point of view, the light noise or veil luminance of a specific sky element is given by a superposition of scattered light beams. Theoretical models that simulate light pollution typically take into account all ground-based light sources, thus placing great demands on CPU time and memory. As shown in this paper, the contribution of distant sources to light pollution can be significant under specific conditions of low turbidity and/or Garstang-like radiative patterns. To evaluate the convergence of the theoretical model, numerical experiments are made for different light sources, spectral bands and atmospheric conditions. It is shown that in the worst case the integration limit is approximately 100 km, but it can be significantly shortened for light sources with cosine-like radiative patterns.
Exploring the Variability of the Fermi LAT Blazar Population
NASA Astrophysics Data System (ADS)
Macomb, Daryl J.; Shrader, C. R.
2014-01-01
The flux variability of the approximately 2000 point sources cataloged by the Fermi Gamma-Ray Space Telescope provides important clues to population characteristics. This is particularly true of the more than 1100 sources that are likely AGN. By characterizing the intrinsic flux variability and distinguishing it from flaring behavior, we can better address questions of flare amplitudes, durations, recurrence times, and temporal profiles. A better understanding of the responsible physical environments, such as the scale and location of the jet structures responsible for the high-energy emission, may emerge from such studies. Assessing these characteristics as a function of blazar sub-class is a further goal, in order to address questions about the fundamentals of blazar AGN physics. Here we report on progress made in categorizing blazar flare behavior, and correlate these behaviors with blazar sub-type and other source parameters.
Simulation and Spectrum Extraction in the Spectroscopic Channel of the SNAP Experiment
NASA Astrophysics Data System (ADS)
Tilquin, Andre; Bonissent, A.; Gerdes, D.; Ealet, A.; Prieto, E.; Macaire, C.; Aumenier, M. H.
2007-05-01
Pixel-level simulation software is described, composed of two modules. The first module applies Fourier optics at each active element of the system to construct the PSF for a large variety of wavelengths and spatial locations of the point source. The input is provided by the engineering design program (Zemax), which describes the optical path and the distortions. The PSF properties are compressed and interpolated using shapelet decomposition and neural network techniques. The second module is used for production jobs. It uses the output of the first module to reconstruct the relevant PSF and integrate it over the detector pixels. Extended and polychromatic sources are approximated by a combination of monochromatic point sources. For the spectrum extraction, we use a fast simulator based on a multidimensional linear interpolation of the pixel response, tabulated on a grid of values of wavelength, position on the sky and slice number. The prediction of the fast simulator is compared to the observed pixel content, and a chi-square minimization in which the parameters are the bin contents is used to build the extracted spectrum. The visible and infrared arms are combined in the same chi-square, providing a single spectrum.
Limits on Arcminute Scale Cosmic Microwave Background Anisotropy with the BIMA Array
NASA Technical Reports Server (NTRS)
Holzapfel, W. L.; Carlstrom, J. E.; Grego, L.; Holder, G. P.; Joy, M. K.; Reese, E. D.; Rose, M. Franklin (Technical Monitor)
2000-01-01
We have used the Berkeley-Illinois-Maryland Association (BIMA) millimeter array, outfitted with sensitive cm-wave receivers, to search for Cosmic Microwave Background (CMB) anisotropies on arcminute scales. The interferometer was placed in a compact configuration which produces high brightness sensitivity while providing discrimination against point sources. Operating at a frequency of 28.5 GHz, the FWHM primary beam of the instrument is 6.6 arcminutes. We have made sensitive images of seven fields, five of which were chosen specifically to have low IR dust contrast and to be free of bright radio sources. Additional observations with the Owens Valley Radio Observatory (OVRO) millimeter array were used to assist in the location and removal of radio point sources. Applying a Bayesian analysis to the raw visibility data, we place limits on CMB anisotropy flat-band power of Q_flat = 5.6 (+3.0, -5.6) uK and Q_flat < 14.1 uK at 68% and 95% confidence. The sensitivity of this experiment to flat-band power peaks at a multipole of l = 5470, which corresponds to an angular scale of approximately 2 arcminutes. The most likely value of Q_flat is similar to the level of the expected secondary anisotropies.
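The quoted correspondence between multipole and angular scale follows from the flat-sky approximation theta ≈ 180°/l, which is adequate at multipoles of a few thousand:

```python
import math

# Flat-sky correspondence between multipole l and angular scale in arcmin:
# theta ≈ 180°/l, converted to arcminutes.
def multipole_to_arcmin(l):
    return 180.0 / l * 60.0

theta = multipole_to_arcmin(5470)   # ~1.97 arcmin, i.e. about 2'
```

This reproduces the abstract's statement that l = 5470 corresponds to roughly 2 arcminutes.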
NASA Technical Reports Server (NTRS)
Jia, Jianjun; Ptak, Andrew; Heckman, Timothy M.; Braito, Valentina; Reeves, James
2012-01-01
We present a Chandra observation of IRAS 19254-7245, a nearby ultraluminous infrared galaxy also known as the Superantennae. The high spatial resolution of Chandra allows us to disentangle for the first time the diffuse starburst (SB) emission from the embedded Compton-thick active galactic nucleus (AGN) in the southern nucleus. No AGN activity is detected in the northern nucleus. The 2-10 keV spectrum of the AGN emission is fitted by a flat power law (Gamma = 1.3) and an He-like Fe Kalpha line with equivalent width 1.5 keV, consistent with previous observations. The Fe K line profile can be resolved as a blend of a neutral 6.4 keV line and an ionized 6.7 keV (He-like) or 6.9 keV (H-like) line. Variability of the neutral line is detected relative to the previous XMM-Newton and Suzaku observations, demonstrating the compact size of the iron line emission region. The spectrum of the galaxy-scale extended emission, excluding the AGN and other bright point sources, is fitted with a thermal component with a best-fit kT of approximately 0.8 keV. The 2-10 keV luminosity of the extended emission is about one order of magnitude lower than that of the AGN. The basic physical and structural properties of the extended emission are fully consistent with a galactic wind driven by the SB. A candidate ultraluminous X-ray source is detected 8'' south of the southern nucleus. The 0.3-10 keV luminosity of this off-nuclear point source is approximately 6 × 10^40 erg/s if the emission is isotropic and the source is associated with the Superantennae.
Selbig, William R.; Bannerman, Roger T.
2011-01-01
The U.S. Geological Survey, in cooperation with the Wisconsin Department of Natural Resources (WDNR) and in collaboration with the Root River Municipal Stormwater Permit Group, monitored eight urban source areas representing six types of source areas in or near Madison, Wis., in an effort to improve characterization of particle-size distributions in urban stormwater by use of fixed-point sample collection methods. The types of source areas were parking lot, feeder street, collector street, arterial street, rooftop, and mixed use. This information can be used by environmental managers and engineers when selecting the most appropriate control devices for the removal of solids from urban stormwater. Mixed-use and parking-lot study areas had the lowest median particle sizes (42 and 54 µm, respectively), followed by the collector street study area (70 µm). Both the arterial street and institutional roof study areas had similar median particle sizes of approximately 95 µm. Finally, the feeder street study area showed the largest median particle size, nearly 200 µm. Median particle sizes measured as part of this study were broadly comparable to those reported in previous studies of similar source areas. The majority of the particle mass in four of the six source areas consisted of silt and clay particles less than 32 µm in size. Distributions of particles ranging from 500 µm were highly variable both within and between source areas. Results of this study suggest that substantial variability in the data can inhibit the development of a single particle-size distribution that is representative of stormwater runoff generated from a single source area or land use. Continued development of improved sample collection methods, such as the depth-integrated sample arm, may reduce variability in particle-size distributions by mitigating the effect of sediment bias inherent in a fixed-point sampler.
The GMRT 150 MHz all-sky radio survey. First alternative data release TGSS ADR1
NASA Astrophysics Data System (ADS)
Intema, H. T.; Jagannathan, P.; Mooley, K. P.; Frail, D. A.
2017-02-01
We present the first full release of a survey of the 150 MHz radio sky, observed with the Giant Metrewave Radio Telescope (GMRT) between April 2010 and March 2012 as part of the TIFR GMRT Sky Survey (TGSS) project. Aimed at producing a reliable compact source survey, our automated data reduction pipeline efficiently processed more than 2000 h of observations with minimal human interaction. Through application of innovative techniques such as image-based flagging, direction-dependent calibration of ionospheric phase errors, correcting for systematic offsets in antenna pointing, and improving the primary beam model, we created good quality images for over 95 percent of the 5336 pointings. Our data release covers 36 900 deg2 (or 3.6 π steradians) of the sky between -53° and +90° declination (Dec), which is 90 percent of the total sky. The majority of pointing images have a noise level below 5 mJy beam-1, with an approximate resolution of 25''×25'' (or 25''×25''/cos(Dec-19°) for pointings south of 19° declination). We have produced a catalog of 0.62 million radio sources derived from an initial, high-reliability source extraction at the seven-sigma level. For the bulk of the survey, the measured overall astrometric accuracy is better than two arcseconds in right ascension and declination, while the flux density accuracy is estimated at approximately ten percent. Within the scope of the TGSS alternative data release (TGSS ADR) project, the source catalog, as well as 5336 mosaic images (5°×5°) and an image cutout service, are made publicly available at the CDS as a service to the astronomical community. In addition to enabling a wide range of scientific investigations, we anticipate that these survey products will provide a solid reference for various new low-frequency radio aperture array telescopes (LOFAR, LWA, MWA, SKA-low), and can play an important role in characterizing the epoch-of-reionisation (EoR) foreground.
The TGSS ADR project aims at continuously improving the quality of the survey data products. Near-future improvements include replacement of bright source snapshot images with archival targeted observations, using new observations to fill the holes in sky coverage and replace very poor quality observational data, and an improved flux calibration strategy for less severely affected observational data. Full Table 3 is only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (http://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/598/A78
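The declination-dependent resolution quoted above can be encoded directly; the function below is simply a reading of the stated 25''×25''/cos(Dec-19°) rule, not part of the TGSS pipeline.

```python
import math

# Approximate TGSS ADR1 restoring beam: 25''x25'' north of Dec = 19°,
# with the major axis elongated by 1/cos(Dec - 19°) to the south.
def tgss_beam_arcsec(dec_deg):
    """Return (major, minor) beam axes in arcsec at a given declination."""
    if dec_deg >= 19.0:
        return (25.0, 25.0)
    return (25.0 / math.cos(math.radians(dec_deg - 19.0)), 25.0)
```

At Dec = -41°, for example, the rule gives a beam elongated to 50'' along the major axis.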
NASA Astrophysics Data System (ADS)
Keiffer, Richard; Novarini, Jorge; Scharstein, Robert
2002-11-01
In the standard development of the small wave-height approximation (SWHA) perturbation theory for scattering from moving rough surfaces [e.g., E. Y. Harper and F. M. Labianca, J. Acoust. Soc. Am. 58, 349-364 (1975)], the necessity for any sort of frozen surface approximation is avoided by the replacement of the rough boundary by a flat (and static) boundary. In this paper, this seemingly fortuitous byproduct of the small wave-height approximation is examined and found not to fully agree with an analysis based on the kinematics of the problem. Specifically, the first-order correction term from the standard perturbation approach predicts a scattered amplitude that depends on the source frequency, whereas the kinematics of the problem point to a scattered amplitude that depends on the scattered frequency. It is shown that a perturbation approach in which an explicit frozen surface approximation is made before the SWHA is invoked predicts (first-order) scattered amplitudes that are in agreement with the kinematic analysis. [Work supported by ONR/NRL (PE 61153N-32) and by grants of computer time from the DoD HPC Shared Resource Center at Stennis Space Center, MS.]
NASA Astrophysics Data System (ADS)
Causse, Mathieu; Cultrera, Giovanna; Herrero, André; Courboulex, Françoise; Schiappapietra, Erika; Moreau, Ludovic
2017-04-01
On May 29, 2012, a Mw 5.9 earthquake occurred in the Emilia-Romagna region (Po Plain) on a thrust fault system. This shock, as well as hundreds of aftershocks, was recorded by 10 strong motion stations located less than 10 km from the rupture plane, with 4 stations located within the surface projection of the rupture. The Po Plain is a very large, EW-trending syntectonic alluvial basin, delimited by the Alps and Apennines chains to the north and south. The Plio-Quaternary sedimentary sequence filling the Po Plain has an uneven thickness, ranging from several thousand meters to a few tens of meters. This particular context results in basin resonance below 1 Hz and strong surface waves, which makes it particularly difficult to model wave propagation and hence to obtain robust images of the rupture propagation. This study takes advantage of the large set of recorded aftershocks, considered as point sources, to model wave propagation. Because of the heterogeneous distribution of the aftershocks on the fault plane, an interpolation technique is proposed to compute an approximation of the Green's function between each fault point and each strong motion station in the frequency range [0.2-1 Hz]. We then use a Bayesian inversion technique (Markov chain Monte Carlo algorithm) to obtain images of the rupture propagation from the strong motion data. We retrieve the slip distribution by inverting the final slip value at a few control points, which are allowed to move on the fault plane, and by interpolating the slip value between these points. We show that using 5 control points to describe the slip, coupled with the hypothesis of spatially constant rupture velocity and rise time (that is, 18 free source parameters), results in a good level of fit to the data. This indicates that despite their complexity, the strong motion data can be properly modeled up to 1 Hz using a relatively simple rupture.
The inversion results also reveal that the rupture propagated slowly, at a speed of about 45% of the shear wave velocity.
Note: Precise radial distribution of charged particles in a magnetic guiding field
DOE Office of Scientific and Technical Information (OSTI.GOV)
Backe, H., E-mail: backe@kph.uni-mainz.de
2015-07-15
Current high precision beta decay experiments of polarized neutrons, employing magnetic guiding fields in combination with position sensitive and energy dispersive detectors, resulted in a detailed study of the mono-energetic point spread function (PSF) for a homogeneous magnetic field. A PSF describes the radial probability distribution of mono-energetic electrons at the detector plane emitted from a point-like source. With regard to accuracy considerations, unwanted singularities occur as a function of the radial detector coordinate which have recently been investigated by subdividing the radial coordinate into small bins or employing analytical approximations. In this note, a series expansion of the PSF is presented which can be numerically evaluated with arbitrary precision.
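The PSF discussed in the note can also be approximated by brute force. The Monte Carlo sketch below traces nonrelativistic, mono-energetic electrons from a point source through a homogeneous guiding field to a detector plane and histograms the radial hit coordinate; the field strength, energy, and source-detector distance are illustrative assumptions, and this is not the series expansion presented in the note.

```python
import numpy as np

# Monte Carlo sketch of the mono-energetic PSF: electrons from a point
# source gyrate in a homogeneous field B and hit a detector plane a
# distance L away along the field axis. The histogram of radial hit
# coordinates is the (unnormalized) radial PSF; its spikes correspond to
# the singularities discussed in the note. Nonrelativistic treatment and
# all parameter values are illustrative assumptions.
q, m = 1.602e-19, 9.109e-31          # electron charge (C) and mass (kg)
B, L = 1.0, 0.1                      # field (T), source-detector distance (m)
v = np.sqrt(2.0 * 30e3 * q / m)      # speed for 30 keV, nonrelativistic
omega = q * B / m                    # cyclotron frequency

rng = np.random.default_rng(0)
cos_t = rng.uniform(0.01, 1.0, 200_000)       # isotropic forward hemisphere
sin_t = np.sqrt(1.0 - cos_t ** 2)
rho = m * v * sin_t / (q * B)                 # gyroradius
phase = omega * L / (v * cos_t)               # gyration phase at the detector
r = 2.0 * rho * np.abs(np.sin(phase / 2.0))   # radial hit coordinate

hist, edges = np.histogram(r, bins=200)       # unnormalized radial PSF
```

Binning the radial coordinate, as here, is exactly the workaround the note's series expansion is meant to replace: the histogram smooths over the integrable singularities at the bin level.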
NASA Astrophysics Data System (ADS)
Karl, S.; Neuberg, J. W.
2012-04-01
Low frequency seismic signals are one class of volcanic earthquakes that have been observed at many volcanoes around the world, and are thought to be associated with resonating fluid-filled conduits or fluid movements. Amongst others, Neuberg et al. (2006) proposed a conceptual model for the trigger of low frequency events at Montserrat involving the brittle failure of magma in the glass transition in response to high shear stresses during the upward movement of magma in the volcanic edifice. For this study, synthetic seismograms were generated following the concept of Neuberg et al. (2006), using an extended source modelled as an octagonal arrangement of double couples approximating a circular ring fault. For comparison, synthetic seismograms were also generated using single forces only. For both scenarios, the synthetic seismograms used a seismic station distribution as encountered on Soufriere Hills Volcano, Montserrat. To gain a better quantitative understanding of the driving forces of low frequency events, inversions for the physical source mechanisms have become increasingly common. We therefore perform moment tensor inversions (Dreger, 2003) using the synthetic data as well as a chosen set of seismograms recorded on Soufriere Hills Volcano. The inversions are carried out under the (deliberately wrong) assumption of an underlying point source, rather than the extended source that actually triggers the low frequency seismic events. We will discuss differences between the inversion results, and how to interpret the moment tensor components (double couple, isotropic, or CLVD), which are based on a point source, in terms of an extended source.
Balanced Central Schemes for the Shallow Water Equations on Unstructured Grids
NASA Technical Reports Server (NTRS)
Bryson, Steve; Levy, Doron
2004-01-01
We present a two-dimensional, well-balanced, central-upwind scheme for approximating solutions of the shallow water equations in the presence of a stationary bottom topography on triangular meshes. Our starting point is the recent central scheme of Kurganov and Petrova (KP) for approximating solutions of conservation laws on triangular meshes. In order to extend this scheme from systems of conservation laws to systems of balance laws one has to find an appropriate discretization of the source terms. We first show that for general triangulations there is no discretization of the source terms that corresponds to a well-balanced form of the KP scheme. We then derive a new variant of a central scheme that can be balanced on triangular meshes. We note in passing that it is straightforward to extend the KP scheme to general unstructured conformal meshes. This extension allows us to recover our previous well-balanced scheme on Cartesian grids. We conclude with several simulations, verifying the second-order accuracy of our scheme as well as its well-balanced properties.
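The balancing issue can be made concrete in one dimension. The sketch below is a first-order, 1D relative of the scheme discussed above: it evolves the equilibrium variable w = h + B and discretizes the topography source so that it cancels the pressure flux exactly for the lake-at-rest steady state. The paper's scheme is second-order, two-dimensional, and works on triangular meshes; none of that is reproduced here, and the flux is a simple Rusanov-type central flux rather than the full central-upwind flux.

```python
import numpy as np

# 1D, first-order well-balanced sketch for the shallow water equations:
# evolve (w, hu) with w = h + B, bed B given at cell interfaces, and a
# source discretization that balances the pressure flux at rest.
g = 9.81

def step(w, hu, B_if, dx, dt):
    wg  = np.concatenate(([w[0]],  w,  [w[-1]]))    # outflow ghost cells
    hug = np.concatenate(([hu[0]], hu, [hu[-1]]))
    hL, hR = wg[:-1] - B_if, wg[1:] - B_if          # interface depths
    huL, huR = hug[:-1], hug[1:]
    uL, uR = huL / hL, huR / hR                     # (positivity not handled)
    a = np.maximum(np.abs(uL) + np.sqrt(g * hL),
                   np.abs(uR) + np.sqrt(g * hR))    # local wave speed
    # Rusanov-type central numerical flux for (w, hu)
    H1 = 0.5 * (huL + huR) - 0.5 * a * (wg[1:] - wg[:-1])
    H2 = 0.5 * (huL * uL + 0.5 * g * hL ** 2 +
                huR * uR + 0.5 * g * hR ** 2) - 0.5 * a * (huR - huL)
    # well-balanced source: -g * hbar * dB/dx, hbar from in-cell edge depths
    S2 = -g * 0.5 * ((w - B_if[:-1]) + (w - B_if[1:])) \
         * (B_if[1:] - B_if[:-1]) / dx
    return (w - dt / dx * np.diff(H1),
            hu - dt / dx * np.diff(H2) + dt * S2)

# lake at rest over a bump: the flat surface must stay at rest
N = 200
x_if = np.linspace(0.0, 1.0, N + 1)                 # interface coordinates
B_if = 0.2 * np.exp(-100.0 * (x_if - 0.5) ** 2)     # bed at interfaces
dx = x_if[1] - x_if[0]
w, hu = np.ones(N), np.zeros(N)                     # w = h + B = 1, no flow
for _ in range(200):
    w, hu = step(w, hu, B_if, dx, dt=0.2 * dx)
```

Because the source term is algebraically identical to the pressure-flux difference for a flat free surface, the discharge stays at rounding-error level; with a naive centered source on h instead of w, spurious waves of the order of the bed variation appear immediately.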
Experimental and Analytical Studies of Shielding Concepts for Point Sources and Jet Noises.
NASA Astrophysics Data System (ADS)
Wong, Raymond Lee Man
This analytical and experimental study explores concepts for jet noise shielding. Model experiments centre on solid planar shields, simulating engine-over-wing installations, and on 'sugar scoop' shields. The tradeoff on effective shielding length is set by interference 'edge noise' as the shield trailing edge approaches the spreading jet. Edge noise is minimized by (i) hyperbolic cutouts, which trim off the portions of most intense interference between the jet flow and the barrier, and (ii) hybrid shields--a thermal refractive extension (a flame); for (ii) the tradeoff is combustion noise. In general, shielding attenuation increases steadily with frequency, following low frequency enhancement by edge noise. Although broadband attenuation is typically only several dB, the reduction of the subjectively weighted perceived noise levels is higher. In addition, calculated ground contours of peak PN dB show a substantial contraction due to shielding: this reaches 66% for one of the 'sugar scoop' shields for the 90 PN dB contour. The experiments are complemented by analytical predictions, divided into an engineering scheme for jet noise shielding and a more rigorous analysis for point source shielding. The former approach combines point source shielding with a suitable jet source distribution. The results are synthesized into a predictive algorithm for jet noise shielding: the jet is modelled as a line distribution of incoherent sources whose narrow-band frequency varies as (axial distance)^(-1). The predictive version agrees well with experiment (1 to 1.5 dB) up to moderate frequencies. The insertion loss deduced from the point source measurements for semi-infinite as well as finite rectangular shields agrees rather well with theoretical calculation based on the exact half-plane solution and the superposition of asymptotic closed-form solutions.
An approximate theory, the Maggi-Rubinowicz line integral, is found to yield reasonable predictions for thin barriers, including cutouts, if a certain correction is applied. The more exact integral equation approach (solved numerically) is applied to a more demanding geometry: a half-round sugar scoop shield. The solutions of the integral equation derived from the Helmholtz formula in its normal derivative form show satisfactory agreement with measurements.
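As a rough illustration of the engineering scheme described above (not the report's actual algorithm; the band shape, the normalization, and all names below are assumptions), incoherent line-distributed sources add in mean-square pressure, with each source's dominant band scaling as the inverse of its axial distance:

```python
import numpy as np

# Sketch: jet as a line of incoherent point sources; each source dominates a
# narrow band centred at f ~ 1/x (normalized units), and shielded
# contributions add in power, not amplitude. The log-normal band shape is an
# illustrative assumption, not the report's source spectrum.
def shielded_jet_spectrum(frequencies, x_sources, insertion_loss_db):
    p2 = np.zeros_like(frequencies, dtype=float)
    for x in x_sources:
        f_peak = 1.0 / x                                  # peak frequency ~ 1/x
        band = np.exp(-(np.log(frequencies / f_peak)) ** 2)
        p2 += band * 10.0 ** (-insertion_loss_db(frequencies, x) / 10.0)
    return 10.0 * np.log10(p2)                            # summed level in dB
```

A uniform insertion loss shifts the summed spectrum down by the same number of dB, which is the basic sanity check any such incoherent synthesis must pass.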
NASA Astrophysics Data System (ADS)
Denolle, M.; Dunham, E. M.; Prieto, G.; Beroza, G. C.
2013-05-01
There is no clearer example of the increase in hazard due to prolonged and amplified shaking in sedimentary basins than the case of Mexico City in the 1985 Michoacan earthquake. It is critically important to identify what other cities might be susceptible to similar basin amplification effects. Physics-based simulations in 3D crustal structure can be used to model and anticipate those effects, but they rely on our knowledge of the complexity of the medium. We propose a parallel approach to validate ground motion simulations using the ambient seismic field. We compute the Earth's impulse response by combining the ambient seismic field and coda waves, enforcing causality and symmetry constraints. We correct the surface impulse responses to account for the source depth, mechanism, and duration using a 1D approximation of the local surface-wave excitation. We call the new responses virtual earthquakes. We validate the ground motion predicted from the virtual earthquakes against moderate earthquakes in southern California. We then combine temporary seismic stations on the southern San Andreas Fault and extend the point source approximation of the Virtual Earthquake Approach to model finite kinematic ruptures. We confirm the coupling between source directivity and amplification in downtown Los Angeles seen in simulations.
Real-time Estimation of Fault Rupture Extent for Recent Large Earthquakes
NASA Astrophysics Data System (ADS)
Yamada, M.; Mori, J. J.
2009-12-01
Current earthquake early warning systems assume point source models for the rupture. However, for large earthquakes, the fault rupture length can be of the order of tens to hundreds of kilometers, and the prediction of ground motion at a site requires approximate knowledge of the rupture geometry. Early warning information based on a point source model may underestimate the ground motion at a site if a station is close to the fault but distant from the epicenter. We developed an empirical function to classify seismic records into near-source (NS) or far-source (FS) records based on past strong motion records (Yamada et al., 2007). Here, we defined the near-source region as an area with a fault rupture distance less than 10 km. If we have ground motion records at a station, the probability that the station is located in the near-source region is: P = 1/(1 + exp(-f)), where f = 6.046 log10(Za) + 7.885 log10(Hv) - 27.091, and Za and Hv denote the peak values of the vertical acceleration and horizontal velocity, respectively. Each observation provides the probability that the station is located in the near-source region, so the resolution of the proposed method depends on the station density. The fault rupture location is thus given as the group of points where the near-source stations are located. For practical purposes, however, the 2-dimensional configuration of the fault is required to compute the ground motion at a site. In this study, we extend the methodology of NS/FS classification to characterize 2-dimensional fault geometries and apply it to strong motion data observed in recent large earthquakes. We apply a cosine-shaped smoothing function to the probability distribution of near-source stations, and convert the point fault locations to 2-dimensional fault information. The estimated rupture geometry for the 2007 Niigata-ken Chuetsu-oki earthquake 10 seconds after the origin time is shown in Figure 1.
Furthermore, we illustrate our method with strong motion data of the 2007 Noto-hanto earthquake, 2008 Iwate-Miyagi earthquake, and 2008 Wenchuan earthquake. The on-going rupture extent can be estimated for all datasets as the rupture propagates. For earthquakes with magnitude about 7.0, the determination of the fault parameters converges to the final geometry within 10 seconds.
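The logistic NS/FS classifier quoted above is simple enough to sketch directly. The function name and the example inputs below are assumptions; the coefficients are those given in the abstract:

```python
import math

# Near-source probability from Yamada et al. (2007), as quoted above:
#   P = 1 / (1 + exp(-f)),  f = 6.046*log10(Za) + 7.885*log10(Hv) - 27.091
# Za: peak vertical acceleration, Hv: peak horizontal velocity
# (units as in the original regression; input values are illustrative only).
def near_source_probability(Za, Hv):
    f = 6.046 * math.log10(Za) + 7.885 * math.log10(Hv) - 27.091
    return 1.0 / (1.0 + math.exp(-f))
```

Larger peak motions push f, and hence the probability, upward; per-station probabilities like this are what the cosine-shaped smoothing step then converts into a 2-dimensional fault estimate.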
Mapping the spatio-temporal risk of lead exposure in apex species for more effective mitigation
Mateo-Tomás, Patricia; Olea, Pedro P.; Jiménez-Moreno, María; Camarero, Pablo R.; Sánchez-Barbudo, Inés S.; Rodríguez Martín-Doimeadios, Rosa C.; Mateo, Rafael
2016-01-01
Effective mitigation of the risks posed by environmental contaminants for ecosystem integrity and human health requires knowing their sources and spatio-temporal distribution. We analysed the exposure to lead (Pb) in the griffon vulture Gyps fulvus—an apex species valuable as a biomonitoring sentinel. We determined the vultures' lead exposure and its main sources by combining isotope signatures and modelling analyses of 691 bird blood samples collected over 5 years. We made yearlong spatially explicit predictions of the species' risk of lead exposure. Our results highlight elevated lead exposure of griffon vultures (i.e. 44.9% of the studied population, approximately 15% of the European population, showed blood lead levels above 200 ng ml−1) partly owing to environmental lead (e.g. geological sources). These exposures to environmental lead of geological origin increased in those vultures also exposed to point sources (e.g. lead-based ammunition). These spatial models and pollutant risk maps are powerful tools that identify areas of wildlife exposure to potentially harmful sources of lead that could affect ecosystem and human health. PMID:27466455
X-Ray Diffraction Wafer Mapping Method for Rhombohedral Super-Hetero-Epitaxy
NASA Technical Reports Server (NTRS)
Park, Yoonjoon; Choi, Sang Hyouk; King, Glen C.; Elliott, James R.; Dimarcantonio, Albert L.
2010-01-01
A new X-ray diffraction (XRD) method is provided to acquire XY mapping of the distribution of single crystals, poly-crystals, and twin defects across an entire wafer of rhombohedral super-hetero-epitaxial semiconductor material. In one embodiment, the method is performed with a point or line X-ray source with an X-ray incidence angle approximating a normal angle close to 90 deg, and in which the beam mask is preferably replaced with a crossed slit. While the wafer moves in the X and Y direction, a narrowly defined X-ray source illuminates the sample and the diffracted X-ray beam is monitored by the detector at a predefined angle. Preferably, the untilted, asymmetric scans are of {440} peaks, for twin defect characterization.
NASA Technical Reports Server (NTRS)
Fadel, G. M.
1991-01-01
The two point exponential approximation method was introduced by Fadel et al. (1990) and tested on structural optimization problems with stress and displacement constraints. Results reported in earlier papers were promising, and the method, which consists of correcting Taylor series approximations using previous design history, is tested in this paper on optimization problems with frequency constraints. The aim of the research is to verify the robustness and speed of convergence of the two point exponential approximation method when highly non-linear constraints are used.
Nguyen, Viet Tung; Gin, Karina Yew-Hoong; Reinhard, Martin; Liu, Changhui
2012-01-01
A study was carried out to characterize the occurrence, sources and sinks of perfluorochemicals (PFCs) in the Marina Catchment and Reservoir, Singapore. Salinity depth profiles indicated the reservoir was stratified, with lower layers consisting of sea water (salinity ranging from 32 to 35 g L(-1)) and a brackish surface layer containing approximately 14-65% seawater. The PFC mixture detected in catchment waters contained perfluoroalkyl carboxylates (PFCAs), particularly perfluorooctanoate (PFOA) and perfluoroheptanoate (PFHpA), as well as perfluorooctane sulfonate (PFOS) and PFC transformation products. PFC concentrations in storm runoff were generally higher than those in dry weather flow of canals and rivers. PFC concentration profiles measured during storm events indicated 'first flush' behavior, probably because storm water leaches PFC compounds from non-point sources present in the catchment area. Storm runoff carries high concentrations of suspended solids (SS), which suggests that PFC transport is via SS. In Marina Bay, PFCs are deposited in the sediments along with the SS. In sediments, the total PFC concentration was 4,700 ng kg(-1), approximately 200 times higher than in the bottom water layers. Total perfluoroalkyl sulfonates (PFSAs), particularly PFOS and 6:2 fluorotelomer sulfonate (6:2 FtS), were the dominant PFCs in the sediments. PFC sorption by sediments varied with perfluorocarbon chain length, type of functional group and sediment characteristics. A first approximation analysis based on SS transport suggested that the annual PFC input into the reservoir was approximately 35 ± 12 kg y(-1). Contributions of SS, dry weather flow of rivers/canals, and rainfall were approximately 70, 25 and 5%, respectively. This information will be useful for improving strategies to protect the reservoir from PFC contamination.
NASA Astrophysics Data System (ADS)
Mangeney, A.; Kuehnert, J.; Capdeville, Y.; Durand, V.; Stutzmann, E.; Kone, E. H.; Sethi, S.
2017-12-01
During their flow along the topography, landslides generate seismic waves in a wide frequency range. These so-called landquakes can be recorded at very large distances (a few hundred km for large landslides). The recorded signals depend on the landslide seismic source and the seismic wave propagation. If the wave propagation is well understood, the seismic signals can be inverted for the seismic source and thus can be used to obtain information on the landslide properties and dynamics. Analysis and modeling of long period seismic signals (10-150 s) have helped in this way to discriminate between different landslide scenarios and to constrain rheological parameters (e.g. Favreau et al., 2010). This was possible because topography poorly affects wave propagation at these long periods and the landslide seismic source can be approximated as a point source. In the near-field and at higher frequencies (> 1 Hz), the spatial extent of the source has to be taken into account and the influence of the topography on the recorded seismic signal should be quantified in order to extract information on the landslide properties and dynamics. The characteristic signature of distributed sources and varying topographies is studied as a function of frequency and recording distance. The time-dependent spatial distribution of the forces applied to the ground by the landslide is obtained using granular flow numerical modeling on 3D topography. The generated seismic waves are simulated using the spectral element method. The simulated seismic signal is compared to observed seismic data from rockfalls at the Dolomieu Crater of Piton de la Fournaise (La Réunion). Favreau, P., Mangeney, A., Lucas, A., Crosta, G., and Bouchut, F. (2010). Numerical modeling of landquakes. Geophysical Research Letters, 37(15):1-5.
The Third EGRET Catalog of High-Energy Gamma-Ray Sources
NASA Technical Reports Server (NTRS)
Hartman, R. C.; Bertsch, D. L.; Bloom, S. D.; Chen, A. W.; Deines-Jones, P.; Esposito, J. A.; Fichtel, C. E.; Friedlander, D. P.; Hunter, S. D.; McDonald, L. M.;
1998-01-01
The third catalog of high-energy gamma-ray sources detected by the EGRET telescope on the Compton Gamma Ray Observatory includes data from 1991 April 22 to 1995 October 3 (Cycles 1, 2, 3, and 4 of the mission). In addition to including more data than the second EGRET catalog and its supplement, this catalog uses completely reprocessed data (to correct a number of mostly minimal errors and problems). The 271 sources (E greater than 100 MeV) in the catalog include the single 1991 solar flare bright enough to be detected as a source, the Large Magellanic Cloud, five pulsars, one probable radio galaxy detection (Cen A), and 66 high-confidence identifications of blazars (BL Lac objects, flat-spectrum radio quasars, or unidentified flat-spectrum radio sources). In addition, 27 lower-confidence potential blazar identifications are noted. Finally, the catalog contains 170 sources not yet identified firmly with known objects, although potential identifications have been suggested for a number of those. A figure is presented that gives approximate upper limits for gamma-ray sources at any point in the sky, as well as information about sources listed in the second catalog and its supplement which do not appear in this catalog.
NASA Astrophysics Data System (ADS)
Rau, U.; Bhatnagar, S.; Owen, F. N.
2016-11-01
Many deep wideband wide-field radio interferometric surveys are being designed to accurately measure intensities, spectral indices, and polarization properties of faint source populations. In this paper, we compare various wideband imaging methods to evaluate the accuracy to which intensities and spectral indices of sources close to the confusion limit can be reconstructed. We simulated a wideband single-pointing (C-array, L-Band (1-2 GHz)) and 46-pointing mosaic (D-array, C-Band (4-8 GHz)) JVLA observation using a realistic brightness distribution ranging from 1 μJy to 100 mJy and time-, frequency-, polarization-, and direction-dependent instrumental effects. The main results from these comparisons are (a) errors in the reconstructed intensities and spectral indices are larger for weaker sources even in the absence of simulated noise, (b) errors are systematically lower for joint reconstruction methods (such as Multi-Term Multi-Frequency-Synthesis (MT-MFS)) along with A-Projection for accurate primary beam correction, and (c) use of MT-MFS for image reconstruction eliminates Clean-bias (which is present otherwise). Auxiliary tests include solutions for deficiencies of data partitioning methods (e.g., the use of masks to remove clean bias and hybrid methods to remove sidelobes from sources left un-deconvolved), the effect of sources not at pixel centers, and the consequences of various other numerical approximations within software implementations. This paper also demonstrates the level of detail at which such simulations must be done in order to reflect reality, enable one to systematically identify specific reasons for every trend that is observed, and to estimate scientifically defensible imaging performance metrics and the associated computational complexity of the algorithms/analysis procedures. The National Radio Astronomy Observatory is a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc.
Classical electromagnetic fields from quantum sources in heavy-ion collisions
NASA Astrophysics Data System (ADS)
Holliday, Robert; McCarty, Ryan; Peroutka, Balthazar; Tuchin, Kirill
2017-01-01
Electromagnetic fields are generated in high energy nuclear collisions by spectator valence protons. These fields are traditionally computed by integrating the Maxwell equations with point sources. One might expect that such an approach is valid at distances much larger than the proton size and thus such a classical approach should work well for almost the entire interaction region in the case of heavy nuclei. We argue that, in fact, the contrary is true: due to the quantum diffusion of the proton wave function, the classical approximation breaks down at distances of the order of the system size. We compute the electromagnetic field created by a charged particle described initially as a Gaussian wave packet of width 1 fm and evolving in vacuum according to the Klein-Gordon equation. We completely neglect the medium effects. We show that the dynamics, magnitude and even sign of the electromagnetic field created by classical and quantum sources are different.
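The scale of the quantum-diffusion effect invoked above can be illustrated with the free nonrelativistic analogue (an assumption made purely for illustration; the paper itself evolves the packet with the Klein-Gordon equation):

```python
import math

# Free Gaussian wave packet of initial width sigma0 spreads as
#   sigma(t) = sigma0 * sqrt(1 + (hbar*t / (m*sigma0^2))^2)
# (the standard nonrelativistic Schroedinger result, used here only to show
# why a "point charge" picture fails once sigma(t) reaches the system size).
def packet_width(t, sigma0, m, hbar=1.0):
    return sigma0 * math.sqrt(1.0 + (hbar * t / (m * sigma0 ** 2)) ** 2)
```

With an initial width of order 1 fm the packet width grows linearly at late times, so the classical point-source field can no longer be trusted at distances of the order of the system size, which is the abstract's central claim.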
DOE Office of Scientific and Technical Information (OSTI.GOV)
Duchkov, A. A., E-mail: DuchkovAA@ipgg.sbras.ru; Novosibirsk State University, Novosibirsk, 630090; Stefanov, Yu. P., E-mail: stefanov@ispms.tsc.ru
2015-10-27
We have developed and illustrated an approach for geomechanical modeling of elastic wave generation (microseismic event occurrence) during incremental fracture growth. We then derived properties of effective point seismic sources (radiation patterns) approximating the obtained wavefields. These results establish a connection between geomechanical models of hydraulic fracturing and microseismic monitoring. Thus, the results of moment tensor inversion of microseismic data can be related to different geomechanical scenarios of hydraulic fracture growth. In the future, the results can be used for calibrating hydrofrac models. We carried out a series of numerical simulations and made some observations about wave generation during fracture growth. In particular, when the growing fracture hits a pre-existing crack, it generates a much stronger microseismic event than fracture growth in a homogeneous medium (the radiation pattern is very close to the theoretical dipole-type source mechanism).
Limiting Magnitude, τ, t eff, and Image Quality in DES Year 1
DOE Office of Scientific and Technical Information (OSTI.GOV)
H. Neilsen, Jr.; Bernstein, Gary; Gruendl, Robert
The Dark Energy Survey (DES) is an astronomical imaging survey being completed with the DECam imager on the Blanco telescope at CTIO. After each night of observing, the DES data management (DM) group performs an initial processing of that night's data, and uses the results to determine which exposures are of acceptable quality, and which need to be repeated. The primary measure by which we declare an image of acceptable quality is τ, a scaling of the exposure time. This is the scale factor that needs to be applied to the open shutter time to reach the same photometric signal to noise ratio for faint point sources under a set of canonical good conditions. These conditions are defined to be seeing resulting in a PSF full width at half maximum (FWHM) of 0.9" and a pre-defined sky brightness which approximates the zenith sky brightness under fully dark conditions. Point source limiting magnitude and signal to noise should therefore vary with τ in the same way they vary with exposure time. Measurements of point sources and τ in the first year of DES data confirm that they do. In the context of DES, the symbol t_eff and the expression "effective exposure time" usually refer to the scaling factor, τ, rather than the actual effective exposure time; the "effective exposure time" in this case refers to the effective duration of one second, rather than the effective duration of an exposure.
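For a background-limited point source, S/N² scales as (exposure time) × transparency² / (FWHM² × sky brightness), so a scale factor of the form below reproduces the behaviour described above. The exact DES definition may include further factors; the transparency term, the function name, and the reference sky value are assumptions:

```python
# Exposure-time scale factor relative to canonical conditions
# (FWHM = 0.9 arcsec, dark zenith sky). For a background-limited point
# source, S/N^2 ~ t * transparency^2 / (FWHM^2 * sky), hence:
def tau(fwhm_arcsec, sky, transparency=1.0, fwhm_ref=0.9, sky_ref=1.0):
    return (transparency ** 2) * (fwhm_ref / fwhm_arcsec) ** 2 * (sky_ref / sky)
```

Doubling the seeing FWHM quarters the factor: a 100 s exposure taken in 1.8" seeing carries roughly the point-source information of a 25 s exposure under canonical conditions.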
Sources and methods to reconstruct past masting patterns in European oak species.
Szabó, Péter
2012-01-01
The irregular occurrence of good seed years in forest trees is known in many parts of the world. Mast year frequency in the past few decades can be examined through field observational studies; however, masting patterns in the more distant past are equally important in gaining a better understanding of long-term forest ecology. Past masting patterns can be studied through the examination of historical written sources. These pose considerable challenges, because the data in them were usually not recorded with the aim of providing information about masting. Several studies have examined masting in the deeper past; however, authors have hardly ever considered the methodological implications of using and combining various source types. This paper provides a critical overview of the types of archival written sources that are available for the reconstruction of past masting patterns for European oak species and proposes a method to unify and evaluate different types of data. Available sources cover approximately eight centuries and fall into two basic categories: direct observations on the amount of acorns, and references to sums of money received in exchange for access to acorns. Because archival sources are highly different in origin and quality, the optimal solution for creating databases of past masting data is a three-point scale: zero mast, moderate mast, good mast. When larger amounts of data are available in a unified three-point-scale database, they can be used to test hypotheses about past masting frequencies, the driving forces of masting or regional masting patterns.
Statistical Limits to Super Resolution
NASA Astrophysics Data System (ADS)
Lucy, L. B.
1992-08-01
The limits imposed by photon statistics on the degree to which Rayleigh's resolution limit for diffraction-limited images can be surpassed by applying image restoration techniques are investigated. An approximate statistical theory is given for the number of detected photons required in the image of an unresolved pair of equal point sources in order that its information content allows in principle resolution by restoration. This theory is confirmed by numerical restoration experiments on synthetic images, and quantitative limits are presented for restoration of diffraction-limited images formed by slit and circular apertures.
Advanced Acoustic Model Technical Reference and User Manual
2010-09-01
Propagation losses include spherical spreading loss (point source), Aatm (the ANSI/ISO atmospheric absorption standard), and Agrd (ground reflection and attenuation losses, caused by the ground and the ...). [A table of altitude-dependent atmospheric values appears here in the original document.] Examples of the weather effects are described here. Consider a situation with winds blowing approximately from NW to SE (Figure 2 ...).
NASA Technical Reports Server (NTRS)
Alexander, D. M.; Stern, D.; DelMoro, A.; Lansbury, G. B.; Assef, R. J.; Aird, J.; Ajello, M.; Ballantyne, D. R.; Bauer, F. E.; Boggs, S. E.;
2013-01-01
We report on the first 10 identifications of sources serendipitously detected by the Nuclear Spectroscopic Telescope Array (NuSTAR) to provide the first sensitive census of the cosmic X-ray background source population at ≳ 10 keV. We find that these NuSTAR-detected sources are approximately 100 times fainter than those previously detected at ≳ 10 keV and have a broad range in redshift and luminosity (z = 0.020-2.923 and L(10-40 keV) ≈ 4 × 10^41 - 5 × 10^45 erg per second); the median redshift and luminosity are z ≈ 0.7 and L(10-40 keV) ≈ 3 × 10^44 erg per second, respectively. We characterize these sources on the basis of broad-band (≈ 0.5-32 keV) spectroscopy, optical spectroscopy, and broad-band ultraviolet-to-mid-infrared spectral energy distribution analyses. We find that the dominant source population is quasars with L(10-40 keV) > 10^44 erg per second, of which approximately 50% are obscured with N_H ≳ 10^22 per square centimeter. However, none of the 10 NuSTAR sources are Compton thick (N_H ≳ 10^24 per square centimeter), and we place a 90% confidence upper limit on the fraction of Compton-thick quasars (L(10-40 keV) > 10^44 erg per second) selected at ≳ 10 keV of ≲ 33% over the redshift range z = 0.5-1.1. We jointly fitted the rest-frame ≈ 10-40 keV data for all of the non-beamed sources with L(10-40 keV) > 10^43 erg per second to constrain the average strength of reflection; we find R < 1.4 for Γ = 1.8, broadly consistent with that found for local active galactic nuclei (AGNs) observed at ≳ 10 keV. 
We also constrain the host-galaxy masses and find a median stellar mass of approximately 10^11 solar masses, a factor of approximately 5 higher than the median stellar mass of nearby high-energy selected AGNs, which may be at least partially driven by the order-of-magnitude higher X-ray luminosities of the NuSTAR sources. Within the low source-statistic limitations of our study, our results suggest that the overall properties of the NuSTAR sources are broadly similar to those of nearby high-energy selected AGNs but scaled up in luminosity and mass.
NASA Astrophysics Data System (ADS)
Basu, Nandita B.; Fure, Adrian D.; Jawitz, James W.
2008-07-01
Simulations of nonpartitioning and partitioning tracer tests were used to parameterize the equilibrium stream tube model (ESM) that predicts the dissolution dynamics of dense nonaqueous phase liquids (DNAPLs) as a function of the Lagrangian properties of DNAPL source zones. Lagrangian, or stream-tube-based, approaches characterize source zones with as few as two trajectory-integrated parameters, in contrast to the potentially thousands of parameters required to describe the point-by-point variability in permeability and DNAPL in traditional Eulerian modeling approaches. The spill and subsequent dissolution of DNAPLs were simulated in two-dimensional domains having different hydrologic characteristics (variance of the log conductivity field = 0.2, 1, and 3) using the multiphase flow and transport simulator UTCHEM. Nonpartitioning and partitioning tracers were used to characterize the Lagrangian properties (travel time and trajectory-integrated DNAPL content statistics) of DNAPL source zones, which were in turn shown to be sufficient for accurate prediction of source dissolution behavior using the ESM throughout the relatively broad range of hydraulic conductivity variances tested here. The results were found to be relatively insensitive to travel time variability, suggesting that dissolution could be accurately predicted even if the travel time variance was only coarsely estimated. Estimation of the ESM parameters was also demonstrated using an approximate technique based on Eulerian data in the absence of tracer data; however, determining the minimum amount of such data required remains for future work. Finally, the stream tube model was shown to be a more unique predictor of dissolution behavior than approaches based on the ganglia-to-pool model for source zone characterization.
NASA Astrophysics Data System (ADS)
Lund, K. E.; Young, K. L.
2004-05-01
Heavy metal contamination in High Arctic systems is of growing concern. Studies have been conducted measuring long range and large point source pollutants, but little research has been done on small point sources such as municipal waste disposal sites. Many Arctic communities are coastal, and local people consume marine wildlife in which concentrations of heavy metals can accumulate. Waste disposal sites are often located in very close proximity to the coastline, and leaching of these metals could contaminate food sources on a local scale. Cadmium and lead are the metals focussed on by this study, as the Northern Contaminants Program recognizes them as metals of concern. During the summer of 2003 a study was conducted near Resolute, Nunavut, Canada, to determine the extent of cadmium and lead leaching from a local dumpsite to an adjacent wetland. The ultimate fate of these contaminants is approximately 1 km downslope in the ocean. Transects covering an area of 0.3 km2 were established downslope from the point of disposal, and water and soil samples were collected and analyzed for cadmium and lead. Only trace amounts of cadmium and lead were found in the water samples. In the soil samples, low uniform concentrations of cadmium were found that were slightly above background levels, except adjacent to the point of waste input, where higher concentrations were found. Lead soil concentrations were higher than cadmium and varied spatially with soil material and moisture. Overall, excessive amounts of cadmium and lead contamination do not appear to be entering the marine ecosystem. However, soil material and moisture should be considered when establishing waste disposal sites in the far north.
Gradients estimation from random points with volumetric tensor in turbulence
NASA Astrophysics Data System (ADS)
Watanabe, Tomoaki; Nagata, Koji
2017-12-01
We present an estimation method of fully-resolved/coarse-grained gradients from randomly distributed points in turbulence. The method is based on a linear approximation of spatial gradients expressed with the volumetric tensor, which is a 3 × 3 matrix determined by the geometric distribution of the points. The coarse-grained gradient can be considered as a low-pass filtered gradient, whose cutoff is estimated with the eigenvalues of the volumetric tensor. The present method, the volumetric tensor approximation, is tested for velocity and passive scalar gradients in an incompressible planar jet and a mixing layer. Comparison with a finite difference approximation on a Cartesian grid shows that the volumetric tensor approximation computes the coarse-grained gradients fairly well at a moderate computational cost under various conditions of spatial distributions of points. We also show that imposing the solenoidal condition improves the accuracy of the present method for solenoidal vectors, such as the velocity vector in incompressible flows, especially when the number of points is not large. The volumetric tensor approximation with 4 points poorly estimates the gradient because of the anisotropic distribution of the points. Increasing the number of points beyond 4 significantly improves the accuracy. Although the coarse-grained gradient changes with the cutoff length, the volumetric tensor approximation yields a coarse-grained gradient whose magnitude is close to the one obtained by the finite difference. We also show that the velocity gradient estimated with the present method well captures turbulence characteristics such as local flow topology, amplification of enstrophy and strain, and energy transfer across scales.
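The linear-approximation step described above amounts to a small least-squares problem built from the volumetric tensor. This sketch assumes the unfiltered (fully-resolved) case and illustrative names, not the paper's exact formulation:

```python
import numpy as np

# Gradient of a scalar f at x0 from N >= 3 scattered points: with
# displacements dx_i and value differences df_i = f_i - f0, the
# least-squares gradient g solves V g = b, where V = sum(dx dx^T) is the
# 3x3 volumetric tensor and b = sum(df * dx).
def gradient_from_points(x0, f0, pts, vals):
    dx = pts - x0                 # (N, 3) displacements
    df = vals - f0                # (N,) value differences
    V = dx.T @ dx                 # volumetric tensor
    b = dx.T @ df
    return np.linalg.solve(V, b)
```

For an exactly linear field the estimate is exact whatever the point distribution, provided V is nonsingular; the anisotropy of the distribution enters through the eigenvalues of V, consistent with the observation above that 4 points distributed anisotropically perform poorly.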
NASA Astrophysics Data System (ADS)
Yuan, Li-Yun; Xiang, Yu; Lu, Jing; Jiang, Hong-Hua
2015-12-01
Based on the transfer matrix method for a circular cylindrical shell treated with active constrained layer damping (ACLD), combined with the analytical solution of the Helmholtz equation for a point source, a multi-point multipole virtual source simulation method is proposed for the first time for solving the acoustic radiation problem of a submerged ACLD shell. In this approach, virtual point sources are assumed to be evenly distributed on the axial line of the cylindrical shell, and the sound pressure is written as the sum of a wave function series with undetermined coefficients. The approach is demonstrated to accurately recover the radiated acoustic pressure of pulsating and oscillating spheres, and is also shown to be accurate for a stiffened cylindrical shell. The number of virtual distributed point sources and the truncation of the wave function series required to approximate the radiated acoustic pressure of an ACLD cylindrical shell are then discussed. Applying this method, the radiated acoustic pressure of a submerged ACLD cylindrical shell is examined in detail for different boundary conditions, thicknesses of the viscoelastic and piezoelectric layers, feedback gains for the piezoelectric layer, and ACLD coverage. Results show that a thicker piezoelectric layer, a larger velocity gain, and larger ACLD coverage generally yield a better damping effect for the whole structure, whereas a thicker viscoelastic layer does not always yield better acoustic characteristics. Project supported by the National Natural Science Foundation of China (Grant Nos. 11162001, 11502056, and 51105083), the Natural Science Foundation of Guangxi Zhuang Autonomous Region, China (Grant No. 
2012GXNSFAA053207), the Doctor Foundation of Guangxi University of Science and Technology, China (Grant No. 12Z09), and the Development Project of the Key Laboratory of Guangxi Zhuang Autonomous Region, China (Grant No. 1404544).
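The virtual-source idea, fitting the strengths of point sources on the shell axis so that their summed free-field Helmholtz solutions match the boundary data, can be sketched in a few lines. The wavenumber, geometry, and source positions below are invented for illustration, and the ACLD shell dynamics are omitted entirely; the "measured" surface pressure is generated by a single true source so the fit can be checked against an exact answer.

```python
import numpy as np

# Least-squares fit of virtual point-source strengths on the cylinder axis
# so that their summed free-field Helmholtz Green's functions
# G = exp(ikr)/(4*pi*r) reproduce a prescribed pressure on the shell
# surface. Here the boundary data come from one true source at the origin,
# so the fit should recover the exterior field essentially exactly.

k = 2.0  # acoustic wavenumber (invented)

def green(x, s):
    r = np.linalg.norm(np.asarray(x) - np.asarray(s))
    return np.exp(1j * k * r) / (4.0 * np.pi * r)

# Virtual sources evenly spaced on the axis; sample points on a cylinder r=1.
sources = [(0.0, 0.0, z) for z in np.linspace(-1.0, 1.0, 5)]
theta = np.linspace(0.0, 2.0 * np.pi, 10, endpoint=False)
surface = [(np.cos(t), np.sin(t), z)
           for z in np.linspace(-1.0, 1.0, 5) for t in theta]

A = np.array([[green(x, s) for s in sources] for x in surface])
p = np.array([green(x, (0.0, 0.0, 0.0)) for x in surface])  # boundary data
coeffs, *_ = np.linalg.lstsq(A, p, rcond=None)

# Evaluate the fitted field at an exterior point and compare to the truth.
pt = (2.0, 0.0, 0.3)
approx = sum(c * green(pt, s) for c, s in zip(coeffs, sources))
exact = green(pt, (0.0, 0.0, 0.0))
```

Because the true source location coincides with one of the candidate virtual sources, the least-squares solution concentrates all the strength there and the exterior field is reproduced to machine precision.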
Nguyen, Hai M.; Matsumoto, Jumpei; Tran, Anh H.; Ono, Taketoshi; Nishijo, Hisao
2014-01-01
Previous studies have reported that multiple brain regions are activated during spatial navigation. However, it is unclear whether these activated brain regions are specifically associated with spatial updating or whether some regions are recruited for parallel cognitive processes. The present study aimed to localize the current sources of event-related potentials (ERPs) specifically associated with spatial updating. In the control phase of the experiment, electroencephalograms (EEGs) were recorded while subjects sequentially traced 10 blue checkpoints on the streets of a virtual town, connected in sequence by a green line, by manipulating a joystick. In the test phase of the experiment, the checkpoints and green line were not shown. Instead, a tone was presented when the subjects entered the reference points, and they were required to trace the 10 invisible reference points corresponding to the checkpoints. The vertex-positive ERPs with latencies of approximately 340 ms from the moment the subjects entered the unmarked reference points were significantly larger in the test phase than in the control phase. Current source density analysis of the ERPs by standardized low-resolution brain electromagnetic tomography (sLORETA) indicated activation of brain regions in the test phase that are associated with place and landmark recognition (entorhinal cortex/hippocampus, parahippocampal and retrosplenial cortices, fusiform, and lingual gyri), detecting self-motion (posterior cingulate and posterior insular cortices), motor planning (superior frontal gyrus, including the medial frontal cortex), and regions that process spatial attention (inferior parietal lobule). The present results provide the first identification of the current sources of ERPs associated with spatial updating, and suggest that multiple systems are active in parallel during spatial updating. PMID:24624067
NASA Astrophysics Data System (ADS)
Stoll, R., II; Christen, A.; Mahaffee, W.; Salesky, S.; Therias, A.; Caitlin, S.
2016-12-01
Pollution in the form of small particles has a strong impact on a wide variety of urban processes that play an important role in the function of urban ecosystems and ultimately human health and well-being. As a result, a substantial body of research exists on the sources, sinks, and transport characteristics of urban particulate matter. Most of the existing experimental work examining point sources employed gases (e.g., SF6) as the working medium. Furthermore, the focus of most studies has been on the dispersion of pollutants far from the source location. Here, our focus is on the turbulent dispersion of heavy particles in the near-source region of a suburban neighborhood. To this end, we conducted a series of heavy-particle releases in the Sunset neighborhood of Vancouver, Canada during June, 2017. The particles were dispersed from a near-ground point source at two different locations. The Sunset neighborhood is composed mostly of single-dwelling detached houses and has been used in numerous previous urban studies. One of the release points was just upwind of a 4-way intersection and the other in the middle of a contiguous block of houses. Each location had a significant density of trees. A minimum of four different successful release events were conducted at each site. During each release, fluorescing microparticles (mean diameter approximately 30 microns) were released from ultrasonic atomizer nozzles for a duration of approximately 20 minutes. The particles were sampled at 50 locations (1.5 m height) in the area downwind of the release, over distances from 1-15 times the mean canopy height (approximately 6 m), using rotating impaction traps. In addition to the 50 sampler locations, instantaneous wind velocities were measured with eight sonic anemometers distributed horizontally and vertically throughout the release area. 
The resulting particle plume distributions indicate a strong impact of local urban form in the near source region and a high degree of sensitivity to the local wind direction measured from the sonic anemometers. In addition to presenting the experimental data, initial comparisons to a Lagrangian particle dispersion model driven by a mass consistent diagnostic wind field will be presented.
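The Lagrangian particle dispersion model mentioned for the comparisons can be illustrated with a minimal random-displacement sketch. The mean wind speed, turbulence intensity, and release parameters below are invented, and the mass-consistent diagnostic wind field is replaced by a uniform flow:

```python
import numpy as np

# Toy Lagrangian dispersion from a near-ground point source: each particle
# advects with a uniform mean wind plus a random turbulent displacement at
# every step (random-displacement model). A real comparison would drive
# this with a mass-consistent diagnostic wind field rather than constant U.

rng = np.random.default_rng(0)

U = 2.0          # mean wind speed [m/s], invented
sigma = 0.5      # turbulent velocity scale [m/s], invented
dt = 0.1         # time step [s]
steps = 200
n = 5000         # number of released particles

x = np.zeros(n)  # downwind position [m]
y = np.zeros(n)  # crosswind position [m]
for _ in range(steps):
    x += (U + sigma * rng.standard_normal(n)) * dt
    y += sigma * rng.standard_normal(n) * dt

# The plume centroid advects at the mean wind speed, while the crosswind
# spread grows with travel time; sampling x, y on a grid would give the
# downwind concentration footprint.
```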
NASA Technical Reports Server (NTRS)
Gotthelf, E. V.; Tomsick, J. A.; Halpern, J. P.; Gelfand, J. D.; Harrison, F. A.; Boggs, S. E.; Christensen, F. E.; Craig, W. W.; Hailey, J. C.; Kaspi, V. M.;
2014-01-01
We report the discovery of a 206 ms pulsar associated with the TeV gamma-ray source HESS J1640-465 using the Nuclear Spectroscopic Telescope Array (NuSTAR) X-ray observatory. PSR J1640-4631 lies within the shell-type supernova remnant (SNR) G338.3-0.0, and coincides with an X-ray point source and putative pulsar wind nebula (PWN) previously identified in XMM-Newton and Chandra images. It is spinning down rapidly with period derivative P-dot = 9.758(44) × 10(exp -13), yielding a spin-down luminosity E-dot = 4.4 × 10(exp 36) erg s(exp -1), characteristic age tau(sub c) = P/(2 P-dot) = 3350 yr, and surface dipole magnetic field strength B(sub s) = 1.4 × 10(exp 13) G. For the measured distance of 12 kpc to G338.3-0.0, the 0.2-10 TeV luminosity of HESS J1640-465 is 6% of the pulsar's present E-dot. The Fermi source 1FHL J1640.5-4634 is marginally coincident with PSR J1640-4631, but we find no gamma-ray pulsations in a search using five years of Fermi Large Area Telescope (LAT) data. The pulsar energetics support an evolutionary PWN model for the broadband spectrum of HESS J1640-465, provided that the pulsar's braking index is n approximately equal to 2, and that its initial spin period was P(sub 0) approximately 15 ms.
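The quoted spin-down quantities follow directly from the measured period and period derivative via the standard magnetic-dipole relations; a minimal sketch, where the moment of inertia (10^45 g cm^2) and the field coefficient (3.2 × 10^19 G) are conventional assumed values rather than numbers taken from the paper:

```python
import math

# Standard pulsar spin-down relations applied to P and Pdot of
# PSR J1640-4631. I_NS and the B-field coefficient are the conventional
# textbook assumptions, not values stated in the abstract.

I_NS = 1.0e45            # neutron-star moment of inertia [g cm^2]
SEC_PER_YR = 3.156e7     # seconds per year

def spin_down(P, Pdot):
    """Return (Edot [erg/s], tau_c [yr], B_s [G]) for period P [s]."""
    Edot = 4.0 * math.pi**2 * I_NS * Pdot / P**3   # rotational energy loss
    tau_c = P / (2.0 * Pdot) / SEC_PER_YR          # characteristic age
    B_s = 3.2e19 * math.sqrt(P * Pdot)             # surface dipole field
    return Edot, tau_c, B_s

Edot, tau_c, B_s = spin_down(0.206, 9.758e-13)
```

Plugging in the measured P = 206 ms and P-dot = 9.758 × 10^-13 reproduces the abstract's E-dot ≈ 4.4 × 10^36 erg/s, tau_c ≈ 3350 yr, and B_s ≈ 1.4 × 10^13 G.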
Point Spread Function of ASTRO-H Soft X-Ray Telescope (SXT)
NASA Technical Reports Server (NTRS)
Hayashi, Takayuki; Sato, Toshiki; Kikuchi, Naomichi; Iizuka, Ryo; Maeda, Yoshitomo; Ishida, Manabu; Kurashima, Sho; Nakaniwa, Nozomi; Okajima, Takashi; Mori, Hideyuki;
2016-01-01
The ASTRO-H (Hitomi) satellite carries two Soft X-ray Telescopes (SXTs), one of which (SXT-S) is coupled to the Soft X-ray Spectrometer (SXS) while the other (SXT-I) is coupled to the Soft X-ray Imager (SXI). Although the SXTs are lightweight (approximately 42 kg per module) and have a large on-axis effective area (EA) of approximately 450 cm(exp 2) at 4.5 keV per module, their angular resolutions are moderate, approximately 1.2 arcmin in half-power diameter. The amount of contamination into the SXS field of view (FOV; 3.05 times 3.05 arcmin(exp 2)) from nearby sources was measured in ground-based calibration at the beamline of the Institute of Space and Astronautical Science. The contamination at 4.5 keV was measured with sources offset from the SXS center by one FOV width in the perpendicular and diagonal directions, that is, 3 and 4.5 arcmin off-axis, respectively. The average EA of the contamination in the four directions at the 3 and 4.5 arcmin offsets was measured to be 2% and 0.6%, respectively, of the on-axis EA of 412 cm(exp 2) for the SXS FOV. The contamination from a source offset by two FOV widths in a diagonal direction, that is, 8.6 arcmin off-axis, was measured to be 0.1% of the on-axis EA at 4.5 keV. The contamination was also measured at 1.5 keV and 8.0 keV, which indicated that the ratio of the contamination EA to the on-axis EA hardly depends on the source energy. Off-axis SXT-I images from 4.5 to 27 arcmin were acquired at intervals of 4.5 arcmin for the SXI FOV of 38 times 38 arcmin(exp 2). The image shrank as the off-axis angle increased. Above 13.5 arcmin off-axis, stray light appeared around the image center in the off-axis direction. In the on-axis image, ring-shaped stray light appeared at the edge of the SXI, approximately 18 arcmin from the image center.
Transient resonances in the inspirals of point particles into black holes.
Flanagan, Eanna E; Hinderer, Tanja
2012-08-17
We show that transient resonances occur in the two-body problem in general relativity for spinning black holes in close proximity to one another when one black hole is much more massive than the other. These resonances occur when the ratio of polar and radial orbital frequencies, which evolves slowly under the influence of gravitational radiation reaction, passes through a low-order rational number. At such points, the adiabatic approximation to the orbital evolution breaks down, and there is a brief but order-unity correction to the inspiral rate. The resonances cause a perturbation to the orbital phase of order a few tens of cycles for mass ratios ∼10(-6), make orbits more sensitive to changes in initial data (though not quite chaotic), and are genuine nonperturbative effects that are not seen at any order in a standard post-Newtonian expansion. Our results apply to an important potential source of gravitational waves: the gradual inspiral of white dwarfs, neutron stars, or black holes into much more massive black holes. The effects of resonances will increase the computational challenge of accurately modeling these sources.
Influence of Mean-Density Gradient on Small-Scale Turbulence Noise
NASA Technical Reports Server (NTRS)
Khavaran, Abbas
2000-01-01
A physics-based methodology is described to predict jet-mixing noise due to small-scale turbulence. Both self- and shear-noise source terms of Lilley's equation are modeled, and the far-field aerodynamic noise is expressed as an integral over the jet volume of the source multiplied by an appropriate Green's function that accounts for source convection and mean-flow refraction. Our primary interest here is to include transverse gradients of the mean density in the source modeling. It is shown that, in addition to the usual quadrupole-type sources, which scale with the fourth power of the acoustic wave number, additional dipole and monopole sources are present that scale with lower powers of the wave number. Various two-point correlations are modeled, and an approximate solution for the noise spectra due to multipole sources of various orders is developed. Mean flow and turbulence information is provided through a RANS k-epsilon solution. Numerical results are presented for a subsonic jet at a range of temperatures and Mach numbers. Predictions indicated a decrease in high-frequency noise with added heat, while changes in the low-frequency noise depend on jet velocity and observer angle.
Apparatus and method to compensate for refraction of radiation
Allen, Gary R.; Moskowitz, Philip E.
1990-01-01
An apparatus to compensate for refraction of radiation passing through a curved wall of an article is provided. The apparatus of a preferred embodiment is particularly advantageous for use in arc tube discharge diagnostics. The apparatus of the preferred embodiment includes means for pre-refracting radiation on a predetermined path by an amount equal and inverse to the refraction which occurs when radiation passes through a first wall of the arc tube, such that, when the radiation passes through the first wall of the arc tube and into the cavity thereof, the radiation passes through the cavity approximately on the predetermined path; means for releasably holding the article such that the radiation passes through the cavity thereof; and means for post-refracting radiation emerging from a point of the arc tube opposite its point of entry by an amount equal and inverse to the refraction which occurs when radiation emerges from the arc tube. In one embodiment the means for pre-refracting radiation includes a first half tube, comprising a longitudinally bisected tube obtained from a tube approximately identical to the arc tube's cylindrical portion, and a first cylindrical lens; the first half tube is mounted with its concave side facing the radiation source, and the first cylindrical lens is mounted between the first half tube and the arc tube. The means for post-refracting radiation includes a second half tube, comprising a longitudinally bisected tube obtained from a tube approximately identical to the arc tube's cylindrical portion, and a second cylindrical lens; the second half tube is mounted with its convex side facing the radiation source, and the second cylindrical lens is mounted between the arc tube and the second half tube. Methods to compensate for refraction of radiation passing into and out of an arc tube are also provided.
Apparatus and method to compensate for refraction of radiation
Allen, G.R.; Moskowitz, P.E.
1990-03-27
An apparatus to compensate for refraction of radiation passing through a curved wall of an article is provided. The apparatus of a preferred embodiment is particularly advantageous for use in arc tube discharge diagnostics. The apparatus of the preferred embodiment includes means for pre-refracting radiation on a predetermined path by an amount equal and inverse to the refraction which occurs when radiation passes through a first wall of the arc tube, such that, when the radiation passes through the first wall of the arc tube and into the cavity thereof, the radiation passes through the cavity approximately on the predetermined path; means for releasably holding the article such that the radiation passes through the cavity thereof; and means for post-refracting radiation emerging from a point of the arc tube opposite its point of entry by an amount equal and inverse to the refraction which occurs when radiation emerges from the arc tube. In one embodiment the means for pre-refracting radiation includes a first half tube, comprising a longitudinally bisected tube obtained from a tube approximately identical to the arc tube's cylindrical portion, and a first cylindrical lens; the first half tube is mounted with its concave side facing the radiation source, and the first cylindrical lens is mounted between the first half tube and the arc tube. The means for post-refracting radiation includes a second half tube, comprising a longitudinally bisected tube obtained from a tube approximately identical to the arc tube's cylindrical portion, and a second cylindrical lens; the second half tube is mounted with its convex side facing the radiation source, and the second cylindrical lens is mounted between the arc tube and the second half tube. Methods to compensate for refraction of radiation passing into and out of an arc tube are also provided. 4 figs.
Gaia Data Release 1. Pre-processing and source list creation
NASA Astrophysics Data System (ADS)
Fabricius, C.; Bastian, U.; Portell, J.; Castañeda, J.; Davidson, M.; Hambly, N. C.; Clotet, M.; Biermann, M.; Mora, A.; Busonero, D.; Riva, A.; Brown, A. G. A.; Smart, R.; Lammers, U.; Torra, J.; Drimmel, R.; Gracia, G.; Löffler, W.; Spagna, A.; Lindegren, L.; Klioner, S.; Andrei, A.; Bach, N.; Bramante, L.; Brüsemeister, T.; Busso, G.; Carrasco, J. M.; Gai, M.; Garralda, N.; González-Vidal, J. J.; Guerra, R.; Hauser, M.; Jordan, S.; Jordi, C.; Lenhardt, H.; Mignard, F.; Messineo, R.; Mulone, A.; Serraller, I.; Stampa, U.; Tanga, P.; van Elteren, A.; van Reeven, W.; Voss, H.; Abbas, U.; Allasia, W.; Altmann, M.; Anton, S.; Barache, C.; Becciani, U.; Berthier, J.; Bianchi, L.; Bombrun, A.; Bouquillon, S.; Bourda, G.; Bucciarelli, B.; Butkevich, A.; Buzzi, R.; Cancelliere, R.; Carlucci, T.; Charlot, P.; Collins, R.; Comoretto, G.; Cross, N.; Crosta, M.; de Felice, F.; Fienga, A.; Figueras, F.; Fraile, E.; Geyer, R.; Hernandez, J.; Hobbs, D.; Hofmann, W.; Liao, S.; Licata, E.; Martino, M.; McMillan, P. J.; Michalik, D.; Morbidelli, R.; Parsons, P.; Pecoraro, M.; Ramos-Lerate, M.; Sarasso, M.; Siddiqui, H.; Steele, I.; Steidelmüller, H.; Taris, F.; Vecchiato, A.; Abreu, A.; Anglada, E.; Boudreault, S.; Cropper, M.; Holl, B.; Cheek, N.; Crowley, C.; Fleitas, J. M.; Hutton, A.; Osinde, J.; Rowell, N.; Salguero, E.; Utrilla, E.; Blagorodnova, N.; Soffel, M.; Osorio, J.; Vicente, D.; Cambras, J.; Bernstein, H.-H.
2016-11-01
Context. The first data release from the Gaia mission contains accurate positions and magnitudes for more than a billion sources, and proper motions and parallaxes for the majority of the 2.5 million Hipparcos and Tycho-2 stars. Aims: We describe three essential elements of the initial data treatment leading to this catalogue: the image analysis, the construction of a source list, and the near real-time monitoring of the payload health. We also discuss some weak points that set limitations for the attainable precision at the present stage of the mission. Methods: Image parameters for point sources are derived from one-dimensional scans, using a maximum likelihood method, under the assumption of a line spread function constant in time, and a complete modelling of bias and background. These conditions are, however, not completely fulfilled. The Gaia source list is built starting from a large ground-based catalogue, but even so a significant number of new entries have been added, and a large number have been removed. The autonomous onboard star image detection will pick up many spurious images, especially around bright sources, and such unwanted detections must be identified. Another key step of the source list creation consists in arranging the more than 10^10 individual detections in spatially isolated groups that can be analysed individually. Results: Complete software systems have been built for the Gaia initial data treatment that manage approximately 50 million focal plane transits daily, giving transit times and fluxes for 500 million individual CCD images to the astrometric and photometric processing chains. The software also carries out a successful and detailed daily monitoring of Gaia health.
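The step of arranging detections into spatially isolated groups can be illustrated with a small union-find sketch. This is a toy stand-in, not the Gaia pipeline: the coordinates and distance threshold are invented, and the real system handles more than 10^10 detections with far more elaborate cross-matching.

```python
# Toy illustration of grouping detections into spatially isolated clusters
# with union-find: two detections are merged whenever they are closer than
# a threshold, and each connected component becomes one group.

def find(parent, i):
    while parent[i] != i:
        parent[i] = parent[parent[i]]  # path halving
        i = parent[i]
    return i

def group_detections(points, max_dist):
    """Merge points closer than max_dist; return a cluster label per point."""
    n = len(points)
    parent = list(range(n))
    for i in range(n):
        for j in range(i + 1, n):
            dx = points[i][0] - points[j][0]
            dy = points[i][1] - points[j][1]
            if dx * dx + dy * dy <= max_dist ** 2:
                parent[find(parent, i)] = find(parent, j)
    return [find(parent, k) for k in range(n)]

pts = [(0.0, 0.0), (0.1, 0.0), (5.0, 5.0), (5.1, 5.0), (9.0, 0.0)]
labels = group_detections(pts, 0.5)
```

Here the first two points form one group, the next two another, and the last point stays isolated; each group can then be analysed independently, which is what makes the full problem tractable.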
NASA Astrophysics Data System (ADS)
Gomez-Gonzalez, J. M.; Mellors, R.
2007-05-01
We investigate the kinematics of the rupture process for the September 27, 2003, Mw7.3, Altai earthquake and its associated large aftershocks. This is the largest earthquake to strike the Altai mountains in the last 50 years, and it provides important constraints on the ongoing tectonics. The fault plane solution obtained by teleseismic body waveform modeling indicates a predominantly strike-slip event (strike=130, dip=75, rake=170). The scalar moment for the main shock ranges from 0.688 to 1.196E+20 N m, with a source duration of about 20 to 42 s and an average centroid depth of 10 km. The source duration would indicate a fault length of about 130-270 km. The main shock was followed closely by two aftershocks (Mw5.7, Mw6.4) that occurred the same day; another aftershock (Mw6.7) occurred on 1 October 2003. We also modeled the second aftershock (Mw6.4) to assess geometric similarities in their respective rupture processes. This aftershock occurred spatially very close to the mainshock and has a similar fault plane solution (strike=128, dip=71, rake=154) and centroid depth (13 km). Several local conditions, such as the crustal model and fault geometry, affect the correct estimation of some source parameters. We perform a sensitivity evaluation of several parameters, including centroid depth, scalar moment, and source duration, based on point and finite source modeling. The point source approximation results are the starting parameters for the finite source exploration. We evaluate the different reported parameters to discard poorly constrained models. In addition, deformation data acquired by InSAR are also included in the analysis.
NASA Astrophysics Data System (ADS)
Iwata, T.; Asano, K.; Sekiguchi, H.
2011-12-01
We propose a prototype procedure for constructing source models for strong-motion prediction during intraslab earthquakes, based on the characterized source model (Irikura and Miyake, 2011). The key is the characterized source model, which is based on empirical scaling relationships for intraslab earthquakes and on the correspondence between the SMGA (strong motion generation area; Miyake et al., 2003) and the asperity (large-slip area). Iwata and Asano (2011) obtained empirical relationships for the rupture area (S) and the total asperity area (Sa) as functions of the seismic moment (Mo), assuming a 2/3-power dependence of S and Sa on Mo: S (km**2) = 6.57×10**(-11)×Mo**(2/3) (N m) (1) and Sa (km**2) = 1.04×10**(-11)×Mo**(2/3) (N m) (2). Iwata and Asano (2011) also pointed out that the position and size of the SMGA approximately correspond to the asperity area for several intraslab events. Based on these empirical relationships, we give the following procedure for constructing source models of intraslab earthquakes for strong motion prediction. [1] Give the seismic moment, Mo. [2] Obtain the total rupture area and the total asperity area from the empirical scaling relationships between S, Sa, and Mo given by Iwata and Asano (2011). [3] Assume square rupture areas and asperities. [4] Assume the source mechanism is the same as that of small events in the source region. [5] Prepare plural scenarios covering a variety of asperity numbers and rupture starting points. We apply this procedure by simulating strong ground motions for several observed events to confirm the methodology.
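Steps [1] and [2] of the procedure amount to evaluating the two scaling relationships; a minimal sketch, where the conversion from moment magnitude to seismic moment is the standard Hanks-Kanamori relation (an assumption of this illustration, not stated in the abstract):

```python
# Evaluate the Iwata & Asano (2011) scaling relationships, Eqs. (1)-(2),
# for a given seismic moment. The Mw-to-Mo conversion below is the
# standard Hanks-Kanamori relation.

def moment_from_mw(mw):
    """Seismic moment Mo [N m] from moment magnitude (Hanks-Kanamori)."""
    return 10.0 ** (1.5 * mw + 9.1)

def rupture_and_asperity_area(mo):
    """Total rupture area S and total asperity area Sa [km^2]."""
    s = 6.57e-11 * mo ** (2.0 / 3.0)    # Eq. (1)
    sa = 1.04e-11 * mo ** (2.0 / 3.0)   # Eq. (2)
    return s, sa

mo = moment_from_mw(7.0)
S, Sa = rupture_and_asperity_area(mo)
```

For an Mw 7.0 event this gives S of roughly 770 km^2 and Sa of roughly 120 km^2; note the asperity fraction Sa/S is constant (about 16%) because both relations share the same 2/3-power dependence on Mo.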
NASA Astrophysics Data System (ADS)
Kolesnikov, E. K.; Klyushnikov, G. N.
2018-05-01
In this paper we continue the study of the precipitation regions of high-energy charged particles that the authors have carried out since 2002. In contrast to previous papers, where a stationary source of electrons was considered, the source is assumed to move along a low circular near-Earth orbit with constant velocity. The orbit position is set by the inclination angle of the orbital plane to the equatorial plane and the longitude of the ascending node. The total number of injected electrons is determined by the source strength and the number of complete revolutions the source makes along the orbit. The precipitation regions are constructed using a computational algorithm based on solving a system of ordinary differential equations. The features of the precipitation-region structure are noted for the dipole approximation of the geomagnetic field and a symmetrical arrangement of the orbit relative to the equator. The dependence of the precipitation regions on orbital parameters such as the inclination angle, the position of the ascending node, and the kinetic energy of the injected particles is considered.
NASA Technical Reports Server (NTRS)
Gottlieb, David; Shu, Chi-Wang
1994-01-01
The paper presents a method to recover exponential accuracy at all points (including at the discontinuities themselves), from the knowledge of an approximation to the interpolation polynomial (or trigonometrical polynomial). We show that if we are given the collocation point values (or a highly accurate approximation) at the Gauss or Gauss-Lobatto points, we can reconstruct a uniform exponentially convergent approximation to the function f(x) in any sub-interval of analyticity. The proof covers the cases of Fourier, Chebyshev, Legendre, and more general Gegenbauer collocation methods.
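For a function analytic on the whole interval, interpolation at Chebyshev (Gauss-type) points already converges exponentially; a minimal numpy sketch of that baseline behavior (the Gegenbauer reprojection step that recovers the same accuracy for piecewise-analytic functions is beyond this illustration):

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Interpolate an analytic function at Chebyshev points and watch the
# maximum error fall off exponentially with the polynomial degree. This is
# the spectral accuracy available from collocation values; the paper's
# Gegenbauer reprojection recovers it even in sub-intervals of analyticity
# of a discontinuous function.

f = np.exp
x = np.linspace(-1.0, 1.0, 2001)   # dense evaluation grid

errors = []
for deg in (4, 8, 16):
    p = C.chebinterpolate(f, deg)  # interpolant at Chebyshev points
    errors.append(np.max(np.abs(C.chebval(x, p) - f(x))))
```

Doubling the degree squares the error (roughly), the signature of exponential convergence; by degree 16 the interpolant for exp(x) is accurate to machine precision.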
Structural Reliability Analysis and Optimization: Use of Approximations
NASA Technical Reports Server (NTRS)
Grandhi, Ramana V.; Wang, Liping
1999-01-01
This report is intended to demonstrate function approximation concepts and their applicability in reliability analysis and design. In particular, approximations in the calculation of the safety index, the failure probability, and structural optimization (modification of design variables) are developed. With this scope in mind, extensive details of probability theory are avoided; definitions relevant to the stated objectives have been taken from standard textbooks. The idea of function approximation is to minimize the repetitive use of computationally intensive calculations by replacing them with simpler closed-form equations, which may be nonlinear. Typically, the approximations provide good accuracy around the points where they are constructed, and they need to be periodically updated to extend their utility. Several approximations arise in calculating the failure probability associated with a limit state function. The first, and most commonly discussed, is how the limit state is approximated at the design point. Most often this is a first-order Taylor series expansion, known as the First Order Reliability Method (FORM), or a second-order Taylor series expansion (paraboloid), known as the Second Order Reliability Method (SORM). From the computational procedure point of view, this step comes after the design point identification; however, the order of approximation for the probability of failure calculation is discussed first, and it is denoted by either FORM or SORM. The other approximation of interest is how the design point, or most probable failure point (MPP), is identified; for iteratively finding this point, the limit state is again approximated. 
The accuracy and efficiency of the approximations make the search process practical for analysis-intensive approaches such as finite element methods; the crux of this research is therefore to develop excellent approximations for MPP identification, as well as different approximations, including higher-order reliability methods (HORM), for representing the failure surface. This report is divided into several parts to emphasize different segments of structural reliability analysis and design; broadly, it consists of mathematical foundations, methods, and applications. Chapter 1 discusses the fundamental definitions of probability theory, which are mostly available in standard textbooks; probability density function descriptions relevant to this work are addressed. In Chapter 2, the concept and utility of function approximation are discussed for general application in engineering analysis, and various forms of function representation and the latest developments in nonlinear adaptive approximation are presented with comparison studies. Research work accomplished in reliability analysis is presented in Chapter 3: the definitions of the safety index and the most probable point of failure are introduced, and efficient ways of computing the safety index with fewer iterations are emphasized. In Chapter 4, the prediction of the probability of failure is presented using first-order, second-order, and higher-order methods. System reliability methods are discussed in Chapter 5. Chapter 6 presents optimization techniques for the modification and redistribution of structural sizes to improve structural reliability. The report also contains several appendices on probability parameters.
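The safety index and failure probability described above can be sketched for the simplest case, a linear limit state g = R - S with independent normal resistance R and load S, where FORM is exact and no iteration is needed; the means and standard deviations below are invented for illustration:

```python
import math

# Minimal FORM computation for the linear limit state g = R - S with
# independent normal R (resistance) and S (load): the safety index beta is
# the distance from the origin to the limit state in standard normal space,
# and the failure probability is Pf = Phi(-beta). For nonlinear limit
# states, beta is found iteratively at the MPP instead.

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def form_linear(mu_r, sigma_r, mu_s, sigma_s):
    """Return (safety index beta, failure probability Pf) for g = R - S."""
    beta = (mu_r - mu_s) / math.hypot(sigma_r, sigma_s)
    return beta, norm_cdf(-beta)

beta, pf = form_linear(mu_r=200.0, sigma_r=20.0, mu_s=150.0, sigma_s=10.0)
```

With these numbers beta is about 2.24 and Pf about 1.3%; SORM would add a curvature correction, which vanishes here because the limit state is exactly linear.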
NASA Technical Reports Server (NTRS)
Acero, F.; Ackermann, M.; Ajello, M.; Albert, A.; Baldini, L.; Ballet, J.; Barbiellini, G.; Bastieri, D.; Bellazzini, R.; Brandt, T. J.;
2016-01-01
Most of the celestial gamma rays detected by the Large Area Telescope (LAT) on board the Fermi Gamma-ray Space Telescope originate from the interstellar medium when energetic cosmic rays interact with interstellar nucleons and photons. Conventional point-source and extended-source studies rely on the modeling of this diffuse emission for accurate characterization. Here, we describe the development of the Galactic Interstellar Emission Model (GIEM),which is the standard adopted by the LAT Collaboration and is publicly available. This model is based on a linear combination of maps for interstellar gas column density in Galactocentric annuli and for the inverse-Compton emission produced in the Galaxy. In the GIEM, we also include large-scale structures like Loop I and the Fermi bubbles. The measured gas emissivity spectra confirm that the cosmic-ray proton density decreases with Galactocentric distance beyond 5 kpc from the Galactic Center. The measurements also suggest a softening of the proton spectrum with Galactocentric distance. We observe that the Fermi bubbles have boundaries with a shape similar to a catenary at latitudes below 20deg and we observe an enhanced emission toward their base extending in the north and south Galactic directions and located within approximately 4deg of the Galactic Center.
Evaluation of approximate methods for the prediction of noise shielding by airframe components
NASA Technical Reports Server (NTRS)
Ahtye, W. F.; Mcculley, G.
1980-01-01
An evaluation of some approximate methods for the prediction of shielding of monochromatic sound and broadband noise by aircraft components is reported. Anechoic-chamber measurements of the shielding of a point source by various simple geometric shapes were made and the measured values compared with those calculated by the superposition of asymptotic closed-form solutions for the shielding by a semi-infinite plane barrier. The shields used in the measurements consisted of rectangular plates, a circular cylinder, and a rectangular plate attached to the cylinder to simulate a wing-body combination. The normalized frequency, defined as a product of the acoustic wave number and either the plate width or cylinder diameter, ranged from 4.6 to 114. Microphone traverses in front of the rectangular plates and cylinders generally showed a series of diffraction bands that matched those predicted by the approximate methods, except for differences in the magnitudes of the attenuation minima which can be attributed to experimental inaccuracies. The shielding of wing-body combinations was predicted by modifications of the approximations used for rectangular and cylindrical shielding. Although the approximations failed to predict diffraction patterns in certain regions, they did predict the average level of wing-body shielding with an average deviation of less than 3 dB.
Herdic, Peter C; Houston, Brian H; Marcus, Martin H; Williams, Earl G; Baz, Amr M
2005-06-01
The surface and interior response of a Cessna Citation fuselage section under three different forcing functions (10-1000 Hz) is evaluated through spatially dense scanning measurements. Spatial Fourier analysis reveals that a point force applied to the stiffener grid provides a rich wavenumber response over a broad frequency range. The surface motion data show global structural modes (approximately < 150 Hz), superposition of global and local intrapanel responses (approximately 150-450 Hz), and intrapanel motion alone (approximately > 450 Hz). Some evidence of Bloch wave motion is observed, revealing classical stop/pass bands associated with stiffener periodicity. The interior response (approximately < 150 Hz) is dominated by global structural modes that force the interior cavity. Local intrapanel responses (approximately > 150 Hz) of the fuselage provide a broadband volume velocity source that strongly excites a high density of interior modes. Mode coupling between the structural response and the interior modes appears to be negligible due to a lack of frequency proximity and mismatches in the spatial distribution. A high degree-of-freedom finite element model of the fuselage section was developed as a predictive tool. The calculated response is in good agreement with the experimental result, yielding a general model development methodology for accurate prediction of structures with moderate to high complexity.
NASA Technical Reports Server (NTRS)
Manos, P.; Turner, L. R.
1972-01-01
Approximations which can be evaluated with precision using floating-point arithmetic are presented. The particular set of approximations thus far developed is for the function TAN and the functions of USASI FORTRAN, excepting SQRT and EXPONENTIATION. These approximations are, furthermore, specialized to particular forms which are especially suited to a computer with a small memory, in that all of the approximations can share one general-purpose subroutine for the evaluation of a polynomial in the square of the working argument.
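The scheme described, in which every function reduces to a polynomial in the square of the working argument evaluated by one shared subroutine, can be sketched as follows. The coefficients here are ordinary Maclaurin-series terms for tan, not the specialized coefficients from the report.

```python
import math

def horner(z, coeffs):
    """Shared general-purpose routine: evaluate a polynomial in z by
    Horner's rule, with coefficients given in ascending order."""
    acc = 0.0
    for c in reversed(coeffs):
        acc = acc * z + c
    return acc

def tan_approx(x):
    """tan(x) for small |x| as x * P(x^2), reusing the one polynomial
    routine on z = x*x (coefficients are illustrative series terms)."""
    z = x * x
    return x * horner(z, [1.0, 1.0 / 3.0, 2.0 / 15.0, 17.0 / 315.0])

print(abs(tan_approx(0.2) - math.tan(0.2)) < 1e-7)
```

Odd functions like TAN and SIN become x·P(x²) and even ones become P(x²), so a small machine needs only the one Horner routine plus short coefficient tables.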
Approximation methods for combined thermal/structural design
NASA Technical Reports Server (NTRS)
Haftka, R. T.; Shore, C. P.
1979-01-01
Two approximation concepts for combined thermal/structural design are evaluated. The first concept is an approximate thermal analysis based on the first derivatives of structural temperatures with respect to design variables. Two commonly used first-order Taylor series expansions are examined. The direct and reciprocal expansions are special members of a general family of approximations, and for some conditions other members of that family of approximations are more accurate. Several examples are used to compare the accuracy of the different expansions. The second approximation concept is the use of critical time points for combined thermal and stress analyses of structures with transient loading conditions. Significant time savings are realized by identifying critical time points and performing the stress analysis for those points only. The design of an insulated panel which is exposed to transient heating conditions is discussed.
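The direct and reciprocal first-order expansions compared above can be illustrated with a generic example (not the paper's): for g(x) = 1/x, which is how stress depends on a member's cross-sectional area, the reciprocal expansion is exact while the direct one is not.

```python
def direct_approx(g0, dg, x0, x):
    # First-order Taylor expansion in the design variable x.
    return g0 + dg * (x - x0)

def reciprocal_approx(g0, dg, x0, x):
    # First-order expansion in 1/x: g ~ g0 + dg * x0 * (1 - x0/x).
    return g0 + dg * x0 * (1.0 - x0 / x)

# g(x) = 1/x expanded about x0 = 2, evaluated at x = 4 (true value 0.25).
x0 = 2.0
g0, dg = 1.0 / x0, -1.0 / x0**2      # g(x0) and g'(x0)
x = 4.0
print(direct_approx(g0, dg, x0, x), reciprocal_approx(g0, dg, x0, x))
```

Here the direct expansion gives 0.0 while the reciprocal form reproduces the true value 0.25, which is why neither member of the family is uniformly best.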
Zeinali-Rafsanjani, B; Mosleh-Shirazi, M A; Faghihi, R; Karbasi, S; Mosalaei, A
2015-01-01
To accurately recompute dose distributions in chest-wall radiotherapy with 120 kVp kilovoltage X-rays, an MCNP4C Monte Carlo model is presented using a fast method that obviates the need to fully model the tube components. To validate the model, half-value layer (HVL), percentage depth doses (PDDs) and beam profiles were measured. Dose measurements were performed for a more complex situation using thermoluminescence dosimeters (TLDs) placed within a Rando phantom. The measured and computed first and second HVLs were 3.8, 10.3 mm Al and 3.8, 10.6 mm Al, respectively. The differences between measured and calculated PDDs and beam profiles in water were within 2 mm/2% for all data points. In the Rando phantom, differences for the majority of data points were within 2%. The proposed model offered an approximately 9500-fold reduced run time compared to the conventional full simulation. The acceptable agreement, based on international criteria, between the simulations and the measurements validates the accuracy of the model for its use in treatment planning and radiobiological modeling studies of superficial therapies, including chest-wall irradiation using a kilovoltage beam.
The Sedov Blast Wave as a Radial Piston Verification Test
Pederson, Clark; Brown, Bart; Morgan, Nathaniel
2016-06-22
The Sedov blast wave is of great utility as a verification problem for hydrodynamic methods. The typical implementation uses an energized cell of finite dimensions to represent the energy point source. We avoid this approximation by directly finding the effects of the energy source as a boundary condition (BC). Furthermore, the proposed method transforms the Sedov problem into an outward moving radial piston problem with a time-varying velocity. A portion of the mesh adjacent to the origin is removed and the boundaries of this hole are forced with the velocities from the Sedov solution. This verification test is implemented on two types of meshes, and convergence is shown. Our results from the typical initial condition (IC) method and the new BC method are compared.
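A minimal sketch of the boundary-condition idea, assuming the standard Sedov-Taylor similarity scaling (not taken from this paper): the shock radius grows as R(t) = ξ(Et²/ρ)^(1/5), so the velocity imposed on the hole boundary is dR/dt = (2/5)R/t. The constant ξ is of order unity; the value 1.03 used below is a literature figure for γ = 1.4.

```python
def shock_radius(t, E, rho, xi=1.03):
    """Sedov-Taylor blast radius, R(t) = xi * (E * t**2 / rho)**(1/5)."""
    return xi * (E * t**2 / rho) ** 0.2

def piston_velocity(t, E, rho, xi=1.03):
    """Time-varying radial piston (boundary) velocity, dR/dt = (2/5) R / t."""
    return 0.4 * shock_radius(t, E, rho, xi) / t

# Dimensionless example: unit energy and ambient density, evaluated at t = 1.
print(shock_radius(1.0, 1.0, 1.0), piston_velocity(1.0, 1.0, 1.0))
```

Sampling `piston_velocity` at each time step would supply the forcing applied at the hole boundary in place of an energized cell.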
The Osher scheme for non-equilibrium reacting flows
NASA Technical Reports Server (NTRS)
Suresh, Ambady; Liou, Meng-Sing
1992-01-01
An extension of the Osher upwind scheme to nonequilibrium reacting flows is presented. Owing to the presence of source terms, the Riemann problem is no longer self-similar and therefore its approximate solution becomes tedious. With simplicity in mind, a linearized approach which avoids an iterative solution is used to define the intermediate states and sonic points. The source terms are treated explicitly. Numerical computations are presented to demonstrate the feasibility, efficiency and accuracy of the proposed method. The test problems include a ZND (Zeldovich-Neumann-Doring) detonation problem for which spurious numerical solutions which propagate at mesh speed have been observed on coarse grids. With the present method, a change of limiter causes the solution to change from the physically correct CJ detonation solution to the spurious weak detonation solution.
NASA Technical Reports Server (NTRS)
Rice, R. C.; Reynolds, J. L.
1976-01-01
Fatigue, fatigue-crack-propagation, and fracture data compiled and stored on magnetic tape are documented. Data for 202 and 7075 aluminum alloys, Ti-6Al-4V titanium alloy, and 300M steel are included in the compilation. Approximately 4,500 fatigue, 6,500 fatigue-crack-propagation, and 1,500 fracture data points are stored on magnetic tape. Descriptions of the data, an index to the data on the magnetic tape, information on data storage format on the tape, a listing of all data source references, and abstracts of other pertinent test information from each data source reference are included.
Multi-channel Analysis of Passive Surface Waves (MAPS)
NASA Astrophysics Data System (ADS)
Xia, J.; Cheng, F. Mr; Xu, Z.; Wang, L.; Shen, C.; Liu, R.; Pan, Y.; Mi, B.; Hu, Y.
2017-12-01
Urbanization is an inevitable trend in the modernization of human society. At the end of 2013 the Chinese Central Government launched a national urbanization plan, "Three 100 Million People", which aggressively and steadily pushes forward urbanization. Based on the plan, by 2020 approximately 100 million people from rural areas will permanently settle in towns, dwelling conditions of about 100 million people in towns and villages will be improved, and about 100 million people in central and western China will permanently settle in towns. China's urbanization process will run at the highest speed in the country's urbanization history. Environmentally friendly, non-destructive and non-invasive geophysical assessment methods have played an important role in this urbanization process. Because of human noise and the electromagnetic fields produced by industrial activity, geophysical methods already used in urban environments (gravity, magnetic, electrical, and seismic) face great challenges. Human activity, however, provides an effective source for passive seismic methods. Claerbout pointed out that the wavefield received at one point, with excitation at another point, can be reconstructed by cross-correlating the noise records at the two surface points. Based on this idea (cross-correlation of two noise records) and the virtual source method, we propose Multi-channel Analysis of Passive Surface Waves (MAPS). MAPS mainly uses traffic noise recorded with a linear receiver array. Because multichannel analysis of surface waves can produce a high-resolution shear (S) wave velocity model of the shallow subsurface, MAPS combines acquisition and processing of active-source and passive-source data in the same flow, without requiring them to be distinguished. MAPS is also capable of real-time quality control of noise recordings, which is important for near-surface applications in urban environments.
Numerical and real-world examples demonstrate that MAPS can accurately and rapidly image high-frequency surface-wave energy, and some examples show that imaging quality comparable to that achieved with active sources can be obtained from only a few minutes of noise. Using cultural noise in towns, MAPS can image S-wave velocity structure from the ground surface to depths of hundreds of meters.
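The cross-correlation idea credited to Claerbout above can be demonstrated in a few lines: the correlation of two noise records that share a propagating wavefield peaks at the interstation travel time, turning one receiver into a virtual source. The array size and delay below are arbitrary synthetic choices; a real workflow stacks many noise windows.

```python
import numpy as np

# Synthetic demonstration: the same random wavefield arrives at receiver B
# 25 samples after receiver A; cross-correlation recovers that delay.
rng = np.random.default_rng(1)
n, lag = 4096, 25
source = rng.standard_normal(n)
rec_a = source
rec_b = np.roll(source, lag)              # delayed copy of the wavefield

xcorr = np.correlate(rec_b, rec_a, mode="full")
est_lag = int(np.argmax(xcorr)) - (n - 1)  # convert argmax index to a lag
print(est_lag)
```

Dividing the interstation distance by the recovered travel time gives the phase velocity that MAPS-style dispersion imaging builds on.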
The derived population of luminous supersoft X-ray sources
NASA Technical Reports Server (NTRS)
Di Stefano, R.; Rappaport, S.
1994-01-01
The existence of a new class of astrophysical object, luminous supersoft X-ray sources, has been established through ROSAT satellite observations and analysis during the past approximately 3 yr. Because most of the radiation emitted by supersoft sources spans a range of wavelengths readily absorbed by interstellar gas, a substantial fraction of these sources may not be detectable with present satellite instrumentation. It is therefore important to derive a reliable estimate of the underlying population, based on the approximately 30 sources that have been observed to date. The work reported here combines the observational results with a theoretical analysis, to obtain an estimate of the total number of sources likely to be present in M31, the Magellanic Clouds, and in our own Galaxy. We find that in M31, where approximately 15 supersoft sources have been identified and roughly an equal number of sources are being investigated as supersoft candidates, there are likely to be approximately 2500 active supersoft sources at the present time. In our own Galaxy, where about four supersoft sources have been detected in the Galactic plane, there are likely to be approximately 1000 active sources. Similarly, with about six and about four (nonforeground) sources observed in the Large (LMC) and Small Magellanic Clouds (SMC), respectively, there should be approximately 30 supersoft sources in the LMC, and approximately 20 in the SMC. The likely uncertainties in the numbers quoted above, and the properties of observable sources relative to those of the total underlying population, are also derived in detail. These results can be scaled to estimate the numbers of supersoft sources likely to be present in other galaxies. The results reported here on the underlying population of supersoft X-ray sources are in good agreement with the results of a prior population synthesis study of the white dwarf accretor model for luminous supersoft X-ray sources. 
It should be emphasized, however, that the questions asked in these two investigations are distinct, that the approaches taken to answer these questions are largely independent and that the findings of these two studies could in principle have been quite different.
Use of point-of-sale data to assess food and nutrient quality in remote stores.
Brimblecombe, Julie; Liddle, Robyn; O'Dea, Kerin
2013-07-01
To examine the feasibility of using point-of-sale data to assess dietary quality of food sales in remote stores. A multi-site cross-sectional assessment of food and nutrient composition of food sales. Point-of-sale data were linked to Australian Food and Nutrient Data and compared across study sites and with nutrient requirements. Remote Aboriginal Australia. Six stores. Point-of-sale data were readily available and provided a low-cost, efficient and objective assessment of food and nutrient sales. Similar patterns in macronutrient distribution, food expenditure and key food sources of nutrients were observed across stores. In all stores, beverages, cereal and cereal products, and meat and meat products comprised approximately half of food sales (range 49–57 %). Fruit and vegetable sales comprised 10.4 (SD 1.9) % on average. Carbohydrate contributed 54.4 (SD 3.0) % to energy; protein 13.5 (SD 1.1) %; total sugars 28.9 (SD 4.3) %; and the contribution of total saturated fat to energy ranged from 11.0 to 14.4% across stores. Mg, Ca, K and fibre were limiting nutrients, and Na was four to five times higher than the midpoint of the average intake range. Relatively few foods were major sources of nutrients. Point-of-sale data enabled an assessment of dietary quality within stores and across stores with no burden on communities and at no cost, other than time required for analysis and reporting. Similar food spending patterns and nutrient profiles were observed across the six stores. This suggests potential in using point-of-sale data to monitor and evaluate dietary quality in remote Australian communities.
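The macronutrient-to-energy roll-up behind figures like "carbohydrate contributed 54.4% to energy" can be sketched with generic Atwater-style energy factors. The gram totals below are invented for illustration; they are not the study's point-of-sale data.

```python
# Generic energy factors in kJ per gram (approximate Atwater values).
ENERGY_KJ_PER_G = {"carbohydrate": 17.0, "protein": 17.0, "fat": 37.0}

def percent_energy(grams_sold):
    """Convert macronutrient sales (grams) into % contributions to energy."""
    kj = {k: grams_sold[k] * ENERGY_KJ_PER_G[k] for k in grams_sold}
    total = sum(kj.values())
    return {k: round(100.0 * v / total, 1) for k, v in kj.items()}

# Hypothetical store-level sales for one period.
print(percent_energy({"carbohydrate": 540.0, "protein": 135.0, "fat": 110.0}))
```

Linking each point-of-sale item to a food-composition table and summing such contributions store-wide is the kind of assessment the study performs.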
Tympanic thermometer performance validation by use of a body-temperature fixed point blackbody
NASA Astrophysics Data System (ADS)
Machin, Graham; Simpson, Robert
2003-04-01
The use of infrared tympanic thermometers within the medical community (and more generically in the public domain) has recently grown rapidly, displacing more traditional forms of thermometry such as mercury-in-glass. Besides the obvious health concerns over mercury, the increase in the use of tympanic thermometers is related to a number of factors such as their speed and relatively non-invasive method of operation. The calibration and testing of such devices is covered by a number of international standards (ASTM, prEN, JIS) which specify the design of calibration blackbodies. However, these calibration sources are impractical for day-to-day in-situ validation purposes. In addition, several studies (e.g., Modell et al., Craig et al.) have thrown doubt on the accuracy of tympanic thermometers in clinical use. With this in mind the NPL is developing a practical, portable and robust primary reference fixed-point source for tympanic thermometer validation. The aim of this simple device is to give the clinician a rapid way of validating the performance of their tympanic thermometer, enabling the detection of malfunctioning thermometers and giving confidence in the measurement to the clinician (and patient!) at point of use. The reference fixed point operates at a temperature of 36.3 °C (97.3 °F) with a repeatability of approximately ±20 mK. The fixed-point design has taken into consideration the optical characteristics of tympanic thermometers, enabling wide-angle field-of-view devices to be successfully tested. The overall uncertainty of the device is estimated to be less than 0.1 °C. The paper gives a description of the fixed point, its design and construction as well as the results to date of validation tests.
Source-water susceptibility assessment in Texas—Approach and methodology
Ulery, Randy L.; Meyer, John E.; Andren, Robert W.; Newson, Jeremy K.
2011-01-01
Public water systems provide potable water for the public's use. The Safe Drinking Water Act amendments of 1996 required States to prepare a source-water susceptibility assessment (SWSA) for each public water system (PWS). States were required to determine the source of water for each PWS, the origin of any contaminant of concern (COC) monitored or to be monitored, and the susceptibility of the public water system to COC exposure, to protect public water supplies from contamination. In Texas, the Texas Commission on Environmental Quality (TCEQ) was responsible for preparing SWSAs for the more than 6,000 public water systems, representing more than 18,000 surface-water intakes or groundwater wells. The U.S. Geological Survey (USGS) worked in cooperation with TCEQ to develop the Source Water Assessment Program (SWAP) approach and methodology. Texas' SWAP meets all requirements of the Safe Drinking Water Act and ultimately provides the TCEQ with a comprehensive tool for protection of public water systems from contamination by up to 247 individual COCs. TCEQ staff identified both the list of contaminants to be assessed and contaminant threshold values (THR) to be applied. COCs were chosen because they were regulated contaminants, were expected to become regulated contaminants in the near future, or were unregulated but thought to represent long-term health concerns. THRs were based on maximum contaminant levels from U.S. Environmental Protection Agency (EPA)'s National Primary Drinking Water Regulations. For reporting purposes, COCs were grouped into seven contaminant groups: inorganic compounds, volatile organic compounds, synthetic organic compounds, radiochemicals, disinfection byproducts, microbial organisms, and physical properties. 
Expanding on the TCEQ's definition of susceptibility, subject-matter expert working groups formulated the SWSA approach based on assumptions that natural processes and human activities contribute COCs in quantities that vary in space and time; that increased levels of COC-producing activities within a source area may increase susceptibility to COC exposure; and that natural and manmade conditions within the source area may increase, decrease, or have no observable effect on susceptibility to COC exposure. Incorporating these assumptions, eight SWSA components were defined: identification, delineation, intrinsic susceptibility, point-source susceptibility, nonpoint-source susceptibility, contaminant occurrence, area-of-primary influence, and summary components. Spatial datasets were prepared to represent approximately 170 attributes or indicators used in the assessment process. These primarily were static datasets (approximately 46 gigabytes (GB) in size). Selected datasets such as PWS surface-water-intake or groundwater-well locations and potential source of contamination (PSOC) locations were updated weekly. Completed assessments were archived, and that database is approximately 10 GB in size. SWSA components currently (2011) are implemented in the Source Water Assessment Program-Decision Support System (SWAP-DSS) computer software, specifically developed to produce SWSAs. On execution of the software, the components work to identify the source of water for the well or intake, assess intrinsic susceptibility of the water-supply source, assess susceptibility to contamination with COCs from point and nonpoint sources, identify any previous detections of COCs from existing water-quality databases, and summarize the results.
Each water-supply source's susceptibility is assessed, source results are weighted by source capacity (when a PWS has multiple sources), and results are combined into a single SWSA for the PWS. SWSA reports are generated using the software; during 2003, more than 6,000 reports were provided to PWS operators and the public. The ability to produce detailed or summary reports for individual sources, and detailed or summary reports for a PWS, by COC or COC group, was a unique capability of SWAP-DSS. In 2004, the TCEQ began a rotating schedule for SWSAs wherein one-third of PWSs statewide would be assessed annually, or sooner if protection-program activities deemed it necessary, and that schedule has continued to the present. Cooperative efforts by the TCEQ and the USGS for SWAP software maintenance and enhancements ended in 2011 with the TCEQ assuming responsibility for all tasks.
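The capacity-weighted combination step described above (per-source results weighted by source capacity when a PWS has multiple sources) reduces to a weighted average. The scores and capacities below are illustrative placeholders, not values from the assessment.

```python
def combined_susceptibility(sources):
    """Capacity-weighted average susceptibility for a public water system.

    sources: list of (capacity, susceptibility_score) pairs, one per
    well or intake; higher scores mean greater susceptibility.
    """
    total_capacity = sum(cap for cap, _ in sources)
    return sum(cap * score for cap, score in sources) / total_capacity

# Hypothetical PWS with two wells of unequal capacity.
wells = [(100.0, 0.2), (300.0, 0.6)]
print(combined_susceptibility(wells))
```

The larger well dominates, so the combined score (0.5) sits closer to its per-source result than to the small well's.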
Nanoseismic sources made in the laboratory: source kinematics and time history
NASA Astrophysics Data System (ADS)
McLaskey, G.; Glaser, S. D.
2009-12-01
When studying seismic signals in the field, the analysis of source mechanisms is always obscured by propagation effects such as scattering and reflections due to the inhomogeneous nature of the earth. To get around this complication, we measure seismic waves (wavelengths from 2 mm to 300 mm) in laboratory-sized specimens of extremely homogeneous isotropic materials. We are able to study the focal mechanism and time history of nanoseismic sources produced by fracture, impact, and sliding friction, roughly six orders of magnitude smaller and more rapid than typical earthquakes. Using very sensitive broadband conical piezoelectric sensors, we are able to measure surface normal displacements down to a few pm (10^-12 m) in amplitude. Thick plate specimens of homogeneous materials such as glass, steel, gypsum, and polymethylmethacrylate (PMMA) are used as propagation media in the experiments. Recorded signals are in excellent agreement with theoretically determined Green’s functions obtained from a generalized ray theory code for an infinite plate geometry. Extremely precise estimates of the source time history are made via full waveform inversion from the displacement time histories recorded by an array of at least ten sensors. Each channel is sampled at a rate of 5 MHz. The system is absolutely calibrated using the normal impact of a tiny (~1 mm) ball on the surface of the specimen. The ball impact induces a force pulse into the specimen a few ms in duration. The amplitude, duration, and shape of the force pulse were found to be well approximated by Hertzian-derived impact theory, while the total change in momentum of the ball is independently measured from its incoming and rebound velocities. Another calibration source, the sudden fracture of a thin-walled glass capillary tube laid on its side and loaded against the surface of the specimen produces a similar point force, this time with a source function very nearly a step in time with rise time of less than 500 ns. 
The force at which the capillary breaks is recorded using a force sensor and is used for absolute calibration. A third set of nanoseismic sources were generated from frictional sliding. In this case, the location and spatial extent of the source along the cm-scale fault is not precisely known and must be determined. These sources are much more representative of earthquakes and the determination of their focal mechanisms is the subject of ongoing research. Sources of this type have been observed on a great range of time scales with rise times ranging from 500 ns to hundreds of ms. This study tests the generality of the seismic source representation theory. The unconventional scale, geometry, and experimental arrangement facilitates the discussion of issues such as the point source approximation, the origin of uncertainty in moment tensor inversions, the applicability of magnitude calculations for non-double-couple sources, and the relationship between momentum and seismic moment.
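The Hertzian-impact estimate used for the ball-drop calibration can be sketched from the textbook sphere-on-half-space relations: contact stiffness k = (4/3)E*√R, peak indentation from an energy balance, and contact time ≈ 2.94 δ_max/v. The material numbers below are rough guesses for a ~1 mm steel ball on glass, not the paper's measured values.

```python
import math

def hertz_impact(m, R, v, E_star):
    """Peak force and contact duration for elastic normal impact of a sphere
    (mass m, radius R, speed v) on a half-space with combined modulus E_star.
    Uses F = k * delta**1.5 with k = (4/3) * E_star * sqrt(R)."""
    k = (4.0 / 3.0) * E_star * math.sqrt(R)
    delta_max = (5.0 * m * v**2 / (4.0 * k)) ** 0.4   # energy balance
    f_max = k * delta_max ** 1.5
    duration = 2.94 * delta_max / v
    return f_max, duration

m = 4.1e-6        # kg, roughly a 1 mm steel ball (assumed)
R = 0.5e-3        # m, ball radius
v = 0.5           # m/s, impact speed (assumed)
E_star = 5.5e10   # Pa, rough combined modulus for steel on glass (assumed)
f_max, tau = hertz_impact(m, R, v, E_star)
print(f"{f_max:.2f} N, {tau * 1e6:.1f} us")
```

For these assumed parameters the pulse is of order a newton and a few microseconds long, illustrating why such impacts serve as broadband calibration sources.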
Code of Federal Regulations, 2014 CFR
2014-07-01
... point at which it is crossed by the existing BPA electrical transmission line; thence southeasterly along the BPA transmission line approximately 8 miles to point of the crossing of the south fork of the... approximately 6 miles to the point the Creek is crossed by the existing BPA electrical transmission line; thence...
Wilkison, Donald H.; Armstrong, Daniel J.; Hampton, Sarah A.
2009-01-01
Water-quality and ecological character and trends in the metropolitan Blue River Basin were evaluated from 1998 through 2007 to provide spatial and temporal resolution to factors that affect the quality of water and biota in the basin and provide a basis for assessing the efficacy of long-term combined sewer control and basin management plans. Assessments included measurements of stream discharge, pH, dissolved oxygen, specific conductance, turbidity, nutrients (dissolved and total nitrogen and phosphorus species), fecal-indicator bacteria (Escherichia coli and fecal coliform), suspended sediment, organic wastewater and pharmaceutical compounds, and sources of these compounds as well as the quality of stream biota in the basin. Because of the nature and myriad of factors that affect basin water quality, multiple strategies are needed to decrease constituent loads in streams. Strategies designed to decrease or eliminate combined sewer overflows (CSOs) would substantially reduce the annual loads of nutrients and fecal-indicator bacteria in Brush Creek, but have little effect on Blue River loadings. Nonpoint source reductions to Brush Creek could potentially have an equivalent, if not greater, effect on water quality than would CSO reductions. Nonpoint source reductions could also substantially decrease annual nutrient and bacteria loadings to the Blue River and Indian Creek. Methods designed to decrease nutrient loads originating from Blue River and Indian Creek wastewater treatment plants (WWTPs) could substantially reduce the overall nutrient load in these streams. For the main stem of the Blue River and Indian Creek, primary sources of nutrients were nonpoint source runoff and WWTPs discharges; however, the relative contribution of each source varied depending on how wet or dry the year was and the number of upstream WWTPs. On Brush Creek, approximately two-thirds of the nutrients originated from nonpoint sources and the remainder from CSOs. 
Nutrient assimilation processes, which reduced total nitrogen loads by approximately 13 percent and total phosphorus loads by double that amount in a 20-kilometer reach of the Blue River during three synoptic base-flow sampling events between August through September 2004 and September 2005, likely are limited to selected periods during any given year and may not substantially reduce annual nutrient loads. Bacteria densities typically increased with increasing urbanization, and bacteria loadings to the Blue River and Indian Creek were almost entirely the result of nonpoint source runoff. WWTPs contributed, on average, less than 1 percent of the bacteria to these reaches, and in areas of the Blue River that had combined sewers, CSOs contributed only minor amounts (less than 2 percent) of the total annual load in 2005. The bulk of the fecal-indicator bacteria load in Brush Creek also originated from nonpoint sources with the remainder from CSOs. From October 2002 through September 2007, estimated daily mean Escherichia coli bacteria density in upper reaches of the Blue River met the State of Missouri secondary contact criterion standard approximately 85 percent of the time. However, in lower Blue River reaches, the same threshold was exceeded approximately 45 percent of the time. The tributary with the greatest number of CSO discharge points, Brush Creek, contributed approximately 10 percent of the bacteria loads to downstream reaches. The tributary Town Fork Creek had median base-flow Escherichia coli densities that were double that of other basin sites and stormflow densities 10 times greater than those in other parts of the basin largely because approximately one-fourth of the runoff in the Town Fork Creek Basin is believed to originate in combined sewers. Genotypic source typing of bacteria indicated that more than half of the bacteria in this tributary originated from human sources with two storms contributing the bulk of all bacteria sourced as human. 
However, areas outsid
Removing cosmic-ray hits from multiorbit HST Wide Field Camera images
NASA Technical Reports Server (NTRS)
Windhorst, Rogier A.; Franklin, Barbara E.; Neuschaefer, Lyman W.
1994-01-01
We present an optimized algorithm that removes cosmic rays ('CRs') from multiorbit Hubble Space Telescope (HST) Wide Field/Planetary Camera ('WF/PC') images. It computes the image noise in every iteration from the WF/PC CCD equation. This includes all known sources of random and systematic calibration errors. We test this algorithm on WF/PC stacks of 2-12 orbits as a function of the number of available orbits and the formal Poissonian sigma-clipping level. We find that the algorithm needs ≥4 WF/PC exposures to locate the minimal sky signal (which is noticeably affected by CRs), with an optimal clipping level at 2-2.5×σ_Poisson. We analyze the CR flux detected on multiorbit 'CR stacks,' which are constructed by subtracting the best CR-filtered images from the unfiltered 8-12 orbit average. We use an automated object finder to determine the surface density of CRs as a function of the apparent magnitude (or ADU flux) they would have generated in the images had they not been removed. The power-law slope of the CR 'counts' (γ ≈ 0.6 for N(m) ∝ m^γ) is steeper than that of the faint galaxy counts down to V ≈ 28 mag. The CR counts show a drop-off between 28 ≲ V ≲ 30 mag (the latter is our formal 2σ point-source sensitivity without spherical aberration). This prevents the CR sky integral from diverging, and is likely due to a real cutoff in the CR energy distribution below ≈11 ADU per orbit. The integral CR surface density is ≲10^8 per square degree, and their sky signal is V ≈ 25.5-27.0 mag/sq. arcsec, or 3%-13% of our NEP sky background (V = 23.3 mag/sq. arcsec), and well above the EBL integral of the deepest galaxy counts (B_J ≈ 28.0 mag/sq. arcsec). We conclude that faint CRs will always contribute to the sky signal in the deepest WF/PC images.
Since WFPC2 has approximately 2.7x lower read noise and a thicker CCD, this will result in more CR detections than in WF/PC, potentially affecting approximately 10%-20% of the pixels in multiorbit WFPC2 data cubes.
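The core of the sigma-clipping combination described above can be sketched as a pixelwise robust clip across repeated exposures. The real algorithm iterates with the noise from the WF/PC CCD equation; this toy version (an assumption for illustration) substitutes a plain MAD-based sigma.

```python
import numpy as np

def clip_stack(exposures, nsigma=2.5):
    """Pixelwise sigma-clipped mean of repeated exposures. A CR hit in a
    single frame lies far above the per-pixel robust sigma and is rejected."""
    data = np.asarray(exposures, dtype=float)
    median = np.median(data, axis=0)
    sigma = 1.4826 * np.median(np.abs(data - median), axis=0)  # MAD-based sigma
    keep = np.abs(data - median) <= nsigma * sigma + 1e-12
    return np.nanmean(np.where(keep, data, np.nan), axis=0)

# Four "orbits" of a flat 100-count sky; one pixel in one frame takes a CR hit.
frames = np.full((4, 5), 100.0) + np.random.default_rng(2).normal(0.0, 1.0, (4, 5))
frames[1, 3] += 500.0
combined = clip_stack(frames)
print(np.round(combined, 1))
```

The hit pixel is rejected in the offending frame only, so the combined image stays near the true sky level everywhere.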
DOE Office of Scientific and Technical Information (OSTI.GOV)
Taylor, J.A.; Brasseur, G.P.; Zimmerman, P.R.
Using the hydroxyl radical field calibrated to the methyl chloroform observations, the globally averaged release of methane and its spatial and temporal distribution were investigated. Two source-function models of the spatial and temporal distribution of the flux of methane to the atmosphere were developed. The first model (SF1) was based on the assumption that methane is emitted as a proportion of net primary productivity (NPP). With the average hydroxyl radical concentration fixed, the methane source term was computed as approximately 623 Tg CH4, giving an atmospheric lifetime for methane of approximately 8.3 years. The second model (SF2) identified source regions for methane from rice paddies, wetlands, enteric fermentation, termites, and biomass burning based on high-resolution land-use data. This methane source distribution resulted in an estimate of the global total methane source of approximately 611 Tg CH4, giving an atmospheric lifetime for methane of approximately 8.5 years. The most significant difference between the two models was their predictions of methane fluxes over China and Southeast Asia, the location of most of the world's rice paddies. Using a recent measurement of the reaction rate of hydroxyl radical and methane leads to estimates of the global total methane source for SF1 of approximately 524 Tg CH4, giving an atmospheric lifetime of approximately 10.0 years, and for SF2 of approximately 514 Tg CH4, yielding a lifetime of approximately 10.2 years.
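As a quick consistency check on the figures above: at steady state, atmospheric lifetime is burden divided by source, so a 623 Tg/yr source with an 8.3 yr lifetime implies a methane burden of roughly 5200 Tg.

```python
# Steady-state relation: lifetime (yr) = burden (Tg) / source (Tg/yr),
# so burden = source * lifetime. Values are the abstract's SF1 figures.
def implied_burden_tg(source_tg_per_yr, lifetime_yr):
    return source_tg_per_yr * lifetime_yr

print(round(implied_burden_tg(623.0, 8.3)))
```

The SF2 figures (611 Tg/yr, 8.5 yr) imply nearly the same burden, which is why the two models differ mainly in where, not how much, methane is emitted.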
10 CFR Appendix A to Part 861 - Perimeter Description of DOE's Nevada Test Site
Code of Federal Regulations, 2013 CFR
2013-01-01
...°34′20″; Thence easterly approximately 6.73 miles, to a point at latitude 37°20′45″ longitude 116°27′00″; Thence northeasterly approximately 4.94 miles to a point at latitude 37°23′07″, longitude 116°22′30″; Thence easterly approximately 4.81 miles to a point at latitude 37°23′07″, longitude 116°17′15...
On determining dose rate constants spectroscopically
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rodriguez, M.; Rogers, D. W. O.
2013-01-15
Purpose: To investigate several aspects of the Chen and Nath spectroscopic method of determining the dose rate constants of ¹²⁵I and ¹⁰³Pd seeds [Z. Chen and R. Nath, Phys. Med. Biol. 55, 6089-6104 (2010)], including the accuracy of using a line or dual-point source approximation as done in their method, and the accuracy of ignoring the effects of the scattered photons in the spectra. Additionally, the authors investigate the accuracy of the literature's many different spectra for bare, i.e., unencapsulated ¹²⁵I and ¹⁰³Pd sources. Methods: Spectra generated by 14 ¹²⁵I and 6 ¹⁰³Pd seeds were calculated in vacuo at 10 cm from the source in a 2.7 × 2.7 × 0.05 cm³ voxel using the EGSnrc BrachyDose Monte Carlo code. Calculated spectra used the initial photon spectra recommended by AAPM's TG-43U1 and NCRP (National Council on Radiation Protection and Measurements) Report 58 for the ¹²⁵I seeds, or TG-43U1 and NNDC(2000) (National Nuclear Data Center, 2000) for ¹⁰³Pd seeds. The emitted spectra were treated as coming from a line or dual-point source in a Monte Carlo simulation to calculate the dose rate constant. The TG-43U1 definition of the dose rate constant was used. These calculations were performed using the full spectrum including scattered photons or using only the main peaks in the spectrum as done experimentally. Statistical uncertainties on the air kerma/history and the dose rate/history were ≤ 0.2%. The dose rate constants were also calculated using Monte Carlo simulations of the full seed model. Results: The ratio of the intensity of the 31 keV line relative to that of the main peak in ¹²⁵I spectra is, on average, 6.8% higher when calculated with the NCRP Report 58 initial spectrum vs that calculated with the TG-43U1 initial spectrum.
The ¹⁰³Pd spectra exhibit an average 6.2% decrease in the 22.9 keV line relative to the main peak when calculated with the TG-43U1 rather than the NNDC(2000) initial spectrum. The measured values from three different investigations are in much better agreement with the calculations using the NCRP Report 58 and NNDC(2000) initial spectra, with average discrepancies of 0.9% and 1.7% for the ¹²⁵I and ¹⁰³Pd seeds, respectively. However, there are no differences in the calculated TG-43U1 brachytherapy parameters using either initial spectrum in both cases. Similarly, there were no differences outside the statistical uncertainties of 0.1% or 0.2% in the average energy, air kerma/history, dose rate/history, and dose rate constant when calculated using either the full photon spectrum or the main-peaks-only spectrum. Conclusions: Our calculated dose rate constants based on using the calculated on-axis spectrum and a line or dual-point source model are in excellent agreement (0.5% on average) with the values of Chen and Nath, verifying the accuracy of their more approximate method of going from the spectrum to the dose rate constant. However, the dose rate constants based on full seed models differ by between +4.6% and -1.5% from those based on the line or dual-point source approximations. These results suggest that the main value of spectroscopic measurements is to verify full Monte Carlo models of the seeds by comparison to the calculated spectra.
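The TG-43U1 dose rate constant referred to above is the dose rate at the reference point (1 cm on the transverse axis) divided by the air-kerma strength S_K; in a Monte Carlo calculation both quantities are tallied per source history, so the per-history normalization cancels. A minimal sketch with made-up tallies (illustrative numbers, not values from the paper):

```python
def dose_rate_constant(dose_rate_per_history, air_kerma_strength_per_history):
    """TG-43 dose rate constant Lambda = D(r0, theta0) / S_K (cGy h^-1 U^-1).
    Both tallies are per source history, so that normalization cancels."""
    # S_K is the air-kerma strength: air kerma rate at 1 m scaled by (1 m)^2,
    # in units of U = uGy m^2 h^-1.
    return dose_rate_per_history / air_kerma_strength_per_history

# Illustrative (made-up) per-history tallies for a 125I-type seed:
lam = dose_rate_constant(2.0e-13, 2.04e-13)
print(round(lam, 3))  # ~0.98, the right order for 125I seeds
```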
Approximate Bayesian estimation of extinction rate in the Finnish Daphnia magna metapopulation.
Robinson, John D; Hall, David W; Wares, John P
2013-05-01
Approximate Bayesian computation (ABC) is useful for parameterizing complex models in population genetics. In this study, ABC was applied to simultaneously estimate parameter values for a model of metapopulation coalescence and test two alternatives to a strict metapopulation model in the well-studied network of Daphnia magna populations in Finland. The models shared four free parameters: the subpopulation genetic diversity (θS), the rate of gene flow among patches (4Nm), the founding population size (N0) and the metapopulation extinction rate (e) but differed in the distribution of extinction rates across habitat patches in the system. The three models had either a constant extinction rate in all populations (strict metapopulation), one population that was protected from local extinction (i.e. a persistent source), or habitat-specific extinction rates drawn from a distribution with specified mean and variance. Our model selection analysis favoured the model including a persistent source population over the two alternative models. Of the closest 750,000 data sets in Euclidean space, 78% were simulated under the persistent source model (estimated posterior probability = 0.769). This fraction increased to more than 85% when only the closest 150,000 data sets were considered (estimated posterior probability = 0.774). Approximate Bayesian computation was then used to estimate parameter values that might produce the observed set of summary statistics. Our analysis provided posterior distributions for e that included the point estimate obtained from previous data from the Finnish D. magna metapopulation. Our results support the use of ABC and population genetic data for testing the strict metapopulation model and parameterizing complex models of demography. © 2013 Blackwell Publishing Ltd.
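The rejection flavour of ABC used for model choice can be sketched in a few lines: simulate data sets under each candidate model, keep the draws whose summaries fall closest to the observed summaries, and read the posterior model probability off the accepted fraction. A toy two-model version (simple Gaussian models of our own invention, not the Daphnia coalescent model):

```python
import random

random.seed(42)

def simulate(model, n=100):
    """Toy simulator: models 'A' and 'B' differ only in their mean."""
    mu = 0.0 if model == "A" else 1.0
    return [random.gauss(mu, 1.0) for _ in range(n)]

def summary(data):
    """One summary statistic: the sample mean."""
    return sum(data) / len(data)

# 'Observed' summary, generated here from model B for illustration.
obs = summary(simulate("B"))

# Rejection ABC for model choice: simulate under both models, keep the
# closest 10% of draws in summary-statistic distance; the posterior model
# probability is approximated by each model's share of the accepted draws.
draws = [(m, abs(summary(simulate(m)) - obs)) for m in ("A", "B") for _ in range(2000)]
draws.sort(key=lambda t: t[1])
accepted = draws[: len(draws) // 10]
post_b = sum(1 for m, _ in accepted if m == "B") / len(accepted)
print(post_b)  # should strongly favour model B
```

The study's estimate (78% of the closest draws coming from the persistent-source model) is exactly this kind of accepted-fraction calculation, with coalescent simulations in place of the Gaussians.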
Lighting Simulation and Design Program (LSDP)
NASA Astrophysics Data System (ADS)
Smith, D. A.
This computer program simulates a user-defined lighting configuration. It has been developed as a tool to aid in the design of exterior lighting systems. Although this program is used primarily for perimeter security lighting design, it has potential use for any application where the light can be approximated by a point source. A database of luminaire photometric information is maintained for use with this program. The user defines the surface area to be illuminated with a rectangular grid and specifies luminaire positions. Illumination values are calculated for regularly spaced points in that area and isolux contour plots are generated. The numerical and graphical output for a particular site model are then available for analysis. The amount of time spent on point-to-point illumination computation with this program is much less than that required for tedious hand calculations. The ease with which various parameters can be interactively modified with the program also reduces the time and labor expended. Consequently, the feasibility of design ideas can be examined, modified, and retested more thoroughly, and overall design costs can be substantially lessened by using this program as an adjunct to the design process.
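For a luminaire treated as a point source, the horizontal illuminance at a ground point follows the inverse-square cosine law, E = I cos(theta) / d^2, which is the core of such point-to-point illumination computations. A minimal sketch (the 10,000 cd intensity and geometry are illustrative, not taken from any photometric data base):

```python
import math

def illuminance(lum_pos, point, intensity_cd):
    """Horizontal illuminance (lux) at a ground point from a point-source
    luminaire, via the inverse-square cosine law: E = I * cos(theta) / d^2."""
    dx, dy, dz = (p - q for p, q in zip(lum_pos, point))
    d = math.sqrt(dx * dx + dy * dy + dz * dz)
    cos_theta = abs(dz) / d  # angle measured from the downward vertical
    return intensity_cd * cos_theta / d ** 2

# Luminaire mounted 8 m above the origin, emitting 10,000 cd (illustrative):
e_below = illuminance((0.0, 0.0, 8.0), (0.0, 0.0, 0.0), 10000.0)
e_offset = illuminance((0.0, 0.0, 8.0), (6.0, 0.0, 0.0), 10000.0)
print(e_below, e_offset)  # illuminance directly below vs 6 m to the side
```

Evaluating this over a rectangular grid of ground points and contouring the results yields exactly the isolux plots the program produces.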
Federal Register 2010, 2011, 2012, 2013, 2014
2010-06-15
... Company; Turkey Point, Units 6 and 7; Combined License Application, Notice of Intent To Prepare an... application for a combined license (COL) to build Units 6 and 7 at its Turkey Point site, located in Miami... approximately 4.5 miles from the nearest boundary of the Turkey Point site; the site is approximately 25 miles...
Alonso Roldán, Virginia; Bossio, Luisina; Galván, David E
2015-01-01
In species showing distributions attached to particular features of the landscape or conspicuous signs, counts are commonly made by making focal observations where animals concentrate. However, to obtain density estimates for a given area, independent searching for signs and occupancy rates of suitable sites is needed. In both cases, it is important to estimate detection probability and other possible sources of variation to avoid confounding effects on measurements of abundance variation. Our objective was to assess possible bias and sources of variation in a two-step protocol in which random designs were applied to search for signs while continuously recording video cameras were used to perform abundance counts where animals concentrate, using the mara (Dolichotis patagonum) as a case study. The protocol was successfully applied to maras within the Península Valdés protected area: it was logistically suitable and allowed warrens to be found, the associated adults to be counted, and the detection probability to be estimated. Variability was documented in both components of the two-step protocol, and these sources of variation should be taken into account when applying it. Warren detectability was approximately 80% with little variation. Factors related to false positive detection were more important than imperfect detection. The detectability of individuals was approximately 90% using the entire day of observations. The shortest sampling period with detection capacity similar to that of a full day was approximately 10 hours, and during this period the visiting dynamics did not show trends. For individual maras, the detection capacity of the camera was not significantly different from that of the observer during fieldwork. The presence of the camera did not affect the visiting behavior of adults at the warren.
Application of this protocol will allow monitoring of the near-threatened mara providing a minimum local population size and a baseline for measuring long-term trends.
NASA Technical Reports Server (NTRS)
Lotti, Simone; Natalucci, Lorenzo; Mori, Kaya; Baganoff, Frederick K.; Boggs, Steven E.; Christensen, Finn E.; Craig, William W.; Hailey, Charles J.; Harrison, Fiona A.; Hong, Jaesub;
2016-01-01
We report on the results of NuSTAR and XMM-Newton observations of the persistent X-ray source 1E1743.1-2843, located in the Galactic Center region. The source was observed between 2012 September and October by NuSTAR and XMM-Newton, providing almost simultaneous observations in the hard and soft X-ray bands. The high X-ray luminosity points to the presence of an accreting compact object. We analyze the possibilities of this accreting compact object being either a neutron star (NS) or a black hole, and conclude that the joint XMM-Newton and NuSTAR spectrum from 0.3 to 40 keV fits a blackbody spectrum with kT approximately 1.8 keV emitted from a hot spot or an equatorial strip on an NS surface. This spectrum is thermally Comptonized by electrons with kTe approximately 4.6 keV. Accepting this NS hypothesis, we probe the low-mass X-ray binary (LMXB) or high-mass X-ray binary (HMXB) nature of the source. While the lack of Type-I bursts can be explained in the LMXB scenario, the absence of pulsations in the 2 mHz-49 Hz frequency range, the lack of eclipses and of an IR companion, and the lack of a K-alpha line from neutral or moderately ionized iron strongly disfavor interpreting this source as an HMXB. We therefore conclude that 1E1743.1-2843 is most likely an NS-LMXB located beyond the Galactic Center. There is weak statistical evidence for a soft X-ray excess which may indicate thermal emission from an accretion disk. However, the disk normalization remains unconstrained due to the high hydrogen column density (N_H approximately 1.6 x 10^23 cm^-2).
Gravity and gravity gradient changes caused by a point dislocation
NASA Astrophysics Data System (ADS)
Huang, Jian-Liang; Li, Hui; Li, Rui-Hao
1995-02-01
In this paper we studied the gravitational potential, gravity and gravity gradient changes caused by a point dislocation, and gave a concise mathematical deduction with definite physical implications for dealing with the singular integral at a seismic source. We also analysed the features of the fields of gravity, gravity gradient, and gravity-vertical-displacement gradient. The conclusions are: (1) Gravity and gravity gradient changes are very small with the change of vertical position; (2) Gravity change is much greater than the gravity gradient change, which is not so distinct; (3) The gravity change due to redistribution of mass accounts for 10-50 percent of the total gravity change caused by dislocation. The signs (positive or negative) of total gravity change and vertical displacement are opposite to each other at the same point for strike slip and dip slip; (4) The gravity-vertical-displacement gradient is not constant; it manifests a variety of patterns for different dislocation models; (5) The gravity-vertical-displacement gradient is approximately equal to the apparent gravity-vertical-displacement gradient.
A Monte Carlo Application to Approximate the Integral from a to b of e Raised to the x Squared.
ERIC Educational Resources Information Center
Easterday, Kenneth; Smith, Tommy
1992-01-01
Proposes an alternative means of approximating the value of complex integrals, the Monte Carlo procedure. Incorporating a discrete approach and probability, an approximation is obtained from the ratio of computer-generated points falling under the curve to the number of points generated in a predetermined rectangle. (MDH)
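The hit-or-miss procedure described above is easy to sketch: generate points uniformly in a bounding rectangle and scale the rectangle's area by the fraction of points falling under the curve. A minimal version for the integral of e^(x^2), assuming 0 <= a < b so the curve's maximum on the interval is e^(b^2):

```python
import math
import random

random.seed(0)

def mc_exp_x2(a, b, n=200_000):
    """Hit-or-miss Monte Carlo estimate of the integral of e^(x^2) over [a, b].
    Assumes 0 <= a < b, so the maximum of the integrand on [a, b] is e^(b^2)."""
    top = math.exp(b * b)  # height of the bounding rectangle
    hits = 0
    for _ in range(n):
        x = random.uniform(a, b)        # a random point in the rectangle...
        y = random.uniform(0.0, top)
        if y <= math.exp(x * x):        # ...counted if it lies under the curve
            hits += 1
    return (b - a) * top * hits / n     # rectangle area times the hit fraction

est = mc_exp_x2(0.0, 1.0)
print(est)  # close to the true value 1.46265...
```

The statistical error of a hit-or-miss estimate shrinks like 1/sqrt(n), so classroom-scale runs already land within a percent or so of the true value.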
Determining the Uncertainty of X-Ray Absorption Measurements
Wojcik, Gary S.
2004-01-01
X-ray absorption (or more properly, x-ray attenuation) techniques have been applied to study the moisture movement in and moisture content of materials like cement paste, mortar, and wood. An increase in the number of x-ray counts with time at a location in a specimen may indicate a decrease in moisture content. The uncertainty of measurements from an x-ray absorption system, which must be known to properly interpret the data, is often assumed to be the square root of the number of counts, as in a Poisson process. No detailed studies have heretofore been conducted to determine the uncertainty of x-ray absorption measurements or the effect of averaging data on the uncertainty. In this study, the Poisson estimate was found to adequately approximate normalized root mean square errors (a measure of uncertainty) of counts for point measurements and profile measurements of water specimens. The Poisson estimate, however, was not reliable in approximating the magnitude of the uncertainty when averaging data from paste and mortar specimens. Changes in uncertainty from differing averaging procedures were well-approximated by a Poisson process. The normalized root mean square errors decreased when the x-ray source intensity, integration time, collimator size, and number of scanning repetitions increased. Uncertainties in mean paste and mortar count profiles were kept below 2 % by averaging vertical profiles at horizontal spacings of 1 mm or larger with counts per point above 4000. Maximum normalized root mean square errors did not exceed 10 % in any of the tests conducted. PMID:27366627
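The Poisson estimate referred to above treats the relative uncertainty of N counts as sqrt(N)/N = 1/sqrt(N), about 1.6% at the 4000-count level mentioned. A quick simulation check (using a normal approximation to the Poisson distribution, which is adequate at these counts):

```python
import math
import random

random.seed(1)

# For counts this large, a normal approximation to the Poisson distribution
# (mean N, standard deviation sqrt(N)) is adequate.
mean_counts = 4000.0  # counts per point, as in the profile averaging above
draws = [random.gauss(mean_counts, math.sqrt(mean_counts)) for _ in range(2000)]

mu = sum(draws) / len(draws)
rms = math.sqrt(sum((d - mu) ** 2 for d in draws) / len(draws))
rel_rms = rms / mu                               # normalized RMS error of the counts
poisson_estimate = 1.0 / math.sqrt(mean_counts)  # sqrt(N)/N, about 1.6% here
print(round(100 * rel_rms, 2), round(100 * poisson_estimate, 2))
```

The simulated normalized RMS error agrees with the 1/sqrt(N) rule, which is why keeping counts per point above 4000 bounds the profile uncertainty near the levels reported.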
The Third EGRET Catalog of High-Energy Gamma-Ray Sources
NASA Technical Reports Server (NTRS)
Hartman, R. C.; Bertsch, D. L.; Bloom, S. D.; Chen, A. W.; Deines-Jones, P.; Esposito, J. A.; Fichtel, C. E.; Friedlander, D. P.; Hunter, S. D.; McDonald, L. M.;
1998-01-01
The third catalog of high-energy gamma-ray sources detected by the EGRET telescope on the Compton Gamma Ray Observatory includes data from 1991 April 22 to 1995 October 3 (Cycles 1, 2, 3, and 4 of the mission). In addition to including more data than the second EGRET catalog (Thompson et al. 1995) and its supplement (Thompson et al. 1996), this catalog uses completely reprocessed data (to correct a number of mostly minimal errors and problems). The 271 sources (E greater than 100 MeV) in the catalog include the single 1991 solar flare bright enough to be detected as a source, the Large Magellanic Cloud, five pulsars, one probable radio galaxy detection (Cen A), and 66 high-confidence identifications of blazars (BL Lac objects, flat-spectrum radio quasars, or unidentified flat-spectrum radio sources). In addition, 27 lower-confidence potential blazar identifications are noted. Finally, the catalog contains 170 sources not yet identified firmly with known objects, although potential identifications have been suggested for a number of those. A figure is presented that gives approximate upper limits for gamma-ray sources at any point in the sky, as well as information about sources listed in the second catalog and its supplement which do not appear in this catalog.
The Chandra Source Catalog 2.0: Estimating Source Fluxes
NASA Astrophysics Data System (ADS)
Primini, Francis Anthony; Allen, Christopher E.; Miller, Joseph; Anderson, Craig S.; Budynkiewicz, Jamie A.; Burke, Douglas; Chen, Judy C.; Civano, Francesca Maria; D'Abrusco, Raffaele; Doe, Stephen M.; Evans, Ian N.; Evans, Janet D.; Fabbiano, Giuseppina; Gibbs, Danny G., II; Glotfelty, Kenny J.; Graessle, Dale E.; Grier, John D.; Hain, Roger; Hall, Diane M.; Harbo, Peter N.; Houck, John C.; Lauer, Jennifer L.; Laurino, Omar; Lee, Nicholas P.; Martínez-Galarza, Juan Rafael; McCollough, Michael L.; McDowell, Jonathan C.; McLaughlin, Warren; Morgan, Douglas L.; Mossman, Amy E.; Nguyen, Dan T.; Nichols, Joy S.; Nowak, Michael A.; Paxson, Charles; Plummer, David A.; Rots, Arnold H.; Siemiginowska, Aneta; Sundheim, Beth A.; Tibbetts, Michael; Van Stone, David W.; Zografou, Panagoula
2018-01-01
The Second Chandra Source Catalog (CSC2.0) will provide information on approximately 316,000 point or compact extended x-ray sources, derived from over 10,000 ACIS and HRC-I imaging observations available in the public archive at the end of 2014. As in the previous catalog release (CSC1.1), fluxes for these sources will be determined separately from source detection, using a Bayesian formalism that accounts for background, spatial resolution effects, and contamination from nearby sources. However, the CSC2.0 procedure differs from that used in CSC1.1 in three important aspects. First, for sources in crowded regions in which photometric apertures overlap, fluxes are determined jointly, using an extension of the CSC1.1 algorithm, as discussed in Primini & Kashyap (2014ApJ...796...24P). Second, an MCMC procedure is used to estimate marginalized posterior probability distributions for source fluxes. Finally, for sources observed in multiple observations, a Bayesian Blocks algorithm (Scargle et al. 2013ApJ...764..167S) is used to group observations into blocks of constant source flux. In this poster we present details of the CSC2.0 photometry algorithms and illustrate their performance in actual CSC2.0 datasets. This work has been supported by NASA under contract NAS 8-03060 to the Smithsonian Astrophysical Observatory for operation of the Chandra X-ray Center.
Chaotic scattering in an open vase-shaped cavity: Topological, numerical, and experimental results
NASA Astrophysics Data System (ADS)
Novick, Jaison Allen
We present a study of trajectories in a two-dimensional, open, vase-shaped cavity in the absence of forces. The classical trajectories freely propagate between elastic collisions. Bound trajectories, regular scattering trajectories, and chaotic scattering trajectories are present in the vase. Most importantly, we find that classical trajectories passing through the vase's mouth escape without return. In our simulations, we propagate bursts of trajectories from point sources located along the vase walls. We record the time for escaping trajectories to pass through the vase's neck. Constructing a plot of escape time versus the initial launch angle for the chaotic trajectories reveals a vastly complicated recursive structure or a fractal. This fractal structure can be understood by a suitable coordinate transform. Reducing the dynamics to two dimensions reveals that the chaotic dynamics are organized by a homoclinic tangle, which is formed by the union of infinitely long, intersecting stable and unstable manifolds. This study is broken down into three major components. We first present a topological theory that extracts the essential topological information from a finite subset of the tangle and encodes this information in a set of symbolic dynamical equations. These equations can be used to predict a topologically forced minimal subset of the recursive structure seen in numerically computed escape time plots. We present three applications of the theory and compare these predictions to our simulations. The second component is a presentation of an experiment in which the vase was constructed from Teflon walls using an ultrasound transducer as a point source. We compare the escaping signal to a classical simulation and find agreement between the two. Finally, we present an approximate solution to the time independent Schrodinger Equation for escaping waves.
We choose a set of points at which to evaluate the wave function and interpolate trajectories connecting the source point to each "detector point". We then construct the wave function directly from these classical trajectories using the two-dimensional WKB approximation. The wave function is Fourier Transformed using a Fast Fourier Transform algorithm resulting in a spectrum in which each peak corresponds to an interpolated trajectory. Our predictions are based on an imagined experiment that uses microwave propagation within an electromagnetic waveguide. Such an experiment exploits the fact that under suitable conditions both Maxwell's Equations and the Schrodinger Equation can be reduced to the Helmholtz Equation. Therefore, our predictions, while compared to the electromagnetic experiment, contain information about the quantum system. Identifying peaks in the transmission spectrum with chaotic trajectories will allow for an additional experimental verification of the intermediate recursive structure. Finally, we summarize our results and discuss possible extensions of this project.
Linear Power-Flow Models in Multiphase Distribution Networks: Preprint
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bernstein, Andrey; Dall'Anese, Emiliano
This paper considers multiphase unbalanced distribution systems and develops approximate power-flow models where bus-voltages, line-currents, and powers at the point of common coupling are linearly related to the nodal net power injections. The linearization approach is grounded on a fixed-point interpretation of the AC power-flow equations, and it is applicable to distribution systems featuring (i) wye connections; (ii) ungrounded delta connections; (iii) a combination of wye-connected and delta-connected sources/loads; and, (iv) a combination of line-to-line and line-to-grounded-neutral devices at the secondary of distribution transformers. The proposed linear models can facilitate the development of computationally-affordable optimization and control applications -- from advanced distribution management systems settings to online and distributed optimization routines. Performance of the proposed models is evaluated on different test feeders.
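The fixed-point interpretation can be illustrated on a toy single-phase, two-bus feeder (far simpler than the paper's multiphase setting): KCL at the load bus gives (V0 - V)/z = conj(S/V), so the load voltage is the fixed point of the map V -> V0 - z*conj(S/V). A sketch with illustrative per-unit values of our own choosing:

```python
# Toy two-bus feeder: slack bus at V0, one constant-power load S behind
# line impedance z. KCL at the load bus: (V0 - V)/z = conj(S / V),
# i.e. V is the fixed point of V -> V0 - z * conj(S / V).
V0 = 1.0 + 0.0j   # slack voltage (per unit)
z = 0.01 + 0.03j  # line impedance (per unit), illustrative
S = 0.5 + 0.2j    # load power draw (per unit), illustrative

V = V0  # flat start
for _ in range(50):
    V = V0 - z * (S / V).conjugate()  # Picard iteration on the fixed-point map

# KCL mismatch at the converged solution (should be ~0):
residual = abs((V0 - V) / z - (S / V).conjugate())
print(abs(V), residual)
```

Linearizing this same map around the flat-start (no-load) voltage is the essence of the fixed-point-based linear models developed in the paper.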
Centroid Position as a Function of Total Counts in a Windowed CMOS Image of a Point Source
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wurtz, R E; Olivier, S; Riot, V
2010-05-27
We obtained 960,200 22-by-22-pixel windowed images of a pinhole spot using the Teledyne H2RG CMOS detector with un-cooled SIDECAR readout. We performed an analysis to determine the precision we might expect in the position error signals to a telescope's guider system. We find that, under non-optimized operating conditions, the error in the computed centroid is strongly dependent on the total counts in the point image only below a certain threshold, approximately 50,000 photo-electrons. The LSST guider camera specification currently requires a 0.04 arcsecond error at 10 Hertz. Given the performance measured here, this specification can be delivered with a single star at 14th to 18th magnitude, depending on the passband.
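A guider derives its error signal from an intensity-weighted centroid of the windowed point-source image. A minimal sketch on a tiny synthetic spot (illustrative data, not the H2RG measurements):

```python
def centroid(window):
    """Intensity-weighted centroid (row, col) of a small windowed image,
    as a guider would compute from a point-source spot."""
    total = float(sum(sum(row) for row in window))
    row_c = sum(r * sum(row) for r, row in enumerate(window)) / total
    col_c = sum(c * v for row in window for c, v in enumerate(row)) / total
    return row_c, col_c

# A tiny synthetic spot peaking near (row 1, col 2); real windows are 22x22.
spot = [
    [0, 1, 2, 0],
    [1, 4, 10, 2],
    [0, 1, 2, 0],
]
print(centroid(spot))
```

Because the weights are photon counts, the shot noise in the total counts propagates directly into the centroid, which is why the centroid error grows once the spot drops below the ~50,000 photo-electron threshold noted above.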
NASA Astrophysics Data System (ADS)
Nowak-Lovato, K.
2014-12-01
Seepage from enhanced oil recovery, carbon storage, and natural gas sites can emit trace gases such as carbon dioxide, methane, and hydrogen sulfide. Trace gas emissions at these locations demonstrate unique light stable isotope signatures that provide information to enable source identification of the material. Light stable isotope detection through surface monitoring offers the ability to distinguish between trace gases emitted from sources such as biological systems (fertilizers and wastes), mineral systems (coal or seams), or liquid organic systems (oil and gas reservoirs). To make light stable isotope measurements, we employ the ultra-sensitive technique of frequency modulation spectroscopy (FMS). FMS is an absorption technique with sensitivity enhancements of approximately 100-1000x over standard absorption spectroscopy, with the advantage of providing stable isotope signature information. We have developed an integrated in situ (point source) system that measures carbon dioxide, methane and hydrogen sulfide with isotopic resolution and enhanced sensitivity. The in situ instrument involves the continuous collection of air and records the stable isotope ratio for the gas being detected. We have included in-line flask collection points to obtain gas samples for validation of isotopic concentrations using our in-house isotope ratio mass spectrometry (IRMS). We present calibration curves for each species addressed above to demonstrate the sensitivity and accuracy of the system. We also show field deployment data demonstrating the capabilities of the system in making live dynamic measurements from an active source.
Acoustic propagation in a thermally stratified atmosphere
NASA Technical Reports Server (NTRS)
Vanmoorhem, W. K.
1988-01-01
Acoustic propagation in an atmosphere with a specific form of a temperature profile has been investigated by analytical means. The temperature profile used is representative of an actual atmospheric profile and contains three free parameters. Both lapse and inversion cases have been considered. Although ray solutions have been considered, the primary emphasis has been on solutions of the acoustic wave equation with point source where the sound speed varies with height above the ground corresponding to the assumed temperature profile. The method used to obtain the solution of the wave equation is based on Hankel transformation of the wave equation, approximate solution of the transformed equation for wavelength small compared to the scale of the temperature (or sound speed) profile, and approximate or numerical inversion of the Hankel transformed solution. The solution displays the characteristics found in experimental data but extensive comparison between the models and experimental data has not been carried out.
Evidence from the Soudan 1 experiment for underground muons associated with Cygnus X-3
NASA Technical Reports Server (NTRS)
Ayres, D. S. E.
1986-01-01
The Soudan 1 experiment has yielded evidence for an average underground muon flux of approximately 7 × 10⁻¹¹ cm⁻² s⁻¹ which points back to the X-ray binary Cygnus X-3, and which exhibits the 4.8 h periodicity observed for other radiation from this source. Underground muon events which seem to be associated with Cygnus X-3 also show evidence for longer time variability of the flux. Such underground muons cannot be explained by any conventional models of the propagation and interaction of cosmic rays.
1988-01-01
for hydrazine, MMH and UDMH are 4.78 x 10-6, 10.2 x 10Ś, and 3.19 x 10-6 sec-1, respectively. Plots of the log(area) versus time were linear and...followed first-order kinetics except for hydrazine, for which a non-linear portion was observed in the first 6 to 8 hours. This portion of the decay...As a result, the prototype flow reactor can be represented to good approximation by a linear combination of point source solutions (Reference 19). The
2011-04-01
Assume that geodesic lines, generated by the eikonal equation corresponding to the function c(x), are regular, i.e. any two points in R3 can be...source x0 is located far from Ω, then similarly with (107) ∆l(x, x0) ≈ 0 in Ω. The function l(x, x0) satisfies the eikonal equation [38] |∇_x l(x, x0)|...called "inverse kinematic problem" which aims to recover the function c(x) from the eikonal equation assuming that the function l(x, x0) is known for
Exact Harmonic Metric for a Uniformly Moving Schwarzschild Black Hole
NASA Astrophysics Data System (ADS)
He, Guan-Sheng; Lin, Wen-Bin
2014-02-01
The harmonic metric for a Schwarzschild black hole with a uniform velocity is presented. In the limit of weak field and low velocity, this metric reduces to the post-Newtonian approximation for one moving point mass. As an application, we derive the dynamics of particles and photons in the weak-field limit for the moving Schwarzschild black hole with an arbitrary velocity. It is found that the relativistic motion of the gravitational source can induce an additional centripetal force on the test particle, which may be comparable to or even larger than the conventional Newtonian gravitational force.
Pinto, U; Maheshwari, B L; Ollerton, R L
2013-06-01
The Hawkesbury-Nepean River (HNR) system in South-Eastern Australia is the main source of water supply for the Sydney Metropolitan area and is one of the more complex river systems due to the influence of urbanisation and other activities in the peri-urban landscape through which it flows. The long-term monitoring of river water quality is likely to suffer from data gaps due to funding cuts, changes in priority and related reasons. Nevertheless, we need to assess river health based on the available information. In this study, we demonstrated how Factor Analysis (FA), Hierarchical Agglomerative Cluster Analysis (HACA) and Trend Analysis (TA) can be applied to evaluate long-term historic data sets. Six water quality parameters, viz., temperature, chlorophyll-a, dissolved oxygen, oxides of nitrogen, suspended solids and reactive silicates, measured at weekly intervals between 1985 and 2008 at 12 monitoring stations located along the 300 km length of the HNR system, were evaluated to understand the human and natural influences on the river system in a peri-urban landscape. The application of FA extracted three latent factors which explained more than 70 % of the total variance of the data and related to the 'bio-geographical', 'natural' and 'nutrient pollutant' dimensions of the HNR system. The bio-geographical and nutrient pollutant factors were most likely related to the direct influence of peri-urban changes and activities and accounted for approximately 50 % of the variability in water quality. The application of HACA indicated two major clusters representing clean and polluted zones of the river. On the spatial scale, one cluster was represented by the upper and lower sections of the river (clean zone) and accounted for approximately 158 km of the river. The other cluster was represented by the middle section (polluted zone) with a length of approximately 98 km.
Trend Analysis indicated how the point sources influence river water quality on spatio-temporal scales, taking into account the various effects of nutrient and other pollutant loads from sewerage effluents, agriculture and other point and non-point sources along the river and major tributaries of the HNR. Over the past 26 years, water temperature has significantly increased while suspended solids have significantly decreased (p < 0.05). The analysis of water quality data through FA, HACA and TA helped to characterise the key sections and cluster the key water quality variables of the HNR system. The insights gained from this study have the potential to improve the effectiveness of river health-monitoring programs in terms of cost, time and effort, particularly in a peri-urban context.
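The factor-extraction and clustering steps described above can be sketched in a few lines. This is an illustrative stand-in on synthetic data (PCA via SVD substitutes for factor analysis; the station count, parameter count and two-cluster cut mirror the abstract, not the authors' actual pipeline):

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
# Synthetic stand-in for the HNR data set:
# 12 stations x 6 water-quality parameters
X = rng.normal(size=(12, 6))

# Standardize, then extract 3 latent components via SVD
# (PCA here as a simple stand-in for factor analysis)
Xs = (X - X.mean(0)) / X.std(0)
U, s, Vt = np.linalg.svd(Xs, full_matrices=False)
scores = U[:, :3] * s[:3]                     # station scores on 3 factors
explained = (s[:3] ** 2).sum() / (s ** 2).sum()

# Hierarchical agglomerative clustering into 2 zones (clean vs polluted)
clusters = fcluster(linkage(Xs, method="ward"), t=2, criterion="maxclust")
print(scores.shape, round(explained, 2), sorted(set(clusters)))
```

With real monitoring data, the factor loadings (rows of `Vt`) would be inspected to name the latent dimensions, as the authors did with their 'bio-geographical', 'natural' and 'nutrient pollutant' factors.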
First principles calculation of thermo-mechanical properties of thoria using Quantum ESPRESSO
NASA Astrophysics Data System (ADS)
Malakkal, Linu; Szpunar, Barbara; Zuniga, Juan Carlos; Siripurapu, Ravi Kiran; Szpunar, Jerzy A.
2016-05-01
In this work, we have used Quantum ESPRESSO (QE), an open source first principles code, based on density-functional theory, plane waves, and pseudopotentials, along with quasi-harmonic approximation (QHA) to calculate the thermo-mechanical properties of thorium dioxide (ThO2). Using Python programming language, our group developed qe-nipy-advanced, an interface to QE, which can evaluate the structural and thermo-mechanical properties of materials. We predicted the phonon contribution to thermal conductivity (kL) using the Slack model. We performed the calculations within local density approximation (LDA) and generalized gradient approximation (GGA) with the recently proposed version for solids (PBEsol). We employed a Monkhorst-Pack 5 × 5 × 5 k-point mesh in reciprocal space with a plane wave cut-off energy of 150 Ry to obtain the convergence of the structure. We calculated the dynamical matrices of the lattice on a 4 × 4 × 4 mesh. We have predicted the heat capacity, thermal expansion and the phonon contribution to thermal conductivity, as a function of temperature up to 1400 K, and compared them with the previous work and known experimental results.
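The Slack model mentioned above estimates the lattice thermal conductivity from a handful of quantities obtainable from first-principles phonon calculations. One common form (the Morelli-Slack formulation, with Julian's fit for the prefactor) is sketched below; the input numbers are illustrative placeholders for a ThO2-like solid, not the paper's fitted values:

```python
def slack_kappa(M_avg, theta_D, delta, gamma, n, T):
    """Lattice thermal conductivity (W/m/K) from the Slack model,
    kappa = A(gamma) * M_avg * theta_a^3 * delta / (gamma^2 * T),
    with theta_a = theta_D * n**(-1/3) the acoustic-mode Debye temperature.
    M_avg: average atomic mass (amu); theta_D: Debye temperature (K);
    delta: cube root of the volume per atom (angstrom);
    gamma: Grueneisen parameter; n: atoms per primitive cell."""
    A = 2.43e-6 / (1.0 - 0.514 / gamma + 0.228 / gamma**2)  # Julian's fit
    theta_a = theta_D * n ** (-1.0 / 3.0)
    return A * M_avg * theta_a**3 * delta / (gamma**2 * T)

# Illustrative (assumed) inputs for a ThO2-like fluorite oxide:
k300 = slack_kappa(M_avg=88.0, theta_D=400.0, delta=2.4, gamma=1.9, n=3, T=300.0)
print(round(k300, 2))
```

Note the characteristic 1/T dependence above the Debye temperature, which is why the phonon contribution falls off toward 1400 K.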
Exponential approximations in optimal design
NASA Technical Reports Server (NTRS)
Belegundu, A. D.; Rajan, S. D.; Rajgopal, J.
1990-01-01
One-point and two-point exponential functions have been developed and proved to be very effective approximations of structural response. The exponential has been compared to the linear, reciprocal and quadratic fit methods. Four test problems in structural analysis have been selected. The use of such approximations is attractive in structural optimization to reduce the number of exact analyses which involve computationally expensive finite element analysis.
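The one-point exponential approximation fits a power law through the current design point using the function value and its derivative there. A minimal sketch (the exponent formula p = x0·g'(x0)/g(x0) is the standard one-point construction; the example response function is hypothetical):

```python
def exp_approx_1pt(g, dg, x0):
    """One-point exponential (power-law) approximation
    g~(x) = g(x0) * (x/x0)**p, with the exponent p chosen so that
    both the value and the slope match the exact response at x0."""
    g0, dg0 = g(x0), dg(x0)
    p = x0 * dg0 / g0
    return lambda x: g0 * (x / x0) ** p

# Example: g(x) = 1/x**2 (a typical stress-like response in a sizing
# variable) is reproduced exactly, because it is itself a power law.
g  = lambda x: x ** -2.0
dg = lambda x: -2.0 * x ** -3.0
gt = exp_approx_1pt(g, dg, x0=2.0)
print(gt(3.0), g(3.0))
```

For responses that are nearly power laws in the design variables, such as stresses under sizing variables, this fit stays accurate much farther from x0 than a linear Taylor expansion, which is what makes it attractive for reducing the number of exact finite element analyses.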
Boluda-Ruiz, Rubén; García-Zambrana, Antonio; Castillo-Vázquez, Carmen; Castillo-Vázquez, Beatriz
2016-10-03
A novel accurate and useful approximation of the well-known Beckmann distribution is presented here, which is used to model generalized pointing errors in the context of free-space optical (FSO) communication systems. We derive an approximate closed-form probability density function (PDF) for the composite gamma-gamma (GG) atmospheric turbulence with the pointing error model using the proposed approximation of the Beckmann distribution, which is valid for most practical terrestrial FSO links. This approximation takes into account the effect of the beam width, different jitters for the elevation and the horizontal displacement and the simultaneous effect of nonzero boresight errors for each axis at the receiver plane. Additionally, the proposed approximation allows us to delimit two different FSO scenarios. The first of them is when atmospheric turbulence is the dominant effect in relation to generalized pointing errors, and the second one is when generalized pointing error is the dominant effect in relation to atmospheric turbulence. The second FSO scenario has not been studied in depth by the research community. Moreover, the accuracy of the method is measured both visually and quantitatively using curve-fitting metrics. Simulation results are further included to confirm the analytical results.
NASA Astrophysics Data System (ADS)
Cuesta-Lazaro, Carolina; Quera-Bofarull, Arnau; Reischke, Robert; Schäfer, Björn Malte
2018-06-01
When the gravitational lensing of the large-scale structure is calculated from a cosmological model a few assumptions enter: (i) one assumes that the photons follow unperturbed background geodesics, which is usually referred to as the Born approximation, (ii) the lenses move slowly, (iii) the source-redshift distribution is evaluated relative to the background quantities, and (iv) the lensing effect is linear in the gravitational potential. Even though these approximations are small individually, they could sum up, especially since they include local effects such as the Sachs-Wolfe and peculiar motion, but also non-local ones like the Born approximation and the integrated Sachs-Wolfe effect. In this work, we will address all points mentioned and perturbatively calculate the effect on a tomographic cosmic shear power spectrum of each effect individually as well as all cross-correlations. Our findings show that each effect is at least 4-5 orders of magnitude below the leading order lensing signal. Finally, we sum up all effects to estimate the overall impact on parameter estimation by a future cosmological weak-lensing survey such as Euclid in a w cold dark matter (wCDM) cosmology with parametrization Ωm, σ8, ns, h, w0, and wa, using five tomographic bins. We consistently find a parameter bias of 10-5, which is therefore completely negligible for all practical purposes, confirming that other effects such as intrinsic alignments, magnification bias and uncertainties in the redshift distribution will be the dominant systematic source in future surveys.
Locating CVBEM collocation points for steady state heat transfer problems
Hromadka, T.V.
1985-01-01
The Complex Variable Boundary Element Method or CVBEM provides a highly accurate means of developing numerical solutions to steady state two-dimensional heat transfer problems. The numerical approach exactly solves the Laplace equation and satisfies the boundary conditions at specified points on the boundary by means of collocation. The accuracy of the approximation depends upon the nodal point distribution specified by the numerical analyst. In order to develop subsequent, refined approximation functions, four techniques for selecting additional collocation points are presented. The techniques are compared as to the governing theory, representation of the error of approximation on the problem boundary, the computational costs, and the ease of use by the numerical analyst. ?? 1985.
NASA Astrophysics Data System (ADS)
Chu, Zhigang; Yang, Yang; He, Yansong
2015-05-01
Spherical Harmonics Beamforming (SHB) with solid spherical arrays has become a particularly attractive tool for acoustic source identification in cabin environments. However, it presents some intrinsic limitations, specifically poor spatial resolution and severe sidelobe contaminations. This paper focuses on overcoming these limitations effectively by deconvolution. First and foremost, a new formulation is proposed, which expresses SHB's output as a convolution of the true source strength distribution and the point spread function (PSF) defined as SHB's response to a unit-strength point source. Additionally, the typical deconvolution methods initially suggested for planar arrays, deconvolution approach for the mapping of acoustic sources (DAMAS), nonnegative least-squares (NNLS), Richardson-Lucy (RL) and CLEAN, are adapted to SHB successfully, which are capable of giving rise to highly resolved and deblurred maps. Finally, the merits of the deconvolution methods are validated and the relationships of source strength and pressure contribution reconstructed by the deconvolution methods vs. focus distance are explored both with computer simulations and experimentally. Several interesting results have emerged from this study: (1) compared with SHB, DAMAS, NNLS, RL and CLEAN can all not only improve the spatial resolution dramatically but also reduce or even eliminate the sidelobes effectively, allowing clear and unambiguous identification of single source or incoherent sources. (2) The applicability of RL to coherent sources is highest, followed by DAMAS and NNLS, and that of CLEAN is lowest due to its failure in suppressing sidelobes. (3) Whether or not the real distance from the source to the array center equals the assumed one that is referred to as focus distance, the previous two results hold. 
(4) The true source strength can be recovered by dividing the reconstructed one by a coefficient that is the square of the focus distance divided by the real distance from the source to the array center. (5) The reconstructed pressure contribution is almost not affected by the focus distance, always approximating to the true one. This study will be of great significance to the accurate localization and quantification of acoustic sources in cabin environments.
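Of the deconvolution methods listed above, Richardson-Lucy is the simplest to sketch. The 1-D, shift-invariant toy version below illustrates the multiplicative update (measured map divided by the re-blurred estimate, back-projected through the mirrored PSF); the actual spherical-array PSF in the paper is generally shift-variant, so this is a simplification:

```python
import numpy as np

def richardson_lucy(b, psf, n_iter=200):
    """Richardson-Lucy deconvolution of a beamformer map b with a
    shift-invariant PSF (1D sketch). The iterate stays nonnegative
    because every update is multiplicative."""
    x = np.full_like(b, b.mean())      # flat nonnegative starting guess
    psf_m = psf[::-1]                  # mirrored PSF acts as the adjoint
    for _ in range(n_iter):
        est = np.convolve(x, psf, mode="same")
        x = x * np.convolve(b / np.maximum(est, 1e-12), psf_m, mode="same")
    return x

# Two point sources blurred by a Gaussian-like PSF
n = 64
true = np.zeros(n); true[20] = 1.0; true[40] = 0.5
psf = np.exp(-0.5 * (np.arange(-8, 9) / 2.0) ** 2); psf /= psf.sum()
b = np.convolve(true, psf, mode="same")
rec = richardson_lucy(b, psf)
print(int(np.argmax(rec)))
```

After a few hundred iterations the broad beamformer lobes collapse back toward the two source positions, which is the resolution improvement the study reports for RL (and, via different update rules, for DAMAS and NNLS).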
Neutral points of skylight polarization observed during the total eclipse on 11 August 1999.
Horváth, Gábor; Pomozi, István; Gál, József
2003-01-20
We report here on the observation of unpolarized (neutral) points in the sky during the total solar eclipse on 11 August 1999. Near the zenith a neutral point was observed at 450 nm at two different points of time during totality. Around this celestial point the distribution of the angle of polarization was heterogeneous: The electric field vectors on the one side were approximately perpendicular to those on the other side. At another moment of totality, near the zenith a local minimum of the degree of linear polarization occurred at 550 nm. Near the antisolar meridian, at a low elevation another two neutral points occurred at 450 nm at a certain moment during totality. Approximately at the position of these neutral points, at another moment of totality a local minimum of the degree of polarization occurred at 550 nm, whereas at 450 nm a neutral point was observed, around which the angle-of-polarization pattern was homogeneous: The electric field vectors were approximately horizontal on both sides of the neutral point.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Courtois, C.; Compant La Fontaine, A.; Bazzoli, S.
2013-08-15
Results of an experiment to characterise a MeV Bremsstrahlung x-ray emission created by a short (<10 ps) pulse, high intensity (1.4 × 10{sup 19} W/cm{sup 2}) laser are presented. X-ray emission is characterized using several diagnostics: nuclear activation measurements, a calibrated hard x-ray spectrometer, and dosimeters. Results from the reconstructed x-ray energy spectra are consistent with numerical simulations using the PIC and Monte Carlo codes between 0.3 and 30 MeV. The intense Bremsstrahlung x-ray source is used to radiograph an image quality indicator (IQI) heavily filtered with thick tungsten absorbers. Observations suggest that internal features of the IQI can be resolved up to an external areal density of 85 g/cm{sup 2}. The x-ray source size, inferred by the radiography of a thick resolution grid, is estimated to be approximately 400 μm (full width half maximum of the x-ray source Point Spread Function).
Methane flux from coastal salt marshes
NASA Technical Reports Server (NTRS)
Bartlett, K. B.; Harriss, R. C.; Sebacher, D. I.
1985-01-01
It is thought that biological methanogenesis in natural and agricultural wetlands and enteric fermentation in animals are the dominant sources of global tropospheric methane. It is pointed out that the anaerobic soils and sediments, where methanogenesis occurs, predominate in coastal marine wetlands. Coastal marine wetlands are generally believed to be approximately equal in area to freshwater wetlands. For this reason, coastal marine wetlands may be a globally significant source of atmospheric methane. The present investigation is concerned with the results of a study of direct measurements of methane fluxes to the atmosphere from salt marsh soils and of indirect determinations of fluxes from tidal creek waters. In addition, measurements of methane distributions in coastal marine wetland sediments and water are presented. The results of the investigation suggest that marine wetlands provide only a minor contribution to atmospheric methane on a global scale.
Distributed Seismic Moment Fault Model, Spectral Characteristics and Radiation Patterns
NASA Astrophysics Data System (ADS)
Shani-Kadmiel, Shahar; Tsesarsky, Michael; Gvirtzman, Zohar
2014-05-01
We implement a Distributed Seismic Moment (DSM) fault model, a physics-based representation of an earthquake source based on a skewed-Gaussian slip distribution over an elliptical rupture patch, for the purpose of forward modeling of seismic-wave propagation in 3-D heterogeneous medium. The elliptical rupture patch is described by 13 parameters: location (3), dimensions of the patch (2), patch orientation (1), focal mechanism (3), nucleation point (2), peak slip (1), rupture velocity (1). A node based second order finite difference approach is used to solve the seismic-wave equations in displacement formulation (WPP, Nilsson et al., 2007). Results of our DSM fault model are compared with three commonly used fault models: Point Source Model (PSM), Haskell's fault Model (HM), and HM with Radial (HMR) rupture propagation. Spectral features of the waveforms and radiation patterns from these four models are investigated. The DSM fault model best incorporates the simplicity and symmetry of the PSM with the directivity effects of the HMR while satisfying the physical requirements, i.e., smooth transition from peak slip at the nucleation point to zero at the rupture patch border. The implementation of the DSM in seismic-wave propagation forward models comes at negligible computational cost. Reference: Nilsson, S., Petersson, N. A., Sjogreen, B., and Kreiss, H.-O. (2007). Stable Difference Approximations for the Elastic Wave Equation in Second Order Formulation. SIAM Journal on Numerical Analysis, 45(5), 1902-1936.
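The core of the DSM source description, a smooth slip distribution over an elliptical patch that tapers to zero at the border, can be sketched numerically. This is an illustrative construction (a plain Gaussian with a polynomial taper rather than the authors' skewed Gaussian, and assumed patch dimensions), together with the seismic moment it implies:

```python
import numpy as np

# Gaussian slip over an elliptical rupture patch, tapering smoothly to zero
# at the patch border, and the resulting seismic moment M0 = mu * int(slip dA).
a, b = 4000.0, 2000.0        # patch semi-axes (m), assumed values
peak_slip = 1.5              # peak slip (m), assumed
mu = 3.0e10                  # crustal shear modulus (Pa)

x = np.linspace(-a, a, 401)
y = np.linspace(-b, b, 201)
X, Y = np.meshgrid(x, y)
r2 = (X / a) ** 2 + (Y / b) ** 2                  # normalized elliptical radius^2
slip = np.where(r2 <= 1.0,
                peak_slip * np.exp(-3.0 * r2) * (1.0 - r2),  # taper -> 0 at border
                0.0)

dA = (x[1] - x[0]) * (y[1] - y[0])
M0 = mu * slip.sum() * dA                         # seismic moment (N m)
Mw = (2.0 / 3.0) * (np.log10(M0) - 9.1)           # Hanks-Kanamori magnitude
print(f"M0 = {M0:.3e} N m, Mw = {Mw:.2f}")
```

The smooth decay from peak slip to zero at the patch border is precisely the physical requirement the abstract cites as distinguishing the DSM from a Haskell-type fault model with uniform slip.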
DOE Office of Scientific and Technical Information (OSTI.GOV)
Samal, Pramoda Kumar; Jain, Pankaj; Saha, Rajib
We estimate cosmic microwave background (CMB) polarization and temperature power spectra using Wilkinson Microwave Anisotropy Probe (WMAP) 5 year foreground contaminated maps. The power spectrum is estimated by using a model-independent method, which does not utilize directly the diffuse foreground templates nor the detector noise model. The method essentially consists of two steps: (1) removal of diffuse foregrounds contamination by making linear combination of individual maps in harmonic space and (2) cross-correlation of foreground cleaned maps to minimize detector noise bias. For the temperature power spectrum we also estimate and subtract residual unresolved point source contamination in the cross-power spectrum using the point source model provided by the WMAP science team. Our TT, TE, and EE power spectra are in good agreement with the published results of the WMAP science team. We perform detailed numerical simulations to test for bias in our procedure. We find that the bias is small in almost all cases. A negative bias at low l in TT power spectrum has been pointed out in an earlier publication. We find that the bias-corrected quadrupole power (l(l + 1)C{sub l} /2{pi}) is 532 {mu}K{sup 2}, approximately 2.5 times the estimate (213.4 {mu}K{sup 2}) made by the WMAP team.
NASA Astrophysics Data System (ADS)
Pommé, S.
2009-06-01
An analytical model is presented to calculate the total detection efficiency of a well-type radiation detector for photons, electrons and positrons emitted from a radioactive source at an arbitrary position inside the well. The model is well suited to treat a typical set-up with a point source or cylindrical source and vial inside a NaI well detector, with or without lead shield surrounding it. It allows for fast absolute or relative total efficiency calibrations for a wide variety of geometrical configurations and also provides accurate input for the calculation of coincidence summing effects. Depending on its accuracy, it may even be applied in 4π-γ counting, a primary standardisation method for activity. Besides an accurate account of photon interactions, precautions are taken to simulate the special case of 511 keV annihilation quanta and to include realistic approximations for the range of (conversion) electrons and β- and β+ particles.
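The geometric building block of such efficiency models is the solid angle a detector surface subtends at the point source. A minimal sketch for one face (on-axis point source above a circular disk); the paper's model additionally tracks photon attenuation, annihilation quanta and electron ranges, none of which appear here:

```python
import math

def geometric_efficiency(d, r):
    """Fractional solid angle Omega/4pi subtended by a circular detector
    face of radius r at an on-axis point source a distance d above it:
    Omega = 2*pi*(1 - d/sqrt(d**2 + r**2)). This is only the geometric
    part of the total detection efficiency."""
    omega = 2.0 * math.pi * (1.0 - d / math.hypot(d, r))
    return omega / (4.0 * math.pi)

# A source sitting just above the bottom of a well sees nearly 2*pi
# from that face alone; a distant source sees a small solid angle.
print(geometric_efficiency(d=0.1, r=25.0))
print(geometric_efficiency(d=50.0, r=25.0))
```

In a well geometry the side walls add further solid angle on top of this, which is why well detectors approach 4π counting efficiency for sources deep inside the well.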
Naser, Mohamed A.; Patterson, Michael S.
2010-01-01
Reconstruction algorithms are presented for a two-step solution of the bioluminescence tomography (BLT) problem. In the first step, a priori anatomical information provided by x-ray computed tomography or by other methods is used to solve the continuous wave (cw) diffuse optical tomography (DOT) problem. A Taylor series expansion approximates the light fluence rate dependence on the optical properties of each region where first and second order direct derivatives of the light fluence rate with respect to scattering and absorption coefficients are obtained and used for the reconstruction. In the second step, the reconstructed optical properties at different wavelengths are used to calculate the Green’s function of the system. Then an iterative minimization solution based on the L1 norm shrinks the permissible regions where the sources are allowed by selecting points with higher probability to contribute to the source distribution. This provides an efficient BLT reconstruction algorithm with the ability to determine relative source magnitudes and positions in the presence of noise. PMID:21258486
Minimizing the Diameter of a Network Using Shortcut Edges
NASA Astrophysics Data System (ADS)
Demaine, Erik D.; Zadimoghaddam, Morteza
We study the problem of minimizing the diameter of a graph by adding k shortcut edges, for speeding up communication in an existing network design. We develop constant-factor approximation algorithms for different variations of this problem. We also show how to improve the approximation ratios using resource augmentation to allow more than k shortcut edges. We observe a close relation between the single-source version of the problem, where we want to minimize the largest distance from a given source vertex, and the well-known k-median problem. First we show that our constant-factor approximation algorithms for the general case solve the single-source problem within a constant factor. Then, using a linear-programming formulation for the single-source version, we find a (1 + ε)-approximation using O(k log n) shortcut edges. To show the tightness of our result, we prove that any (3/2 - ε)-approximation for the single-source version must use Ω(k log n) shortcut edges assuming P ≠ NP.
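To make the problem concrete, the k = 1 case can be brute-forced: compute the diameter by BFS from every vertex, then try each candidate shortcut. This sketch is exponential in k in general (which is why the paper develops approximation algorithms instead):

```python
from collections import deque
from itertools import combinations

def diameter(n, adj):
    """Graph diameter of a connected graph via BFS from every vertex."""
    best = 0
    for s in range(n):
        dist = [-1] * n
        dist[s] = 0
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if dist[v] < 0:
                    dist[v] = dist[u] + 1
                    q.append(v)
        best = max(best, max(dist))
    return best

def best_single_shortcut(n, edges):
    """Brute-force the k = 1 case: try every non-edge and keep the one
    that minimizes the resulting diameter."""
    adj = [set() for _ in range(n)]
    for u, v in edges:
        adj[u].add(v); adj[v].add(u)
    best_d, best_e = diameter(n, adj), None
    for u, v in combinations(range(n), 2):
        if v not in adj[u]:
            adj[u].add(v); adj[v].add(u)
            d = diameter(n, adj)
            if d < best_d:
                best_d, best_e = d, (u, v)
            adj[u].remove(v); adj[v].remove(u)
    return best_d, best_e

# Path 0-1-2-3-4 has diameter 4; one shortcut (joining the endpoints,
# turning it into a 5-cycle) brings the diameter down to 2.
print(best_single_shortcut(5, [(i, i + 1) for i in range(4)]))
```

Enumerating all size-k shortcut sets costs O(n^(2k)) diameter evaluations, which is exactly the combinatorial blow-up the constant-factor approximation algorithms avoid.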
First-order approximation error analysis of Risley-prism-based beam directing system.
Zhao, Yanyan; Yuan, Yan
2014-12-01
To improve the performance of a Risley-prism system for optical detection and measuring applications, it is necessary to be able to determine the direction of the outgoing beam with high accuracy. In previous works, error sources and their impact on the performance of the Risley-prism system have been analyzed, but their numerical approximation accuracy was not high. In addition, previous pointing error analyses of the Risley-prism system provided results only for the case when the component errors, prism orientation errors, and assembly errors are fixed and known. In this work, the prototype of a Risley-prism system was designed. The first-order approximations of the error analysis were derived and compared with the exact results. The directing errors of a Risley-prism system associated with wedge-angle errors, prism mounting errors, and bearing assembly errors were analyzed based on the exact formula and the first-order approximation. The comparisons indicated that our first-order approximation is accurate. In addition, the combined errors produced by the wedge-angle errors and mounting errors of the two prisms together were derived and in both cases were proved to be the sum of errors caused by the first and the second prism separately. Based on these results, the system error of our prototype was estimated. The derived formulas can be implemented to evaluate beam directing errors of any Risley-prism beam directing system with a similar configuration.
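The first-order picture underlying such analyses is the thin-prism model: each prism deviates the beam by (n - 1)·α in the direction set by its rotation angle, and the two deviations add vectorially. A minimal sketch of that small-angle model (this is the generic paraxial approximation, not the paper's exact ray-trace formulas):

```python
import math

def risley_pointing(n, alpha1, alpha2, theta1, theta2):
    """First-order (thin-prism, paraxial) pointing of a two-prism Risley
    pair: prism i deviates the beam by delta_i = (n - 1) * alpha_i along
    its rotation angle theta_i; the total deviation is the vector sum.
    All angles in radians. Returns (deviation magnitude, azimuth)."""
    d1, d2 = (n - 1.0) * alpha1, (n - 1.0) * alpha2
    dx = d1 * math.cos(theta1) + d2 * math.cos(theta2)
    dy = d1 * math.sin(theta1) + d2 * math.sin(theta2)
    return math.hypot(dx, dy), math.atan2(dy, dx)

alpha = math.radians(10.0)
# Prisms aligned: deviations add; opposed: they cancel to first order.
mag_add, _ = risley_pointing(1.5, alpha, alpha, 0.0, 0.0)
mag_opp, _ = risley_pointing(1.5, alpha, alpha, 0.0, math.pi)
print(math.degrees(mag_add), math.degrees(mag_opp))
```

Because the model is linear in the wedge angles and mounting offsets, perturbing `alpha1` or `theta1` directly shows the additivity of the two prisms' error contributions that the paper proves.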
XMM-Newton Archival Study of the ULX Population in Nearby Galaxies
NASA Technical Reports Server (NTRS)
Winter, Lisa M.; Mushotzky, Richard; Reynolds, Christopher S.
2005-01-01
We have conducted an archival XMM-Newton study of the bright X-ray point sources in 32 nearby galaxies. From our list of approximately 100 point sources, we attempt to determine if there is a low-state counterpart to the Ultraluminous X-ray (ULX) population. Indeed, 16 sources in our sample match the criteria we set for a low-state ULX, namely, L(sub X) greater than 10(exp 38 ergs per second) and a spectrum best fit with an absorbed power law. Further, we find evidence for 26 high-state ULXs which are best fit by a combined blackbody and a power law. As in Galactic black hole systems, the spectral indices, GAMMA, of the low-state objects, as well as the luminosities, tend to be lower than those of the high-state objects. The observed range of blackbody temperatures is 0.1-1 keV with the most luminous systems tending toward the lowest temperatures. We also find a class of object whose properties (luminosity, blackbody temperature, and power law slopes) are very similar to those of galactic stellar mass black holes. In addition, we find a subset of these objects that can be best fit by a Comptonized spectrum similar to that used for Galactic black holes in the very high state, when they are radiating near the Eddington limit.
A 3D modeling approach to complex faults with multi-source data
NASA Astrophysics Data System (ADS)
Wu, Qiang; Xu, Hua; Zou, Xukai; Lei, Hongzhuan
2015-04-01
Fault modeling is a very important step in making an accurate and reliable 3D geological model. Typical existing methods demand enough fault data to be able to construct complex fault models; however, it is well known that the available fault data are generally sparse and undersampled. In this paper, we propose a workflow of fault modeling, which can integrate multi-source data to construct fault models. For faults that are not captured by these data, especially those that are small-scale or approximately parallel to the sections, we propose a fault deduction method to infer the hanging wall and footwall lines after displacement calculation. Moreover, the fault cutting algorithm can supplement the available fault points at locations where faults cut each other. Adding fault points in poorly sampled areas not only makes fault model construction more efficient, but also reduces manual intervention. By using a fault-based interpolation and remeshing the horizons, an accurate 3D geological model can be constructed. The method can naturally simulate geological structures no matter whether the available geological data are sufficient or not. A concrete example of using the method in Tangshan, China, shows that the method can be applied to broad and complex geological areas.
[A landscape ecological approach for urban non-point source pollution control].
Guo, Qinghai; Ma, Keming; Zhao, Jingzhu; Yang, Liu; Yin, Chengqing
2005-05-01
Urban non-point source pollution is a new problem that has appeared with the rapid development of urbanization. The particularity of urban land use and the increase of impervious surface area make urban non-point source pollution differ from agricultural non-point source pollution, and more difficult to control. Best Management Practices (BMPs) are the effective practices commonly applied in controlling urban non-point source pollution, mainly adopting local repairing practices to control the pollutants in surface runoff. Because of the close relationship between urban land use patterns and non-point source pollution, it would be rational to combine landscape ecological planning with local BMPs to control urban non-point source pollution. This requires, first, analyzing and evaluating the influence of landscape structure on water bodies, pollution sources and pollutant removal processes, so as to define the relationships between landscape spatial pattern and non-point source pollution and to identify the key polluted areas; and second, adjusting the existing landscape structure and/or adding new landscape elements to form a new landscape pattern, combining landscape planning and management by incorporating BMPs into the planning to improve urban landscape heterogeneity and to control urban non-point source pollution.
NASA Astrophysics Data System (ADS)
Jones, K. R.; Arrowsmith, S.; Whitaker, R. W.
2012-12-01
The overall mission of the National Center for Nuclear Security (NCNS) Source Physics Experiment at the National Nuclear Security Site (SPE-N) near Las Vegas, Nevada is to improve upon and develop new physics based models for underground nuclear explosions using scaled, underground chemical explosions as proxies. To this end, we use the Rayleigh integral as an approximation to the Helmholtz-Kirchhoff integral, [Whitaker, 2007 and Arrowsmith et al., 2011], to model infrasound generation in the far-field. Infrasound generated by single-point explosive sources above ground can typically be treated as monopole point-sources. While the source is relatively simple, the research needed to model above ground point-sources is complicated by path effects related to the propagation of the acoustic signal, and is outside the scope of this study. In contrast, for explosions that occur below ground, including the SPE explosions, the source region is more complicated but the observation distances are much closer (< 5 km), thus greatly reducing the complication of path effects. In this case, elastic energy from the explosions radiates upward and spreads out, depending on depth, to a more distributed region at the surface. Due to this broad surface perturbation of the atmosphere we cannot model the source as a simple monopole point-source. Instead, we use the analogy of a piston mounted in a rigid, infinite baffle, where the surface area that moves as a result of the explosion is the piston and the surrounding region is the baffle. The area of the "piston" is determined by the depth and explosive yield of the event. In this study we look at data from SPE-N-2 and SPE-N-3. Both shots had an explosive yield of 1 ton at a depth of 45 m. We collected infrasound data with up to eight stations and 32 sensors within a 5 km radius of ground zero. To determine the area of the surface acceleration, we used data from twelve surface accelerometers installed within 100 m radially about ground zero. 
With the accelerometer data defining the vertical motion of the surface, we use the Rayleigh Integral Method, [Whitaker, 2007 and Arrowsmith et al., 2011], to generate a synthetic infrasound pulse to compare to the observed data. Because the phase across the "piston" is not necessarily uniform, constructive and destructive interference will change the shape of the acoustic pulse if observed directly above the source (on-axis) or perpendicular to the source (off-axis). Comparing the observed data to the synthetic data we note that the overall structure of the pulse agrees well and that the differences can be attributed to a number of possibilities, including the sensors used, topography, meteorological conditions, etc. One other potential source of error between the observed and calculated data is that we use a flat, symmetric source region for the "piston" where in reality the source region is not flat and not perfectly symmetric. A primary goal of this work is to better understand and model the relationships between surface area, depth, and yield of underground explosions.
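The baffled-piston analogy above can be sketched numerically: discretize the piston, sum Rayleigh-integral point-source contributions, and check the on-axis result against the known closed form for a uniform circular piston. The dimensions below are assumed, ground-motion-scale values (a 100 m "piston", an observer 300 m overhead), not the actual SPE parameters:

```python
import numpy as np

# Rayleigh integral p = (i*omega*rho*U / 2*pi) * sum(exp(i*k*r)/r * dA)
# for a uniformly vibrating baffled circular piston, evaluated on axis and
# compared with the closed form |p| = 2*rho*c*U*|sin(k*(sqrt(z^2+a^2)-z)/2)|.
rho, c, U = 1.2, 343.0, 1.0          # air density, sound speed, piston velocity
f, a, z = 5.0, 100.0, 300.0          # frequency (Hz), piston radius, height (m)
k = 2.0 * np.pi * f / c
omega = 2.0 * np.pi * f

# Polar grid over the piston surface
nr, nt = 400, 200
r_edges = np.linspace(0.0, a, nr + 1)
r_mid = 0.5 * (r_edges[:-1] + r_edges[1:])
theta = np.linspace(0.0, 2.0 * np.pi, nt, endpoint=False)
R, T = np.meshgrid(r_mid, theta)
dA = R * (a / nr) * (2.0 * np.pi / nt)          # polar area elements r*dr*dtheta

dist = np.sqrt(z**2 + R**2)                      # on axis, independent of theta
p = (1j * omega * rho * U / (2.0 * np.pi)) * np.sum(np.exp(1j * k * dist) / dist * dA)

p_exact = 2.0 * rho * c * U * abs(np.sin(k * (np.sqrt(z**2 + a**2) - z) / 2.0))
print(abs(p), p_exact)
```

In the actual analysis the uniform velocity U is replaced by the measured, spatially varying surface acceleration, which is what produces the on-axis versus off-axis interference effects described in the abstract.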
Progress report on hot particle studies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baum, J.W.; Kaurin, D.G.; Waligorski, M.
1992-02-01
NCRP Report 106 on the effects of hot particles on the skin of pigs, monkeys, and humans was critically reviewed and reassessed. The analysis of the data of Forbes and Mikhail on the effects from activated UC{sub 2} particles, ranging in diameter from 144 {mu}m to 328 {mu}m, led to the formulation of a new model to predict both the threshold for acute ulceration and for ulcer diameter. In this model, a point dose of 27 Gy at a depth of 1.33 mm in tissue will cause an ulcer with a diameter determined by the radius to which this dose extends. Application of the model to the Forbes and Mikhail data obtained with mixed fission product beta particles yielded a "threshold" (5% probability) of 6 {times} 10{sup 9} beta particles from a point source of high energy (2.25 MeV maximum) beta particles on skin. The above model was used to predict that approximately 1.2 {times} 10{sup 10} beta particles from Sr-Y-90 would produce similar effects, since few Sr-90 beta particles reach 1.33 mm depth. These emissions correspond to doses at 70-{mu}m depth in tissue of approximately 5.3 to 5.5 Gy averaged over 1 cm{sup 2}, respectively.
Saha, Mahua; Togo, Ayako; Mizukawa, Kaoruko; Murakami, Michio; Takada, Hideshige; Zakaria, Mohamad P; Chiem, Nguyen H; Tuyen, Bui Cach; Prudente, Maricar; Boonyatumanond, Ruchaya; Sarkar, Santosh Kumar; Bhattacharya, Badal; Mishra, Pravakar; Tana, Touch Seang
2009-02-01
We collected surface sediment samples from 174 locations in India, Indonesia, Malaysia, Thailand, Vietnam, Cambodia, Laos, and the Philippines and analyzed them for polycyclic aromatic hydrocarbons (PAHs) and hopanes. PAHs were widely distributed in the sediments, with comparatively higher concentrations in urban areas (Sigma PAHs: approximately 1000 to approximately 100,000 ng/g-dry) than in rural areas (approximately 10 to approximately 100 ng/g-dry), indicating large sources of PAHs in urban areas. To distinguish petrogenic and pyrogenic sources of PAHs, we calculated the ratios of alkyl PAHs to parent PAHs: methylphenanthrenes to phenanthrene (MP/P), methylpyrenes+methylfluoranthenes to pyrene+fluoranthene (MPy/Py), and methylchrysenes+methylbenz[a]anthracenes to chrysene+benz[a]anthracene (MC/C). Analysis of source materials (crude oil, automobile exhaust, and coal and wood combustion products) gave thresholds of MP/P=0.4, MPy/Py=0.5, and MC/C=1.0 for exclusive combustion origin. All the combustion product samples had the ratios of alkyl PAHs to parent PAHs below these threshold values. Contributions of petrogenic and pyrogenic sources to the sedimentary PAHs were uneven among the homologs: the phenanthrene series had a greater petrogenic contribution, whereas the chrysene series had a greater pyrogenic contribution. All the Indian sediments showed a strong pyrogenic signature with MP/P approximately 0.5, MPy/Py approximately 0.1, and MC/C approximately 0.2, together with depletion of hopanes indicating intensive inputs of combustion products of coal and/or wood, probably due to the heavy dependence on these fuels as sources of energy. In contrast, sedimentary PAHs from all other tropical Asian cities were abundant in alkylated PAHs with MP/P approximately 1-4, MPy/Py approximately 0.3-1, and MC/C approximately 0.2-1.0, suggesting a ubiquitous input of petrogenic PAHs. 
Petrogenic contributions to PAH homologs varied among the countries: largest in Malaysia and smallest in Laos. The higher abundance of alkylated PAHs together with constant hopane profiles suggests widespread inputs of automobile-derived petrogenic PAHs to Asian waters.
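The diagnostic-ratio screening described above can be sketched as a small helper. The thresholds (MP/P = 0.4, MPy/Py = 0.5, MC/C = 1.0) are the values quoted in the abstract; the function names and the example ratio values are illustrative assumptions, not data from the paper.

```python
# Alkyl/parent PAH ratio thresholds for exclusive combustion (pyrogenic) origin,
# as reported in the abstract above.
THRESHOLDS = {"MP/P": 0.4, "MPy/Py": 0.5, "MC/C": 1.0}

def classify_homolog(ratio_name, value):
    """Flag a homolog series as pyrogenic if its alkyl/parent ratio is at or
    below the combustion threshold, otherwise as carrying a petrogenic
    contribution."""
    limit = THRESHOLDS[ratio_name]
    return "pyrogenic" if value <= limit else "petrogenic contribution"

def screen_sediment(ratios):
    """Classify each homolog series of one sediment sample."""
    return {name: classify_homolog(name, v) for name, v in ratios.items()}

# Hypothetical petroleum-influenced sample (values chosen for illustration):
sample = {"MP/P": 2.0, "MPy/Py": 0.8, "MC/C": 0.6}
print(screen_sediment(sample))
```

Note how such a sample would flag petrogenic input in the phenanthrene and pyrene series while the chrysene series still reads as pyrogenic, mirroring the uneven contributions among homologs reported above.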
Publications - GMC 310 | Alaska Division of Geological & Geophysical Surveys
UCS approximations of core (4,309.5'-4,409') from the BP Exploration (Alaska) Inc. Milne Point G-1 well
Using the WSA Model to Test the Parker Spiral Approximation for SEP Event Magnetic Connections
NASA Astrophysics Data System (ADS)
Kahler, S. W.; Arge, C. N.; Smith, D. A.
2016-08-01
In studies of solar energetic (E > 10 MeV) particle (SEP) events the Parker spiral (PS) field approximation, based only on the measured 1 AU solar wind (SW) speed Vsw, is nearly always used to determine the coronal or photospheric source locations of the 1 AU magnetic fields. There is no objective way to validate that approximation, but here we seek guidelines for optimizing its application. We first review recent SEP studies showing the extensive use of the PS approximation with various assumptions about coronal and photospheric source fields. We then run the Wang-Sheeley-Arge (WSA) model over selected Carrington rotations (CRs) to track both the photospheric and 5 R_{⊙} source locations of the forecasted 1 AU SW, allowing us to compare those WSA sources with the PS sources inferred from the WSA Vsw forecast. We compile statistics of the longitude differences (WSA-PS) for all the CRs and discuss the limitations of using the WSA model to validate the PS approximation. Over nearly all of each CR the PS and WSA source longitudes agree to within several degrees. The agreement is poor only in the slow-fast SW interaction regions characterized by high-speed events (HSEs), where the longitude differences can reach several tens of degrees. This result implies that SEP studies should limit use of the PS approximation around HSEs and use magnetic field polarities as an additional check of solar source connections.
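The Parker spiral mapping referred to above amounts to ballistic back-mapping at constant solar-wind speed: the field line reaching the observer is rooted a longitude offset Δφ = Ω(r − r_s)/Vsw to the west. A minimal sketch, where the 5 solar-radii source surface matches the WSA outer boundary and the function name is illustrative:

```python
import math

OMEGA_SUN = 2.0 * math.pi / (25.38 * 86400.0)  # sidereal solar rotation rate, rad/s
AU_KM = 1.496e8                                # 1 AU in km

def parker_source_longitude_offset(v_sw_kms, r_au=1.0, r_source_au=0.023):
    """Westward longitude offset (degrees) of the solar source of the field
    line reaching r_au, for a constant solar-wind speed v_sw_kms (km/s).
    r_source_au ~ 5 solar radii, matching the WSA source surface."""
    dr_km = (r_au - r_source_au) * AU_KM
    dphi_rad = OMEGA_SUN * dr_km / v_sw_kms
    return math.degrees(dphi_rad)

# The classic result: roughly 60 degrees west of the observer for 400 km/s wind
print(round(parker_source_longitude_offset(400.0), 1))
```

Faster wind maps to smaller offsets, which is why the PS and WSA sources diverge most strongly in the slow-fast stream interaction regions discussed above.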
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sweezy, Jeremy Ed
A photon next-event fluence estimator at a point has been implemented in the Monte Carlo Application Toolkit (MCATK). The next-event estimator provides an expected value estimator for the flux at a point due to all source and collision events. An advantage of the next-event estimator over track-length estimators, which are normally employed in MCATK, is that flux estimates can be made in locations that have no random walk particle tracks. The next-event estimator allows users to calculate radiographs and estimate response for detectors outside of the modeled geometry. The next-event estimator is not yet accessible through the MCATK FlatAPI for C and Fortran. The next-event estimator in MCATK has been tested against MCNP6 using 5 suites of test problems. No issues were found in the MCATK implementation. One issue was found in the exclusion radius approximation in MCNP6. The theory, implementation, and testing are described in this document.
The pulsar planet production process
NASA Technical Reports Server (NTRS)
Phinney, E. S.; Hansen, B. M. S.
1993-01-01
Most plausible scenarios for the formation of planets around pulsars end with a disk of gas around the pulsar. The supplicant author then points to the solar system to bolster faith in the miraculous transfiguration of gas into planets. We here investigate this process of transfiguration. We derive analytic sequences of quasi-static disks which give good approximations to exact solutions of the disk diffusion equation with realistic opacity tables. These allow quick and efficient surveys of parameter space. We discuss the outward transfer of mass in accretion disks and the resulting timescale constraints, the effects of illumination by the central source on the disk and dust within it, and the effects of the widely different elemental compositions of the disks in the various scenarios, and their extensions to globular clusters. We point out where significant uncertainties exist in the appropriate grain opacities, and in the effect of illumination and winds from the neutron star.
Scattering of focused ultrasonic beams by cavities in a solid half-space.
Rahni, Ehsan Kabiri; Hajzargarbashi, Talieh; Kundu, Tribikram
2012-08-01
The ultrasonic field generated by a point focused acoustic lens placed in a fluid medium adjacent to a solid half-space, containing one or more spherical cavities, is modeled. The semi-analytical distributed point source method (DPSM) is followed for the modeling. This technique properly takes into account the interaction effect between the cavities placed in the focused ultrasonic field, fluid-solid interface and the lens surface. The approximate analytical solution that is available in the literature for the single cavity geometry is very restrictive and cannot handle multiple cavity problems. Finite element solutions for such problems are also prohibitively time consuming at high frequencies. Solution of this problem is necessary to predict when two cavities placed in close proximity inside a solid can be distinguished by an acoustic lens placed outside the solid medium and when such distinction is not possible.
High-speed spatial scanning pyrometer
NASA Technical Reports Server (NTRS)
Cezairliyan, A.; Chang, R. F.; Foley, G. M.; Miller, A. P.
1993-01-01
A high-speed spatial scanning pyrometer has been designed and developed to measure spectral radiance temperatures at multiple target points along the length of a rapidly heating/cooling specimen in dynamic thermophysical experiments at high temperatures (above about 1800 K). The design, which is based on a self-scanning linear silicon array containing 1024 elements, enables the pyrometer to measure spectral radiance temperatures (nominally at 650 nm) at 1024 equally spaced points along a 25-mm target length. The elements of the array are sampled consecutively every 1 microsec, thereby permitting one cycle of measurements to be completed in approximately 1 msec. Procedures for calibration and temperature measurement as well as the characteristics and performance of the pyrometer are described. The details of sources and estimated magnitudes of possible errors are given. An example of measurements of radiance temperatures along the length of a tungsten rod, during its cooling following rapid resistive pulse heating, is presented.
Elementary Theoretical Forms for the Spatial Power Spectrum of Earth's Crustal Magnetic Field
NASA Technical Reports Server (NTRS)
Voorhies, C.
1998-01-01
The magnetic field produced by magnetization in Earth's crust and lithosphere can be distinguished from the field produced by electric currents in Earth's core because the spatial magnetic power spectrum of the crustal field differs from that of the core field. Theoretical forms for the spectrum of the crustal field are derived by treating each magnetic domain in the crust as the point source of a dipole field. The geologic null-hypothesis that such moments are uncorrelated is used to obtain the magnetic spectrum expected from a randomly magnetized, or unstructured, spherical crust of negligible thickness. This simplest spectral form is modified to allow for uniform crustal thickness, ellipsoidality, and the polarization of domains by a periodically reversing, geocentric axial dipole field from Earth's core. Such spectra are intended to describe the background crustal field. Magnetic anomalies due to correlated magnetization within coherent geologic structures may well be superimposed upon this background; yet representing each such anomaly with a single point dipole may lead to similar spectral forms. Results from attempts to fit these forms to observational spectra, determined via spherical harmonic analysis of MAGSAT data, are summarized in terms of amplitude, source depth, and misfit. Each theoretical spectrum reduces to a source factor multiplied by the usual exponential function of spherical harmonic degree n due to geometric attenuation with altitude above the source layer. The source factors always vary with n and are approximately proportional to n(exp 3) for degrees 12 through 120. The theoretical spectra are therefore not directly proportional to an exponential function of spherical harmonic degree n. There is no radius at which these spectra are flat, level, or otherwise independent of n.
NASA Technical Reports Server (NTRS)
Tam, Christopher K. W.; Webb, Jay C.
1994-01-01
In this paper finite-difference solutions of the Helmholtz equation in an open domain are considered. By using a second-order central difference scheme and the Bayliss-Turkel radiation boundary condition, reasonably accurate solutions can be obtained when the number of grid points per acoustic wavelength used is large. However, when a smaller number of grid points per wavelength is used, excessive reflections occur which tend to overwhelm the computed solutions. These excessive reflections are due to the incompatibility between the governing finite difference equation and the Bayliss-Turkel radiation boundary condition, which was developed from the asymptotic solution of the partial differential equation. To obtain compatibility, the radiation boundary condition should instead be constructed from the asymptotic solution of the finite difference equation. Examples are provided using the improved radiation boundary condition based on the asymptotic solution of the governing finite difference equation. The computed results are free of reflections even when only five grid points per wavelength are used. The improved radiation boundary condition has also been tested for problems with complex acoustic sources and sources embedded in a uniform mean flow; in all these cases no reflected waves could be detected. The present method of developing a radiation boundary condition is also applicable to higher order finite difference schemes. The use of finite difference approximation inevitably introduces anisotropy into the governing field equation. The effect of anisotropy is to distort the directional distribution of the amplitude and phase of the computed solution. It can be quite large when the number of grid points per wavelength used in the computation is small. A way to correct this effect is proposed. The correction factor developed from the asymptotic solutions is source independent and, hence, can be determined once and for all.
The effectiveness of the correction factor in providing improvements to the computed solution is demonstrated in this paper.
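The grid-resolution sensitivity described above can be illustrated with the standard dispersion analysis of the second-order central scheme: substituting exp(ikx) into the discrete operator shows that the grid actually propagates an effective wavenumber (2/h) sin(kh/2) rather than k. A minimal sketch (not the authors' code) of how the phase error grows as the points-per-wavelength count drops:

```python
import math

def effective_wavenumber(k, h):
    """Wavenumber propagated by the second-order central scheme: applying
    (u[j+1] - 2*u[j] + u[j-1]) / h**2 to exp(i*k*x) yields -k_eff**2 * u."""
    return (2.0 / h) * math.sin(k * h / 2.0)

def phase_error(points_per_wavelength):
    """Relative error |k_eff - k| / k for a grid resolving one wavelength
    with the given number of points."""
    k = 1.0
    h = 2.0 * math.pi / (k * points_per_wavelength)
    return abs(effective_wavenumber(k, h) - k) / k

# Error shrinks quadratically as the grid is refined
for n in (5, 10, 20):
    print(n, round(phase_error(n), 4))
```

At five points per wavelength the discrete wavenumber is already off by over 6%, which is why a boundary condition built from the continuous asymptotics mismatches the discrete solution there.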
Approximate supernova remnant dynamics with cosmic ray production
NASA Technical Reports Server (NTRS)
Voelk, H. J.; Drury, L. O.; Dorfi, E. A.
1985-01-01
Supernova explosions are the most violent and energetic events in the galaxy and have long been considered probable sources of Cosmic Rays. Recent shock acceleration models treating the Cosmic Rays (CRs) as test particles in a prescribed Supernova Remnant (SNR) evolution indeed indicate an approximate power-law momentum distribution f sub source (p) proportional to p(exp -a) for the particles ultimately injected into the Interstellar Medium (ISM). This spectrum extends almost to the momentum p = 1 million GeV/c, where the break in the observed spectrum occurs. The calculated power-law index, a less than or approximately equal to 4.2, agrees with that inferred for the galactic CR sources. The absolute CR intensity can, however, not be well determined in such a test particle approximation.
pH of Aerosols in a Polluted Atmosphere: Source Contributions to Highly Acidic Aerosol.
Shi, Guoliang; Xu, Jiao; Peng, Xing; Xiao, Zhimei; Chen, Kui; Tian, Yingze; Guan, Xinbei; Feng, Yinchang; Yu, Haofei; Nenes, Athanasios; Russell, Armistead G
2017-04-18
Acidity (pH) plays a key role in the physical and chemical behavior of PM 2.5. However, how specific PM sources impact aerosol pH is rarely considered. Performing source apportionment of PM 2.5 allows a unique link between sources and the pH of aerosol in this polluted city. Hourly water-soluble (WS) ions of PM 2.5 were measured online from December 25th, 2014 to June 19th, 2015 in a northern city in China. Five sources were resolved, including secondary nitrate (41%), secondary sulfate (26%), coal combustion (14%), mineral dust (11%), and vehicle exhaust (9%). The influence of source contributions on pH was estimated by ISORROPIA-II. The lowest aerosol pH levels were found at low WS-ion levels; pH then increased with increasing total ion levels until high ion levels occurred, at which point the aerosol became more acidic as both sulfate and nitrate increased. Ammonium levels increased nearly linearly with sulfate and nitrate until approximately 20 μg m-3, supporting that the ammonium in the aerosol was more limited by thermodynamics than by source limitations, and aerosol pH responded more to the contributions of sources such as dust than to levels of sulfate. Commonly used pH indicator ratios were not indicative of the pH estimated using the thermodynamic model.
An X-Ray Counterpart of HESS J1427-608 Discovered with Suzaku
NASA Astrophysics Data System (ADS)
Fujinaga, Takahisa; Mori, Koji; Bamba, Aya; Kimura, Shoichi; Dotani, Tadayasu; Ozaki, Masanobu; Matsuta, Keiko; Pülhofer, Gerd; Uchiyama, Hideki; Hiraga, Junko S.; Matsumoto, Hironori; Terada, Yukikatsu
2013-06-01
We report on the discovery of an X-ray counterpart of the unidentified very high-energy gamma-ray source HESS J1427-608. In the sky field coincident with HESS J1427-608, an extended source was found in the 2-8 keV band and was designated Suzaku J1427-6051. Its X-ray radial profile has an extension of σ = 0.9' ± 0.1' when approximated by a Gaussian. The spectrum was well fitted by an absorbed power law with N_H = (1.1 ± 0.3) × 10^23 cm^-2, Γ = 3.1 (+0.6/-0.5), and an unabsorbed flux F_X = (9 +4/-2) × 10^-13 erg s^-1 cm^-2 in the 2-10 keV band. Using XMM-Newton archive data, we found seven point sources in the Suzaku source region. However, because their total flux and absorbing column densities are more than an order of magnitude lower than those of Suzaku J1427-6051, we consider that they are unrelated to the Suzaku source. Thus, Suzaku J1427-6051 is considered to be a truly diffuse source and an X-ray counterpart of HESS J1427-608. The possible nature of HESS J1427-608 is discussed based on the observational properties.
NASA Astrophysics Data System (ADS)
Royston, Thomas J.; Yazicioglu, Yigit; Loth, Francis
2003-02-01
The response at the surface of an isotropic viscoelastic medium to buried fundamental acoustic sources is studied theoretically, computationally and experimentally. Finite and infinitesimal monopole and dipole sources within the low audible frequency range (40-400 Hz) are considered. Analytical and numerical integral solutions that account for compression, shear and surface wave response to the buried sources are formulated and compared with numerical finite element simulations and experimental studies on finite dimension phantom models. It is found that at low audible frequencies, compression and shear wave propagation from point sources can both be significant, with shear wave effects becoming less significant as frequency increases. Additionally, it is shown that simple closed-form analytical approximations based on an infinite medium model agree well with numerically obtained "exact" half-space solutions for the frequency range and material of interest in this study. The focus here is on developing a better understanding of how biological soft tissue affects the transmission of vibro-acoustic energy from biological acoustic sources below the skin surface, whose typical spectral content is in the low audible frequency range. Examples include sound radiated from pulmonary, gastro-intestinal and cardiovascular system functions, such as breath sounds, bowel sounds and vascular bruits, respectively.
Inferring Models of Bacterial Dynamics toward Point Sources
Jashnsaz, Hossein; Nguyen, Tyler; Petrache, Horia I.; Pressé, Steve
2015-01-01
Experiments have shown that bacteria can be sensitive to small variations in chemoattractant (CA) concentrations. Motivated by these findings, our focus here is on a regime rarely studied in experiments: bacteria tracking point CA sources (such as food patches or even prey). In tracking point sources, the CA detected by bacteria may show very large spatiotemporal fluctuations which vary with distance from the source. We present a general statistical model to describe how bacteria locate point sources of food on the basis of stochastic event detection, rather than CA gradient information. We show how all model parameters can be directly inferred from single cell tracking data even in the limit of high detection noise. Once parameterized, our model recapitulates bacterial behavior around point sources such as the “volcano effect”. In addition, while the search by bacteria for point sources such as prey may appear random, our model identifies key statistical signatures of a targeted search for a point source given any arbitrary source configuration. PMID:26466373
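A minimal sketch of the kind of stochastic event-detection model described above, assuming steady-state diffusion from a point source (concentration ~ 1/r) and Poisson-like detection. All constants and names here are illustrative assumptions, not the paper's fitted parameters:

```python
import random

def detection_rate(r, source_strength=10.0, baseline=0.1):
    """Mean detection-event rate (1/s) at distance r from a point source.
    Steady-state diffusion from a point source gives concentration ~ 1/r;
    the constants are illustrative, not values inferred in the paper."""
    return baseline + source_strength / r

def simulate_detections(r, dt=0.01, t_total=10.0, seed=0):
    """Count stochastic detection events over t_total seconds at distance r,
    drawing one Bernoulli trial with probability rate*dt per time step."""
    rng = random.Random(seed)
    rate = detection_rate(r)
    return sum(1 for _ in range(int(t_total / dt)) if rng.random() < rate * dt)

# Closer to the source the mean count is higher, but individual runs fluctuate
print(simulate_detections(1.0), simulate_detections(10.0))
```

The point of such a model is that both the mean and the variance of the detected events carry distance information, which is what allows the parameters to be inferred from single-cell tracks even under high detection noise.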
Boiling point measurement of a small amount of brake fluid by thermocouple and its application.
Mogami, Kazunari
2002-09-01
This study describes a new method for measuring the boiling point of a small amount of brake fluid using a thermocouple and a pear-shaped flask. The boiling point of brake fluid was directly measured with an accuracy within approximately 3 °C of that determined by the Japanese Industrial Standards method, even though the sample volume was only a few milliliters. The method was applied to measure the boiling points of brake fluid samples from automobiles. It was clear that the boiling points of brake fluid from some automobiles had dropped from about 230 °C to approximately 140 °C, and that one of the samples from the wheel cylinder was approximately 45 °C lower than brake fluid from the reserve tank. It is essential to take samples from the wheel cylinder, as this is most easily subjected to heating.
Paladino, Ombretta; Moranda, Arianna; Seyedsalehi, Mahdi
2017-01-01
A procedure for assessing harbour pollution by heavy metals and PAH and the possible sources of contamination is proposed. The procedure is based on a ratio-matching method applied to the results of principal component analysis (PCA), and it allows discrimination between point and nonpoint sources. The approach can be adopted when many sources of pollution can contribute in a very narrow coastal ecosystem, both internal and outside but close to the harbour, and was used to identify the possible point sources of contamination in a Mediterranean Harbour (Port of Vado, Savona, Italy). 235 sediment samples were collected in 81 sampling points during four monitoring campaigns and 28 chemicals were searched for within the collected samples. PCA of total samples allowed the assessment of 8 main possible point sources, while the refining ratio-matching identified 1 sampling point as a possible PAH source, 2 sampling points as Cd point sources, and 3 sampling points as C > 12 point sources. By a map analysis it was possible to assess two internal sources of pollution directly related to terminals activity. The study is the prosecution of a previous work aimed at assessing Savona-Vado Harbour pollution levels and suggested strategies to regulate the harbour activities.
Efficient Implementation of an Optimal Interpolator for Large Spatial Data Sets
NASA Technical Reports Server (NTRS)
Memarsadeghi, Nargess; Mount, David M.
2007-01-01
Interpolating scattered data points is a problem of wide ranging interest. A number of approaches for interpolation have been proposed both from theoretical domains such as computational geometry and in application fields such as geostatistics. Our motivation arises from geological and mining applications. In many instances data can be costly to compute and are available only at nonuniformly scattered positions. Because of the high cost of collecting measurements, high accuracy is required in the interpolants. One of the most popular interpolation methods in this field is called ordinary kriging. It is popular because it is a best linear unbiased estimator. The price for its statistical optimality is that the estimator is computationally very expensive. This is because the value of each interpolant is given by the solution of a large dense linear system. In practice, kriging problems have been solved approximately by restricting the domain to a small local neighborhood of points that lie near the query point. Determining the proper size for this neighborhood is solved by ad hoc methods, and it has been shown that this approach leads to undesirable discontinuities in the interpolant. Recently a more principled approach to approximating kriging has been proposed based on a technique called covariance tapering. This process achieves its efficiency by replacing the large dense kriging system with a much sparser linear system. This technique has been applied to a restriction of our problem, called simple kriging, which is not unbiased for general data sets. In this paper we generalize these results by showing how to apply covariance tapering to the more general problem of ordinary kriging. Through experimentation we demonstrate the space and time efficiency and accuracy of approximating ordinary kriging through the use of covariance tapering combined with iterative methods for solving large sparse systems. 
We demonstrate our approach on large data sizes arising both from synthetic sources and from real applications.
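A minimal 1-D sketch of ordinary kriging with a tapered covariance, assuming an exponential covariance model and a spherical-type compactly supported taper (both illustrative choices, not necessarily those used in the paper). The system is solved densely here for clarity; the benefit the paper exploits is that the tapered matrix is sparse, so large systems can be handled with sparse iterative solvers.

```python
import numpy as np

def exp_cov(d, sill=1.0, rng=2.0):
    """Exponential covariance model."""
    return sill * np.exp(-d / rng)

def spherical_taper(d, theta=3.0):
    """Compactly supported taper: positive for d < theta, exactly zero beyond,
    which is what makes the tapered covariance matrix sparse."""
    t = np.clip(d / theta, 0.0, 1.0)
    return (1.0 - 1.5 * t + 0.5 * t**3) * (d < theta)

def ordinary_kriging(x, y, x0, cov=exp_cov, taper=spherical_taper):
    """Ordinary-kriging prediction at x0 with a tapered covariance.
    Solves the augmented system [C 1; 1' 0][w; mu] = [c0; 1], where the
    Lagrange multiplier mu enforces the unbiasedness constraint sum(w) = 1."""
    d = np.abs(x[:, None] - x[None, :])
    C = cov(d) * taper(d)
    n = len(x)
    A = np.zeros((n + 1, n + 1))
    A[:n, :n] = C
    A[:n, n] = 1.0
    A[n, :n] = 1.0
    d0 = np.abs(x - x0)
    b = np.append(cov(d0) * taper(d0), 1.0)
    w = np.linalg.solve(A, b)[:n]
    return w @ y

x = np.linspace(0.0, 10.0, 21)   # scattered sample sites (uniform for brevity)
y = np.sin(x)                    # observed values
print(round(float(ordinary_kriging(x, y, 5.25)), 3))
```

As with exact kriging, the tapered estimator still interpolates the data (predicting at a sample site returns the observed value); tapering only perturbs predictions between sites.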
Gravity-height correlations for unrest at calderas
NASA Astrophysics Data System (ADS)
Berrino, G.; Rymer, H.; Brown, G. C.; Corrado, G.
1992-11-01
Calderas represent the sites of the world's most serious volcanic hazards. Although eruptions are not frequent at such structures on the scale of human lifetimes, there are nevertheless often physical changes at calderas that are measurable over periods of years or decades. Such calderas are said to be in a state of unrest, and it is by studying the nature of this unrest that we may begin to understand the dynamics of eruption precursors. Here we review combined gravity and elevation data from several restless calderas, and present new data on their characteristic signatures during periods of inflation and deflation. We find that unless the Bouguer gravity anomaly at a caldera is extremely small, the free-air gradient used to correct gravity data for observed elevation changes must be the measured or calculated gradient, and not the theoretical gradient, use of which may introduce significant errors. In general, there are two models that fit most of the available data. The first involves a Mogi-type point source, and the second is a Bouguer-type infinite horizontal plane source. The density of the deforming material (usually a magma chamber) is calculated from the gravity and ground deformation data, and the best fitting model is, to a first approximation, the one producing the most realistic density. No realistic density is obtained where there are real density changes, or where the data do not fit the point source or slab model. We find that a point source model fits most of the available data, and that most data are for periods of caldera inflation. The limited examples of deflation from large silicic calderas indicate that the amount of mass loss, or magma drainage, is usually much less than the mass gain during the preceding magma intrusion. In contrast, deflationary events at basaltic calderas formed in extensional tectonic environments are associated with more significant mass loss as magma is injected into the associated fissure swarms.
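The slab (Bouguer-type) interpretation above can be sketched as follows: after removing the free-air gradient from the observed gravity/height ratio, the residual gradient of an infinite horizontal slab equals 2πGρ, from which the source density follows. The theoretical free-air gradient is used below for simplicity, and the example numbers are illustrative, not data from a specific caldera:

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
FREE_AIR = -308.6    # theoretical free-air gradient, microGal per metre of uplift

def slab_density(dg_microgal, dh_m, free_air=FREE_AIR):
    """Density (kg/m^3) of an infinite-slab (Bouguer-type) source inferred from
    a gravity change dg (microGal) accompanying uplift dh (m). The residual
    gradient left after the free-air correction equals 2*pi*G*rho."""
    residual = dg_microgal / dh_m - free_air   # microGal/m
    residual_si = residual * 1e-8              # 1 microGal = 1e-8 m/s^2
    return residual_si / (2.0 * math.pi * G)

# Illustrative inflation episode: 0.5 m of uplift with a net -100 microGal change
print(round(slab_density(-100.0, 0.5)))
```

For these example numbers the inferred density comes out near 2600 kg/m^3, the kind of "realistic" magma density the review uses to discriminate the best-fitting source model.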
NASA Technical Reports Server (NTRS)
Fowler, J. W.; Acquaviva, V.; Ade, P. A. R.; Aguirre, P.; Amiri, M.; Appel, J. W.; Barrientos, L. F.; Bassistelli, E. S.; Bond, J. R.; Brown, B.;
2010-01-01
We present a measurement of the angular power spectrum of the cosmic microwave background (CMB) radiation observed at 148 GHz. The measurement uses maps with 1.4' angular resolution made with data from the Atacama Cosmology Telescope (ACT). The observations cover 228 deg(sup 2) of the southern sky, in a 4.2 deg-wide strip centered on declination 53 deg South. The CMB at arcminute angular scales is particularly sensitive to the Silk damping scale, to the Sunyaev-Zel'dovich (SZ) effect from galaxy clusters, and to emission by radio sources and dusty galaxies. After masking the 108 brightest point sources in our maps, we estimate the power spectrum between 600 less than l less than 8000 using the adaptive multi-taper method to minimize spectral leakage and maximize use of the full data set. Our absolute calibration is based on observations of Uranus. To verify the calibration and test the fidelity of our map at large angular scales, we cross-correlate the ACT map to the WMAP map and recover the WMAP power spectrum from 250 less than l less than 1150. The power beyond the Silk damping tail of the CMB (l approximately 5000) is consistent with models of the emission from point sources. We quantify the contribution of SZ clusters to the power spectrum by fitting to a model normalized to sigma(sub 8) = 0.8. We constrain the model's amplitude A(sub SZ) less than 1.63 (95% CL). If interpreted as a measurement of sigma(sub 8), this implies sigma(sup SZ)(sub 8) less than 0.86 (95% CL) given our SZ model. A fit of ACT and WMAP five-year data jointly to a 6-parameter LambdaCDM model plus point sources and the SZ effect is consistent with these results.
Evaluating Air-Quality Models: Review and Outlook.
NASA Astrophysics Data System (ADS)
Weil, J. C.; Sykes, R. I.; Venkatram, A.
1992-10-01
Over the past decade, much attention has been devoted to the evaluation of air-quality models, with emphasis on model performance in predicting the high concentrations that are important in air-quality regulations. This paper stems from our belief that this practice needs to be expanded to 1) evaluate model physics and 2) deal with the large natural or stochastic variability in concentration. The variability is represented by the root-mean-square fluctuating concentration (c') about the mean concentration (C) over an ensemble - a given set of meteorological, source, etc. conditions. Most air-quality models used in applications predict C, whereas observations are individual realizations drawn from an ensemble. For c' comparable to or larger than C, large residuals exist between predicted and observed concentrations, which confuse model evaluations. This paper addresses ways of evaluating model physics in light of the large c'; the focus is on elevated point-source models. Evaluation of model physics requires the separation of the mean model error - the difference between the predicted and observed C - from the natural variability. A residual analysis is shown to be an effective way of doing this. Several examples demonstrate the usefulness of residuals as well as correlation analyses and laboratory data in judging model physics. In general, c' models and predictions of the probability distribution of the fluctuating concentration (c) are in the developmental stage, with laboratory data playing an important role. Laboratory data from point-source plumes in a convection tank show that the distribution of c approximates a self-similar distribution along the plume center plane, a useful result in a residual analysis. At present, there is one model - ARAP - that predicts C, c', and the distribution of c for point-source plumes. This model is more computationally demanding than other dispersion models (for C only) and must be demonstrated as a practical tool. 
However, it predicts an important quantity for applications- the uncertainty in the very high and infrequent concentrations. The uncertainty is large and is needed in evaluating operational performance and in predicting the attainment of air-quality standards.
Non-point source pollution is a diffuse source that is difficult to measure and is highly variable due to different rain patterns and other climatic conditions. In many areas, however, non-point source pollution is the greatest source of water quality degradation. Presently, stat...
Schneider, S; Meyer, C; Yamamoto, S; Solle, D
2009-08-01
Starting from 1 January 2007, electronic locking devices based on proof-of-age (via electronic cash cards or a European driving licence) were installed in approximately 500,000 vending machines across Germany to restrict the purchase of cigarettes to those over the age of 16. This study examined changes in the number of tobacco vending machines before and after the introduction of these new measures. The total number of commercial tobacco sources in 2 selected districts (70,000 inhabitants) in Cologne was recorded and mapped. This major German city was the ideal setting for this study as investigators were able to use existing sociogeographical data from the area. A complete inventory was compiled in autumn 2005 and 2007. A total of 780 students aged 12 to 15 were also interviewed in the study areas. The main outcome measures were quantities and locations of commercial tobacco sources. Between 2005 and 2007 the total number of tobacco sources decreased from 315 to 277 within the study area. Although the most obvious reduction was detected in the number of outdoor vending machines (-48%), the number of indoor vending machines also decreased by 8%. Adolescents changed from vending machines to other sources for cigarettes, particularly kiosks or friends (+31 percentage points usage rate, p<0.001; +35 percentage points usage rate, p<0.001, respectively). Although the number of tobacco vending machines decreased, this has not had a significant impact on cigarette acquisition by underage smokers as they were able to circumvent this new security measure in several different ways.
New solutions with accelerated expansion in string theory
Dodelson, Matthew; Dong, Xi; Silverstein, Eva; ...
2014-12-05
We present concrete solutions with accelerated expansion in string theory, requiring a small, tractable list of stress energy sources. We explain how this construction (and others in progress) evades previous no go theorems for simple accelerating solutions. Our solutions respect an approximate scaling symmetry and realize discrete sequences of values for the equation of state, including one with an accumulation point at w = –1 and another accumulating near w = –1/3 from below. In another class of models, a density of defects generates scaling solutions with accelerated expansion. Here, we briefly discuss potential applications to dark energy phenomenology, and to holography for cosmology.
Scattering of point particles by black holes: Gravitational radiation
NASA Astrophysics Data System (ADS)
Hopper, Seth; Cardoso, Vitor
2018-02-01
Gravitational waves can teach us not only about sources and the environment where they were generated, but also about the gravitational interaction itself. Here we study the features of gravitational radiation produced during the scattering of a pointlike mass by a black hole. Our results are exact (to numerical error) at any order in a velocity expansion, and are compared against various approximations. At large impact parameter and relatively small velocities our results agree to within percent level with various post-Newtonian and weak-field results. Further, we find good agreement with scaling predictions in the weak-field/high-energy regime. Lastly, we achieve striking agreement with zero-frequency estimates.
Infrared imaging spectroscopy of the Galactic center - Distribution and motions of the ionized gas
NASA Technical Reports Server (NTRS)
Herbst, T. M.; Beckwith, S. V. W.; Forrest, W. J.; Pipher, J. L.
1993-01-01
High spatial and spectral resolution IR images of the Galactic center were taken in the Br-gamma recombination line of hydrogen. A coherent filament of gas is seen extending from north of IRS 1, curving around the IRS 16/Sgr A complex, and continuing to the southwest. Nine stellar sources have associated Br-gamma emission. The total Br-gamma line flux in the filament is approximately 3 x 10 exp -15 W/sq m. The distribution and kinematics of the northern arm suggest orbital motion; the observations are accordingly fit with elliptical orbits in the field of a central point mass.
A-posteriori error estimation for the finite point method with applications to compressible flow
NASA Astrophysics Data System (ADS)
Ortega, Enrique; Flores, Roberto; Oñate, Eugenio; Idelsohn, Sergio
2017-08-01
An a-posteriori error estimate with application to inviscid compressible flow problems is presented. The estimate is a surrogate measure of the discretization error, obtained from an approximation to the truncation terms of the governing equations. This approximation is calculated from the discrete nodal differential residuals using a reconstructed solution field on a modified stencil of points. Both the error estimation methodology and the flow solution scheme are implemented using the Finite Point Method, a meshless technique enabling higher-order approximations and reconstruction procedures on general unstructured discretizations. The performance of the proposed error indicator is studied and applications to adaptive grid refinement are presented.
Many-body perturbation theory using the density-functional concept: beyond the GW approximation.
Bruneval, Fabien; Sottile, Francesco; Olevano, Valerio; Del Sole, Rodolfo; Reining, Lucia
2005-05-13
We propose an alternative formulation of many-body perturbation theory that uses the density-functional concept. Instead of the usual four-point integral equation for the polarizability, we obtain a two-point one, which leads to excellent optical absorption and energy-loss spectra. The corresponding three-point vertex function and self-energy are then simply calculated via an integration, for any level of approximation. Moreover, we show the direct impact of this formulation on the time-dependent density-functional theory. Numerical results for the band gap of bulk silicon and solid argon illustrate corrections beyond the GW approximation for the self-energy.
Roots of polynomials by ratio of successive derivatives
NASA Technical Reports Server (NTRS)
Crouse, J. E.; Putt, C. W.
1972-01-01
An order of magnitude study of the ratios of successive polynomial derivatives yields information about the number of roots at an approached root point and the approximate location of a root point from a nearby point. The location approximation improves as a root is approached, so a powerful convergence procedure becomes available. These principles are developed into a computer program which finds the roots of polynomials with real number coefficients.
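The derivative-ratio idea can be sketched in a few lines: near a root r of multiplicity m, p/p' is approximately (x - r)/m and p'/p'' is approximately (x - r)/(m - 1), so two successive ratios yield both a multiplicity estimate and a multiplicity-aware Newton step. This is a minimal sketch in the spirit of the abstract, not the original NTRS program; all names are illustrative.

```python
def polyval(c, x):
    """Horner evaluation; c = [c_n, ..., c_0], highest degree first."""
    r = 0.0
    for a in c:
        r = r * x + a
    return r

def polyder(c):
    """Coefficients of the derivative polynomial."""
    n = len(c) - 1
    return [a * (n - i) for i, a in enumerate(c[:-1])]

def root_near(c, x, tol=1e-12, itmax=100):
    """Multiplicity-aware Newton iteration built from ratios of
    successive derivatives."""
    d1, d2 = polyder(c), polyder(polyder(c))
    for _ in range(itmax):
        p, p1, p2 = polyval(c, x), polyval(d1, x), polyval(d2, x)
        if p1 == 0:
            break
        a = p / p1                          # ~ (x - r)/m near the root
        b = p1 / p2 if p2 != 0 else a       # ~ (x - r)/(m - 1)
        m = max(round(b / (b - a)), 1) if b != a else 1  # multiplicity estimate
        x -= m * a                          # modified Newton step
        if abs(m * a) < tol:
            break
    return x
```

For a triple root such as p(x) = (x - 2)^3, the multiplicity estimate makes the step land on the root immediately, where plain Newton would converge only linearly.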
NASA Astrophysics Data System (ADS)
Benavente, Roberto; Cummins, Phil; Dettmer, Jan
2016-04-01
Rapid estimation of the spatial and temporal rupture characteristics of large megathrust earthquakes by finite fault inversion is important for disaster mitigation. For example, estimates of the spatio-temporal evolution of rupture can be used to evaluate population exposure to tsunami waves and ground shaking soon after the event, providing more accurate predictions than are possible with point source approximations. In addition, rapid inversion results can reveal seismic source complexity and guide additional, more detailed subsequent studies. This work develops a method to rapidly estimate the slip distribution of megathrust events while reducing subjective parameter choices by automation. The method is simple yet robust, and we show that it provides excellent preliminary rupture models as soon as 30 minutes after the event for three great earthquakes in the South American subduction zone. This timing may change slightly for other regions depending on seismic station coverage, but the method can be applied to any subduction region. The inversion is based on W-phase data, since these are rapidly and widely available and of low amplitude, which avoids clipping at close stations for large events. In addition, prior knowledge of the slab geometry (e.g. SLAB 1.0) is applied, and rapid W-phase point source information (time delay and centroid location) is used to constrain the fault geometry and extent. Since the linearization by multiple time window (MTW) parametrization requires regularization, objective smoothing is achieved by the discrepancy principle in two fully automated steps: first, the residuals are estimated assuming unknown noise levels; second, a solution is sought which fits the data to the noise level. The MTW scheme is applied with positivity constraints and a solution is obtained by an efficient non-negative least squares solver.
Systematic application of the algorithm to the Maule (2010), Iquique (2014) and Illapel (2015) events illustrates that rapid finite fault inversion with teleseismic data is feasible and provides meaningful results. The results for the three events show excellent data fits and are consistent with other solutions, with most of the slip occurring close to the trench for the Maule and Illapel events and some deeper slip for the Iquique event. Importantly, the Illapel source model predicts tsunami waveforms in close agreement with observed waveforms. Finally, we develop a new Bayesian approach to approximate uncertainties as part of the rapid inversion scheme with positivity constraints. Uncertainties are estimated by approximating the posterior distribution as a multivariate log-normal distribution. While solving for the posterior adds some additional computational cost, we illustrate that uncertainty estimation is important for meaningful interpretation of finite fault models.
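The two-step discrepancy-principle smoothing described above can be illustrated on a generic linear inversion. This is a hedged sketch: it uses plain Tikhonov-regularized least squares and omits the positivity constraint, which the actual scheme enforces with a non-negative least squares solver; all function names are illustrative.

```python
import numpy as np

def tikhonov(A, b, lam):
    """min ||Ax - b||^2 + lam * ||x||^2 via an augmented least-squares system."""
    n = A.shape[1]
    Aa = np.vstack([A, np.sqrt(lam) * np.eye(n)])
    ba = np.concatenate([b, np.zeros(n)])
    return np.linalg.lstsq(Aa, ba, rcond=None)[0]

def discrepancy_lambda(A, b, noise_norm, lo=1e-12, hi=1e6, iters=60):
    """Bisect log(lambda) until the residual matches the estimated noise level."""
    for _ in range(iters):
        mid = np.sqrt(lo * hi)
        resid = np.linalg.norm(A @ tikhonov(A, b, mid) - b)
        if resid > noise_norm:
            hi = mid   # over-smoothed: reduce regularization
        else:
            lo = mid   # fitting below the noise level: increase it
    return np.sqrt(lo * hi)
```

The bisection converges because the residual norm grows monotonically with the regularization strength.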
The very soft X-ray emission of X-ray-faint early-type galaxies
NASA Technical Reports Server (NTRS)
Pellegrini, S.; Fabbiano, G.
1994-01-01
A recent reanalysis of Einstein data, and new ROSAT observations, have revealed the presence of at least two components in the X-ray spectra of X-ray-faint early-type galaxies: a relatively hard component (kT greater than 1.5 keV) and a very soft component (kT approximately 0.2-0.3 keV). In this paper we address the nature of the very soft component: whether it can be due to a hot interstellar medium (ISM), or most likely originates from the collective emission of very soft stellar sources. To this purpose, hydrodynamical evolutionary sequences for the secular behavior of gas flows in ellipticals have been performed, varying the Type Ia supernova explosion rate and the dark matter amount and distribution. The results are compared with the observational X-ray data: the average Einstein spectrum for six X-ray-faint early-type galaxies (among which are NGC 4365 and NGC 4697), and the spectrum obtained by the ROSAT pointed observation of NGC 4365. The very soft component could be entirely explained by a hot ISM only in galaxies such as NGC 4697, i.e., when the depth of the potential well (on which the average ISM temperature strongly depends) is quite shallow; in NGC 4365 a diffuse hot ISM would have a temperature larger than that of the very soft component, because of the deeper potential well. So, in NGC 4365 the softest contribution to the X-ray emission certainly comes from stellar sources. As stellar soft X-ray emitters, we consider late-type stellar coronae, supersoft sources such as those discovered by ROSAT in the Magellanic Clouds and M31, and RS CVn systems. All these candidates can be substantial contributors to the very soft emission, though none of them, taken separately, plausibly accounts entirely for its properties.
We finally present a model for the X-ray emission of NGC 4365 that reproduces in detail the results of the ROSAT pointed observation, including the Position Sensitive Proportional Counter (PSPC) spectrum and radial surface brightness distribution. The present data may suggest that the X-ray surface brightness is more extended than the optical profile. In this case, a straightforward explanation in terms of stellar sources alone would not be satisfactory. The available data can be better explained with three different contributions: a very soft component of stellar origin, a hard component from X-ray binaries, and an approximately 0.6 keV hot ISM. The latter can explain the extended X-ray surface brightness profile if the galaxy has a dark-to-luminous mass ratio of 9, with the dark matter very broadly distributed, and a SN Ia explosion rate of approximately 0.6 times the Tammann rate.
NASA Astrophysics Data System (ADS)
Ibey, Bennett; Subramanian, Hariharan; Ericson, Nance; Xu, Weijian; Wilson, Mark; Cote, Gerard L.
2005-03-01
A blood perfusion and oxygenation sensor has been developed for in situ monitoring of transplanted organs. When processing in situ data, motion artifacts due to increased perfusion can create invalid oxygen saturation values. In order to remove the unwanted artifacts from the pulsatile signal, adaptive filtering was employed using a third wavelength source centered at 810 nm as a reference signal. The 810 nm source resides approximately at the isosbestic point of the hemoglobin absorption curve, where the absorbance of light is nearly equal for oxygenated and deoxygenated hemoglobin. Using an autocorrelation-based algorithm, oxygen saturation values can be obtained without the need for large sampling data sets, allowing near real-time processing. This technique has been shown to be more reliable than traditional techniques and to adequately improve the measurement of oxygenation values in varying perfusion states.
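Artifact removal with a reference channel of this kind is commonly implemented as an adaptive noise canceller. The sketch below uses a normalized LMS filter with the reference channel (e.g. the 810 nm signal) modelling the artifact; this is an assumption-laden illustration of the general approach, not the authors' exact algorithm.

```python
import numpy as np

def nlms_cancel(primary, reference, n_taps=8, mu=0.1, eps=1e-8):
    """Normalized-LMS adaptive noise canceller: an FIR filter driven by the
    reference channel models the motion artifact, which is then subtracted
    from the primary channel."""
    w = np.zeros(n_taps)
    out = np.zeros(len(primary))
    for n in range(n_taps, len(primary)):
        x = reference[n - n_taps:n][::-1]   # most recent reference samples
        e = primary[n] - w @ x              # cleaned sample = primary - artifact estimate
        w += mu * e * x / (x @ x + eps)     # NLMS weight update
        out[n] = e
    return out
```

Because the physiological signal is uncorrelated with the reference, only the artifact component is cancelled in steady state.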
Water security-National and global issues
Tindall, James A.; Campbell, Andrew A.
2010-01-01
Potable or clean freshwater availability is crucial to life and economic, environmental, and social systems. The amount of freshwater is finite and makes up approximately 2.5 percent of all water on the Earth. Freshwater supplies are small and randomly distributed, so water resources can become points of conflict. Freshwater availability depends upon precipitation patterns, changing climate, and whether the source of consumed water comes directly from desalination, precipitation, or surface and (or) groundwater. At local to national levels, difficulties in securing potable water sources increase with growing populations and economies. Available water improves living standards and drives urbanization, which increases average water consumption per capita. Commonly, disruptions in sustainable supplies and distribution of potable water and conflicts over water resources become major security issues for Government officials. Disruptions are often influenced by land use, human population, use patterns, technological advances, environmental impacts, management processes and decisions, transnational boundaries, and so forth.
NASA Technical Reports Server (NTRS)
Dewitt, K. J.; Baliga, G.
1982-01-01
A numerical simulation was developed to investigate the one dimensional heat transfer occurring in a system composed of a layered aircraft blade having an ice deposit on its surface. The finite difference representation of the heat conduction equations was done using the Crank-Nicolson implicit finite difference formulation. The simulation considers uniform or time dependent heat sources, from heaters which can be either point sources or of finite thickness. For the ice water phase change, a numerical method which approximates the latent heat effect by a large heat capacity over a small temperature interval was applied. The simulation describes the temperature profiles within the various layers of the de-icer pad, as well as the movement of the ice water interface. The simulation could also be used to predict the one dimensional temperature profiles in any composite slab having different boundary conditions.
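A single Crank-Nicolson update of the 1-D conduction equation, with an optional volumetric source term, can be sketched as follows. This is a minimal dense-matrix illustration with fixed-temperature ends; the apparent-heat-capacity phase-change treatment would enter through a temperature-dependent diffusivity, omitted here, and the names are illustrative.

```python
import numpy as np

def cn_step(T, alpha, dx, dt, source=None):
    """One Crank-Nicolson step of dT/dt = alpha * d2T/dx2 + q with
    fixed-temperature (Dirichlet) boundary values."""
    n = len(T)
    r = alpha * dt / (2.0 * dx**2)
    A = np.eye(n)   # implicit (new time level) operator
    B = np.eye(n)   # explicit (old time level) operator
    for i in range(1, n - 1):
        A[i, i - 1:i + 2] += [-r, 2 * r, -r]
        B[i, i - 1:i + 2] += [r, -2 * r, r]
    rhs = B @ T
    if source is not None:
        rhs[1:-1] += dt * source[1:-1]
    return np.linalg.solve(A, rhs)
```

Averaging the spatial operator over the two time levels is what makes the scheme second-order accurate in time and unconditionally stable.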
NASA Technical Reports Server (NTRS)
Falomo, Renato; Pesce, Joseph E.; Treves, Aldo
1995-01-01
We report on direct, subarcsecond resolution imaging of the nebulosity and spectroscopy of galaxies in the field of the BL Lacertae object PKS 0548-322. Surface photometry of the nebulosity is used to derive the properties of the host galaxy (M(sub V) = -23.4), which exhibits signs of interaction with a close companion galaxy at approximately 25 kpc. The radial brightness profile of the nebulosity is well fitted by the contribution of a bulge (r(exp 1/4)) plus a point source and a small internal disk. An analysis of the galaxies in the field shows that the source is located in a rich cluster of galaxies. Spectra of five galaxies in the field indicate that they are at the same redshift as the BL Lac object, thus supporting the imaging result of a surrounding cluster associated with the BL Lac. This cluster is most likely Abell S0549.
Searches for high-energy neutrino emission in the Galaxy with the combined IceCube-AMANDA detector
DOE Office of Scientific and Technical Information (OSTI.GOV)
Abbasi, R.; Ahlers, M.; Andeen, K.
2013-01-20
We report on searches for neutrino sources at energies above 200 GeV in the Northern sky of the Galactic plane, using the data collected by the South Pole neutrino telescope, IceCube, and AMANDA. The Galactic region considered in this work includes the local arm toward the Cygnus region and our closest approach to the Perseus Arm. The searches are based on the data collected between 2007 and 2009. During this time AMANDA was an integrated part of IceCube, which was still under construction and operated with 22 strings (2007-2008) and 40 strings (2008-2009) of optical modules deployed in the ice. By combining the advantages of the larger IceCube detector with the lower energy threshold of the more compact AMANDA detector, we obtain an improved sensitivity at energies below ~10 TeV with respect to previous searches. The analyses presented here are a scan for point sources within the Galactic plane, a search optimized for multiple and extended sources in the Cygnus region, which might be below the sensitivity of the point source scan, and studies of seven pre-selected neutrino source candidates. For one of them, Cygnus X-3, a time-dependent search for neutrino emission in coincidence with observed radio and X-ray flares has been performed. No evidence of a signal is found, and upper limits are reported for each of the searches. We investigate neutrino spectra proportional to E^-2 and E^-3 in order to cover the entire range of possible neutrino spectra. The steeply falling E^-3 neutrino spectrum can also be used to approximate neutrino energy spectra with energy cutoffs below 50 TeV, since these result in a similar energy distribution of events in the detector.
For the region of the Galactic plane visible in the Northern sky, the 90% confidence level muon neutrino flux upper limits are in the range E^3 dN/dE ~ 5.4-19.5 x 10^-11 TeV^2 cm^-2 s^-1 for point-like neutrino sources in the energy region [180.0 GeV-20.5 TeV]. These represent the most stringent upper limits for soft-spectra neutrino sources within the Galaxy reported to date.
Radiation Coupling with the FUN3D Unstructured-Grid CFD Code
NASA Technical Reports Server (NTRS)
Wood, William A.
2012-01-01
The HARA radiation code is fully coupled to the FUN3D unstructured-grid CFD code for the purpose of simulating high-energy hypersonic flows. The radiation energy source terms and surface heat transfer, under the tangent slab approximation, are included within the fluid dynamic flow solver. The Fire II flight test, at the Mach-31 1643-second trajectory point, is used as a demonstration case. Comparisons are made with an existing structured-grid capability, the LAURA/HARA coupling. The radiative surface heat transfer rates from the present approach match the benchmark values within 6%. Although radiation coupling is the focus of the present work, convective surface heat transfer rates are also reported, and are seen to vary depending upon the choice of mesh connectivity and FUN3D flux reconstruction algorithm. On a tetrahedral-element mesh the convective heating matches the benchmark at the stagnation point, but under-predicts by 15% on the Fire II shoulder. Conversely, on a mixed-element mesh the convective heating over-predicts at the stagnation point by 20%, but matches the benchmark away from the stagnation region.
VizieR Online Data Catalog: AKARI NEP Survey sources at 18um (Pearson+, 2014)
NASA Astrophysics Data System (ADS)
Pearson, C. P.; Serjeant, S.; Oyabu, S.; Matsuhara, H.; Wada, T.; Goto, T.; Takagi, T.; Lee, H. M.; Im, M.; Ohyama, Y.; Kim, S. J.; Murata, K.
2015-04-01
The NEP-Deep survey at 18 μm in the IRC-L18W band is constructed from a total of 87 individual pointed observations taken between May 2006 and August 2007, using the IRC Astronomical Observing Template (AOT) designed for deep observations (IRC05), with approximately 2500 second exposures per IRC filter in all mid-infrared bands. The deep imaging IRC05 AOT has no explicit dithering built into the AOT operation; dithering is therefore achieved by layering separate pointed observations on at least three positions on a given piece of sky. The NEP-Wide survey consists of 446 pointed observations with ~300 second exposures for each filter. The NEP-Wide survey uses the shallower IRC03 AOT, optimized for large area multi-band mapping, with the dithering included within the AOT. Note that for both surveys, although images are taken simultaneously in all three IRC channels, the target area of sky in the MIR-L channel is offset from the corresponding area of sky in the NIR/MIR-S channel by ~20 arcmin. (2 data files).
From 16-bit to high-accuracy IDCT approximation: fruits of single architecture affiliation
NASA Astrophysics Data System (ADS)
Liu, Lijie; Tran, Trac D.; Topiwala, Pankaj
2007-09-01
In this paper, we demonstrate an effective unified framework for high-accuracy approximation of the irrational-coefficient floating-point IDCT by a single integer-coefficient fixed-point architecture. Our framework is based on a modified version of Loeffler's sparse DCT factorization, and the IDCT architecture is constructed via a cascade of dyadic lifting steps and butterflies. We illustrate that simply varying the accuracy of the approximating parameters yields a large family of standard-compliant IDCTs, from 16-bit approximations catering to portable computing to ultra-high-accuracy 32-bit versions that virtually eliminate any drifting effect when paired with the 64-bit floating-point IDCT at the encoder. Drifting performances of the proposed IDCTs, along with existing popular IDCT algorithms in H.263+, MPEG-2 and MPEG-4, are also demonstrated.
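The dyadic lifting steps underlying such fixed-point architectures can be illustrated on a single plane rotation, the basic butterfly of Loeffler-style DCT/IDCT factorizations: three lifting steps with coefficients rounded to k-bit dyadic fractions approximate the exact rotation. This is a generic sketch of the lifting technique, not the paper's particular factorization.

```python
import math

def lifting_rotation(x, y, theta, bits=8):
    """Approximate a plane rotation by three lifting steps whose
    coefficients are rounded to dyadic (k-bit) fractions."""
    q = 1 << bits
    p = round((math.cos(theta) - 1.0) / math.sin(theta) * q) / q
    u = round(math.sin(theta) * q) / q
    x = x + p * y   # lifting step 1
    y = y + u * x   # lifting step 2
    x = x + p * y   # lifting step 3
    return x, y     # ~ (x cos(theta) - y sin(theta), x sin(theta) + y cos(theta))
```

Increasing `bits` trades hardware cost for accuracy, which is exactly the knob that generates the family of standard-compliant approximations described above; each lifting step is also exactly invertible, regardless of rounding.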
An improved DPSM technique for modelling ultrasonic fields in cracked solids
NASA Astrophysics Data System (ADS)
Banerjee, Sourav; Kundu, Tribikram; Placko, Dominique
2007-04-01
In recent years the Distributed Point Source Method (DPSM) has been used for modelling various ultrasonic, electrostatic and electromagnetic field problems. In conventional DPSM, several point sources are placed near the transducer face, interfaces and anomaly boundaries. The ultrasonic or electromagnetic field at any point is computed by superimposing the contributions of the different layers of strategically placed point sources. The conventional DPSM modelling technique is modified in this paper so that the contributions of the point sources in the shadow region can be removed from the calculations. For this purpose the conventional point sources that radiate in all directions are replaced by Controlled Space Radiation (CSR) sources. CSR sources can take care of the shadow region problem to some extent; complete removal of the shadow region problem can be achieved by introducing artificial interfaces. Numerically synthesized fields obtained by the conventional DPSM technique, which gives no special consideration to the point sources in the shadow region, are compared with those from the proposed modified technique, which nullifies their contributions. One application of this research is the improved modelling of real-time ultrasonic non-destructive evaluation experiments.
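The superposition step at the heart of DPSM is simple: the field at each target point is the sum of spherical-wave contributions from all point sources. A minimal acoustic sketch follows (illustrative names; the real method also solves for the source strengths from boundary conditions):

```python
import numpy as np

def dpsm_field(targets, sources, amplitudes, k):
    """Superpose spherical-wave contributions A * exp(i k r) / r from every
    point source at every target point (targets, sources: arrays of xyz rows)."""
    r = np.linalg.norm(targets[:, None, :] - sources[None, :, :], axis=-1)
    return (amplitudes[None, :] * np.exp(1j * k * r) / r).sum(axis=1)
```

The modification proposed in the paper amounts to zeroing, for each target, the terms in this sum that come from sources whose rays cannot reach it.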
On the assessment of spatial resolution of PET systems with iterative image reconstruction
NASA Astrophysics Data System (ADS)
Gong, Kuang; Cherry, Simon R.; Qi, Jinyi
2016-03-01
Spatial resolution is an important metric for performance characterization in PET systems. Measuring spatial resolution is straightforward with a linear reconstruction algorithm, such as filtered backprojection, and can be performed by reconstructing a point source scan and calculating the full-width-at-half-maximum (FWHM) along the principal directions. With the widespread adoption of iterative reconstruction methods, it is desirable to quantify the spatial resolution using an iterative reconstruction algorithm. However, the task can be difficult because the reconstruction algorithms are nonlinear and the non-negativity constraint can artificially enhance the apparent spatial resolution if a point source image is reconstructed without any background. Thus, it was recommended that a background should be added to the point source data before reconstruction for resolution measurement. However, there has been no detailed study of the effect of the point source contrast on the measured spatial resolution. Here we use point source scans from a preclinical PET scanner to investigate the relationship between measured spatial resolution and the point source contrast. We also evaluate whether the reconstruction of an isolated point source is predictive of the ability of the system to resolve two adjacent point sources. Our results indicate that when the point source contrast is below a certain threshold, the measured FWHM remains stable. Once the contrast is above the threshold, the measured FWHM monotonically decreases with increasing point source contrast. In addition, the measured FWHM also monotonically decreases with iteration number for maximum likelihood estimation. Therefore, when measuring system resolution with an iterative reconstruction algorithm, we recommend using a low-contrast point source and a fixed number of iterations.
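The FWHM measurement itself reduces to locating the half-maximum crossings of the reconstructed point-source profile, e.g. by linear interpolation between samples. A sketch with illustrative names:

```python
import numpy as np

def fwhm(profile, spacing=1.0):
    """Full-width-at-half-maximum of a 1-D point-source profile, with the
    half-maximum crossings located by linear interpolation between samples."""
    y = np.asarray(profile, dtype=float)
    half = y.max() / 2.0
    above = np.where(y >= half)[0]
    l, r = above[0], above[-1]
    # interpolate each crossing between the neighbouring samples
    left = l - (y[l] - half) / (y[l] - y[l - 1]) if l > 0 else float(l)
    right = r + (y[r] - half) / (y[r] - y[r + 1]) if r < len(y) - 1 else float(r)
    return (right - left) * spacing
```

For a Gaussian profile this recovers the analytic value FWHM = 2 sigma sqrt(2 ln 2) to within the interpolation error.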
Interacting charges and the classical electron radius
NASA Astrophysics Data System (ADS)
De Luca, Roberto; Di Mauro, Marco; Faella, Orazio; Naddeo, Adele
2018-03-01
The equation of the motion of a point charge q repelled by a fixed point-like charge Q is derived and studied. In solving this problem useful concepts in classical and relativistic kinematics, in Newtonian mechanics and in non-linear ordinary differential equations are revised. The validity of the approximations is discussed from the physical point of view. In particular the classical electron radius emerges naturally from the requirement that the initial distance is large enough for the non-relativistic approximation to be valid. The relevance of this topic for undergraduate physics teaching is pointed out.
Infrared photometry of the black hole candidate Sagittarius A*
NASA Technical Reports Server (NTRS)
Close, Laird M.; Mccarthy, Donald W. JR.; Melia, Fulvio
1995-01-01
An infrared source has been imaged within 0.2 +/- 0.3 arcseconds of the unique Galactic center radio source Sgr A*. High angular resolution (average Full Width at Half Maximum (FWHM) approximately 0.55 arcseconds) was achieved by rapid (approximately 50 Hz) real-time image motion compensation. The source's near-infrared magnitudes (K = 12.1 +/- 0.3, H = 13.7 +/- 0.3, and J = 16.6 +/- 0.4) are consistent with a hot object reddened by the local extinction (A(sub v) approximately 27). At the 3 sigma level of confidence, a time series of 80 images limits the source variability to less than 50% on timescales from 3 to 30 minutes. The photometry is consistent with the emission from a simple accretion disk model for a approximately 1 x 10(exp 6) solar mass black hole. However, the fluxes are also consistent with a hot luminous (L approximately 10(exp 3.5) to 10(exp 4-6) solar luminosity) central cluster star positionally coincident with Sgr A*.
Modeling the Swift BAT Trigger Algorithm with Machine Learning
NASA Technical Reports Server (NTRS)
Graff, Philip B.; Lien, Amy Y.; Baker, John G.; Sakamoto, Takanori
2015-01-01
To draw inferences about gamma-ray burst (GRB) source populations based on Swift observations, it is essential to understand the detection efficiency of the Swift Burst Alert Telescope (BAT). This study considers the problem of modeling the Swift BAT triggering algorithm for long GRBs, a computationally expensive procedure, and models it using machine learning algorithms. A large sample of simulated GRBs from Lien et al. (2014) is used to train various models: random forests, boosted decision trees (with AdaBoost), support vector machines, and artificial neural networks. The best models have accuracies of approximately greater than 97% (approximately less than 3% error), a significant improvement on a cut in GRB flux, which has an accuracy of 89.6% (10.4% error). These models are then used to measure the detection efficiency of Swift as a function of redshift z, which is used to perform Bayesian parameter estimation on the GRB rate distribution. We find a local GRB rate density of eta(sub 0) approximately 0.48(+0.41/-0.23) Gpc(exp -3) yr(exp -1) with power-law indices of eta(sub 1) approximately 1.7(+0.6/-0.5) and eta(sub 2) approximately -5.9(+5.7/-0.1) for GRBs above and below a break point of z(sub 1) approximately 6.8(+2.8/-3.2). This methodology improves upon earlier studies by more accurately modeling Swift detection and using this for fully Bayesian model fitting. The code used in this analysis is publicly available online.
Purdy, P H; Tharp, N; Stewart, T; Spiller, S F; Blackburn, H D
2010-10-15
Boar semen is typically collected, diluted and cooled for AI use over numerous days, or frozen immediately after shipping to capable laboratories. The storage temperature and pH of the diluted, cooled boar semen could influence the fertility of boar sperm. Therefore, the purpose of this study was to determine the effects of pH and storage temperature on fresh and frozen-thawed boar sperm motility end points. Semen samples (n = 199) were collected, diluted, cooled and shipped overnight to the National Animal Germplasm Program laboratory for freezing and analysis from four boar stud facilities. The temperature, pH and motility characteristics, determined using computer automated semen analysis, were measured at arrival. Samples were then cryopreserved and post-thaw motility determined. The commercial stud was a significant source of variation for mean semen temperature and pH, as well as total and progressive motility, and numerous other sperm motility characteristics. Based on multiple regression analysis, pH was not a significant source of variation for fresh or frozen-thawed boar sperm motility end points. However, significant models were derived which demonstrated that storage temperature, boar, and the commercial stud influenced sperm motility end points and the potential success for surviving cryopreservation. We inferred that maintaining cooled boar semen at approximately 16 °C during storage will result in higher fresh and frozen-thawed boar sperm quality, which should result in greater fertility. Copyright © 2010 Elsevier Inc. All rights reserved.
Fast simulation of yttrium-90 bremsstrahlung photons with GATE.
Rault, Erwann; Staelens, Steven; Van Holen, Roel; De Beenhouwer, Jan; Vandenberghe, Stefaan
2010-06-01
Multiple investigators have recently reported the use of yttrium-90 (90Y) bremsstrahlung single photon emission computed tomography (SPECT) imaging for the dosimetry of targeted radionuclide therapies. Because Monte Carlo (MC) simulations are useful for studying SPECT imaging, this study investigates the MC simulation of 90Y bremsstrahlung photons in SPECT. To overcome the computationally expensive simulation of electrons, the authors propose a fast way to simulate the emission of 90Y bremsstrahlung photons based on prerecorded bremsstrahlung photon probability density functions (PDFs). The accuracy of bremsstrahlung photon simulation is evaluated in two steps. First, the validity of the fast bremsstrahlung photon generator is checked. To that end, fast and analog simulations of photons emitted from a 90Y point source in a water phantom are compared. The same setup is then used to verify the accuracy of the bremsstrahlung photon simulations, comparing the results obtained with PDFs generated from both simulated and measured data to measurements. In both cases, the energy spectra and point spread functions of the photons detected in a scintillation camera are used. Results show that the fast simulation method is responsible for a 5% overestimation of the low-energy fluence (below 75 keV) of the bremsstrahlung photons detected using a scintillation camera. The spatial distribution of the detected photons is, however, accurately reproduced with the fast method and a computational acceleration of approximately 17-fold is achieved. When measured PDFs are used in the simulations, the simulated energy spectrum of photons emitted from a point source of 90Y in a water phantom and detected in a scintillation camera closely approximates the measured spectrum. The PSF of the photons imaged in the 50-300 keV energy window is also accurately estimated with a 12.4% underestimation of the full width at half maximum and 4.5% underestimation of the full width at tenth maximum. 
Despite its limited accuracy, the fast bremsstrahlung photon generator is well suited for the simulation of bremsstrahlung photons emitted in large homogeneous organs, such as the liver, and detected in a scintillation camera. The computational acceleration makes it very useful for future investigations of 90Y bremsstrahlung SPECT imaging.
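Sampling photon energies from a prerecorded probability density function of this kind is typically done by inverse-transform sampling from the tabulated cumulative distribution. A minimal sketch (illustrative names, not GATE's implementation):

```python
import numpy as np

def make_sampler(energies, pdf):
    """Build an inverse-CDF sampler from a tabulated photon-energy PDF."""
    cdf = np.cumsum(pdf, dtype=float)
    cdf /= cdf[-1]                 # normalize to a proper CDF
    def sample(n, rng=None):
        if rng is None:
            rng = np.random.default_rng()
        u = rng.random(n)                    # uniform deviates
        return np.interp(u, cdf, energies)   # map through the inverse CDF
    return sample
```

Replacing the full electron transport with a table lookup of this form is what yields the large speedup reported above, at the cost of fixing the emission spectrum to the prerecorded one.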
Monitoring the variability of active galactic nuclei from a space-based platform
NASA Technical Reports Server (NTRS)
Peterson, Bradley M.; Atwood, Bruce; Byard, Paul L.
1994-01-01
Detailed monitoring of AGN's with FRESIP can provide well-sampled light curves for a large number of AGN's. Such data are completely unprecedented in this field and will provide powerful new constraints on the origin of the UV/optical continuum in AGN's. The FRESIP baseline design will allow 1 percent photometry on sources brighter than V approximately equals 19.6 mag, and we estimate that over 300 sources can be studied. We point out that digitization effects will have a significant negative impact on the faint limit, and the number of detectable sources will decrease dramatically if a fixed gain setting (estimated to be nominally 25 e(-) per ADU) is used for all read-outs. We note that the primary limitation to studying AGN's is background (sky and read-out noise) rather than source/background contrast; the background can be reduced with a focused telescope and with longer integrations. While we believe that it may be possible to achieve the AGN-monitoring science goals with a more compact and much less expensive telescope, the proposed FRESIP satellite affords an excellent opportunity to attain the required data at essentially zero cost as a secondary goal of a more complex mission.
VizieR Online Data Catalog: KGS EoR0 Catalogue (Carroll+, 2016)
NASA Astrophysics Data System (ADS)
Carroll, P. A.; Line, J.; Morales, M. F.; Barry, N.; Beardsley, A. P.; Hazelton, B. J.; Jacobs, D. C.; Pober, J. C.; Sullivan, I. S.; Webster, R. L.; Bernardi, G.; Bowman, J. D.; Briggs, F.; Cappallo, R. J.; Corey, B. E.; de Oliveira-Costa, A.; Dillon, J. S.; Emrich, D.; Ewall-Wice, A.; Feng, L.; Gaensler, B. M.; Goeke, R.; Greenhill, L. J.; Hewitt, J. N.; Hurley-Walker, N.; Johnston-Hollitt, M.; Kaplan, D. L.; Kasper, J. C.; Kim, Hs.; Kratzenberg, E.; Lenc, E.; Loeb, A.; Lonsdale, C. J.; Lynch, M. J.; McKinley, B.; McWhirter, S. R.; Mitchell, D. A.; Morgan, E.; Neben, A. R.; Oberoi, D.; Offringa, A. R.; Ord, S. M.; Paul, S.; Pindor, B.; Prabu, T.; Procopio, P.; Riding, J.; Rogers, A. E. E.; Roshi, A.; Udaya Shankar, N.; Sethi, S. K.; Srivani, K. S.; Subrahmanyan, R.; Tegmark, M.; Thyagarajan, N.; Tingay, S. J.; Trott, C. M.; Waterson, M.; Wayth, R. B.; Whitney, A. R.; Williams, A.; Williams, C. L.; Wu, C.; Wyithe, J. S. B.
2018-01-01
The MWA EoR0 field is centred at RA=0h and Dec=-27°, and was chosen because it has no bright complex sources in the primary field of view. The FWHM of the antenna beam is approximately 20°, but sources in the edges of the beam and first few side lobes are clearly visible and should be subtracted (Thyagarajan et al., 2015ApJ...804...14T, 2015ApJ...807L..28T; Pober et al., 2016ApJ...819....8P). For this catalogue we concentrate on identifying sources in the primary beam but go out to the 5 per cent power point (nearly the first beam null, ~1400deg2). The data for this catalogue include 75 2-min snapshot observations (112s consecutive integrations with 8s gaps) from the night of 2013 August 23. The observations were made at 182MHz with 31MHz bandwidth and cover 2.5h in total. This process for source finding, measurement, and classification has been termed KATALOGSS (KDD Astrometry, Trueness, and Apparent Luminosity of Galaxies in Snapshot Surveys; hereafter abbreviated to KGS). (1 data file).
The angular distribution of diffusely backscattered light
NASA Astrophysics Data System (ADS)
Vera, M. U.; Durian, D. J.
1997-03-01
The diffusion approximation predicts the angular distribution of light diffusely transmitted through an opaque slab to depend only on boundary reflectivity, independent of scattering anisotropy, and this has been verified by experiment (M. U. Vera and D. J. Durian, Phys. Rev. E 53, 3215 (1996)). Here, by contrast, we demonstrate that the angular distribution of diffusely backscattered light depends on scattering anisotropy as well as boundary reflectivity. To model this observation, scattering anisotropy is added to the diffusion approximation by a discontinuity in the photon concentration at the source point that is proportional to the average cosine of the scattering angle. We compare the resulting predictions with random walk simulations and with measurements of diffusely backscattered intensity versus angle for glass frits and aqueous suspensions of polystyrene spheres held in air or immersed in a water bath. Increasing anisotropy and boundary reflectivity each tend to flatten the predicted distributions, and for different combinations of anisotropy and reflectivity the agreement between data and predictions ranges from qualitatively to quantitatively good.
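The role of the average cosine of the scattering angle can be made concrete with one ingredient of a random-walk simulation: sampling scattering-angle cosines whose mean equals the anisotropy parameter g. The Henyey-Greenstein phase function used below is a common choice for this in light-scattering simulations, though the abstract does not state which phase function the authors' random walks assumed:

```python
import numpy as np

def sample_cos_theta(g, n, rng=None):
    """Sample scattering-angle cosines from the Henyey-Greenstein phase
    function, whose mean cosine equals the anisotropy parameter g."""
    rng = np.random.default_rng(rng)
    u = rng.random(n)
    if abs(g) < 1e-6:
        return 2.0 * u - 1.0            # isotropic scattering limit
    s = (1.0 - g * g) / (1.0 - g + 2.0 * g * u)
    return (1.0 + g * g - s * s) / (2.0 * g)

# Strongly forward-peaked scattering, as for large polystyrene spheres
mu = sample_cos_theta(0.9, 200_000, rng=1)   # sample mean approaches g = 0.9
```

Feeding such cosines into a walk that terminates at the slab boundary (with the appropriate boundary reflectivity) reproduces the anisotropy dependence of the backscattered angular distribution.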
NASA Astrophysics Data System (ADS)
Matinfar, Mehdi D.; Salehi, Jawad A.
2009-11-01
In this paper we analytically study and evaluate the performance of a spectral-phase-encoded optical CDMA (SPE-OCDMA) system for different parameters such as the user's code length and the number of users in the network. In this system an advanced receiver structure is employed, in which the second harmonic generation effect in a thick crystal acts as a nonlinear pre-processor prior to the conventional low-speed photodetector. We consider the ASE noise of the optical amplifiers, which matters in low-power conditions, in addition to the multiple access interference (MAI) noise that is the dominant noise source in any OCDMA communication system. We use the results of our previous work, in which we analyzed the statistical behavior of thick crystals in an optically amplified digital lightwave communication system, to evaluate the performance of the SPE-OCDMA system with the thick-crystal receiver structure. The error probability is evaluated using the saddle-point approximation, and the approximation is verified by Monte Carlo simulation.
Linking Supermarket Sales Data To Nutritional Information: An Informatics Feasibility Study
Brinkerhoff, Kristina M.; Brewster, Philip J.; Clark, Edward B.; Jordan, Kristine C.; Cummins, Mollie R.; Hurdle, John F.
2011-01-01
Grocery sales are a data source of potential value to dietary assessment programs in public health informatics. However, the lack of a computable method for mapping between nutrient and food item information represents a major obstacle. We studied the feasibility of linking point-of-sale data to USDA-SR nutrient database information in a sustainable way. We analyzed 2,009,533 de-identified sales items purchased by 32,785 customers over a two-week period. We developed a method using the item category hierarchy in the supermarket’s database to link purchased items to records from the USDA-SR. We describe our methodology and its rationale and limitations. Approximately 70% of all items were mapped and linked to the SR; approximately 90% of all items could be mapped with an equivalent expenditure of additional effort. All items were mapped to USDA standard food groups. We conclude that mapping grocery sales data to nutritional information is feasible. PMID:22195115
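The category-hierarchy linkage can be pictured as a lookup from the store's category path to an SR record, with coverage reported over the sales stream. The category names, UPCs, and SR codes below are illustrative placeholders, not the study's actual mapping tables:

```python
# Hypothetical category -> USDA-SR code table; the real mapping is derived
# from the supermarket's own item category hierarchy.
CATEGORY_TO_SR = {
    "DAIRY/MILK/WHOLE": "01077",
    "PRODUCE/APPLES":   "09003",
}

def link_sales(items):
    """Link point-of-sale records to SR codes via their category path,
    and report the fraction of items successfully linked."""
    linked, unmatched = [], []
    for item in items:
        code = CATEGORY_TO_SR.get(item["category"])
        (linked if code else unmatched).append((item["upc"], code))
    coverage = len(linked) / len(items)
    return linked, unmatched, coverage

items = [
    {"upc": "0001", "category": "DAIRY/MILK/WHOLE"},
    {"upc": "0002", "category": "PRODUCE/APPLES"},
    {"upc": "0003", "category": "DELI/PREPARED"},   # no SR equivalent
]
linked, unmatched, coverage = link_sales(items)
```

The unmatched bucket corresponds to the residual ~30% of items that, per the abstract, would need additional mapping effort.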
Importance of diffuse pollution control in the Patzcuaro Lake Basin in Mexico.
Carro, Marco Mijangos; Dávila, Jorge Izurieta; Balandra, Antonieta Gómez; López, Rubén Hernández; Delgadillo, Rubén Huerto; Chávez, Javier Sánchez; Inclán, Luís Bravo
2008-01-01
In the catchment area of Lake Patzcuaro in Central Mexico (933 km2), the contributions of erosion, sediment, nutrients and pathogens coming from thirteen micro-basins were estimated with the purpose of identifying critical areas in which best management practices need to be implemented in order to reduce their contribution to lake pollution and eutrophication. The ArcView Generalized Watershed Loading Functions model (AV-GWLF) was applied to estimate the loads and sources of nutrients. The main results show that, over a thirty-year simulation period, the total annual contribution of nitrogen from point sources was 491 tons and from diffuse pollution 2,065 tons, whereas the phosphorus loads were 116 and 236 tons, respectively. Micro-basins with predominantly agricultural and animal-farm land use (56% of the total area) account for a high percentage of the nitrogen (33%) and phosphorus (52%) loads. On the other hand, the Patzcuaro and Quiroga micro-basins, which comprise approximately 10% of the total catchment area and contain the most populated towns, visited by approximately 686,000 tourists every year, together contribute 10.1% of the total nitrogen load and 3.2% of the phosphorus. In terms of point sources, these towns contribute 23.5% of the nitrogen and 26.6% of the phosphorus, respectively. Under this situation the adoption of best management practices is an imperative task, since sedimentation and pollution in the lake have increased dramatically in the last twenty years. Copyright (c) IWA Publishing 2008.
Point source emission reference materials from the Emissions Inventory Improvement Program (EIIP). Provides point source guidance on planning, emissions estimation, data collection, inventory documentation and reporting, and quality assurance/quality control.
Application of the QSPR approach to the boiling points of azeotropes.
Katritzky, Alan R; Stoyanova-Slavova, Iva B; Tämm, Kaido; Tamm, Tarmo; Karelson, Mati
2011-04-21
CODESSA Pro derivative descriptors were calculated for a data set of 426 azeotropic mixtures by the centroid approximation and the weighted-contribution-factor approximation. The two approximations produced almost identical four-descriptor QSPR models relating the structural characteristics of the individual components of azeotropes to the azeotropic boiling points. These models were supported by internal and external validations. The descriptors contributing to the QSPR models are directly related to the three components of the enthalpy (heat) of vaporization.
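The two mixture-descriptor schemes can be sketched numerically. The abstract does not give the exact weighting used by CODESSA Pro, so the weighted form below (a composition-weighted mean, with the contribution factor taken as a mole fraction) is an assumption, and the descriptor values are invented for illustration:

```python
import numpy as np

def centroid_descriptors(d_a, d_b):
    """Centroid approximation: unweighted average of the two
    components' descriptor vectors."""
    return 0.5 * (np.asarray(d_a, float) + np.asarray(d_b, float))

def weighted_descriptors(d_a, d_b, x_a):
    """Weighted-contribution-factor approximation: composition-weighted
    average (x_a is component A's assumed contribution factor)."""
    return x_a * np.asarray(d_a, float) + (1.0 - x_a) * np.asarray(d_b, float)

d_comp_a = np.array([2.3, 0.64])   # hypothetical descriptors for component A
d_comp_b = np.array([1.1, 0.82])   # hypothetical descriptors for component B
d_centroid = centroid_descriptors(d_comp_a, d_comp_b)
d_weighted = weighted_descriptors(d_comp_a, d_comp_b, 0.89)
```

Either mixture-descriptor vector would then enter an ordinary multilinear QSPR regression against the azeotropic boiling points.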
NASA Astrophysics Data System (ADS)
Zhu, Lei; Song, JinXi; Liu, WanQing
2017-12-01
Huaxian Section is the last hydrological and water quality monitoring section of the Weihe River Watershed. The Weihe River Watershed above Huaxian Section is taken as the research objective in this paper, and COD is chosen as the water quality parameter. According to the discharge characteristics of point source and non-point source pollution, a new method to estimate pollution loads, the characteristic section load (CSLD) method, is suggested, and the point source and non-point source pollution loads of the Weihe River Watershed above Huaxian Section are calculated for the rainy, normal and dry seasons of the year 2007. The results show that the monthly point source pollution loads discharge stably, whereas the monthly non-point source pollution loads change greatly; the non-point source proportions of the total COD pollution load decrease in the normal, rainy and wet periods in turn.
Calculating NH3-N pollution load of wei river watershed above Huaxian section using CSLD method
NASA Astrophysics Data System (ADS)
Zhu, Lei; Song, JinXi; Liu, WanQing
2018-02-01
Huaxian Section is the last hydrological and water quality monitoring section of the Weihe River Watershed, so it is taken as the research objective in this paper, and NH3-N is chosen as the water quality parameter. According to the discharge characteristics of point source and non-point source pollution, a new method to estimate pollution loads, the characteristic section load (CSLD) method, is suggested, and the point source and non-point source pollution loads of the Weihe River Watershed above Huaxian Section are calculated for the rainy, normal and dry seasons of the year 2007. The results show that the monthly point source pollution loads discharge stably, whereas the monthly non-point source pollution loads change greatly. The non-point source proportions of the total NH3-N pollution load decrease in the normal, rainy and wet periods in turn.
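The abstracts do not spell out the CSLD formula itself, but the underlying idea of splitting a monitored section's monthly load into a stable point-source baseline and a rainfall-driven non-point remainder can be sketched. Taking the baseline as the minimum monthly load is an assumption for illustration, not the CSLD definition, and the load values are invented:

```python
def split_loads(monthly_loads):
    """Split monthly loads (e.g. COD or NH3-N, t/month) at a monitoring
    section into a stable point-source baseline and a variable
    non-point-source remainder. The baseline here is the minimum monthly
    load, on the premise that point sources discharge steadily."""
    point = min(monthly_loads)                       # steady-discharge baseline
    non_point = [m - point for m in monthly_loads]   # rainfall-driven remainder
    return point, non_point

loads = [120, 115, 140, 310, 480, 260]   # hypothetical monthly loads (t/month)
point, non_point = split_loads(loads)
```

The remainder naturally peaks in wet months, matching the abstract's observation that non-point loads vary strongly while point-source loads stay flat.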
Analytical approximation of a distorted reflector surface defined by a discrete set of points
NASA Technical Reports Server (NTRS)
Acosta, Roberto J.; Zaman, Afroz A.
1988-01-01
Reflector antennas on Earth-orbiting spacecraft generally cannot be described analytically. The reflector surface is subjected to large temperature fluctuations and gradients, and is thus warped from its true geometrical shape. Aside from distortion by thermal stresses, reflector surfaces are often purposely shaped to minimize phase aberrations and scanning losses. To analyze distorted reflector antennas defined by discrete surface points, a numerical technique must be applied to compute an interpolatory surface passing through a grid of discrete points. In this paper, the distorted reflector surface points are approximated by two analytical components: an undistorted surface component and a surface error component. The undistorted surface component is a best-fit paraboloid polynomial for the given set of points, and the surface error component is a Fourier series expansion of the deviation of the actual surface points from the best-fit paraboloid. By applying the numerical technique to approximate the surface normals of the distorted reflector surface, the induced surface currents can be obtained using the physical optics technique. These surface currents are integrated to find the far-field radiation pattern.
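The two-component decomposition can be sketched as a linear least-squares problem: fit a paraboloid (restricted here to a rotationally symmetric form with tilt terms, an assumption for brevity), and take the residual as the surface-error component that would feed the Fourier expansion. The synthetic points, focal length, and ripple distortion are invented for illustration:

```python
import numpy as np

def fit_paraboloid(x, y, z):
    """Least-squares best-fit paraboloid z = c0 + c1*x + c2*y + c3*(x^2 + y^2).
    The residual z - z_fit is the surface-error component for the Fourier
    series expansion."""
    A = np.column_stack([np.ones_like(x), x, y, x**2 + y**2])
    c, *_ = np.linalg.lstsq(A, z, rcond=None)
    return c, z - A @ c

# Synthetic "measured" points: focal length F = 2 m, so c3 = 1/(4F) = 0.125,
# plus a small thermal-ripple distortion
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 400)
y = rng.uniform(-1, 1, 400)
z = (x**2 + y**2) / (4 * 2.0) + 0.001 * np.sin(3 * x)
c, err = fit_paraboloid(x, y, z)
```

The recovered quadratic coefficient approximates 1/(4F), and the millimeter-scale residual is exactly the deviation field the paper expands in a Fourier series before computing surface normals.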
36 CFR 7.10 - Zion National Park.
Code of Federal Regulations, 2010 CFR
2010-07-01
... unplowed, graded dirt road from the park boundary in the southeast corner of Sec. 13, T. 39 S., R. 11 W... distance of approximately one mile. (4) The unplowed, graded dirt road from the Lava Point Ranger Station... approximately two miles. (5) The unplowed, graded dirt road from the Lava Point Ranger Station, north to the...
NASA Astrophysics Data System (ADS)
Geist, E. L.; Kirby, S. H.; Ross, S.; Dartnell, P.
2009-12-01
A non-double-couple component associated with the Mw=8.0 September 29, 2009 Samoa earthquake is investigated to explain direct tsunami arrivals at deep-ocean pressure sensors (i.e., DART stations). In particular, we seek a tsunami generation model that correctly predicts the polarity of first motions: negative at the Apia station (#51425) NW of the epicenter and positive at the Tonga (#51426) and Auckland (#54401) stations south of the epicenter. Slip on a single, finite fault corresponding to either nodal plane of the best-fitting double couple fails to predict the positive first-motion polarity observed at the southerly (Tonga and Auckland) DART stations. The Samoa earthquake has a significant non-double-couple component as measured by the compensated linear vector dipole (CLVD) ratio, which ranges from |ɛ|=0.15 (USGS CMT) to |ɛ|=0.37 (Global CMT). To test what effect the non-double-couple component has on tsunami generation, the static elastic displacement field at the sea floor is computed from the full moment tensor. This displacement field represents the initial conditions for tsunami propagation computed using a finite-difference approximation to the linear shallow-water wave equations. The tsunami waveforms calculated from the full moment tensor are consistent with the observed polarities at all of the DART stations. The static displacement field is then decomposed into double-couple and non-double-couple components to determine the relative contribution of each to the tsunami wavefield. Although a point-source approximation to the tsunami source is typically inadequate at near-field and regional distances, finite-fault inversions of the 2009 Samoa earthquake indicate that peak slip is spatially concentrated near the hypocenter, suggesting that the point-source representation may be acceptable in this case.
Generation of the 2009 Samoa tsunami may involve earthquake rupture on multiple faults and/or along curved faults, both of which are observed from multibeam bathymetry in the epicentral region. The exact rupture path of the earthquake is presently unclear. It is evident from seismological and tsunami observations of the 2009 Samoa event, however, that uniform slip on a single, planar fault cannot explain all aspects of the observed tsunami wavefield.
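The finite-difference propagation step used in such studies can be illustrated in one dimension: the linear shallow-water equations on a staggered grid, stepped forward from an initial sea-surface displacement. This toy setup (uniform depth, Gaussian initial hump, closed boundaries) only sketches the numerics, not the authors' 2D model or source geometry:

```python
import numpy as np

g, H = 9.81, 4000.0          # gravity (m/s^2), uniform ocean depth (m)
dx = 2000.0                  # grid spacing (m)
c = np.sqrt(g * H)           # long-wave speed, ~198 m/s
dt = 0.5 * dx / c            # time step satisfying the CFL condition

n = 400
x = np.arange(n) * dx
eta = np.exp(-((x - x.mean()) / 20e3) ** 2)   # initial displacement (m)
u = np.zeros(n + 1)                           # velocities on staggered points

for _ in range(200):
    # update velocity from the surface gradient, then surface from divergence
    u[1:-1] -= g * dt / dx * (eta[1:] - eta[:-1])
    eta -= H * dt / dx * (u[1:] - u[:-1])
```

In linear theory the initial hump splits into two half-amplitude pulses travelling in opposite directions at sqrt(gH); the polarity of the leading pulse at each virtual gauge is what the DART first-motion comparison exploits.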
Strömberg, Eric A; Nyberg, Joakim; Hooker, Andrew C
2016-12-01
With the increasing popularity of optimal design in drug development, it is important to understand how the approximations and implementations of the Fisher information matrix (FIM) affect the resulting optimal designs. The aim of this work was to investigate the impact on design performance when using two common approximations to the population model and the full or block-diagonal FIM implementations for optimization of sampling points. Sampling schedules for two example experiments based on population models were optimized using the FO and FOCE approximations and the full and block-diagonal FIM implementations. The number of support points was compared between the designs for each example experiment. The performance of these designs was investigated via simulations and estimations, computing the bias of the parameters as well as an empirical D-criterion confidence interval. Simulations were performed when the design was computed with the true parameter values as well as with misspecified parameter values. The FOCE approximation and the full FIM implementation yielded designs with more support points and less clustering of sample points than designs optimized with the FO approximation and the block-diagonal implementation. The D-criterion confidence intervals showed no performance differences between the full and block-diagonal FIM optimal designs when assuming true parameter values. However, the FO-approximated block-diagonal FIM designs had higher bias than the other designs. When assuming parameter misspecification in the design evaluation, the FO full FIM optimal design was superior to the FO block-diagonal FIM design in both of the examples.
Distribution patterns of mercury in Lakes and Rivers of northeastern North America
Dennis, Ian F.; Clair, Thomas A.; Driscoll, Charles T.; Kamman, Neil; Chalmers, Ann T.; Shanley, Jamie; Norton, Stephen A.; Kahl, Steve
2005-01-01
We assembled 831 data points for total mercury (Hgt) and 277 overlapping points for methyl mercury (CH3Hg+) in surface waters from Massachusetts, USA to the Island of Newfoundland, Canada from State, Provincial, and Federal government databases. These geographically indexed values were used to determine: (a) if large-scale spatial distribution patterns existed and (b) whether there were significant relationships between the two main forms of aquatic Hg as well as with total organic carbon (TOC), a well-known complexing agent for metals. We analyzed the catchments where samples were collected using a Geographical Information System (GIS) approach, calculating catchment sizes, mean slope, and mean wetness index. Our results show two main spatial distribution patterns. We detected loci of high Hgt values near the urbanized regions of Boston MA and Portland ME. However, except for one unexplained exception, the highest Hgt and CH3Hg+ concentrations were located in regions far from obvious point sources. These correlated to topographically flat (and thus wet) areas that we relate to wetland abundances. We show that aquatic Hgt and CH3Hg+ concentrations are generally well correlated with TOC and with each other. Over the region, CH3Hg+ concentrations are typically approximately 15% of Hgt. There is an exception in the Boston region, where CH3Hg+ is low compared to the high Hgt values. This is probably due to the proximity of point sources of inorganic Hg and a lack of wetlands. We also attempted to predict Hg concentrations in water with statistical models using catchment features as variables. We were only able to produce statistically significant predictive models for some regions, due to the lack of suitable digital information and because data ranges in some regions were too narrow for meaningful regression analyses.
NASA Astrophysics Data System (ADS)
Xia, Ya-Rong; Zhang, Shun-Li; Xin, Xiang-Peng
2018-03-01
In this paper, we propose the concept of the perturbed invariant subspaces (PISs), and study the approximate generalized functional variable separation solution for the nonlinear diffusion-convection equation with weak source by the approximate generalized conditional symmetries (AGCSs) related to the PISs. Complete classification of the perturbed equations which admit the approximate generalized functional separable solutions (AGFSSs) is obtained. As a consequence, some AGFSSs to the resulting equations are explicitly constructed by way of examples.
The TexOx-1000 redshift survey of radio sources I: the TOOT00 region
NASA Astrophysics Data System (ADS)
Vardoulaki, Eleni; Rawlings, Steve; Hill, Gary J.; Mauch, Tom; Inskip, Katherine J.; Riley, Julia; Brand, Kate; Croft, Steve; Willott, Chris J.
2010-01-01
We present optical spectroscopy, near-infrared (mostly K-band) and radio (151-MHz and 1.4-GHz) imaging of the first complete region (TOOT00) of the TexOx-1000 (TOOT) redshift survey of radio sources. The 0.0015-sr (~5 deg2) TOOT00 region is selected from pointed observations of the Cambridge Low-Frequency Survey Telescope at 151 MHz at a flux density limit of ~=100 mJy, approximately five times fainter than the 7C Redshift Survey (7CRS), and contains 47 radio sources. We have obtained 40 spectroscopic redshifts (~85 per cent completeness). Adding redshifts estimated for the seven other cases yields a median redshift zmed ~ 1.25. We find a significant population of objects with Fanaroff-Riley type I (FRI) like radio structures at radio luminosities above both the low-redshift FRI/II break and the break in the radio luminosity function. The redshift distribution and subpopulations of TOOT00 are broadly consistent with extrapolations from the 7CRS/6CE/3CRR data sets underlying the SKADS Simulated Skies Semi-Empirical Extragalactic Data base, S3-SEX.
A Chandra High-Resolution X-ray Image of Centaurus A.
Kraft; Forman; Jones; Kenter; Murray; Aldcroft; Elvis; Evans; Fabbiano; Isobe; Jerius; Karovska; Kim; Prestwich; Primini; Schwartz; Schreier; Vikhlinin
2000-03-01
We present first results from a Chandra X-Ray Observatory observation of the radio galaxy Centaurus A with the High-Resolution Camera. All previously reported major sources of X-ray emission including the bright nucleus, the jet, individual point sources, and diffuse emission are resolved or detected. The spatial resolution of this observation is better than 1 arcsec in the center of the field of view and allows us to resolve X-ray features of this galaxy not previously seen. In particular, we resolve individual knots of emission in the inner jet and diffuse emission between the knots. All of the knots are diffuse at the 1 arcsec level, and several exhibit complex spatial structure. We find the nucleus to be extended by a few tenths of an arcsecond. Our image also suggests the presence of an X-ray counterjet. Weak X-ray emission from the southwest radio lobe is also seen, and we detect 63 pointlike galactic sources (probably X-ray binaries and supernova remnants) above a luminosity limit of approximately 1.7 x 10^37 ergs s^-1.
Radio Identifications of UGC Galaxies - Starbursts and Monsters
NASA Astrophysics Data System (ADS)
Condon, J. J.; Broderick, J. J.
1995-11-01
Radio identifications of galaxies in the Uppsala General Catalogue of Galaxies with delta < +82 degrees were made from the Green Bank 1400 MHz sky maps. Every source having peak flux density S(P) >= 150 mJy in the approximately 12 arcmin FWHM map point-source response and position < 5 arcmin in both coordinates from the optical position of any UGC galaxy was considered a candidate identification to ensure that very extended (up to 1 Mpc) and asymmetric sources would not be missed. Maps in the literature or new 1.49 GHz VLA C-array maps made with 18 arcsec FWHM resolution were used to confirm or reject candidate identifications. The maps in this directory include both confirmed identifications and candidates rejected because of confusion or low flux density. For more information on this study, please see the following reference: Condon, J. J., and Broderick, J. J., 1988, AJ, 96, 30. The images and related TeX file come from the NRAO CDROM "Images From the Radio Universe" (c. 1992 National Radio Astronomy Observatory, used with permission).
Parasuram, Harilal; Nair, Bipin; D'Angelo, Egidio; Hines, Michael; Naldi, Giovanni; Diwakar, Shyam
2016-01-01
Local Field Potentials (LFPs) are population signals generated by complex spatiotemporal interaction of current sources and dipoles. Mathematical computations of LFPs allow the study of circuit functions and dysfunctions via simulations. This paper introduces LFPsim, a NEURON-based tool for computing population LFP activity and single neuron extracellular potentials. LFPsim was developed to be used on existing cable compartmental neuron and network models. Point-source, line-source, and RC-based filter approximations can be used to compute extracellular activity. As a demonstration of efficient implementation, we showcase LFPs from mathematical models of electrotonically compact cerebellum granule neurons and morphologically complex neurons of the neocortical column. LFPsim reproduced neocortical LFP at 8, 32, and 56 Hz via current injection, in vitro post-synaptic N2a, N2b waves and in vivo T-C waves in the cerebellum granular layer. LFPsim also includes a simulation of multi-electrode array LFPs in network populations to aid computational inference between biophysical activity in neural networks and corresponding multi-unit activity resulting in extracellular and evoked LFP signals.
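For reference, the point-source forward model underlying such tools is V = I/(4πσr) in a homogeneous medium, and a line-source variant can be approximated by distributing a compartment's current along its segment. A minimal sketch with an assumed conductivity and invented geometry (LFPsim's actual API and line-source formula differ):

```python
import numpy as np

SIGMA = 0.3   # assumed extracellular conductivity (S/m)

def phi_point(I, r):
    """Extracellular potential of a point current source: V = I/(4*pi*sigma*r)."""
    return I / (4.0 * np.pi * SIGMA * r)

def phi_line(I, electrode, p0, p1, n=100):
    """Line-source-style approximation: spread the compartment current I
    uniformly along the segment p0 -> p1 and sum point contributions."""
    ts = (np.arange(n) + 0.5) / n
    pts = p0[None, :] + ts[:, None] * (p1 - p0)[None, :]
    r = np.linalg.norm(pts - electrode[None, :], axis=1)
    return np.sum(phi_point(I / n, r))

electrode = np.array([50e-6, 0.0, 0.0])           # electrode 50 um away (m)
p0 = np.array([0.0, -25e-6, 0.0])                 # 50-um compartment endpoints
p1 = np.array([0.0, 25e-6, 0.0])
v_pt = phi_point(1e-9, 50e-6)                     # 1 nA treated as a point
v_ln = phi_line(1e-9, electrode, p0, p1)          # same current, spread out
```

At this distance the two estimates differ by only a few per cent; the distinction matters most for electrodes close to a long compartment.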
Swirling plumes and spinning tops
NASA Astrophysics Data System (ADS)
Frank, Daria; Landel, Julien; Dalziel, Stuart; Linden, Paul
2017-11-01
Motivated by potential effects of the Earth's rotation on the dynamics of the oil plume resulting from the Deepwater Horizon disaster in 2010, we conducted laboratory experiments on saltwater and bubble axisymmetric point plumes in a homogeneous rotating environment. The effect of rotation is conventionally characterized by a Rossby number, based on the source buoyancy flux, the rotation rate of the system and the total water depth and which ranged from 0.02 to 1.3 in our experiments. In the range of parameters studied, we report a striking new physical instability in the plume dynamics near the source. After approximately one rotation period, the plume axis tilts away laterally from the centreline and the plume starts to precess in the anticyclonic direction. We find that the mean precession frequency of the plume scales linearly with the rotation rate of the environment. Surprisingly, the precession frequency is found to be independent of the diameter of the plume nozzle, the source buoyancy flux, the water depth and the geometry of the domain. In this talk, we present our experimental results and develop simple theoretical toy models to explain the observed plume behaviour.
Source Mechanism of the November 27, 1945 Tsunami in the Makran Subduction Zone
NASA Astrophysics Data System (ADS)
Heidarzadeh, M.; Satake, K.
2011-12-01
We study the source of the Makran tsunami of November 27, 1945 using newly available tide gauge data from this large tsunami. The Makran subduction zone in the northwestern Indian Ocean is the result of northward subduction of the Arabian plate beneath the Eurasian plate at an approximate rate of 2 cm/year. Makran was the site of a large tsunamigenic earthquake in November 1945 (Mw 8.1), which caused widespread destruction as well as a death toll of about 4000 people in the coastal areas of the northwestern Indian Ocean. Although Makran experienced at least several large tsunamigenic earthquakes in the past several hundred years, the 1945 event is the only instrumentally recorded tsunamigenic earthquake in the region, and thus it is an important event for tsunami hazard assessment. However, the source of this tsunami was poorly studied in the past, as no tide gauge data were available to verify the tsunami source. In this study, we use two tide gauge records of the November 27, 1945 tsunami, obtained at Mumbai and Karachi at approximate distances of 1100 and 350 km from the epicenter, respectively, to constrain the tsunami source. Besides the two tide gauge records, which were recently published by Neetu et al. (2011, Natural Hazards), some reports about the arrival times and wave heights of the tsunami at different locations, both in the near field (e.g., Pasni and Ormara) and the far field (e.g., Seychelles), are available and will be used to further constrain the source. In addition, the source mechanism of the 27 November 1945 tsunami determined using seismic data will be used as the starting point for this study. Several reports indicate that a secondary source triggered by the main shock, e.g., landslides or splay faults, possibly contributed to the main plate boundary rupture during this large interplate earthquake.
For example, a runup height of up to 12 m was reported in Pasni, the nearest coast to the tsunami source, which seems difficult to attribute to a plate boundary event with a maximum slip of around 6 m. Therefore, the possible contribution of secondary tsunami sources will also be examined.
ICE-COLA: fast simulations for weak lensing observables
NASA Astrophysics Data System (ADS)
Izard, Albert; Fosalba, Pablo; Crocce, Martin
2018-01-01
Approximate methods to full N-body simulations provide a fast and accurate solution to the development of mock catalogues for the modelling of galaxy clustering observables. In this paper we extend ICE-COLA, based on an optimized implementation of the approximate COLA method, to produce weak lensing maps and halo catalogues in the light-cone using an integrated and self-consistent approach. We show that despite the approximate dynamics, the catalogues thus produced enable an accurate modelling of weak lensing observables one decade beyond the characteristic scale where the growth becomes non-linear. In particular, we compare ICE-COLA to the MICE Grand Challenge N-body simulation for some fiducial cases representative of upcoming surveys and find that, for sources at redshift z = 1, their convergence power spectra agree to within 1 per cent up to high multipoles (i.e. of order 1000). The corresponding shear two-point functions, ξ+ and ξ-, yield similar accuracy down to 2 and 20 arcmin respectively, while tangential shear around a z = 0.5 lens sample is accurate down to 4 arcmin. We show that such accuracy is stable against an increased angular resolution of the weak lensing maps. Hence, this opens the possibility of using approximate methods for the joint modelling of galaxy clustering and weak lensing observables and their covariance in ongoing and future galaxy surveys.
A new approach to blind deconvolution of astronomical images
NASA Astrophysics Data System (ADS)
Vorontsov, S. V.; Jefferies, S. M.
2017-05-01
We readdress the strategy of finding approximate regularized solutions to the blind deconvolution problem, when both the object and the point-spread function (PSF) have finite support. Our approach consists in addressing fixed points of an iteration in which both the object x and the PSF y are approximated in an alternating manner, discarding the previous approximation for x when updating x (similarly for y), and considering the resultant fixed points as candidates for a sensible solution. Alternating approximations are performed by truncated iterative least-squares descents. The number of descents in the object- and in the PSF-space play a role of two regularization parameters. Selection of appropriate fixed points (which may not be unique) is performed by relaxing the regularization gradually, using the previous fixed point as an initial guess for finding the next one, which brings an approximation of better spatial resolution. We report the results of artificial experiments with noise-free data, targeted at examining the potential capability of the technique to deconvolve images of high complexity. We also show the results obtained with two sets of satellite images acquired using ground-based telescopes with and without adaptive optics compensation. The new approach brings much better results when compared with an alternating minimization technique based on positivity-constrained conjugate gradients, where the iterations stagnate when addressing data of high complexity. In the alternating-approximation step, we examine the performance of three different non-blind iterative deconvolution algorithms. The best results are provided by the non-negativity-constrained successive over-relaxation technique (+SOR) supplemented with an adaptive scheduling of the relaxation parameter. Results of comparable quality are obtained with steepest descents modified by imposing the non-negativity constraint, at the expense of higher numerical costs. 
The Richardson-Lucy (or expectation-maximization) algorithm fails to locate stable fixed points in our experiments, due apparently to inappropriate regularization properties.
Bayesian approach for counting experiment statistics applied to a neutrino point source analysis
NASA Astrophysics Data System (ADS)
Bose, D.; Brayeur, L.; Casier, M.; de Vries, K. D.; Golup, G.; van Eijndhoven, N.
2013-12-01
In this paper we present a model-independent analysis method following Bayesian statistics to analyse data from a generic counting experiment, and apply it to the search for neutrinos from point sources. We discuss a test statistic defined within a Bayesian framework that will be used in the search for a signal. In case no signal is found, we derive an upper limit without the introduction of approximations. The Bayesian approach yields the full probability density function for both the background and the signal rate, giving direct access to any signal upper limit. The upper-limit derivation compares directly with the frequentist approach and is robust for low-count observations. Furthermore, it also allows one to account for previous upper limits obtained by other analyses via the concept of prior information, without the ad hoc application of trial factors. To investigate the validity of the presented Bayesian approach, we have applied this method to the public IceCube 40-string configuration data for 10 nearby blazars and have obtained a flux upper limit in agreement with the upper limits determined via a frequentist approach. Furthermore, the upper limit obtained compares well with the previously published result of IceCube using the same data set.
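The core computation can be sketched in the simplest setting: a single counting bin with known background and a flat prior on the signal rate (the paper's treatment is more general, including priors carrying earlier limits):

```python
import numpy as np
from math import lgamma, log

def poisson_logpmf(n, mu):
    """log of the Poisson probability of n counts given mean mu."""
    if mu <= 0.0:
        return 0.0 if n == 0 else float("-inf")
    return n * log(mu) - mu - lgamma(n + 1)

def bayesian_upper_limit(n_obs, background, cl=0.90, s_max=50.0, n_grid=20001):
    """Credible upper limit on the signal rate s for an observed count n_obs
    with a known background rate, assuming a flat prior on s >= 0: integrate
    the posterior ~ Poisson(n_obs | s + background) up to probability cl."""
    s = np.linspace(0.0, s_max, n_grid)
    post = np.exp([poisson_logpmf(n_obs, si + background) for si in s])
    cdf = np.cumsum(post)
    cdf /= cdf[-1]          # normalize the posterior numerically
    return float(s[np.searchsorted(cdf, cl)])
```

With zero observed counts and zero background this reproduces the textbook 90% limit of -ln(0.1) ≈ 2.30 events.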
Picosecond excimer laser-plasma x-ray source for microscopy, biochemistry, and lithography
NASA Astrophysics Data System (ADS)
Turcu, I. C. Edmond; Ross, Ian N.; Trenda, P.; Wharton, C. W.; Meldrum, R. A.; Daido, Hiroyuki; Schulz, M. S.; Fluck, P.; Michette, Alan G.; Juna, A. P.; Maldonado, Juan R.; Shields, Harry; Tallents, Gregory J.; Dwivedi, L.; Krishnan, J.; Stevens, D. L.; Jenner, T.; Batani, Dimitri; Goodson, H.
1994-02-01
At Rutherford Appleton Laboratory we developed a high repetition rate, picosecond, excimer laser system which generates a high temperature and density plasma source emitting approximately 200 mW (78 mW/sr) x-ray average power at h(nu) approximately 1.2 keV or 0.28 keV < h(nu) < 0.53 keV (the `water window'). At 3.37 nm wavelength the spectral brightness of the source is approximately 9 × 10^11 photons/s/mm^2/mrad^2/0.1% bandwidth. The x-ray source serves a large user community for applications such as: scanning and holographic microscopy, the study of the biochemistry of DNA damage and repair, microlithography and spectroscopy.
Transmitter pointing loss calculation for free-space optical communications link analyses
NASA Technical Reports Server (NTRS)
Marshall, William K.
1987-01-01
In calculating the performance of free-space optical communications links, the transmitter pointing loss is one of the two most important factors. It is shown in this paper that the traditional formula for the instantaneous pointing loss (i.e., for the transmitter telescope far-field beam pattern) is quite inaccurate. A more accurate and practical expression is developed in which the pointing loss is calculated from a Taylor series. The four-term series is shown to be accurate to 0.1 dB for pointing angles theta not greater than 0.9 lambda/D (wavelength/telescope diameter).
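For a uniformly illuminated circular aperture the far-field pattern is the Airy function, so the instantaneous pointing loss can be evaluated exactly and compared against a small-angle series. The sketch below uses an illustrative four-term Maclaurin expansion of 2·J1(x)/x, which is not necessarily the series derived in the paper:

```python
import math

def j1_series(x, n_terms=25):
    """Bessel function J1 via its Maclaurin series (accurate for moderate x)."""
    total = 0.0
    for k in range(n_terms):
        total += ((-1) ** k / (math.factorial(k) * math.factorial(k + 1))) \
                 * (x / 2.0) ** (2 * k + 1)
    return total

def pointing_loss_db(theta, wavelength, diameter):
    """Exact far-field pointing loss (dB) for a uniformly illuminated
    circular aperture (Airy pattern): L = -10 log10 [2 J1(x)/x]^2,
    with x = pi * D * theta / lambda."""
    x = math.pi * diameter * theta / wavelength
    if x == 0.0:
        return 0.0
    gain = (2.0 * j1_series(x) / x) ** 2
    return -10.0 * math.log10(gain)

def pointing_loss_db_series(theta, wavelength, diameter):
    """Small-angle approximation from a four-term expansion of 2 J1(x)/x
    (illustrative; the paper expands the loss itself)."""
    x = math.pi * diameter * theta / wavelength
    amp = 1.0 - x**2 / 8.0 + x**4 / 192.0 - x**6 / 9216.0
    return -10.0 * math.log10(amp ** 2)
```

At theta = 0.5 lambda/D the two agree to a few thousandths of a dB; the amplitude series degrades toward theta = 0.9 lambda/D, which is why the paper expands the loss to four terms instead.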
Emission Patterns of Solar Type III Radio Bursts: Stereoscopic Observations
NASA Technical Reports Server (NTRS)
Thejappa, G.; MacDowall, R.; Bergamo, M.
2012-01-01
Simultaneous observations of solar type III radio bursts obtained by the STEREO A, B, and WIND spacecraft at low frequencies from different vantage points in the ecliptic plane are used to determine their directivity. The heliolongitudes of the sources of these bursts, estimated at different frequencies by assuming that they are located on the Parker spiral magnetic field lines emerging from the associated active regions into the spherically symmetric solar atmosphere, and the heliolongitudes of the spacecraft are used to estimate the viewing angle, which is the angle between the direction of the magnetic field at the source and the line connecting the source to the spacecraft. The normalized peak intensities at each spacecraft Rj = Ij /[Sigma]Ij (the subscript j corresponds to the spacecraft STEREO A, B, and WIND), which are defined as the directivity factors are determined using the time profiles of the type III bursts. It is shown that the distribution of the viewing angles divides the type III bursts into: (1) bursts emitting into a very narrow cone centered around the tangent to the magnetic field with angular width of approximately 2 deg and (2) bursts emitting into a wider cone with angular width spanning from [approx] -100 deg to approximately 100 deg. The plots of the directivity factors versus the viewing angles of the sources from all three spacecraft indicate that the type III emissions are very intense along the tangent to the spiral magnetic field lines at the source, and steadily fall as the viewing angles increase to higher values. The comparison of these emission patterns with the computed distributions of the ray trajectories indicate that the intense bursts visible in a narrow range of angles around the magnetic field directions probably are emitted in the fundamental mode, whereas the relatively weaker bursts visible to a wide range of angles are probably emitted in the harmonic mode.
Sources of dioxins in the United Kingdom: the steel industry and other sources.
Anderson, David R; Fisher, Raymond
2002-01-01
Several countries have compiled national inventories of dioxin (polychlorinated dibenzo-p-dioxin [PCDD] and polychlorinated dibenzofuran [PCDF]) releases that detail annual mass emission estimates for regulated sources. High-temperature processes, such as commercial waste incineration and the iron ore sintering used in the production of iron and steel, have been identified as point sources of dioxins. Other important releases of dioxins are from various diffuse sources such as bonfire burning and domestic heating. The PCDD/F inventory for emissions to air in the UK decreased significantly from 1995 to 1998 because of reduced emissions from waste incinerators, which now generally operate at waste gas stack emissions of 1 ng I-TEQ/Nm^3 or below. The iron ore sintering process is the only noteworthy source of PCDD/Fs at integrated iron and steelworks operated by Corus (formerly British Steel plc) in the UK. The mean waste gas stack PCDD/F concentration for this process is 1.2 ng I-TEQ/Nm^3, based on 94 measurements, and it has been estimated that this results in an annual mass release of approximately 38 g I-TEQ. Diffuse sources now form a major contribution to the UK inventory as PCDD/Fs from regulated sources have decreased; for example, the annual celebration of Bonfire Night on 5th November in the UK causes an estimated release of 30 g I-TEQ, similar to that emitted by five sinter plants in the UK.
NASA Astrophysics Data System (ADS)
Muhiddin, F. A.; Sulaiman, J.
2017-09-01
The aim of this paper is to investigate the effectiveness of the Successive Over-Relaxation (SOR) iterative method, using the fourth-order Crank-Nicolson (CN) discretization scheme to derive a five-point CN approximation equation for solving the diffusion equation. From this approximation equation, the corresponding system of five-point approximation equations can be generated and then solved iteratively. To assess the performance of the proposed iterative method with the fourth-order CN scheme, another point iterative method, Gauss-Seidel (GS), is also presented as a reference. The numerical results obtained with the fourth-order CN discretization scheme show that the SOR iterative method is superior in terms of number of iterations, execution time, and maximum absolute error.
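The approach can be illustrated with the standard second-order Crank-Nicolson scheme; the paper's fourth-order variant widens the stencil to five points but the SOR machinery is the same. Both function names below are hypothetical:

```python
import numpy as np

def sor_solve(A, b, omega=1.5, tol=1e-10, max_iter=10000):
    """Successive Over-Relaxation for A x = b (A symmetric positive
    definite or strictly diagonally dominant). Returns (x, iterations)."""
    n = len(b)
    x = np.zeros(n)
    for it in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            # already-updated entries to the left, old entries to the right
            sigma = A[i, :i] @ x[:i] + A[i, i + 1:] @ x_old[i + 1:]
            x[i] = (1 - omega) * x[i] + omega * (b[i] - sigma) / A[i, i]
        if np.max(np.abs(x - x_old)) < tol:
            return x, it + 1
    return x, max_iter

def crank_nicolson_step(u, r):
    """One Crank-Nicolson step for u_t = u_xx with Dirichlet boundaries,
    r = dt / (2 dx^2); the implicit system is solved with SOR."""
    n = len(u)
    A = np.eye(n) * (1.0 + 2.0 * r)
    B = np.eye(n) * (1.0 - 2.0 * r)
    for i in range(n - 1):
        A[i, i + 1] = A[i + 1, i] = -r
        B[i, i + 1] = B[i + 1, i] = r
    return sor_solve(A, B @ u)
```

For a sine initial profile the numerical step reproduces the analytic decay factor exp(-pi^2 dt) very closely, since the discrete sine is an exact eigenvector of the tridiagonal operator.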
NASA Astrophysics Data System (ADS)
Zeng, Lu-Chuan; Yao, Jen-Chih
2006-09-01
Recently, Agarwal, Cho, Li and Huang [R.P. Agarwal, Y.J. Cho, J. Li, N.J. Huang, Stability of iterative procedures with errors approximating common fixed points for a couple of quasi-contractive mappings in q-uniformly smooth Banach spaces, J. Math. Anal. Appl. 272 (2002) 435-447] introduced the new iterative procedures with errors for approximating the common fixed point of a couple of quasi-contractive mappings and showed the stability of these iterative procedures with errors in Banach spaces. In this paper, we introduce a new concept of a couple of q-contractive-like mappings (q>1) in a Banach space and apply these iterative procedures with errors for approximating the common fixed point of the couple of q-contractive-like mappings. The results established in this paper improve, extend and unify the corresponding ones of Agarwal, Cho, Li and Huang [R.P. Agarwal, Y.J. Cho, J. Li, N.J. Huang, Stability of iterative procedures with errors approximating common fixed points for a couple of quasi-contractive mappings in q-uniformly smooth Banach spaces, J. Math. Anal. Appl. 272 (2002) 435-447], Chidume [C.E. Chidume, Approximation of fixed points of quasi-contractive mappings in Lp spaces, Indian J. Pure Appl. Math. 22 (1991) 273-386], Chidume and Osilike [C.E. Chidume, M.O. Osilike, Fixed points iterations for quasi-contractive maps in uniformly smooth Banach spaces, Bull. Korean Math. Soc. 30 (1993) 201-212], Liu [Q.H. Liu, On Naimpally and Singh's open questions, J. Math. Anal. Appl. 124 (1987) 157-164; Q.H. Liu, A convergence theorem of the sequence of Ishikawa iterates for quasi-contractive mappings, J. Math. Anal. Appl. 146 (1990) 301-305], Osilike [M.O. Osilike, A stable iteration procedure for quasi-contractive maps, Indian J. Pure Appl. Math. 27 (1996) 25-34; M.O. Osilike, Stability of the Ishikawa iteration method for quasi-contractive maps, Indian J. Pure Appl. Math. 28 (1997) 1251-1265] and many others in the literature.
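As a concrete, much simpler instance of such iterative procedures, a Mann-type iteration applied to a contractive real map converges to its unique fixed point. This one-dimensional sketch only illustrates the flavor of the Banach-space results discussed:

```python
def mann_iteration(T, x0, alphas, n_steps=200):
    """Mann-type scheme x_{n+1} = (1 - a_n) x_n + a_n T(x_n); for a
    contractive map T it converges to the unique fixed point of T.
    `alphas` maps the step index n to the relaxation weight a_n."""
    x = x0
    for n in range(n_steps):
        a = alphas(n)
        x = (1 - a) * x + a * T(x)
    return x
```

For example, with T = cos and constant a_n = 1/2 the iterates converge to the Dottie number 0.739085..., the unique solution of cos(x) = x.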
Bio-based thermosetting copolymers of eugenol and tung oil
NASA Astrophysics Data System (ADS)
Handoko, Harris
There has been an increasing demand for novel synthetic polymers made of components derived from renewable sources to cope with the depletion of petroleum sources. In fact, monomers derived from vegetable oils and plant sources have shown promising results in forming polymers with good properties. The following is a study of two highly viable renewable sources, eugenol and tung oil (TO), to be copolymerized into fully bio-based thermosets. Polymerization of eugenol required initial methacrylate functionalization through Steglich esterification, and the synthesized methacrylated eugenol (ME) was confirmed by 1H-NMR. Rheological studies showed ideal Newtonian behavior in ME and five other blended ME resins containing 10 -- 50 wt% TO. Free-radical copolymerization using 5 mol% of tert-butyl peroxybenzoate (crosslinking catalyst) and curing at elevated temperatures (90 -- 160 °C) formed a series of soft to rigid highly crosslinked thermosets. The crosslinked fraction of the thermosets (89 -- 98%), determined by Soxhlet extraction, decreased with increasing TO content (0 -- 30%). Thermosets containing 0 -- 30 wt% TO possessed ultimate flexural (3-point bending) strength of 32.2 -- 97.2 MPa and flexural moduli of 0.6 -- 3.5 GPa, with 3.2 -- 8.8% strain-to-failure. Those containing 10 -- 40 wt% TO exhibited ultimate tensile strength of 3.3 -- 45.0 MPa and tensile moduli of 0.02 GPa to 1.12 GPa, with 8.5 -- 76.7% strain-to-failure. Glass transition temperatures ranged from 52 -- 152 °C as determined by DMA in 3-point bending. SEM analysis of fractured tensile test specimens detected a small degree of heterogeneity. All the thermosets are thermally stable up to approximately 300 °C based on 5% weight loss.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sen, A
Purpose: Accuboost treatment planning uses dwell times from a nomogram designed with Monte Carlo calculations for round and D-shaped applicators. A quick dose calculation method has been developed for verification of the HDR brachytherapy dose as a second check. Methods: Accuboost breast treatment uses several round and D-shaped applicators non-invasively with an Ir-192 source from an HDR brachytherapy afterloader after the breast is compressed in a mammographic unit for localization. The breast thickness, source activity, the prescription dose and the applicator size are entered into a nomogram spreadsheet, which gives the dwell times to be manually entered into the delivery computer. Approximating the HDR Ir-192 as a point source, and knowing the geometry of the round and D-applicators, the distances from the source positions to the midpoint of the central plane are calculated. Using the exposure constant of Ir-192 and human tissue as the medium, the dose at a point is calculated as D(cGy) = 1.254 × A × t/R^2, where A is the activity in Ci, t is the dwell time in sec and R is the distance in cm. The dose from each dwell position is added to get the total dose. Results: Each fraction is delivered in two compressions: cranio-caudally and medial-laterally. A typical APBI treatment in 10 fractions requires 20 compressions. For a patient treated with D45 applicators and an average of 5.22 cm thickness, this calculation was 1.63% higher than the prescription. For another patient using D53 applicators in the CC direction and 7 cm SDO applicators in the ML direction, this calculation was 1.31% lower than the prescription. Conclusion: This is a simple and quick method to double check the dose on the central plane for Accuboost treatment.
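The second check reduces to a sum of inverse-square point-source contributions over the dwell positions. A minimal sketch using the abstract's formula; the helper and its ring geometry are illustrative, not Accuboost's actual dwell layout:

```python
import math

def point_source_dose(activity_ci, dwells):
    """Total dose (cGy) at a point from an HDR Ir-192 source treated as a
    point source, D = 1.254 * A * t / R^2 summed over dwell positions.
    `dwells` is a list of (dwell_time_s, distance_cm) pairs."""
    return sum(1.254 * activity_ci * t / r ** 2 for t, r in dwells)

def ring_dwells(ring_radius_cm, half_separation_cm, dwell_time_s, n_positions):
    """Hypothetical helper: equal dwell times on a circular applicator ring;
    every source position lies at the same distance from the midpoint of
    the central plane (ring radius and half the compressed thickness)."""
    r_mid = math.hypot(ring_radius_cm, half_separation_cm)
    return [(dwell_time_s, r_mid)] * n_positions
```

A single 1 s dwell of a 10 Ci source at 1 cm gives 12.54 cGy; a ring simply multiplies one such contribution by the number of positions.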
Code of Federal Regulations, 2014 CFR
2014-04-01
... east-northeasterly in a straight line approximately 4.1 miles, onto the Inwood map, to the 1,786-foot... 2.1 miles to the 2,086-foot elevation point, section 15, T31N/R1W; then (3) Proceed north-northeasterly in a straight line approximately 0.7 mile to the marked 1,648-foot elevation point (which should...
Code of Federal Regulations, 2013 CFR
2013-04-01
... east-northeasterly in a straight line approximately 4.1 miles, onto the Inwood map, to the 1,786-foot... 2.1 miles to the 2,086-foot elevation point, section 15, T31N/R1W; then (3) Proceed north-northeasterly in a straight line approximately 0.7 mile to the marked 1,648-foot elevation point (which should...
27 CFR 9.233 - Kelsey Bench-Lake County.
Code of Federal Regulations, 2014 CFR
2014-04-01
... mile to the point where the road intersects a straight line drawn westward from the marked 2,493-foot..., approximately 0.8 mile to the first intersection of the eastern boundary of section 26 and the 1,720-foot..., a total distance of approximately 3.25 miles, to the marked 1,439-foot elevation point in section 29...
Falch, Ken Vidar; Detlefs, Carsten; Snigirev, Anatoly; Mathiesen, Ragnvald H
2018-01-01
Analytical expressions for the transmission cross-coefficients for x-ray microscopes based on compound refractive lenses are derived based on Gaussian approximations of the source shape and energy spectrum. The effects of partial coherence, defocus, beam convergence, as well as lateral and longitudinal chromatic aberrations are accounted for and discussed. Taking the incoherent limit of the transmission cross-coefficients, a compact analytical expression for the modulation transfer function of the system is obtained, and the resulting point, line and edge spread functions are presented. Finally, analytical expressions for optimal numerical aperture, coherence ratio, and bandwidth are given.
Takaki, Yasuhiro; Hayashi, Yuki
2008-07-01
The narrow viewing zone angle is one of the problems associated with electronic holography. We propose a technique that enables the ratio of horizontal and vertical resolutions of a spatial light modulator (SLM) to be altered. This technique increases the horizontal resolution of a SLM several times, so that the horizontal viewing zone angle is also increased several times. A SLM illuminated by a slanted point light source array is imaged by a 4f imaging system in which a horizontal slit is located on the Fourier plane. We show that the horizontal resolution was increased four times and that the horizontal viewing zone angle was increased approximately four times.
Kunz, Martin; Tamura, Nobumichi; Chen, Kai; MacDowell, Alastair A; Celestre, Richard S; Church, Matthew M; Fakra, Sirine; Domning, Edward E; Glossinger, James M; Kirschman, Jonathan L; Morrison, Gregory Y; Plate, Dave W; Smith, Brian V; Warwick, Tony; Yashchuk, Valeriy V; Padmore, Howard A; Ustundag, Ersan
2009-03-01
A new facility for microdiffraction strain measurements and microfluorescence mapping has been built on beamline 12.3.2 at the Advanced Light Source of the Lawrence Berkeley National Laboratory. This beamline benefits from the hard x-radiation generated by a 6 T superconducting bending magnet (superbend). This provides a hard x-ray spectrum from 5 to 22 keV and a flux within a 1 µm spot of approximately 5 × 10^9 photons/s (0.1% bandwidth at 8 keV). The radiation is relayed from the superbend source to a focus in the experimental hutch by a toroidal mirror. The focus spot is tailored by two pairs of adjustable slits, which serve as a secondary source point. Inside the lead hutch, a pair of Kirkpatrick-Baez (KB) mirrors placed in a vacuum tank refocuses the secondary slit source onto the sample position. A new KB-bending mechanism with active temperature stabilization allows for more reproducible and stable mirror bending and thus mirror focusing. Focus spots around 1 µm are routinely achieved and allow a variety of experiments, which have in common the need for spatial resolution. The effective spatial resolution (approximately 0.2 µm) is limited by a convolution of beam size, scan-stage resolution, and stage stability. A four-bounce monochromator consisting of two channel-cut Si(111) crystals placed between the secondary source and KB mirrors allows for easy changes between white-beam and monochromatic experiments while maintaining a fixed beam position. High-resolution stage scans are performed while recording a fluorescence emission signal or an x-ray diffraction signal coming from either a monochromatic or a white focused beam. The former allows for elemental mapping, whereas the latter is used to produce two-dimensional maps of crystal phases, orientation, texture, and strain/stress. Typically achieved strain resolution is on the order of 5 × 10^-5 strain units.
Accurate sample positioning in the x-ray focus spot is achieved with a commercial laser-triangulation unit. A Si-drift detector serves as a high-energy-resolution (approximately 150 eV full width at half maximum) fluorescence detector. Fluorescence scans can be collected in continuous scan mode with up to 300 pixels/s scan speed. A charge coupled device area detector is utilized as diffraction detector. Diffraction can be performed in reflecting or transmitting geometry. Diffraction data are processed using XMAS, an in-house written software package for Laue and monochromatic microdiffraction analysis.
Source-Type Inversion of the September 03, 2017 DPRK Nuclear Test
NASA Astrophysics Data System (ADS)
Dreger, D. S.; Ichinose, G.; Wang, T.
2017-12-01
On September 3, 2017, the DPRK announced a nuclear test at their Punggye-ri site. This explosion registered mb 6.3 and was well recorded by global and regional seismic networks. We apply the source-type inversion method (e.g. Ford et al., 2012; Nayak and Dreger, 2015), and the MDJ2 seismic velocity model (Ford et al., 2009) to invert low frequency (0.02 to 0.05 Hz) complete three-component waveforms, and first-motion polarities to map the goodness of fit in source-type space. We have used waveform data from the New China Digital Seismic Network (BJT, HIA, MDJ), Korean Seismic Network (TJN), and the Global Seismograph Network (INCN, MAJO). From this analysis, the event discriminates as an explosion. For a pure explosion model, we find a scalar seismic moment of 5.77e+16 Nm (Mw 5.1); however, this model fails to fit the large Love waves registered on the transverse components. The best fitting complete solution finds a total moment of 8.90e+16 Nm (Mw 5.2) that is decomposed as 53% isotropic, 40% double-couple, and 7% CLVD, although the range of isotropic moment from the source-type analysis indicates that it could be as high as 60-80%. The isotropic moment in the source-type inversion is 4.75e+16 Nm (Mw 5.05). Assuming elastic moduli from model MDJ2, the explosion cavity radius is approximately 51 m, and the yield estimated using Denny and Johnson (1991) is 246 kt. Approximately 8.5 minutes after the blast a second seismic event was registered, which is best characterized as a vertically closing horizontal crack, perhaps representing the partial collapse of the blast cavity and/or a service tunnel. The total moment of the collapse is 3.34e+16 Nm (Mw 4.95). The volumetric moment of the collapse is 1.91e+16 Nm, approximately 1/3 to 1/2 of the explosive moment.
German TerraSAR-X observations of deformation (Wang et al., 2017) reveal large radial outward motions consistent with expected deformation for an explosive source, but lack significant vertical motions above the shot point. Forward elastic half-space modeling of the static deformation field indicates that the combination of the explosion and collapse explains the observed deformation to first order. We will present these results as well as a two-step inversion of the explosion in an attempt to better resolve the nature of the non-isotropic radiation of the event.
The Pearson-Readhead Survey of Compact Extragalactic Radio Sources from Space. I. The Images
NASA Astrophysics Data System (ADS)
Lister, M. L.; Tingay, S. J.; Murphy, D. W.; Piner, B. G.; Jones, D. L.; Preston, R. A.
2001-06-01
We present images from a space-VLBI survey using the facilities of the VLBI Space Observatory Programme (VSOP), drawing our sample from the well-studied Pearson-Readhead survey of extragalactic radio sources. Our survey has taken advantage of long space-VLBI baselines and large arrays of ground antennas, such as the Very Long Baseline Array and European VLBI Network, to obtain high-resolution images of 27 active galactic nuclei and to measure the core brightness temperatures of these sources more accurately than is possible from the ground. A detailed analysis of the source properties is given in accompanying papers. We have also performed an extensive series of simulations to investigate the errors in VSOP images caused by the relatively large holes in the (u,v)-plane when sources are observed near the orbit normal direction. We find that while the nominal dynamic range (defined as the ratio of map peak to off-source error) often exceeds 1000:1, the true dynamic range (map peak to on-source error) is only about 30:1 for relatively complex core-jet sources. For sources dominated by a strong point source, this value rises to approximately 100:1. We find the true dynamic range to be a relatively weak function of the difference in position angle (P.A.) between the jet P.A. and u-v coverage major axis P.A. For regions with low signal-to-noise ratios, typically located down the jet away from the core, large errors can occur, causing spurious features in VSOP images that should be interpreted with caution.
Noll, Michael L.; Chu, Anthony
2017-08-14
In 2005, the U.S. Geological Survey began a cooperative study with New York City Department of Environmental Protection to characterize the local groundwater-flow system and identify potential sources of seeps on the southern embankment at the Hillview Reservoir in southern Westchester County, New York. Monthly site inspections at the reservoir indicated an approximately 90-square-foot depression in the land surface directly upslope from a seep that has episodically flowed since 2007. In July 2008, the U.S. Geological Survey surveyed the topography of land surface in this depression area by collecting high-accuracy (resolution less than 1 inch) measurements. A point of origin was established for the topographic survey by using differentially corrected positional data collected by a global navigation satellite system. Eleven points were surveyed along the edge of the depression area and at arbitrary locations within the depression area by using robotic land-surveying techniques. The points were surveyed again in March 2012 to evaluate temporal changes in land-surface altitude. Survey measurements of the depression area indicated that the land-surface altitude at 8 of the 11 points decreased beyond the accepted measurement uncertainty during the 44 months from July 2008 to March 2012. Two additional control points were established at stable locations along Hillview Avenue, which runs parallel to the embankment. These points were measured during the July 2008 survey and measured again during the March 2012 survey to evaluate the relative accuracy of the altitude measurements. The relative horizontal and vertical (altitude) accuracies of the 11 topographic measurements collected in March 2012 were ±0.098 and ±0.060 feet (ft), respectively. 
Changes in topography at 8 of the 11 points ranged from 0.09 to 0.63 ft, and topography remained constant, or within the measurement uncertainty, for 3 of the 11 points. Two cross sections were constructed through the depression area by using land-surface altitude data that were interpolated from positional data collected during the two topographic surveys. Cross section A–A′ was approximately 8.5 ft long and consisted of three surveyed points that trended north to south across the depression. Land-surface altitude change decreased along the entire north-south trending cross section during the 44 months and ranged from 0.2 to more than 0.6 ft. In general, greater land-surface altitude change was measured north of the midpoint than south of the midpoint of the cross section. Cross section B–B′ was 18 ft long and consisted of six surveyed points that trended east to west across the depression. Land-surface altitude change generally decreased or remained constant along the east-west trending cross section during the 44 months and ranged from 0.0 to 0.3 ft. Volume change of the depression area was calculated by using a three-dimensional geographic information system utility that subtracts interpolated surfaces. The results indicated a net volume loss of approximately 38 ±5 cubic feet of material from the depression area during the 44 months.
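The surface-subtraction step amounts to differencing the two interpolated grids cell by cell and scaling by the cell footprint. A minimal sketch (hypothetical function name; a regular grid of equal-area cells is assumed, and the numbers in the example are illustrative):

```python
import numpy as np

def depression_volume_change(z_early, z_late, cell_area_ft2):
    """Net volume change (cubic ft) between two land-surface altitude grids
    interpolated to the same extent and resolution: the per-cell altitude
    differences are summed and scaled by the cell footprint area."""
    dz = np.asarray(z_late, dtype=float) - np.asarray(z_early, dtype=float)
    return float(dz.sum() * cell_area_ft2)
```

A uniform 0.5 ft drop over a 76 ft^2 footprint, for instance, yields a net loss of 38 cubic feet.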
NASA Astrophysics Data System (ADS)
Li, Jia; Shen, Hua; Zhu, Rihong; Gao, Jinming; Sun, Yue; Wang, Jinsong; Li, Bo
2018-06-01
The precision of the measurement of aspheric and freeform surfaces remains the primary factor restricting their manufacture and application. One effective means of measuring such surfaces involves using reference or probe beams with angle modulation, such as the tilted-wave interferometer (TWI). Measurement efficiency can be improved by obtaining the optimal point-source array for each test piece before TWI measurement. To form a point-source array based on the gradients of different surfaces under test, we established a mathematical model describing the relationship between the point-source array and the test surface. However, the optimal point sources are irregularly distributed. To achieve a flexible point-source array matched to the gradient of the test surface, a novel interference setup using a fiber array is proposed, in which every point source can be switched on and off independently. Simulations and actual measurements of two different surfaces are given in this paper to verify the mathematical model. Finally, an experiment testing an off-axis ellipsoidal surface confirmed the validity of the proposed interference system.
Computer-assisted 3D kinematic analysis of all leg joints in walking insects.
Bender, John A; Simpson, Elaine M; Ritzmann, Roy E
2010-10-26
High-speed video can provide fine-scaled analysis of animal behavior. However, extracting behavioral data from video sequences is a time-consuming, tedious, subjective task. These issues are exacerbated where accurate behavioral descriptions require analysis of multiple points in three dimensions. We describe a new computer program written to assist a user in simultaneously extracting three-dimensional kinematics of multiple points on each of an insect's six legs. Digital video of a walking cockroach was collected in grayscale at 500 fps from two synchronized, calibrated cameras. We improved the legs' visibility by painting white dots on the joints, similar to techniques used for digitizing human motion. Compared to manual digitization of 26 points on the legs over a single, 8-second bout of walking (or 106,496 individual 3D points), our software achieved approximately 90% of the accuracy with 10% of the labor. Our experimental design reduced the complexity of the tracking problem by tethering the insect and allowing it to walk in place on a lightly oiled glass surface, but in principle, the algorithms implemented are extensible to free walking. Our software is free and open-source, written in the free language Python and including a graphical user interface for configuration and control. We encourage collaborative enhancements to make this tool both better and widely utilized.
Changing Regulations of COD Pollution Load of Weihe River Watershed above TongGuan Section, China
NASA Astrophysics Data System (ADS)
Zhu, Lei; Liu, WanQing
2018-02-01
TongGuan Section of the Weihe River Watershed is a provincial section between Shaanxi Province and Henan Province, China. The Weihe River Watershed above TongGuan Section is taken as the research objective in this paper, and COD is chosen as the water quality parameter. According to the discharge characteristics of point-source and non-point-source pollution, a characteristic section load (CSLD) method is suggested, and the point and non-point source pollution loads of the watershed above TongGuan Section are calculated for the rainy, normal and dry seasons of 2013. The results show that the monthly point-source pollution loads discharge stably, whereas the monthly non-point-source pollution loads change greatly; the proportion of the total COD load contributed by non-point sources decreases in the rainy, normal and dry periods in turn.
GARLIC, A SHIELDING PROGRAM FOR GAMMA RADIATION FROM LINE- AND CYLINDER- SOURCES
DOE Office of Scientific and Technical Information (OSTI.GOV)
Roos, M.
1959-06-01
GARLIC is a program for computing the gamma-ray flux or dose rate at a shielded isotropic point detector due to a line source or the line equivalent of a cylindrical source. The source strength distribution along the line must be either uniform or an arbitrary part of the positive half-cycle of a cosine function. The line source can be oriented arbitrarily with respect to the main shield and the detector, except that the detector must not be located on the line source or on its extension. The main shield is a homogeneous plane slab in which scattered radiation is accounted for by multiplying each point element of the line source by a point source buildup factor inside the integral over the point elements. Between the main shield and the line source additional shields can be introduced, which are either plane slabs, parallel to the main shield, or cylindrical rings, coaxial with the line source. Scattered radiation in the additional shields can only be accounted for by constant buildup factors outside the integral. GARLIC-xyz is an extended version particularly suited for the frequently met problem of shielding a room containing a large number of line sources in different positions. The program computes the angles and linear dimensions of a problem for GARLIC when the positions of the detector point and the end points of the line source are given as points in an arbitrary rectangular coordinate system. As an example, the isodose curves in water are presented for a monoenergetic cosine-distributed line source at several source energies and for an operating fuel element of the Swedish reactor R3. (auth)
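GARLIC's central operation, integrating attenuated point elements of the line source with a buildup factor inside the integral, can be sketched numerically. The linear buildup factor B(x) = 1 + x below is an illustrative stand-in for the tabulated buildup factors a real shielding code interpolates, and the function name is hypothetical:

```python
import numpy as np

def line_source_flux(detector, src0, src1, slab_mfp, strength=1.0, n=2001):
    """Flux at an isotropic point detector from a uniform line source behind
    a plane slab shield (slab normal along z; thickness given in mean free
    paths at normal incidence). Each point element contributes
    strength * B(b) * exp(-b) / (4 pi r^2), where b is the slant optical
    path through the slab and B(b) = 1 + b is a simple linear buildup
    factor applied inside the integral over the point elements."""
    det, a, c = (np.asarray(v, dtype=float) for v in (detector, src0, src1))
    ts = np.linspace(0.0, 1.0, n)
    pts = a + np.outer(ts, c - a)                  # point elements of the line
    vecs = det - pts
    r = np.linalg.norm(vecs, axis=1)
    cos_th = np.abs(vecs[:, 2]) / r                # angle to the slab normal
    path = slab_mfp / cos_th                       # slant optical path
    kernel = strength * (1.0 + path) * np.exp(-path) / (4.0 * np.pi * r**2)
    dt = ts[1] - ts[0]                             # trapezoidal rule over the line
    integral = (kernel.sum() - 0.5 * (kernel[0] + kernel[-1])) * dt
    return integral * np.linalg.norm(c - a)
```

As a sanity check, a very short line far from the detector with zero slab thickness reduces to the bare point-source kernel S·L/(4·pi·r^2).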
Barlow, Nathaniel S; Schultz, Andrew J; Weinstein, Steven J; Kofke, David A
2015-08-21
The mathematical structure imposed by the thermodynamic critical point motivates an approximant that synthesizes two theoretically sound equations of state: the parametric and the virial. The former is constructed to describe the critical region, incorporating all scaling laws; the latter is an expansion about zero density, developed from molecular considerations. The approximant is shown to yield an equation of state capable of accurately describing properties over a large portion of the thermodynamic parameter space, far greater than that covered by each treatment alone.
NASA Technical Reports Server (NTRS)
Filyushkin, V. V.; Madronich, S.; Brasseur, G. P.; Petropavlovskikh, I. V.
1994-01-01
Based on a derivation of the two-stream daytime-mean equations of radiative flux transfer, a method for computing the daytime-mean actinic fluxes in the absorbing and scattering vertically inhomogeneous atmosphere is suggested. The method applies direct daytime integration of the particular solutions of the two-stream approximations or the source functions. It is valid for any duration of period of averaging. The merit of the method is that the multiple scattering computation is carried out only once for the whole averaging period. It can be implemented with a number of widely used two-stream approximations. The method agrees with the results obtained with 200-point multiple scattering calculations. The method was also tested in runs with a 1-km cloud layer with optical depth of 10, as well as with aerosol background. Comparison of the results obtained for a cloud subdivided into 20 layers with those obtained for a one-layer cloud with the same optical parameters showed that direct integration of particular solutions possesses an 'analytical' accuracy. In the case of the source function interpolation, the actinic fluxes calculated above the one-layer and 20-layer clouds agreed within 1%-1.5%, while below the cloud they may differ up to 5% (in the worst case). The ways of enhancing the accuracy (in a 'two-stream sense') and computational efficiency of the method are discussed.
WISEGAL. WISE for the Galactic Plane
NASA Astrophysics Data System (ADS)
Noriega-Crespo, Alberto
There is truly a community effort to study on a global scale the properties of the Milky Way, such as its structure, its star formation, and its interstellar medium, and to use this knowledge to create accurate templates for understanding the properties of extragalactic systems. A testament to this effort are the multi-wavelength surveys of the Galactic Plane that have recently been carried out or are underway, both from the ground (e.g. IPHAS, ATLASGAL, JCMT Galactic Plane Survey) and from space (GLIMPSE, MIPSGAL, HiGAL). Adding to this wealth of data is the recent release of approximately 57 percent of the whole sky by the Wide-field Infrared Survey Explorer (WISE) team of their high angular resolution and sensitive mid-IR (3.4, 4.6, 12 and 22 micron) images and point source catalogs, encompassing nearly three quarters of the Galactic Plane, including the less studied regions of the Outer Galaxy. The WISE Atlas Images are spectacular, but to take full advantage of them, they need to be transformed from their default Data Number (DN) units into absolute surface brightness calibrated units. Furthermore, to mitigate the contamination effect of the point sources on the extended/diffuse emission, we will remove them and create residual images. This processing will enable a wide range of science projects using the Atlas Images, where measuring the spectral energy distribution of the extended emission is crucial. In this project we propose to transform the W3 (12 micron) and W4 (22 micron) images of the Galactic Plane, in particular of the Outer Galaxy where WISE provides a unique data set, into background-calibrated, point-source-subtracted images using IRIS (DIRBE-calibrated IRAS data).
This transformation will allow us to carry out research projects on massive star formation, the properties of dust in the diffuse ISM, the three-dimensional distribution of the dust emission in the Galaxy, and the mid/far-infrared properties of supernova remnants, among others, and to perform a detailed comparison between the characteristics (e.g. star formation rate, dust properties) of the Inner and Outer Galaxy. The background-calibrated, point-source-subtracted images will be released to the astronomical community to be fully exploited and used in many other science projects beyond those described in this proposal.
ROSAT X-ray sources embedded in the rho Ophiuchi cloud core
NASA Astrophysics Data System (ADS)
Casanova, Sophie; Montmerle, Thierry; Feigelson, Eric D.; Andre, Philippe
1995-02-01
We present a deep ROSAT Position Sensitive Proportional Counter (PSPC) image of the central region of the rho Oph star-forming region. The selected area, about 35 x 35 arcmin in size, is rich with dense molecular cores and young stellar objects (YSOs). Fifty-five reliable X-ray sources are detected (and up to 50 more candidates may be present) above approximately 1 keV, doubling the number of Einstein sources in this area. These sources are cross-identified with an updated list of 88 YSOs associated with the rho Oph cloud core. A third of the reliable X-ray sources do not have optical counterparts on photographic plates. Most can be cross-identified with Class II and Class III infrared (IR) sources, which are embedded T Tauri stars, but three reliable X-ray sources and up to seven candidate sources are tentatively identified with Class I protostars. Eighteen reliable, and up to 20 candidate, X-ray sources are probably new cloud members. The overall detection rate of the bona fide cloud population is very high (73% for the Class II and Class III objects). The spatial distribution of the X-ray sources closely follows that of the molecular gas. The visual extinctions Av (estimated from near-IR data) of the ROSAT sources can be as high as 50 or more, confirming that most are embedded in the cloud core and are presumably very young. Using bolometric luminosities Lbol estimated from J magnitudes, a tight correlation between Lx and Lbol is found, similar to that seen for older T Tauri stars in the Cha I cloud: Lx approximately 10^-4 Lbol. A general relation Lx proportional to Lbol seems to apply to all T Tauri-like YSOs. The near equality of the extinction in the IR J band and in the keV X-ray range implies that this relation is valid for the detected fluxes as well as for the dereddened fluxes.
The X-ray luminosity function of the embedded sources in rho Oph spans a range of Lx from approximately 10^28.5 to at least 10^31.5 ergs/s and is statistically indistinguishable from that of X-ray-detected visible T Tauri stars. We estimate a total X-ray luminosity Lx,Oph of at least 6 x 10^32 ergs/s from approximately 200 X-ray sources in the cloud core, down to Lbol approximately 0.1 solar luminosity or Mstar approximately 0.3 solar mass. We discuss several consequences of in situ irradiation of molecular clouds by X-rays from embedded YSOs. These X-rays must partially ionize the inner regions of circumstellar disk coronae, possibly playing an important role in coupling magnetic fields and winds or bipolar outflows. Photon-stimulated desorption of large molecules by YSO X-rays may be partly responsible for the bright 12 micrometer halos seen in some molecular clouds.
A Multi-Epoch Timing and Spectral Study of the Ultraluminous X-Ray NGC 5408 X-1 with XMM-Newton
NASA Technical Reports Server (NTRS)
Dheeraj, Pasham; Strohmayer, Tod E.
2012-01-01
We present results of new XMM-Newton observations of the ultraluminous X-ray source (ULX) NGC 5408 X-1, one of the few ULXs to show quasi-periodic oscillations (QPOs). We detect QPOs in each of four new (approximately 100 ks) pointings, expanding the range of frequencies observed from 10 to 40 mHz. We compare our results with the timing and spectral correlations seen in stellar-mass black hole systems, and find that the qualitative nature of the timing and spectral behavior of NGC 5408 X-1 is similar to systems in the steep power-law state exhibiting Type-C QPOs. However, for this analogy to hold quantitatively, we must only be seeing the so-called saturated portion of the QPO frequency-photon index (or disk flux) relation. Assuming this to be the case, we place a lower limit on the mass of NGC 5408 X-1 of greater than or equal to 800 solar masses. Alternatively, the QPO frequency is largely independent of the spectral parameters, in which case a close analogy with the Type-C QPOs in stellar systems is problematic. Measurement of the source's timing properties over a wider range of energy spectral index is needed to definitively resolve this ambiguity. We searched all the available data for both a broad Fe emission line and high-frequency QPO analogs (0.1-1 Hz), but detected neither. We place upper limits on the equivalent width of any Fe emission feature in the 6-7 keV band and on the amplitude (rms) of a high-frequency QPO analog of approximately 10 eV and approximately 4%, respectively.
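A minimal sketch of how such mHz QPOs are typically searched for, using a Leahy-normalized periodogram of an evenly binned light curve. This is a generic method, not the authors' specific pipeline, and the light curve below is simulated:

```python
import numpy as np

def leahy_power_spectrum(counts, dt):
    """Leahy-normalized periodogram of an evenly binned light curve.
    Pure Poisson noise averages to a power of 2 in this normalization,
    so a strong QPO stands out as a narrow peak far above 2."""
    counts = np.asarray(counts, dtype=float)
    power = 2.0 * np.abs(np.fft.rfft(counts)) ** 2 / counts.sum()
    freqs = np.fft.rfftfreq(len(counts), d=dt)
    return freqs[1:], power[1:]          # drop the zero-frequency term

# Hypothetical 100 ks observation with a 20 mHz, 10%-amplitude signal:
rng = np.random.default_rng(1)
dt = 1.0                                  # 1 s bins
t = np.arange(0, 100_000, dt)
rate = 5.0 * (1.0 + 0.1 * np.sin(2 * np.pi * 0.02 * t))  # counts/s
counts = rng.poisson(rate * dt)
freqs, power = leahy_power_spectrum(counts, dt)
peak_freq = freqs[np.argmax(power)]       # recovers the 20 mHz signal
```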
NASA Technical Reports Server (NTRS)
Greenstadt, E. W.; Le, G.; Strangeway, R. J.
1995-01-01
We review our current knowledge of ULF waves in planetary foreshocks. Most of this knowledge comes from observations taken within a few Earth radii of the terrestrial bow shock. Terrestrial foreshock ULF waves can be divided into three types: large-amplitude low-frequency waves (approximately 30-s period), upstream-propagating whistlers (1-Hz waves), and 3-s waves. The 30-s waves are apparently generated by back-streaming ion beams, while the 1-Hz waves are generated at the bow shock. The source of the 3-s waves has yet to be determined. In addition to issues concerning the source of ULF waves in the foreshock, the waves present a number of challenges, both in terms of data acquisition and of comparison with theory. The various waves have different coherence scales, from approximately 100 km to approximately 1 Earth radius. Thus multi-spacecraft separation strategies must be tailored to the phenomenon of interest. From a theoretical point of view, the ULF waves are observed in a plasma in which the thermal pressure is comparable to the magnetic pressure, and the rest-frame wave frequency can be a moderate fraction of the proton gyrofrequency. This requires the use of kinetic plasma wave dispersion relations, rather than multi-fluid MHD. Lastly, and perhaps most significantly, ULF waves are used to probe the ambient plasma, with inferences being drawn concerning the types of energetic ion distributions within the foreshock. However, since most of the data were acquired close to the bow shock, the properties of the more distant foreshock have to be deduced mainly through extrapolation of the near-shock results. A general understanding of the wave and plasma populations within the foreshock, their interrelation, and their evolution requires additional data from the more distant foreshock.
DeVoe, Jennifer E; Wallace, Lorraine S; Pandhi, Nancy; Solotaroff, Rachel; Fryer, George E
2008-01-01
To examine whether having a usual source of care (USC) is associated with positive patient perceptions of health care communication and to identify demographic factors among patients with a USC that are independently associated with differing reports of how patients perceive their involvement in health care decision making. Cross-sectional analyses of nationally representative data from the 2002 Medical Expenditure Panel Survey. Among adults with a health care visit in the past year (n = approximately 16,700), we measured independent associations between having a USC and patient perceptions of health care communication. Second, among respondents with a USC (n = approximately 18,000), we assessed the independent association between various demographic factors and indicators of patients' perceptions of their autonomy in making health care decisions. Approximately 78% of adults in the United States reported having a USC. Those with a USC were more likely to report that providers always listened to them, always explained things clearly, always showed respect, and always spent enough time with them. Patients who perceived higher levels of decision-making autonomy were non-Hispanic, had health insurance coverage, lived in rural areas, and had higher incomes. Patients with a USC were more likely to perceive positive health care interactions. Certain demographic factors among the subgroups of Medical Expenditure Panel Survey respondents with a USC were associated with patient perceptions of greater decision-making autonomy. Efforts to ensure universal access to a USC must be partnered with broader awareness and training of USC providers to engage patients from various demographic backgrounds equally when making health care decisions at the point of care.
NASA Astrophysics Data System (ADS)
Malik, Matej; Grosheintz, Luc; Mendonça, João M.; Grimm, Simon L.; Lavie, Baptiste; Kitzmann, Daniel; Tsai, Shang-Min; Burrows, Adam; Kreidberg, Laura; Bedell, Megan; Bean, Jacob L.; Stevenson, Kevin B.; Heng, Kevin
2017-02-01
We present the open-source radiative transfer code named HELIOS, which is constructed for studying exoplanetary atmospheres. In its initial version, the model atmospheres of HELIOS are one-dimensional and plane-parallel, and the equation of radiative transfer is solved in the two-stream approximation with nonisotropic scattering. A small set of the main infrared absorbers is employed, computed with the opacity calculator HELIOS-K and combined using a correlated-k approximation. The molecular abundances originate from validated analytical formulae for equilibrium chemistry. We compare HELIOS with the work of Miller-Ricci & Fortney using a model of GJ 1214b, and perform several tests, where we find: model atmospheres with single-temperature layers struggle to converge to radiative equilibrium; k-distribution tables constructed with ≳0.01 cm^-1 resolution in the opacity function (≲10^3 points per wavenumber bin) may result in errors of ≳1%-10% in the synthetic spectra; and a diffusivity factor of 2 approximates well the exact radiative transfer solution in the limit of pure absorption. We construct “null-hypothesis” models (chemical equilibrium, radiative equilibrium, and solar elemental abundances) for six hot Jupiters. We find that the dayside emission spectra of HD 189733b and WASP-43b are consistent with the null hypothesis, while the null-hypothesis models consistently underpredict the observed fluxes of WASP-8b, WASP-12b, WASP-14b, and WASP-33b. We demonstrate that our results are somewhat insensitive to the choice of stellar models (blackbody, Kurucz, or PHOENIX) and metallicity, but are strongly affected by higher carbon-to-oxygen ratios. The code is publicly available as part of the Exoclimes Simulation Platform (exoclime.net).
Editing wild points in isolation - Fast agreement for reliable systems (Preliminary version)
NASA Technical Reports Server (NTRS)
Kearns, Phil; Evans, Carol
1989-01-01
Consideration is given to the intuitively appealing notion of discarding sensor values which are strongly suspected of being erroneous in a modified approximate agreement protocol. Approximate agreement with editing imposes a time bound upon the convergence of the protocol - no such bound was possible for the original approximate agreement protocol. This new approach is potentially useful in the construction of asynchronous fault tolerant systems. The main result is that a wild-point replacement technique called t-worst editing can be shown to guarantee convergence of the approximate agreement protocol to a valid agreement value. Results are presented for a four-processor synchronous system in which a single processor may be faulty.
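The editing step can be sketched as follows, assuming (our reading, not a quote of the paper) that t-worst editing discards the t smallest and t largest received values as suspected wild points before averaging:

```python
# Sketch of the t-worst editing step of an approximate agreement round.

def t_worst_edit(values, t):
    """Return the mean of the values that survive t-worst editing:
    the t smallest and t largest readings are discarded as suspected
    wild points, bounding the influence of up to t faulty processors."""
    if len(values) <= 2 * t:
        raise ValueError("need more than 2*t values to tolerate t faults")
    kept = sorted(values)[t:len(values) - t]
    return sum(kept) / len(kept)

# Four-processor round with one faulty sensor and t = 1: the wild
# point 500.0 (and the smallest reading) are edited out before averaging.
agreed = t_worst_edit([10.1, 9.9, 10.0, 500.0], 1)
```

In the full protocol each processor exchanges values and applies this editing rule every round, which is what yields the bounded convergence time the abstract describes.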
DOE Office of Scientific and Technical Information (OSTI.GOV)
Malin, Martha J.; Bartol, Laura J.; DeWerd, Larry A., E-mail: mmalin@wisc.edu, E-mail: ladewerd@wisc.edu
2015-05-15
Purpose: To investigate why dose-rate constants for ^125I and ^103Pd seeds computed using the spectroscopic technique, Λ_spec, differ from those computed with standard Monte Carlo (MC) techniques. A potential cause of these discrepancies is the spectroscopic technique’s use of approximations of the true fluence distribution leaving the source, φ_full. In particular, the fluence distribution used in the spectroscopic technique, φ_spec, approximates the spatial, angular, and energy distributions of φ_full. This work quantified the extent to which each of these approximations affects the accuracy of Λ_spec. Additionally, this study investigated how the simplified water-only model used in the spectroscopic technique impacts the accuracy of Λ_spec. Methods: Dose-rate constants as described in the AAPM TG-43U1 report, Λ_full, were computed with MC simulations using the full source geometry for each of 14 different ^125I and 6 different ^103Pd source models. In addition, the spectrum emitted along the perpendicular bisector of each source was simulated in vacuum using the full source model and used to compute Λ_spec. Λ_spec was compared to Λ_full to verify the discrepancy reported by Rodriguez and Rogers. Using MC simulations, a phase space of the fluence leaving the encapsulation of each full source model was created. The spatial and angular distributions of φ_full were extracted from the phase spaces and were qualitatively compared to those used by φ_spec. Additionally, each phase space was modified to reflect one of the approximated distributions (spatial, angular, or energy) used by φ_spec. The dose-rate constant resulting from using approximated distribution i, Λ_approx,i, was computed using the modified phase space and compared to Λ_full.
For each source, this process was repeated for each approximation in order to determine which approximations used in the spectroscopic technique affect the accuracy of Λ_spec. Results: For all sources studied, the angular and spatial distributions of φ_full were more complex than the distributions used in φ_spec. Differences between Λ_spec and Λ_full ranged from -0.6% to +6.4%, confirming the discrepancies found by Rodriguez and Rogers. The largest contribution to the discrepancy was the assumption of isotropic emission in φ_spec, which caused differences in Λ of up to +5.3% relative to Λ_full. Use of the approximated spatial and energy distributions caused smaller average discrepancies in Λ of -0.4% and +0.1%, respectively. The water-only model introduced an average discrepancy in Λ of -0.4%. Conclusions: The approximations used in φ_spec caused discrepancies between Λ_approx,i and Λ_full of up to 7.8%. With the exception of the energy distribution, the approximations used in φ_spec contributed to this discrepancy for all source models studied. To improve the accuracy of Λ_spec, the spatial and angular distributions of φ_full could be measured, with the measurements replacing the approximated distributions. The methodology used in this work could be used to determine the resolution that such measurements would require by computing the dose-rate constants from phase spaces modified to reflect φ_full binned at different spatial and angular resolutions.
76 FR 20606 - Proposed Flood Elevation Determinations
Federal Register 2010, 2011, 2012, 2013, 2014
2011-04-13
[Table excerpt: flooding source(s), location of referenced elevation, and communities affected for Sevier County, Utah, and Incorporated Areas, including Albinus Canyon and a creek split flow approximately 400 feet downstream of State Highway 118 (elevation +5435, Town of Joseph).]
Analytical Expressions for Deformation from an Arbitrarily Oriented Spheroid in a Half-Space
NASA Astrophysics Data System (ADS)
Cervelli, P. F.
2013-12-01
Deformation from magma chambers can be modeled by an elastic half-space with an embedded cavity subject to uniform pressure change along its interior surface. For a small number of cavity shapes, such as a sphere or a prolate spheroid, closed-form, analytical expressions for deformation have been derived, although these only approximate the uniform-pressure-change boundary condition, with the approximation becoming more accurate as the ratio of source depth to source dimension increases. Using the method of Eshelby [1957] and Yang [1988], which consists of a distribution of double forces and centers of dilatation along the vertical axis, I have derived expressions for displacement from a finite spheroid of arbitrary orientation and aspect ratio that are exact in an infinite elastic medium and approximate in a half-space. The approximation, like those for other cavity shapes, becomes increasingly accurate as the depth-to-source ratio grows larger, and is accurate to within a few percent in most real-world cases. I have also derived expressions for the deformation-gradient tensor, i.e., the derivatives of each component of displacement with respect to each coordinate direction. These can be transformed easily into the strain and stress tensors. The expressions give deformation both at the surface and at any point within the half-space, and include conditional statements that account for limiting cases that would otherwise prove singular. I have developed MATLAB code for these expressions (and their derivatives), which I use to demonstrate the accuracy of the approximation by showing how well the uniform-pressure-change boundary condition is satisfied in a variety of cases. I also show that a vertical, oblate spheroid with a zero-length vertical axis is equivalent to the penny-shaped crack of Fialko [2001] in an infinite medium and an excellent approximation in a half-space.
Finally, because, in many cases, volume change is more tangible than pressure change, I have derived an equation that relates these two quantities for the spheroid: volume change equals pressure change × 2/3 × π/μ × a constant that depends on Poisson's ratio and the spheroid geometry. Eshelby, J. D., The determination of the elastic field of an ellipsoidal inclusion and related problems, Proc. R. Soc. London, Ser. A, 241, 376-396, 1957. Fialko, Y., Khazan, Y., and Simons, M., Deformation due to a pressurized horizontal circular crack in an elastic half-space, with applications to volcano geodesy, Geophys. J. Int., 146, 181-190, 2001. Yang, X., Davis, P. M., and Dieterich, J. H., Deformation from inflation of a dipping finite prolate spheroid in an elastic half-space as a model for volcanic stressing, J. Geophys. Res., 93(B5), 4249-4257, 1988.
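For the simplest cavity shape mentioned above, a sphere, the classical Mogi point-source approximation gives the surface displacement in closed form. The sketch below uses that standard formula, not the more general spheroid expressions derived in this work, and the input values are illustrative:

```python
import math

def mogi_uz(r, depth, radius, dP, mu, nu=0.25):
    """Vertical surface displacement from a pressurized spherical
    cavity (Mogi model):
        uz = (1 - nu) * dP * a^3 / mu * d / (r^2 + d^2)^(3/2)
    Like the spheroid expressions, this is accurate only when the
    source dimension is small compared with its depth."""
    return (1.0 - nu) * dP * radius**3 / mu * depth / (r**2 + depth**2) ** 1.5

# Peak uplift directly above a 500 m radius sphere at 2 km depth,
# dP = 10 MPa, shear modulus 30 GPa (illustrative values):
uz0 = mogi_uz(0.0, 2000.0, 500.0, 10e6, 30e9)   # about 7.8 mm
```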
Local linear regression for function learning: an analysis based on sample discrepancy.
Cervellera, Cristiano; Macciò, Danilo
2014-11-01
Local linear regression models, a kind of nonparametric structure that locally performs a linear estimation of the target function, are analyzed in the context of empirical risk minimization (ERM) for function learning. The analysis is carried out with emphasis on geometric properties of the available data. In particular, the discrepancy of the observation points used both to build the local regression models and to compute the empirical risk is considered. This allows one to treat indifferently the case in which the samples come from a random external source and the one in which the input space can be freely explored. Both the consistency of the ERM procedure and the approximating capabilities of the estimator are analyzed, proving conditions that ensure convergence. Since the theoretical analysis shows that the estimation improves as the discrepancy of the observation points becomes smaller, low-discrepancy sequences, a family of sampling methods commonly employed for efficient numerical integration, are also analyzed. Simulation results involving two different examples of function learning are provided.
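A generic sketch of a local linear estimate at a query point, assuming Gaussian kernel weights and an evenly spread (low-discrepancy-style) design; this is not the authors' exact estimator or sampling scheme:

```python
import numpy as np

def local_linear_predict(X, y, x0, bandwidth=0.05):
    """Estimate f(x0) by a kernel-weighted linear least-squares fit
    centered at x0; the local intercept is the function estimate."""
    w = np.sqrt(np.exp(-((X - x0) ** 2) / (2 * bandwidth ** 2)))
    A = np.column_stack([np.ones_like(X), X - x0])   # local linear basis
    beta, *_ = np.linalg.lstsq(A * w[:, None], y * w, rcond=None)
    return beta[0]

# Evenly spread observation points versus the target f(x) = sin(2*pi*x):
X = np.linspace(0.0, 1.0, 200)
y = np.sin(2 * np.pi * X)
est = local_linear_predict(X, y, 0.25)   # true value is 1.0
```

As the theory in the abstract suggests, the estimate improves as the design points fill the input space more evenly and densely; the small residual error here is the usual curvature bias of a local linear fit at a maximum.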
Generation of a Circumstellar Gas Disk by Hot Jupiter WASP-12b
NASA Astrophysics Data System (ADS)
Debrecht, Alex; Carroll-Nellenback, Jonathan; Frank, Adam; Fossati, Luca; Blackman, Eric G.; Dobbs-Dixon, Ian
2018-05-01
Observations of transiting extrasolar planets provide rich sources of data for probing the in-system environment. In the WASP-12 system, a broad depression in the usually bright Mg II h&k lines has been observed, in addition to atmospheric escape from the extremely hot Jupiter WASP-12b. It has been hypothesized that a translucent circumstellar cloud formed by the outflow from the planet causes the observed signatures. We perform 3D hydrodynamic simulations of the full system environment of WASP-12, injecting a planetary wind and a stellar wind from their respective surfaces. We find that a torus of density high enough to account for the lack of Mg II h&k line core emission in WASP-12 can be formed in approximately 13 years. We also perform synthetic observations of the Lyman-alpha spectrum at different points in the planet's orbit, which demonstrate that significant absorption occurs at all points in the orbit, not just during transits, as suggested by the observations.
Firth, Jacqueline; Balraj, Vinohar; Muliyil, Jayaprakash; Roy, Sheela; Rani, Lilly Michael; Chandresekhar, R.; Kang, Gagandeep
2010-01-01
To assess water contamination and the relative effectiveness of three options for point-of-use water treatment in South India, we conducted a 6-month randomized, controlled intervention trial using chlorine, Moringa oleifera seeds, a closed valved container, and controls. One hundred twenty-six families participated. Approximately 70% of public drinking water sources had thermotolerant coliform counts > 100/100 mL. Neither M. oleifera seeds nor containers reduced coliform counts in water samples from participants' homes. Chlorine reduced thermotolerant coliform counts to potable levels, but was less acceptable to participants. Laboratory testing of M. oleifera seeds in water from the village confirmed the lack of reduction in coliform counts, in contrast to the improvement seen with Escherichia coli seeded distilled water. This discrepancy merits further study, as M. oleifera was effective in reducing coliform counts in other studies and compliance with Moringa use in this study was high. PMID:20439952
The flux qubit revisited to enhance coherence and reproducibility
Yan, Fei; Gustavsson, Simon; Kamal, Archana; Birenbaum, Jeffrey; Sears, Adam P; Hover, David; Gudmundsen, Ted J.; Rosenberg, Danna; Samach, Gabriel; Weber, S; Yoder, Jonilyn L.; Orlando, Terry P.; Clarke, John; Kerman, Andrew J.; Oliver, William D.
2016-01-01
The scalable application of quantum information science will stand on reproducible and controllable high-coherence quantum bits (qubits). Here, we revisit the design and fabrication of the superconducting flux qubit, achieving a planar device with broad-frequency tunability, strong anharmonicity, high reproducibility and relaxation times in excess of 40 μs at its flux-insensitive point. Qubit relaxation times T1 across 22 qubits are consistently matched with a single model involving resonator loss, ohmic charge noise and 1/f flux noise, a noise source previously considered primarily in the context of dephasing. We furthermore demonstrate that qubit dephasing at the flux-insensitive point is dominated by residual thermal photons in the readout resonator. The resulting photon shot noise is mitigated using a dynamical decoupling protocol, resulting in T2 ≈ 85 μs, approximately the 2T1 limit. In addition to realizing an improved flux qubit, our results uniquely identify photon shot noise as limiting T2 in contemporary qubits based on transverse qubit-resonator interaction. PMID:27808092
Comment on "Fractional quantum mechanics" and "Fractional Schrödinger equation"
NASA Astrophysics Data System (ADS)
Wei, Yuchuan
2016-06-01
In this Comment we point out some shortcomings in two papers [N. Laskin, Phys. Rev. E 62, 3135 (2000), 10.1103/PhysRevE.62.3135; N. Laskin, Phys. Rev. E 66, 056108 (2002), 10.1103/PhysRevE.66.056108]. We prove that the fractional uncertainty relation does not hold generally. The probability continuity equation in fractional quantum mechanics has a missing source term, which leads to particle teleportation, i.e., a particle can teleport from one place to another. Since the relativistic kinetic energy can be viewed as an approximate realization of the fractional kinetic energy, particle teleportation should be an observable relativistic effect in quantum mechanics. With the help of this concept, superconductivity could be viewed as the teleportation of electrons from one side of a superconductor to the other, and superfluidity could be viewed as the teleportation of helium atoms from one end of a capillary tube to the other. We also point out how to teleport a particle to an arbitrary destination.
NASA Astrophysics Data System (ADS)
Li, Qiangkun; Hu, Yawei; Jia, Qian; Song, Changji
2018-02-01
The estimation of pollutant concentration in agricultural drainage is the key point of quantitative research on agricultural non-point source pollution loads. Guided by uncertainty theory, the combination of fertilization and irrigation is treated as an impulse input to the farmland, while the pollutant concentration in the agricultural drain is regarded as the response process corresponding to that impulse input. The migration and transformation of pollutants in soil is expressed by an Inverse Gaussian probability density function, and the behavior at different crop growth periods is captured by adjusting the parameters of the Inverse Gaussian distribution. On this basis, an estimation model for pollutant concentration in agricultural drainage at the field scale was constructed. Taking the Qing Tong Xia Irrigation District in Ningxia as an example, the concentrations of nitrate nitrogen and total phosphorus in agricultural drainage were simulated with this model. The results show that the simulated results agree well with measured data; the Nash-Sutcliffe coefficients were 0.972 and 0.964, respectively.
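A minimal sketch of the model ingredients described above: an Inverse Gaussian unit response to fertilization/irrigation impulses, and the Nash-Sutcliffe efficiency used to judge the fit. The parameter values and impulse data are illustrative, not the calibrated values from the study:

```python
import math

def inverse_gaussian_pdf(t, mu, lam):
    """Inverse Gaussian density, used here as the unit impulse
    response of drain pollutant concentration; mu and lam would be
    tuned per crop growth period (values below are illustrative)."""
    if t <= 0.0:
        return 0.0
    return math.sqrt(lam / (2.0 * math.pi * t**3)) * \
        math.exp(-lam * (t - mu) ** 2 / (2.0 * mu**2 * t))

def drain_concentration(t, impulses, mu, lam):
    """Superpose responses to (time, mass) fertilization/irrigation
    impulses applied to the field."""
    return sum(m * inverse_gaussian_pdf(t - t0, mu, lam)
               for t0, m in impulses if t > t0)

def nash_sutcliffe(obs, sim):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit."""
    mean_obs = sum(obs) / len(obs)
    num = sum((o - s) ** 2 for o, s in zip(obs, sim))
    den = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - num / den

# Concentration 10 days after two hypothetical impulses (day, mass):
c10 = drain_concentration(10.0, [(0.0, 50.0), (7.0, 30.0)], mu=5.0, lam=8.0)
```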
Preliminary Geological Findings on the BP-1 Simulant
NASA Technical Reports Server (NTRS)
Stoeser, D. B.; Rickman, D. L.; Wilson, S.
2010-01-01
A waste material from an aggregate producing quarry has been used to make an inexpensive lunar simulant called BP-1. The feedstock is the Black Point lava flow in northern Arizona. Although this is part of the San Francisco volcanic field, which is also the source of the JSC-1 series feedstock, BP-1 and JSC-1 are distinct. Chemically, the Black Point flow is an amygdaloidal nepheline-bearing basalt. The amygdules are filled with secondary minerals containing opaline silica, calcium carbonate, and ferric iron minerals. X-ray diffraction (XRD) detected approximately 3% quartz, which is in line with tests done by the Kennedy Space Center Industrial Hygiene Office. Users of this material should use appropriate protective equipment. XRD also showed the presence of significant halite and some bassanite. Both are interpreted to be evaporative residues due to recycling of wash water at the quarry. The size distribution of BP-1 may be superior to some other simulants for some applications.
Firth, Jacqueline; Balraj, Vinohar; Muliyil, Jayaprakash; Roy, Sheela; Rani, Lilly Michael; Chandresekhar, R; Kang, Gagandeep
2010-05-01
To assess water contamination and the relative effectiveness of three options for point-of-use water treatment in South India, we conducted a 6-month randomized, controlled intervention trial using chlorine, Moringa oleifera seeds, a closed valved container, and controls. One hundred twenty-six families participated. Approximately 70% of public drinking water sources had thermotolerant coliform counts > 100/100 mL. Neither M. oleifera seeds nor containers reduced coliform counts in water samples from participants' homes. Chlorine reduced thermotolerant coliform counts to potable levels, but was less acceptable to participants. Laboratory testing of M. oleifera seeds in water from the village confirmed the lack of reduction in coliform counts, in contrast to the improvement seen with Escherichia coli-seeded distilled water. This discrepancy merits further study, as M. oleifera was effective in reducing coliform counts in other studies and compliance with Moringa use in this study was high.
NASA Astrophysics Data System (ADS)
Rosenfeld, Yaakov
1989-01-01
The linearized mean-force-field approximation, leading to a Gaussian distribution, provides an exact formal solution to the mean-spherical integral equation model for the electric microfield distribution at a charged point in the general charged-hard-particles fluid. Lado's explicit solution for plasmas follows immediately from this general observation.
Chiral interface at the finite temperature transition point of QCD
NASA Technical Reports Server (NTRS)
Frei, Z.; Patkos, A.
1990-01-01
The domain wall between coexisting chirally symmetric and broken-symmetry regions is studied in a saddle point approximation to the effective three-flavor sigma model. In the chiral limit the surface tension varies in the range ((40-50) MeV)^3. The width of the domain wall is estimated to be approximately 4.5 fm.
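In such saddle-point treatments the surface tension is the energy per unit area of the planar wall profile. For orientation, the generic one-field form of that integral is (the actual multi-field sigma-model potential V is not reproduced here):

```latex
\sigma = \int_{-\infty}^{\infty} dz\,
\left[ \frac{1}{2}\left(\frac{d\phi}{dz}\right)^{2}
      + V(\phi(z)) - V(\phi_{\min}) \right]
      = \int_{\phi_{s}}^{\phi_{b}} d\phi\,
\sqrt{2\left[V(\phi) - V(\phi_{\min})\right]} ,
```

where the second equality holds at the transition point, when the symmetric and broken minima are degenerate; in natural units this gives sigma the dimension of (energy)^3, consistent with the ((40-50) MeV)^3 range quoted above.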
NASA Technical Reports Server (NTRS)
Freesland, Doug; Carter, Delano; Chapel, Jim; Clapp, Brian; Howat, John; Krimchansky, Alexander
2015-01-01
The Geostationary Operational Environmental Satellite-R Series (GOES-R) is the first of the next generation geostationary weather satellites, scheduled for delivery in late 2015. GOES-R represents a quantum increase in Earth and solar weather observation capabilities, with 4 times the resolution, 5 times the observation rate, and 3 times the number of spectral bands for Earth observations. With the improved resolution comes increased sensitivity of the instrument suite to disturbances over a broad spectrum (0-512 Hz). Sources of disturbance include reaction wheels, thruster firings for station keeping and momentum management, gimbal motion, and internal instrument disturbances. To minimize the impact of these disturbances, the baseline design includes an Earth Pointed Platform (EPP), a stiff optical bench on which the two nadir-pointed instruments are collocated together with the Guidance Navigation & Control (GN&C) star trackers and Inertial Measurement Units (IMUs). The EPP is passively isolated from the spacecraft bus with Honeywell D-Strut isolators providing attenuation for frequencies above approximately 5 Hz in all six degrees of freedom. A change in Reaction Wheel Assembly (RWA) vendors occurred very late in the program. To reduce the risk of RWA disturbances impacting performance, a secondary passive isolation system manufactured by Moog CSA Engineering was incorporated under each of the six 160 Nms RWAs, tuned to provide attenuation at frequencies above approximately 50 Hz. Integrated wheel and isolator testing was performed on a Kistler table at NASA Goddard Space Flight Center. High fidelity simulations were conducted to evaluate jitter performance for four topologies: 1) hard mounted, no isolation; 2) EPP isolation only; 3) RWA isolation only; and 4) dual isolation. Simulation results demonstrate excellent performance relative to the pointing stability requirements, with dual-isolated Line of Sight (LOS) jitter less than 1 microradian.
NASA Astrophysics Data System (ADS)
Nettke, Will; Scott, Douglas; Gibb, Andy G.; Thompson, Mark; Chrysostomou, Antonio; Evans, A.; Hill, Tracey; Jenness, Tim; Joncas, Gilles; Moore, Toby; Serjeant, Stephen; Urquhart, James; Vaccari, Mattia; Weferling, Bernd; White, Glenn; Zhu, Ming
2017-06-01
The SCUBA-2 Ambitious Sky Survey (SASSy) is composed of shallow 850-μm imaging using the Submillimetre Common-User Bolometer Array 2 (SCUBA-2) on the James Clerk Maxwell Telescope. Here we describe the extraction of a catalogue of beam-sized sources from a roughly 120 deg2 region of the Galactic plane mapped uniformly (to an rms level of about 40 mJy), covering longitude 120° < l < 140° and latitude |b| < 2.9°. We used a matched-filtering approach to increase the signal-to-noise ratio (S/N) in these noisy maps and tested the efficiency of our extraction procedure through estimates of the false discovery rate, as well as by adding artificial sources to the real images. The primary catalogue contains a total of 189 sources at 850 μm, down to an S/N threshold of approximately 4.6. Additionally, we list 136 sources detected down to S/N = 4.3, but recognize that as we go lower in S/N, the reliability of the catalogue rapidly diminishes. We perform follow-up observations of some of our lower significance sources through small targeted SCUBA-2 images and list 265 sources detected in these maps down to S/N = 5. This illustrates the real power of SASSy: inspecting the shallow maps for regions of 850-μm emission and then using deeper targeted images to efficiently find fainter sources. We also perform a comparison of the SASSy sources with the Planck Catalogue of Compact Sources and the IRAS Point Source Catalogue, to determine which sources discovered in this field might be new, and hence potentially cold regions at an early stage of star formation.
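Matched filtering of this kind boosts point-source S/N by cross-correlating the map with the (suitably normalized) beam shape. A minimal 1-D sketch, with a hypothetical Gaussian beam and a noiseless toy map rather than real SCUBA-2 data:

```python
import math

def gaussian_beam(fwhm_pix, half_width):
    """Beam profile on integer pixels, normalized to unit sum of squares."""
    sigma = fwhm_pix / 2.3548
    k = [math.exp(-0.5 * (i / sigma) ** 2)
         for i in range(-half_width, half_width + 1)]
    norm = math.sqrt(sum(v * v for v in k))
    return [v / norm for v in k]

def matched_filter(sky_map, kernel):
    """Cross-correlate the map with the beam kernel (zero-padded edges)."""
    h = len(kernel) // 2
    out = []
    for i in range(len(sky_map)):
        acc = 0.0
        for j, kv in enumerate(kernel):
            idx = i + j - h
            if 0 <= idx < len(sky_map):
                acc += sky_map[idx] * kv
        out.append(acc)
    return out

# Toy map: one beam-shaped source with peak 5 centred on pixel 100
beam = gaussian_beam(fwhm_pix=4.0, half_width=10)
peak_b = max(beam)
sky = [0.0] * 200
for j, kv in enumerate(beam):
    sky[100 + j - 10] += 5.0 * kv / peak_b
filtered = matched_filter(sky, beam)
```

With noise added, the filter suppresses structure on scales unlike the beam, which is what raises the S/N of beam-sized sources in the shallow maps.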
NASA Astrophysics Data System (ADS)
Lubow, S.; Budavári, T.
2013-10-01
We have created an initial catalog of objects observed by the WFPC2 and ACS instruments on the Hubble Space Telescope (HST). The catalog is based on observations taken on more than 6000 visits (telescope pointings) of ACS/WFC and more than 25000 visits of WFPC2. The catalog is obtained by cross matching by position in the sky all Hubble Legacy Archive (HLA) Source Extractor source lists for these instruments. The source lists describe properties of source detections within a visit. The calculations are performed on a SQL Server database system. First we collect overlapping images into groups, e.g., Eta Car, and determine nearby (approximately matching) pairs of sources from different images within each group. We then apply a novel algorithm for improving the cross matching of pairs of sources by adjusting the astrometry of the images. Next, we combine pairwise matches into maximal sets of possible multi-source matches. We apply a greedy Bayesian method to split the maximal matches into more reliable matches. We test the accuracy of the matches by comparing the fluxes of the matched sources. The result is a set of information that ties together multiple observations of the same object. A byproduct of the catalog is greatly improved relative astrometry for many of the HST images. We also provide information on nondetections that can be used to determine dropouts. With the catalog, for the first time, one can carry out time domain, multi-wavelength studies across a large set of HST data. The catalog is publicly available. Much more can be done to expand the catalog capabilities.
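The pairwise-then-maximal-match step can be illustrated with a toy positional grouping pass using union-find. This is a simplification: the actual pipeline also adjusts image astrometry and applies a greedy Bayesian split to the maximal matches; coordinates and the tolerance below are made up:

```python
def group_matches(sources, radius):
    """Group detections (x, y) whose separation is within `radius`."""
    parent = list(range(len(sources)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path compression
            i = parent[i]
        return i

    def union(i, j):
        parent[find(i)] = find(j)

    for i in range(len(sources)):
        for j in range(i + 1, len(sources)):
            dx = sources[i][0] - sources[j][0]
            dy = sources[i][1] - sources[j][1]
            if dx * dx + dy * dy <= radius * radius:
                union(i, j)

    groups = {}
    for i in range(len(sources)):
        groups.setdefault(find(i), []).append(i)
    return list(groups.values())

# Three detections of one object plus an unrelated detection
groups = group_matches([(0, 0), (0.1, 0), (5, 5), (0, 0.1)], radius=0.5)
```

Each resulting group is a candidate multi-source match tying together observations of the same object.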
Asymptotic safety of quantum gravity beyond Ricci scalars
NASA Astrophysics Data System (ADS)
Falls, Kevin; King, Callum R.; Litim, Daniel F.; Nikolakopoulos, Kostas; Rahmede, Christoph
2018-04-01
We investigate the asymptotic safety conjecture for quantum gravity including curvature invariants beyond Ricci scalars. Our strategy is put to work for families of gravitational actions which depend on functions of the Ricci scalar, the Ricci tensor, and products thereof. Combining functional renormalization with high order polynomial approximations and full numerical integration we derive the renormalization group flow for all couplings and analyse their fixed points, scaling exponents, and the fixed point effective action as a function of the background Ricci curvature. The theory is characterized by three relevant couplings. Higher-dimensional couplings show near-Gaussian scaling with increasing canonical mass dimension. We find that Ricci tensor invariants stabilize the UV fixed point and lead to a rapid convergence of polynomial approximations. We apply our results to models for cosmology and establish that the gravitational fixed point admits inflationary solutions. We also compare findings with those from f(R)-type theories in the same approximation and pinpoint the key new effects due to Ricci tensor interactions. Implications for the asymptotic safety conjecture of gravity are indicated.
40 CFR 51.35 - How can my state equalize the emission inventory effort from year to year?
Code of Federal Regulations, 2012 CFR
2012-07-01
... approach: (1) Each year, collect and report data for all Type A (large) point sources (this is required for all Type A point sources). (2) Each year, collect data for one-third of your sources that are not Type... save 3 years of data and then report all emissions from the sources that are not Type A point sources...
40 CFR 51.35 - How can my state equalize the emission inventory effort from year to year?
Code of Federal Regulations, 2010 CFR
2010-07-01
... approach: (1) Each year, collect and report data for all Type A (large) point sources (this is required for all Type A point sources). (2) Each year, collect data for one-third of your sources that are not Type... save 3 years of data and then report all emissions from the sources that are not Type A point sources...
40 CFR 51.35 - How can my state equalize the emission inventory effort from year to year?
Code of Federal Regulations, 2014 CFR
2014-07-01
... approach: (1) Each year, collect and report data for all Type A (large) point sources (this is required for all Type A point sources). (2) Each year, collect data for one-third of your sources that are not Type... save 3 years of data and then report all emissions from the sources that are not Type A point sources...
NASA Astrophysics Data System (ADS)
Yi, Jin; Li, Xinyu; Xiao, Mi; Xu, Junnan; Zhang, Lin
2017-01-01
Engineering design often involves different types of simulation, which can incur expensive computational costs. Variable-fidelity approximation-based design optimization approaches address this by combining approximation models with different levels of fidelity, enabling efficient simulation and optimization of the design space, and they have been widely used in different fields. As the foundation of variable-fidelity approximation models, the selection of sample points, called nested designs, is essential. In this article a novel nested maximin Latin hypercube design is constructed based on successive local enumeration and a modified novel global harmony search algorithm. In the proposed nested designs, successive local enumeration is employed to select sample points for the low-fidelity model, whereas the modified novel global harmony search algorithm is employed to select sample points for the high-fidelity model. A comparative study with multiple criteria and an engineering application are employed to verify the efficiency of the proposed nested designs approach.
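As a toy illustration of the building blocks involved, a random-restart maximin Latin hypercube can be sketched as follows. Note this is neither successive local enumeration nor the harmony search algorithm of the article, just the underlying design criterion (one point per stratum in each dimension, maximizing the minimum pairwise distance):

```python
import random

def latin_hypercube(n, d, rng):
    """n points in d dimensions, one per stratum (centered) in each dimension."""
    cols = []
    for _ in range(d):
        perm = list(range(n))
        rng.shuffle(perm)
        cols.append([(p + 0.5) / n for p in perm])
    return list(zip(*cols))

def min_dist(pts):
    """Smallest pairwise Euclidean distance in the design."""
    return min(sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5
               for i, p in enumerate(pts) for q in pts[i + 1:])

def maximin_lhd(n, d, tries=200, seed=0):
    """Random-restart search for the Latin hypercube with the best maximin score."""
    rng = random.Random(seed)
    return max((latin_hypercube(n, d, rng) for _ in range(tries)), key=min_dist)

design = maximin_lhd(8, 2, tries=50, seed=1)
```

A nested design additionally constrains the high-fidelity points to be a subset of the low-fidelity ones, which is where the article's two algorithms come in.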
Data approximation using a blending type spline construction
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dalmo, Rune; Bratlie, Jostein
2014-11-18
Generalized expo-rational B-splines (GERBS) is a blending type spline construction where local functions at each knot are blended together by C^k-smooth basis functions. One way of approximating discrete regular data using GERBS is to partition the data set into subsets and fit a local function to each subset. Partitioning and fitting strategies can be devised such that important or interesting data points are interpolated in order to preserve certain features. We present a method for fitting discrete data using a tensor product GERBS construction. The method is based on detection of feature points using differential geometry. Derivatives, which are necessary for feature point detection and used to construct local surface patches, are approximated from the discrete data using finite differences.
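The finite-difference feature-detection step can be illustrated with central second differences on a 1-D sample. This is a simplification of the differential-geometry criteria in the abstract, and the threshold value is hypothetical:

```python
def second_differences(y):
    """Central second difference at interior points of uniformly spaced data."""
    return [y[i - 1] - 2 * y[i] + y[i + 1] for i in range(1, len(y) - 1)]

def feature_points(y, threshold):
    """Indices where the discrete curvature magnitude exceeds the threshold."""
    d2 = second_differences(y)
    return [i + 1 for i, v in enumerate(d2) if abs(v) > threshold]

# Piecewise-linear data with a corner at index 5
data = [abs(i - 5) for i in range(11)]
features = feature_points(data, threshold=0.5)
```

Flagged points like these would then be interpolated by the local patches rather than merely approximated, preserving the feature.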
Singular reduction of resonant Hamiltonians
NASA Astrophysics Data System (ADS)
Meyer, Kenneth R.; Palacián, Jesús F.; Yanguas, Patricia
2018-06-01
We investigate the dynamics of resonant Hamiltonians with n degrees of freedom to which we attach a small perturbation. Our study is based on the geometric interpretation of singular reduction theory. The flow of the Hamiltonian vector field is reconstructed from the cross sections corresponding to an approximation of this vector field on an energy surface. This approximate system is built using normal forms and reduction theory, yielding a reduced Hamiltonian defined on the orbit space. Generically, the reduction is of singular character, and we classify the singularities in the orbit space, obtaining three different types of singular points. A critical point of the reduced Hamiltonian corresponds to a family of periodic solutions in the full system whose characteristic multipliers are approximated according to the nature of the critical point.
Coorbital Collision as the Energy Source for Enceladus' Plumes
NASA Astrophysics Data System (ADS)
Peale, Stanton J.; Greenberg, R.
2009-09-01
A collision of a coorbiting satellite with Enceladus is proposed as the source of energy to power the observed plumes emanating from the south pole of the satellite. A coorbital would have impacted at a velocity only slightly above the escape velocity of Enceladus, which would likely be necessary to keep the collision gentle enough not to disrupt the old cratered terrain nearby. If the mass were 1% of Enceladus', the energy deposited can sustain the plumes for approximately 80,000 to 200,000 years at the estimated observed power of 6 to 15 GW, so the impact would have been quite recent. The collision at an arbitrary point would leave Enceladus with non-synchronous, non-principal-axis rotation and a significant obliquity. After subsuming the impactor's volume, the region around the impact point will have expanded in a manner consistent with the observed tectonic pattern. The ring-like expansion implied by the radial cracks suggests that the new principal axis of maximum moment of inertia could have passed through the impact point. Internal dissipation from precession of the spin axis about the axis of maximum moment of inertia in the body frame of reference and from tides raised on Enceladus cause the axes of spin and of maximum moment to converge as the spin is brought to a zero obliquity and synchronous rotation on a time scale that is extremely short compared to the lifetime of the plumes. Hence, the region of collision, which is hot, ends up at one of the poles where we find the plumes.
75 FR 77762 - Final Flood Elevation Determinations
Federal Register 2010, 2011, 2012, 2013, 2014
2010-12-14
[Truncated flood elevation table: columns list flooding source(s), elevation in feet above ground (elevation in meters, MSL), and communities affected; entries include High Line Canal, Box Elder Creek, and Goldsmith Gulch West Tributary.]
Modeling diffuse phosphorus emissions to assist in best management practice designing
NASA Astrophysics Data System (ADS)
Kovacs, Adam; Zessner, Matthias; Honti, Mark; Clement, Adrienne
2010-05-01
A diffuse emission modeling tool has been developed, which is appropriate to support decision-making in watershed management. The PhosFate (Phosphorus Fate) tool allows planning best management practices (BMPs) in catchments and simulating their possible impacts on phosphorus (P) loads. PhosFate is a simple fate model to calculate diffuse P emissions and their transport within a catchment. The model is a semi-empirical, catchment-scale, distributed-parameter and long-term (annual) average model. It has two main parts: (a) the emission and (b) the transport model. The main input data of the model are digital maps (elevation, soil types and land use categories), statistical data (crop yields, animal numbers, fertilizer amounts and precipitation distribution) and point information (precipitation, meteorology, soil humus content, point source emissions and reservoir data). The emission model calculates the diffuse P emissions at their source. It computes the basic elements of the hydrology as well as the soil loss. The model determines the accumulated P surplus of the topsoil and distinguishes the dissolved and the particulate P forms. Emissions are calculated according to the different pathways (surface runoff, erosion and leaching). The main outputs are the spatial distribution (cell values) of the runoff components, the soil loss and the P emissions within the catchment. The transport model joins the independent cells based on the flow tree and follows the further fate of emitted P from each cell to the catchment outlets. Surface runoff and P fluxes are accumulated along the tree, and the field and in-stream retention of the particulate forms is computed. In the case of base flow and subsurface P loads, only channel transport is taken into account because the hydrogeological conditions are less well known. During channel transport, point sources and reservoirs are also considered.
Main results of the transport algorithm are the discharge, dissolved and sediment-bound P load values at any arbitrary point within the catchment. Finally, a simple design procedure has been built to plan BMPs in the catchments, simulate their possible impacts on diffuse P fluxes, and calculate their approximate costs. Both source and transport controlling measures have been included in the planning procedure. The model also allows examining the impacts of changes in fertilizer application, point source emissions and climate on the river loads. In addition, a simple optimization algorithm has been developed to select the most effective source areas (real hot spots), which should be targeted by the interventions. The fate model performed well in Hungarian pilot catchments. Using the calibrated and validated model, different management scenarios were worked out and their effects and costs were evaluated and compared. The results show that the approach is suitable for effectively designing BMP measures at the local scale. Combined application of source and transport controlling BMPs can result in high P reduction efficiency. Optimization of the interventions can markedly reduce the area demand of the necessary BMPs, and consequently the establishment costs can be decreased. The model can be coupled with a larger-scale catchment model to form a "screening and planning" modeling system.
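The cell-to-outlet transport step can be sketched as a walk down the flow tree with a per-step retention factor. The topology, emissions, and retention value below are illustrative; the real model distinguishes field and in-stream retention and tracks dissolved and particulate forms separately:

```python
def delivered_to_outlet(downstream, emission, retention):
    """Route each cell's P emission down the flow tree to the outlet.

    downstream[i] is the next cell index, or None at the outlet;
    each transfer step keeps a `retention` fraction of the load.
    """
    total = 0.0
    for i, e in enumerate(emission):
        j, load = i, e
        while downstream[j] is not None:
            load *= retention
            j = downstream[j]
        total += load
    return total

# Three cells in a chain draining to cell 2 (the outlet)
outlet_load = delivered_to_outlet([1, 2, None], [1.0, 1.0, 1.0], retention=0.5)
```

Cells whose emissions survive transport to the outlet largely intact are the "real hot spots" that the optimization step targets.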
Rezvani, Alireza; Khalili, Abbas; Mazareie, Alireza; Gandomkar, Majid
2016-07-01
Nowadays, photovoltaic (PV) generation is growing increasingly fast as a renewable energy source. Nevertheless, the drawback of the PV system is its dependence on weather conditions. Therefore, battery energy storage (BES) can be used to provide a stable and reliable output from the PV generation system for loads and to improve the dynamic performance of the whole generation system in grid-connected mode. In this paper, a novel topology of intelligent hybrid generation systems with PV and BES in a DC-coupled structure is presented. Each photovoltaic cell has a specific point, named the maximum power point, on its operational curve (i.e. current-voltage or power-voltage curve) at which it can generate maximum power. Irradiance and temperature changes affect these operational curves, so the maximum power point depends nonlinearly on environmental conditions; this has led to the development of different maximum power point tracking (MPPT) techniques. In order to capture the maximum power point (MPP), a hybrid fuzzy-neural MPPT method is applied in the PV system. The obtained results demonstrate the effectiveness and superiority of the proposed method: the average tracking efficiency of the hybrid fuzzy-neural method is approximately two percentage points higher than that of the conventional methods. It has the advantages of robustness, fast response and good performance. A detailed mathematical model and a control approach of a three-phase grid-connected intelligent hybrid system have been developed using Matlab/Simulink. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
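For orientation, the kind of conventional baseline such papers compare against can be illustrated by a perturb-and-observe loop on a toy concave power-voltage curve. The toy curve, starting voltage, and step size are assumptions, not the paper's PV model or its fuzzy-neural tracker:

```python
def pv_power(v):
    """Toy concave power-voltage curve with its maximum at v = 30 V."""
    return max(0.0, 900.0 - (v - 30.0) ** 2)

def perturb_and_observe(v0, step, iters):
    """Classic P&O: keep stepping in the direction that increased power."""
    v, direction = v0, 1.0
    p_prev = pv_power(v)
    for _ in range(iters):
        v += direction * step
        p = pv_power(v)
        if p < p_prev:
            direction = -direction  # power fell: reverse the perturbation
        p_prev = p
    return v

v_mpp = perturb_and_observe(v0=20.0, step=0.5, iters=100)
```

A fixed-step P&O tracker oscillates around the MPP and lags when irradiance shifts the curve, which is the weakness adaptive fuzzy-neural methods aim to reduce.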
Economic Expansion Is a Major Determinant of Physician Supply and Utilization
Cooper, Richard A; Getzen, Thomas E; Laud, Prakash
2003-01-01
Objective To assess the relationship between levels of economic development and the supply and utilization of physicians. Data Sources Data were obtained from the American Medical Association, American Osteopathic Association, Organization for Economic Cooperation and Development (OECD), Bureau of Health Professions, Bureau of Labor Statistics, Bureau of Economic Analysis, Census Bureau, Health Care Financing Administration, and historical sources. Study Design Economic development, expressed as real per capita gross domestic product (GDP) or personal income, was correlated with per capita health care labor and physician supply within countries and states over periods of time spanning 25–70 years and across countries, states, and metropolitan statistical areas (MSAs) at multiple points in time over periods of up to 30 years. Longitudinal data were analyzed in four complementary ways: (1) simple univariate regressions; (2) regressions in which temporal trends were partialled out; (3) time series comparing percentage differences across segments of time; and (4) a bivariate Granger causality test. Cross-sectional data were assessed at multiple time points by means of univariate regression analyses. Principal Findings Under each analytic scenario, physician supply correlated with differences in GDP or personal income. Longitudinal correlations were associated with temporal lags of approximately 5 years for health employment and 10 years for changes in physician supply. The magnitude of changes in per capita physician supply in the United States was equivalent to differences of approximately 0.75 percent for each 1.0 percent difference in GDP. The greatest effects of economic expansion were on the medical specialties, whereas the surgical and hospital-based specialties were affected to a lesser degree, and levels of economic expansion had little influence on family/general practice. 
Conclusions Economic expansion has a strong, lagged relationship with changes in physician supply. This suggests that economic projections could serve as a gauge for projecting the future utilization of physician services. PMID:12785567
SkICAT: A cataloging and analysis tool for wide field imaging surveys
NASA Technical Reports Server (NTRS)
Weir, N.; Fayyad, U. M.; Djorgovski, S. G.; Roden, J.
1992-01-01
We describe an integrated system, SkICAT (Sky Image Cataloging and Analysis Tool), for the automated reduction and analysis of the Palomar Observatory-ST ScI Digitized Sky Survey. The Survey will consist of the complete digitization of the photographic Second Palomar Observatory Sky Survey (POSS-II) in three bands, comprising nearly three Terabytes of pixel data. SkICAT applies a combination of existing packages, including FOCAS for basic image detection and measurement and SAS for database management, as well as custom software, to the task of managing this wealth of data. One of the most novel aspects of the system is its method of object classification. Using state-of-the-art machine learning classification techniques (GID3* and O-BTree), we have developed a powerful method for automatically distinguishing point sources from non-point sources and artifacts, achieving comparably accurate discrimination a full magnitude fainter than in previous Schmidt plate surveys. The learning algorithms produce decision trees for classification by examining instances of objects classified by eye on both plate and higher-quality CCD data. The same techniques will be applied to perform higher-level object classification (e.g., of galaxy morphology) in the near future. Another key feature of the system is the facility to integrate the catalogs from multiple plates (and portions thereof) to construct a single catalog of uniform calibration and quality down to the faintest limits of the survey. SkICAT also provides a variety of data analysis and exploration tools for the scientific utilization of the resulting catalogs. We include initial results of applying this system to measure the counts and distribution of galaxies in two bands down to Bj approximately 21 mag over an approximately 70 square degree multi-plate field from POSS-II. SkICAT is constructed in a modular and general fashion and should be readily adaptable to other large-scale imaging surveys.
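The tree-induction algorithms themselves (GID3*, O-BTree) are beyond a short sketch, but their basic building block, a single learned threshold split on an image feature, can be illustrated. The "concentration index" feature values and point-source/galaxy labels below are made up:

```python
def best_stump(values, labels):
    """Pick the threshold on one feature that best separates two classes."""
    order = sorted(set(values))
    best_thr, best_acc = 0.0, -1.0
    for i in range(len(order) - 1):
        thr = 0.5 * (order[i] + order[i + 1])  # midpoint between adjacent values
        acc = sum((v > thr) == lab for v, lab in zip(values, labels)) / len(values)
        if acc > best_acc:
            best_thr, best_acc = thr, acc
    return best_thr, best_acc

# Hypothetical data: point sources have low concentration index (label False),
# extended sources (galaxies) high (label True)
feat = [0.1, 0.2, 0.25, 0.8, 0.9, 1.0]
lab = [False, False, False, True, True, True]
thr, acc = best_stump(feat, lab)
```

A decision-tree inducer applies splits like this recursively, choosing at each node the feature and threshold that best purify the training instances classified by eye.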
Tu, Ngu; King, Janet C; Dirren, Henri; Thu, Hoang Nga; Ngoc, Quyen Phi; Diep, Anh Nguyen Thi
2014-12-01
Maternal nutritional status is an important predictor of infant birthweight. Most previous attempts to improve birthweight through multiple micronutrient supplementation have been initiated after women are pregnant. Interventions to improve maternal nutritional status prior to conception may be more effective in preventing low birthweight and improving other infant health outcomes. The objective is to compare the effects of maternal supplementation with animal-source food from preconception to term or from mid-gestation to term with routine prenatal care on birthweight, the prevalence of preterm births, intrauterine growth restriction, and infant growth during the first 12 months of life, and on maternal nutrient status and the incidence of maternal and infant infections. Young women from 29 rural communes in northwestern Vietnam were recruited when they registered to marry and were randomized to one of three interventions: animal-source food supplement 5 days per week from marriage to term (approximately 13 months), animal-source food supplement 5 days per week from 16 weeks of gestation to term (approximately 5 months), or routine prenatal care without supplemental feeding. Data on infant birthweight and gestational age, maternal and infant anthropometry, micronutrient status, and infections in the infant and mother were collected at various time points. In a preliminary study of women of reproductive age in this area of Vietnam, 40% of the women were underweight (body mass index < 18.5) and anemic. About 50% had infections. Rice was the dietary staple, and nutrient-rich, animal-source foods were rarely consumed by women. Iron, zinc, vitamin A, folate, and vitamin B12 intakes were inadequate in about 40% of the women. The study is still ongoing, and further data are not yet available.
The results of this study will provide important data regarding whether improved intake of micronutrient-rich animal-source foods that are locally available and affordable before and during pregnancy improves maternal and infant health and development. This food-based approach may have global implications regarding how and when to initiate sustainable nutritional interventions to improve maternal and infant health.
NASA Astrophysics Data System (ADS)
Dupas, Rémi; Tittel, Jörg; Jordan, Phil; Musolff, Andreas; Rode, Michael
2018-05-01
A common assumption in phosphorus (P) load apportionment studies is that P loads in rivers consist of flow independent point source emissions (mainly from domestic and industrial origins) and flow dependent diffuse source emissions (mainly from agricultural origin). Hence, rivers dominated by point sources will exhibit highest P concentration during low-flow, when flow dilution capacity is minimal, whereas rivers dominated by diffuse sources will exhibit highest P concentration during high-flow, when land-to-river hydrological connectivity is maximal. Here, we show that Soluble Reactive P (SRP) concentrations in three forested catchments free of point sources exhibited seasonal maxima during the summer low-flow period, i.e. a pattern expected in point source dominated areas. A load apportionment model (LAM) is used to show how point sources contribution may have been overestimated in previous studies, because of a biogeochemical process mimicking a point source signal. Almost twenty-two years (March 1995-September 2016) of monthly monitoring data of SRP, dissolved iron (Fe) and nitrate-N (NO3) were used to investigate the underlying mechanisms: SRP and Fe exhibited similar seasonal patterns and opposite to that of NO3. We hypothesise that Fe oxyhydroxide reductive dissolution might be the cause of SRP release during the summer period, and that NO3 might act as a redox buffer, controlling the seasonality of SRP release. We conclude that LAMs may overestimate the contribution of P point sources, especially during the summer low-flow period, when eutrophication risk is maximal.
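Load apportionment models of this kind typically write the river load as a flow-independent point term plus a flow-dependent diffuse term, e.g. L(Q) = A + B*Q^C with C > 1; the parameter values below are hypothetical, not fitted to the study catchments:

```python
def lam_load(q, a, b, c):
    """Total load: constant point-source term plus flow-dependent diffuse term."""
    return a + b * q ** c

def point_fraction(q, a, b, c):
    """Apportioned point-source share of the load at flow q."""
    return a / lam_load(q, a, b, c)

# Hypothetical parameters: point term a, diffuse coefficient b, exponent c
low_flow = point_fraction(0.1, a=1.0, b=2.0, c=1.5)
high_flow = point_fraction(10.0, a=1.0, b=2.0, c=1.5)
```

Because the model attributes nearly all of the low-flow load to the constant term, a seasonal biogeochemical release that peaks at summer low flow is read by the LAM as a point source, which is the misattribution the study demonstrates.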
Atmospheric aerosol composition and source apportionments to aerosol in southern Taiwan
NASA Astrophysics Data System (ADS)
Tsai, Ying I.; Chen, Chien-Lung
In this study, the chemical characteristics of winter aerosol at four sites in southern Taiwan were determined, and the Gaussian Trajectory transfer-coefficient model (GTx) was then used to identify the major air pollutant sources affecting the study sites. Aerosols were found to be acidic at all four sites. The most important constituents of the particulate matter (PM) by mass were SO4^2-, organic carbon (OC), NO3^-, elemental carbon (EC) and NH4^+, with SO4^2-, NO3^-, and NH4^+ together constituting 86.0-87.9% of the total PM2.5 soluble inorganic salts and 68.9-78.3% of the total PM2.5-10 soluble inorganic salts, showing that secondary photochemical reaction products such as these were the major contributors to the aerosol water-soluble ions. The coastal site, Linyuan (LY), had the highest PM mass percentage of sea salts, higher in the coarse fraction, and higher sea salts during daytime than during nighttime, indicating that the prevailing daytime sea breeze brought with it more sea-salt aerosol. Other than sea salts, crustal matter, and EC in PM2.5 at Jenwu (JW) and in PM2.5-10 at LY, all aerosol components were higher during nighttime, because relatively low nighttime mixing heights limited vertical and horizontal dispersion. At JW, a site with heavy traffic loadings, the OC/EC ratio in the nighttime fine and coarse fractions of approximately 2.2 was higher than during daytime, indicating that in addition to primary organic aerosol (POA), secondary organic aerosol (SOA) also contributed to the nighttime PM2.5. This was also true of the nighttime coarse fraction at LY. The GTx produced correlation coefficients (r) for simulated and observed daily concentrations of PM10 at the four sites (receptors) in the range 0.45-0.59 and biases from -6% to -20%.
Source apportionment indicated that point sources were the largest PM10 source at JW, LY and Daliao (DL), while at Meinung (MN), a suburban site with lower local PM10, SOx and NOx emissions, the upwind boundary concentration was the major PM10 source, followed by point sources and the top boundary concentration.
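The OC/EC argument can be made quantitative with the standard EC-tracer method (a common approach, not necessarily the one used in this study): any OC in excess of an assumed primary OC/EC ratio times the measured EC is attributed to secondary organic aerosol. The values below are hypothetical:

```python
def secondary_oc(oc, ec, primary_ratio):
    """EC-tracer estimate: OC exceeding primary_ratio * EC is secondary."""
    return max(0.0, oc - primary_ratio * ec)

# Hypothetical nighttime fine-fraction values (micrograms per cubic meter),
# with an assumed primary OC/EC ratio of 1.2
soa_oc = secondary_oc(oc=11.0, ec=5.0, primary_ratio=1.2)
```

An observed OC/EC ratio well above the assumed primary ratio, as in the nighttime value of about 2.2 at JW, is what signals an SOA contribution.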
NASA Astrophysics Data System (ADS)
Zhang, S.; Tang, L.
2007-05-01
Panjiakou Reservoir is an important drinking water resource in the Haihe River Basin, Hebei Province, People's Republic of China. The upstream watershed area is about 35,000 square kilometers. Recently, water pollution in the reservoir has become more serious owing to non-point source pollution as well as point source pollution in the upstream watershed. To manage the reservoir and watershed effectively and develop a plan to reduce pollutant loads, the loading of non-point and point source pollution and its distribution in the upstream watershed must be fully understood. The SWAT model is used to simulate the production and transport of non-point source pollutants in the upstream watershed of the Panjiakou Reservoir. The loadings of non-point source pollutants are calculated for different hydrologic years, and the spatial and temporal characteristics of non-point source pollution are studied. The stream network and the topographic characteristics of the stream network and sub-basins are derived from the DEM with ArcGIS software. The soil and land use data are reclassified and a soil physical properties database file is created for the model. The SWAT model was calibrated with observed data from several hydrologic monitoring stations in the study area. The calibration results show that the model performs fairly well. The calibrated model was then used to calculate the loadings of non-point source pollutants for a wet year, a normal year and a dry year. The time and space distributions of flow, sediment and non-point source pollution were analyzed based on the simulated results. The calculated results differ markedly among hydrologic years: the loading of non-point source pollution is relatively large in the wet year and smaller in the dry year, since non-point source pollutants are mainly transported by runoff. Within a year, the pollution loading is mainly produced in the flood season.
Because SWAT is a distributed model, it is possible to view model output as it varies across the basin, so the critical areas and reaches in the study area can be identified. According to the simulation results, different land uses yield different results, and fertilization in the rainy season has an important impact on non-point source pollution. The limitations of the SWAT model are also discussed, and measures for the control and prevention of non-point source pollution in Panjiakou Reservoir are presented based on the analysis of the model results.
Brown, Craig J.; Trombley, Thomas J.
2009-01-01
The 258 organic compounds studied in this U.S. Geological Survey (USGS) assessment generally are man-made, including pesticides, solvents, gasoline hydrocarbons, personal-care and domestic-use products, and pavement- and combustion-derived compounds. Of these 258 compounds, 26 (about 10 percent) were detected at least once among the 31 samples collected approximately monthly during 2003-05 at the intake of a flowthrough reservoir on Running Gutter Brook in Massachusetts, one of several community water systems on tributaries of the Connecticut River. About 81 percent of the watershed is forested, 14 percent is agricultural land, and 5 percent is urban land. In most source-water samples collected at Running Gutter Brook, fewer compounds were detected and their concentrations were low (less than 0.1 microgram per liter) compared with compounds detected at other stream sites across the country that drain watersheds with a larger percentage of agricultural and urban areas. The relatively few compounds detected at low concentrations reflect the largely undeveloped land use at Running Gutter Brook. Despite the absence of wastewater discharge points on the stream, the compounds that were detected could indicate different sources and uses (point sources, precipitation, domestic, and agricultural) and different pathways to drinking-water supplies (overland runoff, groundwater discharge, leaking of treated water from distribution lines, and formation during treatment). Six of the 10 compounds detected most commonly (in at least 20 percent of the samples) in source water also were detected commonly in finished water (after treatment but prior to distribution). Concentrations in source and finished water generally were below 0.1 microgram per liter and always less than human-health benchmarks, which are available for about one-half of the compounds detected. 
On the basis of this screening-level assessment, adverse effects to human health are expected to be negligible (subject to limitations of available human-health benchmarks).
NASA Astrophysics Data System (ADS)
Gupta, I.; Chan, W.; Wagner, R.
2005-12-01
Several recent studies of the generation of low-frequency Lg from explosions indicate that the Lg wavetrain from explosions contains significant contributions from (1) the scattering of explosion-generated Rg into S and (2) direct S waves from the non-spherical spall source associated with a buried explosion. The pronounced spectral nulls observed in Lg spectra of Yucca Flats (NTS) and Semipalatinsk explosions (Patton and Taylor, 1995; Gupta et al., 1997) are related to Rg excitation caused by spall-related block motions in a conical volume over the shot point, which may be approximately represented by a compensated linear vector dipole (CLVD) source (Patton et al., 2005). Frequency-dependent excitation of Rg waves should be imprinted on all scattered P, S and Lg waves. A spectrogram may be considered as a three-dimensional matrix of numbers providing amplitude and frequency information for each point in the time series. We found difference spectrograms, derived from a normal explosion and a closely located over-buried shot recorded at the same common station, to be remarkably useful for understanding the origin and spectral contents of various regional phases. This technique allows isolation of source characteristics, essentially free from path and recording site effects, since the overburied shot acts as the empirical Green's function. Application of this methodology to several pairs of closely located explosions shows that the scattering of explosion-generated Rg makes a significant contribution not only to Lg and its coda but also to two other regional phases, Pg (presumably via the scattering of Rg into P) and Sn. The scattered energy, identified by the presence of a spectral null at the appropriate frequency, generally appears to be more prominent in the somewhat later-arriving sections of Pg, Sn, and Lg than in the initial part. 
Difference spectrograms appear to provide a powerful new technique for understanding the mechanism of near-source scattering of explosion-generated Rg and its contribution to various regional phases.
Operation and development status of the J-PARC ion source
NASA Astrophysics Data System (ADS)
Yamazaki, S.; Ikegami, K.; Ohkoshi, K.; Ueno, A.; Koizumi, I.; Takagi, A.; Oguri, H.
2014-02-01
A cesium-free H- ion source driven with a LaB6 filament has been operated at the Japan Proton Accelerator Research Complex (J-PARC) without any serious trouble since the restoration from the March 2011 earthquake. The H- ion current from the ion source is routinely restricted to approximately 19 mA to prolong the lifetime of the filament. In order to increase the beam power during the linac beam operation of January to February 2013, the beam current from the ion source was increased to 22 mA. During this operation, the remaining lifetime of the filament was estimated from the reduction in the filament current, and a steep reduction in the filament current allowed the break of the filament to be predicted. Although the filament broke approximately 10 h after the steep current reduction began, beam operation was restarted approximately 8 h later because a replacement filament had already been prepared. During the study period for the 3 GeV rapid cycling synchrotron (April 2013), the ion source was operated at approximately 30 mA for 8 days. As part of the beam current upgrade plan for J-PARC, a front-end test stand consisting of the ion source and a radio frequency quadrupole is under preparation. The RF-driven H- ion source developed for the J-PARC 2nd-stage requirements will be tested at this test stand.
An Exact Model-Based Method for Near-Field Sources Localization with Bistatic MIMO System.
Singh, Parth Raj; Wang, Yide; Chargé, Pascal
2017-03-30
In this paper, we propose an exact model-based method for near-field source localization with a bistatic multiple-input multiple-output (MIMO) radar system, and compare it with an approximated model-based method. The aim of this paper is to propose an efficient way to use the exact model of the received signals of near-field sources in order to eliminate the systematic error introduced by the use of an approximated model in most existing near-field source localization techniques. The proposed method uses parallel factor (PARAFAC) decomposition to deal with the exact model. Thanks to the exact model, the proposed method has better precision and resolution than the approximated model-based method. The simulation results show the performance of the proposed method.
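The systematic error that motivates an exact-model method can be illustrated numerically. The sketch below (with made-up array geometry and source position, not the paper's setup) compares the exact element-to-source range with its second-order (Fresnel) Taylor expansion, the approximation most existing near-field techniques rely on:

```python
import numpy as np

# Hypothetical uniform linear array and near-field source position;
# illustrates the residual phase error of the Fresnel approximation.
wavelength = 0.1                      # m
d = wavelength / 4                    # inter-element spacing
n = np.arange(-8, 9)                  # 17-element array, indexed about the center
r0, theta = 2.0, np.deg2rad(30.0)     # source range (m) and bearing

# Exact distance from the source to element n (law of cosines)
r_exact = np.sqrt(r0**2 + (n * d)**2 - 2 * r0 * n * d * np.sin(theta))

# Fresnel (second-order Taylor) approximation of the same distance
r_fresnel = r0 - n * d * np.sin(theta) + (n * d)**2 * np.cos(theta)**2 / (2 * r0)

# Residual phase error (radians) that an exact-model method avoids
phase_err = 2 * np.pi / wavelength * (r_exact - r_fresnel)
print(np.abs(phase_err).max())
```

Even for this modest array the residual phase error is a few hundredths of a radian at the outer elements, a bias that no amount of averaging removes under the approximated model.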
NASA Technical Reports Server (NTRS)
Woods, P. M.; Kouveliotou, C.; Finger, M. H.; Gogus, E.; Swank, J.; Markwardt, C.; Strohmayer, T.; Six, N. Frank (Technical Monitor)
2002-01-01
Goddard Space Flight Center reports the serendipitous discovery of a new x-ray transient, XTE J1908+094, in RXTE (Rossi X-ray Timing Explorer) PCA (Proportional Counter Array) observations of the soft-gamma-ray repeater SGR 1900+14, triggered following the burst activity on Feb. 17-18 (GCN 1253). These observations failed to detect the 5.2-s SGR pulsations, pointing towards a possible new source as the origin of the high x-ray flux. An RXTE PCA scan of the region around SGR 1900+14 on Feb. 21 was consistent with emission only from known sources (and no new sources). However, the scans required SGR 1900+14 to be 20 times brighter than its quiescent flux level (GCN 1256). A Director's Discretionary Time Chandra observation on Mar. 11 showed that the SGR was quiescent and did not reveal any new source within the Chandra ACIS (Advanced CCD (charge coupled device) Imaging Spectrometer) field-of-view. A subsequent RXTE PCA scan on Mar. 17, taken in combination with the first scan, required that a new source be included in the fit. The best-fit position is R.A. = 19h 08m 50s, Decl. = +9 deg 22.5 arcmin (equinox J2000.0; estimated 2 arcmin systematic error radius), or approximately 24 arcmin away from the SGR source. The source spectrum (2-30 keV) can be best fit with a power-law function including photoelectric absorption (column density N_h = 2.3 x 10(exp 22) cm(exp -2), photon index = 1.55). Iron line emission is present, but may be due to the Galactic ridge. Between Feb. 19 and Mar. 17, the source flux (2-10 keV) rose from 26 to 64 mCrab. The power spectrum is flat between 1 mHz and 0.1 Hz, falling approximately as 1/f**0.5 up to 1 Hz. A broad quasi-periodic oscillation peak is seen at 1 Hz, along with a break to a 1/f**2 power law that continues to 4 Hz. The fractional rms (root mean square) amplitude from 1 mHz to 4 Hz is 43 percent. No coherent pulsations are seen between 0.001 and 1024 Hz. The authors conclude that XTE J1908+094 is a new black hole candidate.
Hong, Peilong; Li, Liming; Liu, Jianji; Zhang, Guoquan
2016-03-29
Young's double-slit or two-beam interference is of fundamental importance for understanding various interference effects; in first-order coherence, the stationary phase difference between the two beams plays the key role. In high-order optical coherence, by contrast, it is the statistical behavior of the optical phase that plays the key role. In this article, by employing a fundamental interfering configuration with two classical point sources, we show that the high-order optical coherence between two classical point sources can be actively designed by controlling the statistical behavior of the relative phase difference between the two point sources. Synchronous-position Nth-order subwavelength interference with an effective wavelength of λ/M was demonstrated, in which λ is the wavelength of the point sources and M is an integer not larger than N. Interestingly, we found that the synchronous-position Nth-order interference fringe fingerprints the statistical trace of the random phase fluctuation of the two classical point sources; it therefore provides an effective way to characterize the statistical properties of phase fluctuation for incoherent light sources.
Shielding analyses of an AB-BNCT facility using Monte Carlo simulations and simplified methods
NASA Astrophysics Data System (ADS)
Lai, Bo-Lun; Sheu, Rong-Jiun
2017-09-01
Accurate Monte Carlo simulations and simplified methods were used to investigate the shielding requirements of a hypothetical accelerator-based boron neutron capture therapy (AB-BNCT) facility that included an accelerator room and a patient treatment room. The epithermal neutron beam for BNCT was generated by coupling a neutron production target with a specially designed beam shaping assembly (BSA), which was embedded in the partition wall between the two rooms. Neutrons were produced from a beryllium target bombarded by 1-mA, 30-MeV protons. MCNP6-generated surface sources around all the exterior surfaces of the BSA were established to facilitate repeated Monte Carlo shielding calculations. In addition, three simplified models based on a point-source line-of-sight approximation were developed and their predictions were compared with the reference Monte Carlo results. The comparison determined which model gave the better dose estimates, forming the basis of future design activities for the first AB-BNCT facility in Taiwan.
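A point-source line-of-sight model of the general kind benchmarked here reduces to a point kernel: exponential attenuation through the shield times an inverse-square geometry factor, optionally scaled by a buildup factor. The sketch below uses hypothetical source and shield parameters, not the facility's data:

```python
import math

# Minimal point-kernel (point-source line-of-sight) shielding estimate.
# All numerical values are illustrative placeholders.
def dose_rate(S, mu, t, r, B=1.0):
    """Dose-rate-proportional flux behind a slab shield.

    S  : source emission rate (particles/s)
    mu : linear attenuation coefficient of the shield (1/cm)
    t  : shield thickness along the line of sight (cm)
    r  : source-to-detector distance (cm)
    B  : buildup factor accounting for scattered radiation (dimensionless)
    """
    return B * S * math.exp(-mu * t) / (4.0 * math.pi * r**2)

# Doubling the shield thickness attenuates by one more factor of exp(-mu*t):
d1 = dose_rate(S=1e12, mu=0.1, t=50.0, r=300.0)
d2 = dose_rate(S=1e12, mu=0.1, t=100.0, r=300.0)
print(d2 / d1)
```

Such a model ignores scattering geometry and streaming paths, which is exactly why its predictions must be checked against full Monte Carlo results as done in the study.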
Asteroid Size-Frequency Distribution
NASA Technical Reports Server (NTRS)
Tedesco, Edward F.
2001-01-01
A total of six deep exposures (using AOT CAM01 with a 6 arcsecond PFOV) through the ISOCAM LW10 filter (IRAS Band 1, i.e., 12 micron) were obtained on an approximately 15 arcminute square field centered on the ecliptic plane. Point sources were extracted using the technique described. Two known asteroids appear in these frames, and 20 sources moving with velocities appropriate for main belt asteroids are present. Most of the asteroids detected have flux densities less than 1 mJy, i.e., between 150 and 350 times fainter than any of the asteroids observed by IRAS. These data provide the first direct measurement of the 12 micron sky-plane density for asteroids on the ecliptic equator. The median zodiacal foreground, as measured by ISOCAM during this survey, is found to be 22.1 +/- 1.5 mJy per pixel, i.e., 26.2 +/- 1.7 MJy/sr. The results presented here imply that the actual number of kilometer-sized asteroids is significantly greater than previously believed and in reasonable agreement with the Statistical Asteroid Model.
Vertex Operators, Grassmannians, and Hilbert Schemes
NASA Astrophysics Data System (ADS)
Carlsson, Erik
2010-12-01
We approximate the infinite Grassmannian by finite-dimensional cutoffs, and define a family of fermionic vertex operators as the limit of geometric correspondences on the equivariant cohomology groups, with respect to a one-dimensional torus action. We prove that in the localization basis, these are the well-known fermionic vertex operators on the infinite wedge representation. Furthermore, the boson-fermion correspondence, locality, and intertwining properties with the Virasoro algebra are the limits of relations on the finite-dimensional cutoff spaces, which are true for geometric reasons. We then show that these operators are also, almost by definition, the vertex operators defined by Okounkov and the author in Carlsson and Okounkov (
History of mercury use and environmental contamination at the Oak Ridge Y-12 Plant.
Brooks, Scott C; Southworth, George R
2011-01-01
Between 1950 and 1963 approximately 11 million kilograms of mercury (Hg) were used at the Oak Ridge Y-12 National Security Complex (Y-12 NSC) for lithium isotope separation processes. About 3% of the Hg was lost to the air, to soil and rock under facilities, and to East Fork Poplar Creek (EFPC), which originates on the plant site. Smaller amounts of Hg were used at other Oak Ridge facilities with similar results. Although the primary Hg discharges from Y-12 NSC stopped in 1963, small amounts of Hg continue to be released into the creek from point sources and from diffuse contaminated soil and groundwater sources within Y-12 NSC. Mercury concentration in EFPC has decreased by 85% from ∼2000 ng/L in the 1980s. In general, methylmercury concentrations in water and in fish have not declined in response to improvements in water quality, and exhibit trends of increasing concentration in some cases. Published by Elsevier Ltd.
Phase imaging using highly coherent X-rays: radiography, tomography, diffraction topography.
Baruchel, J; Cloetens, P; Härtwig, J; Ludwig, W; Mancini, L; Pernot, P; Schlenker, M
2000-05-01
Several hard X-ray imaging techniques benefit greatly from the coherence of the beams delivered by modern synchrotron radiation sources. This is illustrated with examples recorded on the 'long' (145 m) ID19 'imaging' beamline of the ESRF. Phase imaging is directly related to the small angular size of the source as seen from one point of the sample ('effective divergence' on the order of microradians). When using the 'propagation' technique, phase radiography and tomography are instrumentally very simple. They are often used in the 'edge detection' regime, where the jumps of density are clearly observed. The in situ damage assessment of micro-heterogeneous materials is one example of the many applications. Recently a more quantitative approach has been developed, which provides a three-dimensional density mapping of the sample ('holotomography'). The combination of diffraction topography and phase-contrast imaging constitutes a powerful tool. The observation of holes of discrete sizes in quasicrystals, and the investigation of poled ferroelectric materials, result from this combination.
A Possible Magnetar Nature for IGR J16358-4726
NASA Technical Reports Server (NTRS)
Patel, S.; Zurita, J.; DelSanto, M.; Finger, M.; Koueliotou, C.; Eichler, D.; Gogus, E.; Ubertini, P.; Walter, R.; Woods, P.
2006-01-01
We present detailed spectral and timing analysis of the hard x-ray transient IGR J16358-4726 using multi-satellite archival observations. A study of the source flux time history over 6 years suggests that outbursts of this transient can recur at intervals of at most 1 year. Joint spectral fits using simultaneous Chandra/ACIS and INTEGRAL/ISGRI data reveal a spectrum well described by an absorbed cut-off power law model plus an Fe line. The pulsations initially reported from Chandra/ACIS data were also detected in the INTEGRAL/ISGRI light curve and in subsequent XMM-Newton observations. Using the INTEGRAL data we identified a pulse spin up of 94 s (P-dot = 1.6 x 10(exp -4)), which strongly points to a neutron star nature for IGR J16358-4726. Assuming that the spin up is due to disc accretion, we estimate that the source magnetic field ranges between approximately 10(sup 13) and 10(sup 15) G depending on its distance, possibly supporting a magnetar nature for IGR J16358-4726.
NASA Astrophysics Data System (ADS)
Sudhaus, Henriette; Heimann, Sebastian; Steinberg, Andreas; Isken, Marius; Vasyura-Bathke, Hannes
2017-04-01
In the last few years impressive achievements have been made in improving inferences about earthquake sources by using InSAR (Interferometric Synthetic Aperture Radar) data. Several factors aided these developments. The open data basis of earthquake observations has expanded vastly with the two powerful Sentinel-1 SAR sensors up in space. Increasing computer power allows processing of large data sets for more detailed source models. Moreover, data inversion approaches for earthquake source inferences are becoming more advanced. By now data error propagation is widely implemented and the estimation of model uncertainties is a regular feature of reported optimum earthquake source models. Also, more regularly InSAR-derived surface displacements and seismological waveforms are combined, which requires finite rupture models instead of point-source approximations and layered medium models instead of homogeneous half-spaces. In other words the disciplinary differences in geodetic and seismological earthquake source modelling shrink towards common source-medium descriptions and a source near-field/far-field data point of view. We explore and facilitate the combination of InSAR-derived near-field static surface displacement maps and dynamic far-field seismological waveform data for global earthquake source inferences. We join in the community efforts with the particular goal to improve crustal earthquake source inferences in generally not well instrumented areas, where often only the global backbone observations of earthquakes are available provided by seismological broadband sensor networks and, since recently, by Sentinel-1 SAR acquisitions. We present our work on modelling standards for the combination of static and dynamic surface displacements in the source's near-field and far-field, e.g. on data and prediction error estimations as well as model uncertainty estimation. 
Rectangular dislocations and moment-tensor point sources can be exchanged for simple planar finite rupture models. 1d-layered medium models are implemented for both near- and far-field data predictions. A highlight of our approach is a weak dependence on earthquake bulletin information: hypocenter locations and source origin times are relatively free source model parameters. We present this harmonized source modelling environment based on example earthquake studies, e.g. the 2010 Haiti earthquake, the 2009 L'Aquila earthquake and others. We discuss the benefit of combined-data non-linear modelling for the resolution of first-order rupture parameters, e.g. location, size, orientation, mechanism, moment/slip and rupture propagation. The presented studies apply our newly developed software tools, which build on the open-source seismological software toolbox pyrocko (www.pyrocko.org) in the form of modules. We aim to facilitate a better exploitation of open global data sets for a wide community studying tectonics, but the tools are also applicable to a large range of regional to local earthquake studies. Our developments therefore ensure a large flexibility in the parametrization of medium models (e.g. 1d to 3d medium models), source models (e.g. explosion sources, full moment tensor sources, heterogeneous slip models) and of the predicted data (e.g. (high-rate) GPS, strong motion, tilt). This work is conducted within the project "Bridging Geodesy and Seismology" (www.bridges.uni-kiel.de) funded by the German Research Foundation DFG through an Emmy-Noether grant.
Spline approximation, Part 1: Basic methodology
NASA Astrophysics Data System (ADS)
Ezhov, Nikolaj; Neitzel, Frank; Petrovic, Svetozar
2018-04-01
In engineering geodesy point clouds derived from terrestrial laser scanning or from photogrammetric approaches are almost never used as final results. For further processing and analysis a curve or surface approximation with a continuous mathematical function is required. In this paper the approximation of 2D curves by means of splines is treated. Splines offer quite flexible and elegant solutions for interpolation or approximation of "irregularly" distributed data. Depending on the problem they can be expressed as a function or as a set of equations that depend on some parameter. Many different types of splines can be used for spline approximation and all of them have certain advantages and disadvantages depending on the approximation problem. In a series of three articles spline approximation is presented from a geodetic point of view. In this paper (Part 1) the basic methodology of spline approximation is demonstrated using splines constructed from ordinary polynomials and splines constructed from truncated polynomials. In the forthcoming Part 2 the notion of B-spline will be explained in a unique way, namely by using the concept of convex combinations. The numerical stability of all spline approximation approaches as well as the utilization of splines for deformation detection will be investigated on numerical examples in Part 3.
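The truncated-polynomial construction outlined for Part 1 can be sketched as an ordinary least-squares fit: the basis consists of the monomials up to the chosen degree plus one truncated power term per interior knot. Data, knot locations, and noise level below are illustrative, not from the article:

```python
import numpy as np

# Least-squares cubic spline via the truncated-power basis
# (a sketch of the construction, with made-up data).
def truncated_power_basis(x, knots, degree=3):
    """Design matrix with columns 1, x, ..., x^p and (x - k)_+^p per knot."""
    cols = [x**j for j in range(degree + 1)]
    cols += [np.clip(x - k, 0.0, None)**degree for k in knots]
    return np.column_stack(cols)

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 200)
y = np.sin(2 * np.pi * x) + 0.05 * rng.standard_normal(x.size)  # noisy samples

knots = [0.25, 0.5, 0.75]                       # interior knots
A = truncated_power_basis(x, knots)
coef, *_ = np.linalg.lstsq(A, y, rcond=None)    # least-squares spline fit
resid = y - A @ coef
print(np.sqrt(np.mean(resid**2)))               # RMS residual near the noise level
```

The truncated-power basis is simple to write down but can be numerically ill-conditioned for many knots, which is one motivation for the B-spline formulation treated in Part 2.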
Wang, Jie; Liu, Guijian; Liu, Houqi; Lam, Paul K S
2017-04-01
A total of 211 water samples were collected from 53 key sampling points from 5 to 10 July 2013 at four different depths (0 m, 2 m, 4 m, 8 m) and at different sites in the Huaihe River, Anhui, China. These points were monitored for 18 parameters (water temperature, pH, TN, TP, TOC, Cu, Pb, Zn, Ni, Co, Cr, Cd, Mn, B, Fe, Al, Mg, and Ba). The spatial variability, contamination sources and health risk of trace elements, as well as the river water quality, were investigated. Our results were compared with national (CSEPA) and international (WHO, USEPA) drinking water guidelines, revealing that Zn, Cd and Pb were the dominant pollutants in the water body. Application of different multivariate statistical approaches, including the correlation matrix and factor/principal component analysis (FA/PCA), to assess the origins of the elements in the Huaihe River identified three source types that accounted for 79.31% of the total variance. Anthropogenic activities were considered to contribute much of the Zn, Cd, Pb, Ni, Co, and Mn via industrial waste, coal combustion, and vehicle exhaust; Ba, B, Cr and Cu were controlled by mixed anthropogenic and natural sources; and Mg, Fe and Al had natural origins from weathered rocks and crustal materials. Cluster analysis (CA) was used to classify the 53 sample points into three water pollution groups: high pollution, moderate pollution, and low pollution, reflecting influences from tributaries, power plants and vehicle exhaust, and agricultural activities, respectively. The results of the water quality index (WQI) indicate that water in the Huaihe River is heavily polluted by trace elements, and approximately 96% of the water in the Huaihe River is unsuitable for drinking. A health risk assessment using the hazard quotient and index (HQ/HI) recommended by the USEPA suggests that Co, Cd and Pb in the river could cause non-carcinogenic harm to human health. Copyright © 2017 Elsevier B.V. All rights reserved.
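The hazard-quotient screening mentioned above follows the standard USEPA pattern: a chronic daily intake (CDI) for ingestion is divided by the element's oral reference dose (RfD), and the hazard index (HI) is the sum of the quotients. The sketch below uses placeholder concentrations, reference doses, and exposure factors, not values from this study:

```python
# Hedged sketch of a USEPA-style hazard-quotient (HQ) screening for
# drinking-water ingestion. All numbers are illustrative placeholders.
def chronic_daily_intake(c_ug_per_L, ir_L=2.0, ef=350, ed=30,
                         bw=70.0, at_days=30 * 365):
    """Ingestion CDI in mg/(kg*day): C * IR * EF * ED / (BW * AT)."""
    c_mg = c_ug_per_L / 1000.0          # ug/L -> mg/L
    return c_mg * ir_L * ef * ed / (bw * at_days)

rfd = {"Cd": 5e-4, "Pb": 3.5e-3, "Co": 3e-4}   # assumed oral RfDs, mg/(kg*day)
conc = {"Cd": 4.0, "Pb": 15.0, "Co": 6.0}      # hypothetical concentrations, ug/L

hq = {el: chronic_daily_intake(c) / rfd[el] for el, c in conc.items()}
hi = sum(hq.values())                          # hazard index = sum of HQs
print(hq, hi)
```

An HQ (or HI) above 1 flags a potential non-carcinogenic risk; below 1, adverse effects are considered unlikely under the assumed exposure scenario.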
Approximations of e and π: An Exploration
ERIC Educational Resources Information Center
Brown, Philip R.
2017-01-01
Fractional approximations of e and π are discovered by searching for repetitions or partial repetitions of digit strings in their expansions in different number bases. The discovery of such fractional approximations is suggested for students and teachers as an entry point into mathematics research.
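Such fractional approximations can also be reached by a different route than the digit-string search the article describes: continued-fraction convergents give the best rational approximation under a denominator bound, which Python exposes directly.

```python
from fractions import Fraction
import math

# Best rational approximations with bounded denominator, via
# fractions.Fraction.limit_denominator (continued-fraction based).
best_pi = Fraction(math.pi).limit_denominator(1000)   # the classic 355/113
best_e = Fraction(math.e).limit_denominator(100)
print(best_pi, best_e)

# How good are they?
print(abs(math.pi - best_pi), abs(math.e - best_e))
```

Comparing these optimal approximations with the ones found by digit-pattern hunting makes a natural follow-up exercise for students.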
Organic Compounds in Clackamas River Water Used for Public Supply near Portland, Oregon, 2003-05
Carpenter, Kurt D.; McGhee, Gordon
2009-01-01
Organic compounds studied in this U.S. Geological Survey (USGS) assessment generally are man-made, including pesticides, gasoline hydrocarbons, solvents, personal care and domestic-use products, disinfection by-products, and manufacturing additives. In all, 56 compounds were detected in samples collected approximately monthly during 2003-05 at the intake for the Clackamas River Water plant, one of four community water systems on the lower Clackamas River. The diversity of compounds detected suggests a variety of different sources and uses (including wastewater discharges, industrial, agricultural, domestic, and others) and different pathways to drinking-water supplies (point sources, precipitation, overland runoff, ground-water discharge, and formation during water treatment). A total of 20 organic compounds were commonly detected (in at least 20 percent of the samples) in source water and (or) finished water. Fifteen compounds were commonly detected in source water, and five of these compounds (benzene, m- and p-xylene, diuron, simazine, and chloroform) also were commonly detected in finished water. With the exception of gasoline hydrocarbons, disinfection by-products, chloromethane, and the herbicide diuron, concentrations in source and finished water were less than 0.1 microgram per liter and always less than human-health benchmarks, which are available for about 60 percent of the compounds detected. On the basis of this screening-level assessment, adverse effects to human health are assumed to be negligible (subject to limitations of available human-health benchmarks).
Numerical Simulation of Dispersion from Urban Greenhouse Gas Sources
NASA Astrophysics Data System (ADS)
Nottrott, Anders; Tan, Sze; He, Yonggang; Winkler, Renato
2017-04-01
Cities are characterized by complex topography, inhomogeneous turbulence, and variable pollutant source distributions. These features create a scale separation between local sources and urban-scale emissions estimates known as the Grey Zone. Modern computational fluid dynamics (CFD) techniques provide a quasi-deterministic, physically based toolset to bridge the scale separation gap between source-level dynamics, local measurements, and urban-scale emissions inventories. CFD has the capability to represent complex building topography and capture detailed 3D turbulence fields in the urban boundary layer. This presentation discusses the application of OpenFOAM to urban CFD simulations of natural gas leaks in cities. OpenFOAM is open-source software for advanced numerical simulation of engineering and environmental fluid flows. When combined with free or low-cost computer-aided drawing and GIS tools, OpenFOAM generates a detailed, 3D representation of urban wind fields. OpenFOAM was applied to model scalar emissions from various components of the natural gas distribution system, to study the impact of urban meteorology on mobile greenhouse gas measurements. The numerical experiments demonstrate that CH4 concentration profiles are highly sensitive to the relative location of emission sources and buildings. Sources separated by distances of 5-10 meters showed significant differences in vertical dispersion of plumes, due to building wake effects. The OpenFOAM flow fields were combined with an inverse, stochastic dispersion model to quantify and visualize the sensitivity of point sensors to upwind sources in various built environments. The Boussinesq approximation was applied to investigate the effects of canopy-layer temperature gradients and convection on sensor footprints.
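For contrast with the building-resolving CFD runs described above, the textbook point-source counterpart is the Gaussian plume: no wakes, flat terrain, steady wind. The emission rate, wind speed, and dispersion-growth coefficients below are hypothetical, chosen only to illustrate the functional form:

```python
import numpy as np

# Simplified Gaussian-plume concentration downwind of a continuous point
# source, with a ground-reflection image term. Parameters are illustrative.
def plume_concentration(q, u, x, y, z, h, a=0.08, b=0.06):
    """Concentration (kg/m^3) at (x, y, z) downwind of a stack of height h.

    q : emission rate (kg/s); u : mean wind speed (m/s)
    sigma_y = a*x, sigma_z = b*x : crude linear dispersion growth with x.
    """
    sy, sz = a * x, b * x
    return (q / (2 * np.pi * u * sy * sz)
            * np.exp(-y**2 / (2 * sy**2))
            * (np.exp(-(z - h)**2 / (2 * sz**2))
               + np.exp(-(z + h)**2 / (2 * sz**2))))  # image source at -h

c_near = plume_concentration(q=1e-3, u=3.0, x=50.0, y=0.0, z=1.5, h=2.0)
c_far = plume_concentration(q=1e-3, u=3.0, x=500.0, y=0.0, z=1.5, h=2.0)
print(c_near, c_far)
```

The plume model's smooth decay with distance is exactly what building wakes disrupt at the 5-10 m separations discussed above, which is why the CFD treatment is needed in the Grey Zone.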
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shwetha, Bondel; Ravikumar, Manickam, E-mail: drravikumarm@gmail.com; Supe, Sanjay S.
2012-04-01
Various treatment planning systems are used to design plans for the treatment of cervical cancer using high-dose-rate brachytherapy. The purpose of this study was to make a dosimetric comparison of 2 treatment planning systems from Varian Medical Systems, namely ABACUS and BrachyVision. The dose distribution of an Ir-192 source generated with a single dwell position was compared using the ABACUS (version 3.1) and BrachyVision (version 6.5) planning systems. Ten patients with intracavitary applications were planned on both systems using orthogonal radiographs. Doses were calculated at the prescription points (point A, right and left) and reference points RU, LU, RM, LM, bladder, and rectum. For the single dwell position, little difference was observed in the doses to points along the perpendicular bisector. The mean difference between ABACUS and BrachyVision for these points was 1.88%. The mean difference in the dose calculated toward the distal end of the cable by ABACUS and BrachyVision was 3.78%, whereas along the proximal end the difference was 19.82%. For the patient cases there was approximately a 2% difference between ABACUS and BrachyVision planning for dose to the prescription points. The dose difference for the reference points ranged from 0.4% to 1.5%. For bladder and rectum, the differences were 5.2% and 13.5%, respectively. The dose difference between the rectum points was statistically significant. There is considerable difference between the dose calculations performed by the 2 treatment planning systems. These discrepancies are caused by differences in the calculation methodology adopted by the 2 systems.
The Einstein All-Sky Slew Survey
NASA Technical Reports Server (NTRS)
Elvis, Martin S.
1992-01-01
The First Einstein IPC Slew Survey produced a list of 819 x-ray sources, with f(sub x) approximately 10(exp -12) - 10(exp -10) erg/sq cm s and positional accuracy of approximately 1.2 arcminutes (90 percent radius). The aim of this program was to identify these x-ray sources.
77 FR 20999 - Final Flood Elevation Determinations
Federal Register 2010, 2011, 2012, 2013, 2014
2012-04-09
... set forth below: * Elevation in feet (NGVD) + Elevation in feet (NAVD) Depth in feet Flooding source(s..., and Incorporated Areas Docket No.: FEMA-B-1100 Mississippi River Approximately 11.2 miles +585 City of.... Approximately 12.8 miles +594 upstream of State Highway 136. * National Geodetic Vertical Datum. + North...
78 FR 6743 - Final Flood Elevation Determinations
Federal Register 2010, 2011, 2012, 2013, 2014
2013-01-31
... in feet (NGVD) + Elevation in feet (NAVD) Flooding source(s) Location of referenced Depth in feet... downstream of Greely Allen County. Chapel Road. Approximately 750 feet + 965 upstream of Faulkner Road. Dug.... Approximately 100 feet + 827 downstream of North Cable Road. Dug Run Tributary At the Dug Run confluence + 813...
Ghost imaging with bucket detection and point detection
NASA Astrophysics Data System (ADS)
Zhang, De-Jian; Yin, Rao; Wang, Tong-Biao; Liao, Qing-Hua; Li, Hong-Guo; Liao, Qinghong; Liu, Jiang-Tao
2018-04-01
We experimentally investigate ghost imaging with bucket detection and point detection using three types of illuminating sources: (a) a pseudo-thermal light source; (b) an amplitude-modulated true thermal light source; and (c) an amplitude-modulated laser source. Experimental results show that the quality of ghost images reconstructed with true thermal light or a laser beam is insensitive to the use of a bucket or point detector; however, the quality of ghost images reconstructed with pseudo-thermal light is better in the bucket-detector case than in the point-detector case. Our theoretical analysis shows that this difference arises from the first-order transverse coherence of the illuminating source.
Distinguishing dark matter from unresolved point sources in the Inner Galaxy with photon statistics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, Samuel K.; Lisanti, Mariangela; Safdi, Benjamin R., E-mail: samuelkl@princeton.edu, E-mail: mlisanti@princeton.edu, E-mail: bsafdi@princeton.edu
2015-05-01
Data from the Fermi Large Area Telescope suggest that there is an extended excess of GeV gamma-ray photons in the Inner Galaxy. Identifying potential astrophysical sources that contribute to this excess is an important step in verifying whether the signal originates from annihilating dark matter. In this paper, we focus on the potential contribution of unresolved point sources, such as millisecond pulsars (MSPs). We propose that the statistics of the photons—in particular, the flux probability density function (PDF) of the photon counts below the point-source detection threshold—can potentially distinguish between the dark-matter and point-source interpretations. We calculate the flux PDF via the method of generating functions for these two models of the excess. Working in the framework of Bayesian model comparison, we then demonstrate that the flux PDF can potentially provide evidence for an unresolved MSP-like point-source population.
Dosimetric parameters of three new solid core I‐125 brachytherapy sources
Solberg, Timothy D.; DeMarco, John J.; Hugo, Geoffrey; Wallace, Robert E.
2002-01-01
Monte Carlo calculations and TLD measurements have been performed for the purpose of characterizing dosimetric properties of new commercially available brachytherapy sources. All sources tested consisted of a solid core, upon which a thin layer of I-125 has been adsorbed, encased within a titanium housing. The PharmaSeed BT-125 source manufactured by Syncor is available in silver or palladium core configurations, while the ADVANTAGE source from IsoAid has silver only. Dosimetric properties, including the dose rate constant, radial dose function, and anisotropy characteristics, were determined according to the TG-43 protocol. Additionally, the geometry function was calculated exactly using Monte Carlo and compared with both the point and line source approximations. The 1999 NIST standard was followed in determining air kerma strength. Dose rate constants were calculated to be 0.955 ± 0.005, 0.967 ± 0.005, and 0.962 ± 0.005 cGy h^-1 U^-1 for the PharmaSeed BT-125-1, BT-125-2, and ADVANTAGE sources, respectively. TLD measurements were in excellent agreement with Monte Carlo calculations. The radial dose function, g(r), calculated to a distance of 10 cm, and the anisotropy function, F(r, θ), calculated for radii from 0.5 to 7.0 cm, were similar among all source configurations. Anisotropy constants, φ̄_an, were calculated to be 0.941, 0.944, and 0.960 for the three sources, respectively. All dosimetric parameters were found to be in close agreement with previously published data for similar source configurations. The MCNP Monte Carlo code appears to be ideally suited to low-energy dosimetry applications. PACS number(s): 87.53.-j PMID:11958652
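The point- and line-source geometry functions that the exact Monte Carlo calculation is compared against have standard closed forms in the TG-43 formalism. A minimal sketch (not the authors' code; r and the active length L in cm, θ the polar angle from the source axis; the limiting on-axis form is the standard TG-43 convention):

```python
import math

def geometry_point(r):
    """TG-43 point-source geometry function: G_P(r) = 1 / r**2."""
    return 1.0 / r**2

def geometry_line(r, theta, L):
    """TG-43 line-source geometry function G_L(r, theta) = beta / (L r sin(theta)),
    where beta is the angle subtended by the active length at the
    calculation point; on the source axis (sin(theta) ~ 0) the limiting
    form 1 / (r**2 - L**2/4) is used."""
    s = math.sin(theta)
    if abs(s) < 1e-9:
        return 1.0 / (r**2 - L**2 / 4.0)
    # angle subtended by the segment, via atan2 of cross and dot products
    beta = math.atan2(L * r * s, r**2 - L**2 / 4.0)
    return beta / (L * r * s)
```

As a sanity check, the line-source form converges to the point-source form when the active length is small compared with the distance, which is why the point approximation works well far from the seed.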
NASA Astrophysics Data System (ADS)
Townson, Reid W.; Zavgorodni, Sergei
2014-12-01
In GPU-based Monte Carlo simulations for radiotherapy dose calculation, source modelling from a phase-space source can be an efficiency bottleneck. Previously, this has been addressed using phase-space-let (PSL) sources, which provided significant efficiency enhancement. We propose that additional speed-up can be achieved through the use of a hybrid primary photon point source model combined with a secondary PSL source. A novel phase-space derived and histogram-based implementation of this model has been integrated into gDPM v3.0. Additionally, a simple method for approximately deriving target photon source characteristics from a phase-space that does not contain inheritable particle history variables (LATCH) has been demonstrated to succeed in selecting over 99% of the true target photons with only ~0.3% contamination (for a Varian 21EX 18 MV machine). The hybrid source model was tested using an array of open fields for various Varian 21EX and TrueBeam energies, and all cases achieved greater than 97% chi-test agreement (the mean was 99%) above the 2% isodose with 1% / 1 mm criteria. The root mean square deviations (RMSDs) were less than 1%, with a mean of 0.5%, and the source generation time was 4-5 times faster. A seven-field intensity modulated radiation therapy patient treatment achieved 95% chi-test agreement above the 10% isodose with 1% / 1 mm criteria, 99.8% for 2% / 2 mm, a RMSD of 0.8%, and source generation speed-up factor of 2.5. Presented as part of the International Workshop on Monte Carlo Techniques in Medical Physics
STATISTICS OF GAMMA-RAY POINT SOURCES BELOW THE FERMI DETECTION LIMIT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Malyshev, Dmitry; Hogg, David W., E-mail: dm137@nyu.edu
2011-09-10
An analytic relation between the statistics of photons in pixels and the number counts of multi-photon point sources is used to constrain the distribution of gamma-ray point sources below the Fermi detection limit at energies above 1 GeV and at latitudes below and above 30 deg. The derived source-count distribution is consistent with the distribution found by the Fermi Collaboration based on the first Fermi point-source catalog. In particular, we find that the contribution of resolved and unresolved active galactic nuclei (AGNs) to the total gamma-ray flux is below 20%-25%. In the best-fit model, the AGN-like point-source fraction is 17% ± 2%. Using the fact that the Galactic emission varies across the sky while the extragalactic diffuse emission is isotropic, we put a lower limit of 51% on Galactic diffuse emission and an upper limit of 32% on the contribution from extragalactic weak sources, such as star-forming galaxies. Possible systematic uncertainties are discussed.
NASA Astrophysics Data System (ADS)
Golovaty, Yuriy
2018-06-01
We construct a norm resolvent approximation to a family of point interactions by Schrödinger operators with localized rank-two perturbations coupled with short-range potentials. In particular, a new approximation to these point interactions is obtained.
Kernel K-Means Sampling for Nyström Approximation.
He, Li; Zhang, Hong
2018-05-01
A fundamental problem in Nyström-based kernel matrix approximation is the sampling method by which the training set is built. In this paper, we suggest using kernel K-means sampling, which is shown in our work to minimize the upper bound of a matrix approximation error. We first propose a unified kernel matrix approximation framework, which is able to describe most existing Nyström approximations under many popular kernels, including the Gaussian kernel and the polynomial kernel. We then show that the matrix approximation error upper bound, in terms of the Frobenius norm, is equal to the K-means error of the data points in kernel space plus a constant. Thus, the K-means centers of the data in kernel space, or the kernel K-means centers, are the optimal representative points with respect to the Frobenius-norm error upper bound. Experimental results, with both the Gaussian kernel and the polynomial kernel, on real-world data sets and image segmentation tasks show the superiority of the proposed method over state-of-the-art methods.
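A rough sketch of the idea follows. This is a simplification, not the paper's algorithm: plain Lloyd's k-means centroids in input space stand in for kernel K-means centers (a common proxy for the RBF kernel), and the data and parameters are invented.

```python
import numpy as np

def rbf(X, Y, gamma=0.5):
    """Gaussian (RBF) kernel matrix between the rows of X and Y."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def nystrom(X, landmarks, gamma=0.5):
    """Nystrom approximation K ~= C W^+ C^T built from landmark points."""
    C = rbf(X, landmarks, gamma)
    W = rbf(landmarks, landmarks, gamma)
    return C @ np.linalg.pinv(W) @ C.T

def kmeans_centers(X, k, iters=50, seed=0):
    """Plain Lloyd's k-means in input space; returns the k centroids."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)].copy()
    for _ in range(iters):
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d2.argmin(1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(0)
    return centers

# clustered 2-D toy data: four well-separated Gaussian blobs
rng = np.random.default_rng(1)
blobs = [rng.normal(c, 0.5, size=(75, 2))
         for c in [(3, 3), (3, -3), (-3, 3), (-3, -3)]]
X = np.concatenate(blobs)
K = rbf(X, X)

centers = kmeans_centers(X, 20)
err = np.linalg.norm(K - nystrom(X, centers), "fro")
rel_err = err / np.linalg.norm(K, "fro")  # small: landmarks cover every blob
```

Because the centroids land in every cluster, the landmark set covers all regions where the kernel has mass, which is the intuition behind the Frobenius-norm error bound the paper proves.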
MODELING PHOTOCHEMISTRY AND AEROSOL FORMATION IN POINT SOURCE PLUMES WITH THE CMAQ PLUME-IN-GRID
Emissions of nitrogen oxides and sulfur oxides from the tall stacks of major point sources are important precursors of a variety of photochemical oxidants and secondary aerosol species. Plumes released from point sources exhibit rather limited dimensions and their growth is gradual...
X-ray Point Source Populations in Spiral and Elliptical Galaxies
NASA Astrophysics Data System (ADS)
Colbert, E.; Heckman, T.; Weaver, K.; Ptak, A.; Strickland, D.
2001-12-01
In the era of the Einstein and ASCA satellites, it was known that the total hard X-ray luminosity from non-AGN galaxies was fairly well correlated with the total blue luminosity. However, the origin of this hard component was not well understood. Possibilities that were considered included X-ray binaries, extended upscattered far-infrared light via the inverse-Compton process, extended hot 10^7 K gas (especially in elliptical galaxies), or even an active nucleus. Now, for the first time, we know from Chandra images that a significant amount of the total hard X-ray emission comes from individual X-ray point sources. We present here spatial and spectral analyses of Chandra data for X-ray point sources in a sample of ~40 galaxies, including both spiral galaxies (starbursts and non-starbursts) and elliptical galaxies. We discuss the relationship between the X-ray point-source population and the properties of the host galaxies. We show that the slopes of the point-source X-ray luminosity functions differ among host galaxy types and discuss possible reasons why. We also present detailed X-ray spectral analyses of several of the most luminous X-ray point sources (i.e., IXOs, a.k.a. ULXs), and discuss various scenarios for their origin.
A Numerical Approximation Framework for the Stochastic Linear Quadratic Regulator on Hilbert Spaces
DOE Office of Scientific and Technical Information (OSTI.GOV)
Levajković, Tijana, E-mail: tijana.levajkovic@uibk.ac.at, E-mail: t.levajkovic@sf.bg.ac.rs; Mena, Hermann, E-mail: hermann.mena@uibk.ac.at; Tuffaha, Amjad, E-mail: atufaha@aus.edu
We present an approximation framework for computing the solution of the stochastic linear quadratic control problem on Hilbert spaces. We focus on the finite horizon case and the related differential Riccati equations (DREs). Our approximation framework is concerned with the so-called "singular estimate control systems" (Lasiecka in Optimal control problems and Riccati equations for systems with unbounded controls and partially analytic generators: applications to boundary and point control problems, 2004), which model certain coupled systems of parabolic/hyperbolic mixed partial differential equations with boundary or point control. We prove that the solutions of the approximate finite-dimensional DREs converge to the solution of the infinite-dimensional DRE. In addition, we prove that the optimal state and control of the approximate finite-dimensional problem converge to the optimal state and control of the corresponding infinite-dimensional problem.
NASA Astrophysics Data System (ADS)
Cao, Xiaochao; Fang, Feiyun; Wang, Zhaoying; Lin, Qiang
2017-10-01
We report a study on the dynamical evolution of ultrashort time-domain dark hollow Gaussian (TDHG) pulses beyond the slowly varying envelope approximation in homogeneous plasma. Using the complex-source-point model, an analytical formula is proposed for describing TDHG pulses based on oscillating electric dipoles, which is an exact solution of Maxwell's equations. The numerical simulations show relativistic longitudinal self-compression (RSC) due to the relativistic mass variation of moving electrons. The influences of the plasma oscillation frequency and collision effects on the dynamics of TDHG pulses in plasma have been considered. Furthermore, we analyze the evolution of the instantaneous energy density of the TDHG pulses both on axis and off axis.
Passive Imaging in Nondiffuse Acoustic Wavefields
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mulargia, Francesco; Castellaro, Silvia
2008-05-30
A main property of diffuse acoustic wavefields is that, for any two points, each of them can be seen as the source of waves and the other as the recording station. This property is shown to follow simply from array azimuthal selectivity and the Huygens principle in a locally isotropic wavefield. Without time reversal, this property holds approximately also in anisotropic, azimuthally uniform wavefields, implying much looser constraints for undistorted passive imaging than those required by a diffuse field. A notable example is the seismic noise field, which is generally nondiffuse but is found to be compatible with a finite-aperture anisotropic uniform wavefield. The theoretical predictions were confirmed by an experiment on seismic noise in the mainland of Venice, Italy.
Atmospheric transportation of marihuana pollen from North Africa to the Southwest of Europe
NASA Astrophysics Data System (ADS)
Cabezudo, Baltasar; Recio, Marta; Sánchez-Laulhé, José María; Trigo, María del Mar; Toro, Francisco Javier; Polvorinos, Fausto
As a result of aerobiological samples taken on the Costa del Sol (S. Spain), Cannabis sativa L. (marihuana) pollen was detected from May to September 1991-1996, always sporadically and usually during the afternoons. Sampling was by two volumetric spore traps set up in Malaga and Estepona, two coastal towns approximately 90 km apart. A study of the days when this pollen was recorded points to the movement of air masses from North Africa to southern Spain. Furthermore, the isentropic air trajectories calculated for these days reinforce the possibility of the pollen originating in marihuana plantations in northern Morocco (Rif). This study demonstrates the application of aerobiology to the control of the source, quantity and phenology of the crop.
NASA Technical Reports Server (NTRS)
Roth, Donald J (Inventor)
2011-01-01
A computer-implemented process for simultaneously measuring the velocity of terahertz electromagnetic radiation in a dielectric material sample, without prior knowledge of the thickness of the sample, and for measuring the thickness of a material sample using terahertz electromagnetic radiation, without prior knowledge of the velocity of the terahertz electromagnetic radiation in the sample, is disclosed and claimed. Utilizing interactive software, the process evaluates the sample, in a plurality of locations, for microstructural variations and for thickness variations, and maps the microstructural and thickness variations by location. A thin sheet of dielectric material may be used on top of the sample to create a dielectric mismatch. The approximate focal point of the radiation source (transceiver) is initially determined for good measurements.
NASA Technical Reports Server (NTRS)
Roth, Donald J (Inventor)
2011-01-01
A process for simultaneously measuring the velocity of terahertz electromagnetic radiation in a dielectric material sample, without prior knowledge of the thickness of the sample, and for measuring the thickness of a material sample using terahertz electromagnetic radiation, without prior knowledge of the velocity of the terahertz electromagnetic radiation in the sample, is disclosed and claimed. The process evaluates the sample, in a plurality of locations, for microstructural variations and for thickness variations, and maps the microstructural and thickness variations by location. A thin sheet of dielectric material may be used on top of the sample to create a dielectric mismatch. The approximate focal point of the radiation source (transceiver) is initially determined for good measurements.
NASA Technical Reports Server (NTRS)
Mason, G. M.; Ng, C. K.; Klecker, B.; Green, G.
1989-01-01
Impulsive solar energetic particle (SEP) events are studied to: (1) describe a distinct class of SEP ion events observed in interplanetary space, and (2) test models of focused transport through detailed comparisons of numerical model prediction with the data. An attempt will also be made to describe the transport and scattering properties of the interplanetary medium during the times these events are observed and to derive source injection profiles in these events. ISEE 3 and Helios 1 magnetic field and plasma data are used to locate the approximate coronal connection points of the spacecraft to organize the particle anisotropy data and to constrain some free parameters in the modeling of flare events.
Femto-second synchronisation with a waveguide interferometer
NASA Astrophysics Data System (ADS)
Dexter, A. C.; Smith, S. J.; Woolley, B. J.; Grudiev, A.
2018-03-01
CERN's compact linear collider CLIC requires crab cavities on opposing linacs to rotate bunches of particles into alignment at the interaction point (IP). These cavities are located approximately 25 metres either side of the IP. The luminosity target requires synchronisation of their RF phases to better than 5 fs r.m.s. This is to be achieved by powering both cavities from one high power RF source, splitting the power and delivering it along two waveguide paths that are controlled to be identical in length to within a micrometre. The waveguide will be operated as an interferometer. A high power phase shifter for adjusting path lengths has been successfully developed and operated in an interferometer. The synchronisation target has been achieved in a low power prototype system.
NASA Astrophysics Data System (ADS)
Sarangapani, R.; Jose, M. T.; Srinivasan, T. K.; Venkatraman, B.
2017-07-01
Methods for the determination of the efficiency of an aged high-purity germanium (HPGe) detector for gaseous sources are presented in this paper. X-ray radiography of the detector has been performed to obtain the detector dimensions for computational purposes. The dead-layer thickness of the HPGe detector has been ascertained from experiments and Monte Carlo computations. Experimental work with standard point and liquid sources in several cylindrical geometries has been undertaken to obtain energy-dependent efficiencies. Monte Carlo simulations have been performed to compute efficiencies for point, liquid and gaseous sources. Self-absorption correction factors have been obtained using mathematical equations for volume sources and MCNP simulations. Self-absorption correction and point-source methods have been used to estimate the efficiency for gaseous sources. The efficiencies determined in the present work have been used to estimate the activity of a cover gas sample from a fast reactor.
NASA Technical Reports Server (NTRS)
Frank, David R.; Zolensky, M. E.; Le, L.; Weisberg, M. K.; Kimura, M.
2013-01-01
The Stardust Mission returned a large fraction of high-temperature, crystalline material that was radially transported from the inner solar system to the Kuiper Belt [1,2]. The mineralogical diversity found in this single cometary collection points to an even greater number of source materials than most primitive chondrites. In particular, the type II olivine found in Wild 2 includes the three distinct Fe/Mn ratios found in the matrix and chondrules of carbonaceous chondrites (CCs) and unequilibrated ordinary chondrites (UOCs) [3]. We also find that low-Ca pyroxene is quite variable (approximately Fs3-29) and is usually indistinguishable from CC, UOC, and EH3 pyroxene as well. However, occasional olivine and pyroxene compositions are found in Wild 2 that are inconsistent with chondrites. The Stardust track 61 terminal particle (TP) is one such example and is the focus of this study. Its highly reduced forsterite and enstatite are consistent only with those in aubrites, in which FeO is essentially absent from these phases (less than approximately 0.1 wt.% FeO) [4].
Ray propagation in oblate atmospheres. [for Jupiter
NASA Technical Reports Server (NTRS)
Hubbard, W. B.
1976-01-01
Phinney and Anderson's (1968) exact theory for the inversion of radio-occultation data for planetary atmospheres breaks down seriously when applied to occultations by oblate atmospheres because of departures from Bouguer's law. It has been proposed that this breakdown can be overcome by transforming the theory to a local spherical symmetry which osculates a ray's point of closest approach. The accuracy of this transformation procedure is assessed by evaluating the size of terms which are intrinsic to an oblate atmosphere and which are not eliminated by a local spherical approximation. The departures from Bouguer's law are analyzed, and it is shown that in the lowest-order deviation from that law, the plane of refraction is defined by the normal to the atmosphere at closest approach. In the next order, it is found that the oblateness of the atmosphere 'warps' the ray path out of a single plane, but the effect appears to be negligible for most purposes. It is concluded that there seems to be no source of serious error in making an approximation of local spherical symmetry with the refraction plane defined by the normal at closest approach.
Two copies of the Einstein-Podolsky-Rosen state of light lead to refutation of EPR ideas.
Rosołek, Krzysztof; Stobińska, Magdalena; Wieśniak, Marcin; Żukowski, Marek
2015-03-13
Bell's theorem applies to normalizable approximations of the original Einstein-Podolsky-Rosen (EPR) state. The constructions of the proof require measurements that are difficult to perform, and dichotomic observables. By noticing that the four-mode squeezed vacuum state produced in type II down-conversion can be seen both as two copies of approximate EPR states and as a kind of polarization supersinglet, we show a straightforward way to test violations of the EPR concepts with direct use of their state. The observables involved are simply photon numbers at the outputs of polarizing beam splitters. Suitable chained Bell inequalities are based on the geometric concept of distance. For a few settings they are potentially a new tool for quantum information applications, involving observables of a nondichotomic nature and thus of higher informational capacity. In the limit of infinitely many settings we get a Greenberger-Horne-Zeilinger-type contradiction: EPR reasoning points to a correlation, while the quantum prediction is an anticorrelation. Violations of the inequalities are fully resistant to multipair emissions in Bell experiments using parametric down-conversion sources.
The Stochastic X-Ray Variability of the Accreting Millisecond Pulsar MAXI J0911-655
NASA Technical Reports Server (NTRS)
Bult, Peter
2017-01-01
In this work, I report on the stochastic X-ray variability of the 340 hertz accreting millisecond pulsar MAXI J0911-655. Analyzing pointed observations from the XMM-Newton and NuSTAR observatories, I find that the source shows broad band-limited stochastic variability in the 0.01-10 hertz range, with a total fractional variability of approximately 24 percent (root mean square) in the 0.4 to 3 kiloelectronvolt energy band that increases to approximately 40 percent in the 3 to 10 kiloelectronvolt band. Additionally, a pair of harmonically related quasi-periodic oscillations (QPOs) is discovered. The fundamental frequency of this harmonic pair is observed between frequencies of 62 and 146 megahertz. Like the band-limited noise, the amplitudes of the QPOs show a steep increase as a function of energy; this suggests that they share a similar origin, likely the inner accretion flow. Based on their energy dependence and frequency relation with respect to the noise terms, the QPOs are identified as low-frequency oscillations and discussed in terms of the Lense-Thirring precession model.
NASA Astrophysics Data System (ADS)
Corrales, Lia
2015-05-01
X-ray bright quasars might be used to trace dust in the circumgalactic and intergalactic medium through the phenomenon of X-ray scattering, which is observed around Galactic objects whose light passes through a sufficient column of interstellar gas and dust. Of particular interest is the abundance of gray dust larger than 0.1 μm, which is difficult to detect at other wavelengths. To calculate X-ray scattering from large grains, one must abandon the traditional Rayleigh-Gans approximation. The Mie solution for the X-ray scattering optical depth of the universe is ~1%. This presents a great difficulty for distinguishing dust-scattered photons from the point-source image of Chandra, which is currently unsurpassed in imaging resolution. The variable nature of AGNs offers a solution to this problem, as scattered light takes a longer path and thus experiences a time delay with respect to non-scattered light. If an AGN dims significantly (≳3 dex) due to a major feedback event, the Chandra point-source image will be suppressed relative to the scattering halo, and an X-ray echo or ghost halo may become visible. I estimate the total number of scattering echoes visible to Chandra over the entire sky: N_ech ~ 10^3 (ν_fb / yr^-1), where ν_fb is the characteristic frequency of feedback events capable of dimming an AGN quickly.
Research study on stabilization and control: Modern sampled data control theory
NASA Technical Reports Server (NTRS)
Kuo, B. C.; Singh, G.; Yackel, R. A.
1973-01-01
A numerical analysis of spacecraft stability parameters was conducted. The analysis is based on a digital approximation by point-by-point state comparison. The technique used is that of approximating a continuous-data system by a sampled-data model through comparison of the states of the two systems. Application of the method to the digital redesign of the simplified one-axis dynamics of the Skylab is presented.
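The idea of a sampled-data model that matches the states of the continuous system at the sampling instants can be sketched as follows. This is a generic illustration, not the report's Skylab model: the plant matrix and sampling period are invented, and the matrix exponential is implemented directly for self-containment.

```python
import numpy as np

def expm(M, terms=30):
    """Matrix exponential via scaling-and-squaring of a Taylor series
    (adequate for the small, well-scaled matrices used here)."""
    n = int(max(0, np.ceil(np.log2(max(1.0, np.linalg.norm(M, 1))))))
    A = M / 2.0**n
    E = np.eye(len(M))
    term = np.eye(len(M))
    for k in range(1, terms):
        term = term @ A / k
        E = E + term
    for _ in range(n):
        E = E @ E   # undo the scaling by repeated squaring
    return E

# continuous plant x' = A x; the sampled model x[k+1] = F x[k] with
# F = exp(A*T) reproduces the continuous state exactly at each sample
A = np.array([[0.0, 1.0], [-2.0, -3.0]])   # eigenvalues -1 and -2
T = 0.1                                     # assumed sampling period
F = expm(A * T)
```

For this autonomous plant the state match at sample instants is exact; with inputs, the same comparison-of-states idea leads to the zero-order-hold discretization, which is the spirit of the digital redesign described above.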