Sample records for large point source

  1. 40 CFR 51.35 - How can my state equalize the emission inventory effort from year to year?

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... approach: (1) Each year, collect and report data for all Type A (large) point sources (this is required for all Type A point sources). (2) Each year, collect data for one-third of your sources that are not Type... save 3 years of data and then report all emissions from the sources that are not Type A point sources...

  2. 40 CFR 51.35 - How can my state equalize the emission inventory effort from year to year?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... approach: (1) Each year, collect and report data for all Type A (large) point sources (this is required for all Type A point sources). (2) Each year, collect data for one-third of your sources that are not Type... save 3 years of data and then report all emissions from the sources that are not Type A point sources...

  3. 40 CFR 51.35 - How can my state equalize the emission inventory effort from year to year?

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... approach: (1) Each year, collect and report data for all Type A (large) point sources (this is required for all Type A point sources). (2) Each year, collect data for one-third of your sources that are not Type... save 3 years of data and then report all emissions from the sources that are not Type A point sources...

  4. Major and Trace Element Fluxes to the Ganges River: Significance of Small Flood Plain Tributary as Non-Point Pollution Source

    NASA Astrophysics Data System (ADS)

    Lakshmi, V.; Sen, I. S.; Mishra, G.

    2017-12-01

There has been much discussion amongst biologists, ecologists, chemists, geologists, environmental firms, and science policy makers about the impact of human activities on river health. As a result, multiple river restoration projects are ongoing in many large river basins around the world. In the Indian subcontinent, the Ganges River is the focal point of all restoration actions, as it provides food and water security to half a billion people. Serious concerns have been raised about the quality of Ganga water as toxic chemicals and other pollutants enter the river system through point sources, such as direct wastewater discharge to rivers, or non-point sources. Point-source pollution can be easily identified and remedial actions can be taken; however, non-point pollution sources are harder to quantify and mitigate. A large non-point pollution source in the Indo-Gangetic floodplain is the network of small floodplain rivers. However, these rivers are rarely studied since they are small in catchment area (1,000-10,000 km2) and discharge (<100 m3/s). As a result, the impact of these small floodplain rivers on the dissolved chemical load of large river systems is not constrained. To fill this knowledge gap we monitored the Pandu River for one year, between February 2015 and April 2016. The Pandu River is 242 km long and is a right-bank tributary of the Ganges with a total catchment area of 1495 km2. Water samples were collected every month for dissolved major and trace elements. Here we show that the concentrations of heavy metals in the Pandu River are higher than the world river average, and all the dissolved elements show large spatio-temporal variation. 
We show that the Pandu River exports 192170, 168517, 57802, 32769, 29663, 1043, 279, 241, 225, 162, 97, 28, 25, 22, 20, 8, and 4 kg/yr of Ca, Na, Mg, K, Si, Sr, Zn, B, Ba, Mn, Al, Li, Rb, Mo, U, Cu, and Sb, respectively, to the Ganga River, and the exported chemical flux affects the water chemistry of the Ganga downstream of the confluence point. We further speculate that small floodplain rivers are an important source contributing to the dissolved chemical budget of large river systems, and that they must be better monitored to address future challenges in river basin management.

  5. Inferring Models of Bacterial Dynamics toward Point Sources

    PubMed Central

    Jashnsaz, Hossein; Nguyen, Tyler; Petrache, Horia I.; Pressé, Steve

    2015-01-01

    Experiments have shown that bacteria can be sensitive to small variations in chemoattractant (CA) concentrations. Motivated by these findings, our focus here is on a regime rarely studied in experiments: bacteria tracking point CA sources (such as food patches or even prey). In tracking point sources, the CA detected by bacteria may show very large spatiotemporal fluctuations which vary with distance from the source. We present a general statistical model to describe how bacteria locate point sources of food on the basis of stochastic event detection, rather than CA gradient information. We show how all model parameters can be directly inferred from single cell tracking data even in the limit of high detection noise. Once parameterized, our model recapitulates bacterial behavior around point sources such as the “volcano effect”. In addition, while the search by bacteria for point sources such as prey may appear random, our model identifies key statistical signatures of a targeted search for a point source given any arbitrary source configuration. PMID:26466373

  6. Study on road surface source pollution controlled by permeable pavement

    NASA Astrophysics Data System (ADS)

    Zheng, Chaocheng

    2018-06-01

The increase of impermeable pavement in urban construction not only increases pavement runoff, but also produces a large amount of non-point source pollution. When permeable pavement is used to control road surface runoff, a large amount of particulate matter is retained as rainwater infiltrates, controlling pollution at its source. In this experiment, we determined the effectiveness of permeable pavement at removing heavy pollutants in the laboratory and discussed the factors that affect non-point pollution from permeable pavement, so as to provide a theoretical basis for the application of permeable pavement.

  7. Very Luminous X-ray Point Sources in Starburst Galaxies

    NASA Astrophysics Data System (ADS)

    Colbert, E.; Heckman, T.; Ptak, A.; Weaver, K. A.; Strickland, D.

Extranuclear X-ray point sources in external galaxies with luminosities above 10^39 erg/s are quite common in elliptical, disk, and dwarf galaxies, with an average of ~0.5 sources per galaxy. These objects may be a new class of object, perhaps accreting intermediate-mass black holes or beamed stellar-mass black hole binaries. Starburst galaxies tend to have a larger number of these intermediate-luminosity X-ray objects (IXOs), as well as a large number of lower-luminosity (10^37-10^39 erg/s) point sources. These point sources dominate the total hard X-ray emission in starburst galaxies. We present a review of both types of objects and discuss possible schemes for their formation.

  8. Distinguishing dark matter from unresolved point sources in the Inner Galaxy with photon statistics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, Samuel K.; Lisanti, Mariangela; Safdi, Benjamin R., E-mail: samuelkl@princeton.edu, E-mail: mlisanti@princeton.edu, E-mail: bsafdi@princeton.edu

    2015-05-01

Data from the Fermi Large Area Telescope suggest that there is an extended excess of GeV gamma-ray photons in the Inner Galaxy. Identifying potential astrophysical sources that contribute to this excess is an important step in verifying whether the signal originates from annihilating dark matter. In this paper, we focus on the potential contribution of unresolved point sources, such as millisecond pulsars (MSPs). We propose that the statistics of the photons, in particular the flux probability density function (PDF) of the photon counts below the point-source detection threshold, can potentially distinguish between the dark-matter and point-source interpretations. We calculate the flux PDF via the method of generating functions for these two models of the excess. Working in the framework of Bayesian model comparison, we then demonstrate that the flux PDF can potentially provide evidence for an unresolved MSP-like point-source population.
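The flux-PDF idea can be illustrated with a toy Monte Carlo (not the generating-function calculation the paper uses): a smooth dark-matter-like template and an unresolved point-source population are tuned to the same mean photon count per pixel, yet the point-source map has a much broader count PDF. All numbers below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n_pix = 100_000          # sky pixels (hypothetical)
mean_counts = 2.0        # same mean photon count per pixel in both models

# Smooth dark-matter-like emission: plain Poisson counts.
dm_counts = rng.poisson(mean_counts, n_pix)

# Unresolved point sources: a Poisson number of sources per pixel,
# each contributing a Poisson number of photons (compound Poisson).
mean_sources = 0.1                          # sources per pixel
photons_per_source = mean_counts / mean_sources
n_src = rng.poisson(mean_sources, n_pix)
ps_counts = rng.poisson(n_src * photons_per_source)

# Both maps share the same mean, but the point-source map has a much
# broader flux PDF (heavier high-count tail), which is the signature
# the photon statistics exploit.
print(dm_counts.mean(), ps_counts.mean())   # both close to 2.0
print(dm_counts.var(), ps_counts.var())     # point-source variance far larger
```

The same-mean construction is the key point: integrated intensity alone cannot separate the two hypotheses, but the count histogram can.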

  9. Using Soluble Reactive Phosphorus and Ammonia to Identify Point Source Discharge from Large Livestock Facilities

    NASA Astrophysics Data System (ADS)

    Borrello, M. C.; Scribner, M.; Chessin, K.

    2013-12-01

A growing body of research draws attention to the negative environmental impacts of large livestock facilities on surface water. These impacts are mostly in the form of excessive nutrient loading, resulting in significantly decreased oxygen levels. Over-application of animal waste on fields, as well as direct discharge into surface water from the facilities themselves, has been identified as the main contributor to the development of hypoxic zones in Lake Erie, Chesapeake Bay, and the Gulf of Mexico. Some regulators claim enforcement of water quality laws is problematic because of the nature and pervasiveness of non-point source impacts. Any direct discharge by a facility is a violation of permits governed by the Clean Water Act, unless the facility has special dispensation for discharge. Previous research by the principal author and others has shown that runoff and underdrain transport are the main mechanisms by which nutrients enter surface water. This study built on previous work to determine whether the effects of non-point source discharge can be distinguished from direct (point-source) discharge using simple nutrient analysis and dissolved oxygen (DO) parameters. Nutrient and DO parameters were measured at three sites: (1) a stream adjacent to a field receiving manure, upstream of a large livestock facility with a history of direct discharge; (2) the same stream downstream of the facility; and (3) a stream in an area relatively unimpacted by large-scale agriculture (control site). Results show that calculating a simple Pearson correlation coefficient (r) between soluble reactive phosphorus (SRP) and ammonia over time, as well as between temperature and DO, distinguishes non-point source from point-source discharge into surface water. The r value for SRP and ammonia at the upstream site was 0.01, while the r value at the downstream site was 0.92. The control site had an r value of 0.20. Likewise, r values were calculated for temperature and DO at each site. 
High negative correlations between temperature and DO are indicative of a relatively unimpacted stream. Temperature-DO results from this study are consistent with the nutrient correlations: r = -0.97 for the upstream site, r = -0.21 for the downstream site, and r = -0.89 for the control site. Results from every site tested were statistically significant (p ≤ 0.05). These results support previous studies and demonstrate that the simple analytical techniques mentioned provide an effective means for regulatory agencies and community groups to monitor and identify point-source discharge from large livestock facilities.
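The nutrient screening described above reduces to computing Pearson correlation coefficients on paired time series. A minimal sketch, using hypothetical concentration values (not the study's data): a downstream-like series in which SRP and ammonia co-vary, versus an upstream-like series in which they vary independently.

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation coefficient between two sample series."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    return np.corrcoef(x, y)[0, 1]

# Hypothetical monthly measurements (mg/L), for illustration only.
# Downstream of a direct (point-source) discharge, SRP and ammonia
# rise and fall together; upstream they vary independently.
srp_down = [0.10, 0.25, 0.40, 0.22, 0.55, 0.31]
nh3_down = [0.8, 1.9, 3.1, 1.7, 4.2, 2.4]
srp_up = [0.10, 0.15, 0.08, 0.12, 0.14, 0.09]
nh3_up = [1.2, 0.7, 1.1, 0.6, 1.3, 0.9]

r_down = pearson_r(srp_down, nh3_down)
r_up = pearson_r(srp_up, nh3_up)
print(round(r_down, 2))  # near 1: point-source signature
print(round(r_up, 2))    # near 0: non-point signature
```

The same function applied to temperature and DO series gives the companion diagnostic the study reports.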

  10. A selective array activation method for the generation of a focused source considering listening position.

    PubMed

    Song, Min-Ho; Choi, Jung-Woo; Kim, Yang-Hann

    2012-02-01

A focused source can provide an auditory illusion of a virtual source placed between the loudspeaker array and the listener. When a focused source is generated by a time-reversed acoustic focusing solution, its use as a virtual source is limited by artifacts caused by convergent waves traveling towards the focusing point. This paper proposes an array activation method to reduce these artifacts for a selected listening point inside an array of arbitrary shape. Results show that the energy of the convergent waves can be reduced by up to 60 dB over a large region that includes the selected listening point. © 2012 Acoustical Society of America

  11. Nonpoint and Point Sources of Nitrogen in Major Watersheds of the United States

    USGS Publications Warehouse

    Puckett, Larry J.

    1994-01-01

    Estimates of nonpoint and point sources of nitrogen were made for 107 watersheds located in the U.S. Geological Survey's National Water-Quality Assessment Program study units throughout the conterminous United States. The proportions of nitrogen originating from fertilizer, manure, atmospheric deposition, sewage, and industrial sources were found to vary with climate, hydrologic conditions, land use, population, and physiography. Fertilizer sources of nitrogen are proportionally greater in agricultural areas of the West and the Midwest than in other parts of the Nation. Animal manure contributes large proportions of nitrogen in the South and parts of the Northeast. Atmospheric deposition of nitrogen is generally greatest in areas of greatest precipitation, such as the Northeast. Point sources (sewage and industrial) generally are predominant in watersheds near cities, where they may account for large proportions of the nitrogen in streams. The transport of nitrogen in streams increases as amounts of precipitation and runoff increase and is greatest in the Northeastern United States. Because no single nonpoint nitrogen source is dominant everywhere, approaches to control nitrogen must vary throughout the Nation. Watershed-based approaches to understanding nonpoint and point sources of contamination, as used by the National Water-Quality Assessment Program, will aid water-quality and environmental managers to devise methods to reduce nitrogen pollution.

  12. Combining stable isotopes with contamination indicators: A method for improved investigation of nitrate sources and dynamics in aquifers with mixed nitrogen inputs.

    PubMed

    Minet, E P; Goodhue, R; Meier-Augenstein, W; Kalin, R M; Fenton, O; Richards, K G; Coxon, C E

    2017-11-01

Excessive nitrate (NO3-) concentration in groundwater raises health and environmental issues that must be addressed by all European Union (EU) member states under the Nitrates Directive and the Water Framework Directive. The identification of NO3- sources is critical to efficiently control or reverse the NO3- contamination that affects many aquifers. In that respect, the use of the stable isotope ratios 15N/14N and 18O/16O in NO3- (expressed as δ15N-NO3- and δ18O-NO3-, respectively) has long shown its value. However, limitations exist in complex environments where multiple nitrogen (N) sources coexist. This two-year study explores a method for improved NO3- source investigation in a shallow unconfined aquifer with mixed N inputs and a long-established NO3- problem. In this tillage-dominated area of free-draining soil and subsoil, the suspected NO3- sources were diffuse applications of artificial fertiliser and organic point sources (septic tanks and farmyards). Bearing in mind that artificial diffuse sources were ubiquitous, groundwater samples were first classified according to a combination of two indicators relevant to point-source contamination: presence/absence of organic point sources (i.e. septic tank and/or farmyard) near sampling wells, and exceedance/non-exceedance of a contamination threshold value for sodium (Na+) in groundwater. This classification identified three contamination groups: agricultural diffuse source but no point source (D+P-), agricultural diffuse and point source (D+P+), and agricultural diffuse but point-source occurrence ambiguous (D+P±). Thereafter δ15N-NO3- and δ18O-NO3- data were superimposed on the classification. As δ15N-NO3- was plotted against δ18O-NO3-, comparisons were made between the different contamination groups. Overall, both δ variables were significantly and positively correlated (p < 0.0001, rs = 0.599, slope of 0.5), which is indicative of denitrification. 
An inspection of the contamination groups revealed that denitrification did not occur in the absence of point-source contamination (group D+P-). In fact, strong significant denitrification lines occurred only in the D+P+ and D+P± groups (p < 0.0001, rs > 0.6, 0.53 ≤ slope ≤ 0.76), i.e. where point-source contamination was characterised or suspected. These lines originated from the 2-6‰ range for δ15N-NO3-, which suggests that (i) NO3- contamination was dominated by an agricultural diffuse N source (most likely the large organic matter pool that has incorporated 15N-depleted nitrogen from artificial fertiliser in agricultural soils and whose nitrification is stimulated by ploughing and fertilisation) rather than by point sources, and (ii) denitrification was possibly favoured by high dissolved organic content (DOC) from point sources. Combining contamination indicators with a large stable isotope dataset collected over a large study area could therefore improve our understanding of NO3- contamination processes in groundwater for better land use management. We hypothesise that in future research, additional contamination indicators (e.g. pharmaceutical molecules) could also be combined to disentangle NO3- contamination from animal and human wastes. Copyright © 2017 Elsevier Ltd. All rights reserved.
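The denitrification lines reported above are linear fits of δ18O-NO3- against δ15N-NO3- within a contamination group. A minimal sketch with hypothetical isotope values chosen to lie near a slope of 0.5 (the study's own data are not reproduced here):

```python
import numpy as np

# Hypothetical paired isotope data (permil) for one contamination group.
# Denitrification enriches both isotopes along a line; slopes of roughly
# 0.5-0.8 are the diagnostic range cited in the abstract.
d15N = np.array([4.0, 6.5, 9.0, 11.5, 14.0, 16.5])
d18O = np.array([2.1, 3.3, 4.6, 5.8, 7.0, 8.3])

slope, intercept = np.polyfit(d15N, d18O, 1)
r = np.corrcoef(d15N, d18O)[0, 1]
print(f"slope = {slope:.2f}, r = {r:.2f}")  # slope near 0.5 suggests denitrification
```

In the study's workflow this fit would be repeated per contamination group (D+P-, D+P+, D+P±) after the indicator-based classification, with a rank correlation (rs) used for significance.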

  13. Stochastic point-source modeling of ground motions in the Cascadia region

    USGS Publications Warehouse

    Atkinson, G.M.; Boore, D.M.

    1997-01-01

A stochastic model is used to develop preliminary ground motion relations for rock sites in the Cascadia region. The model parameters are derived from empirical analyses of seismographic data from the Cascadia region. The model is based on a Brune point source characterized by a stress parameter of 50 bars. The model predictions are compared to ground-motion data from the Cascadia region and to data from large earthquakes in other subduction zones. The point-source simulations match the observations from moderate events but underpredict the motions from large earthquakes at large distances (>100 km). The discrepancy at large magnitudes suggests that further work on modeling finite-fault effects and regional attenuation is warranted. In the meantime, the preliminary equations are satisfactory for predicting motions from events of M < 7 and provide conservative estimates of motions from larger events at distances less than 100 km.
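A Brune point source with a given stress parameter is fully specified by its seismic moment and corner frequency. The sketch below implements the standard relations fc = 4.9e6·β·(Δσ/M0)^(1/3), with β in km/s, Δσ in bars, and M0 in dyne-cm, plus the ω-squared spectral shape; the 50-bar stress parameter comes from the abstract, while the 3.7 km/s shear-wave velocity and the Hanks-Kanamori moment-magnitude relation are assumptions added for illustration.

```python
import math

def brune_corner_frequency(m0_dyne_cm, stress_bars, beta_km_s=3.7):
    """Brune corner frequency in Hz:
    fc = 4.9e6 * beta * (delta_sigma / M0)**(1/3)."""
    return 4.9e6 * beta_km_s * (stress_bars / m0_dyne_cm) ** (1.0 / 3.0)

def brune_source_spectrum(f_hz, m0_dyne_cm, stress_bars, beta_km_s=3.7):
    """Omega-squared displacement source spectrum (unscaled shape):
    flat at M0 below the corner, falling as f**-2 above it."""
    fc = brune_corner_frequency(m0_dyne_cm, stress_bars, beta_km_s)
    return m0_dyne_cm / (1.0 + (f_hz / fc) ** 2)

# Seismic moment for moment magnitude M 7 (Hanks-Kanamori, dyne-cm)
m0 = 10.0 ** (1.5 * 7.0 + 16.05)
fc = brune_corner_frequency(m0, stress_bars=50.0)
print(f"M 7, 50 bars: fc = {fc:.3f} Hz")
```

Larger moments push the corner frequency down, which is why a single point source struggles to reproduce the long-period motions of great subduction earthquakes without finite-fault corrections.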

  14. Heavy metal transport in large river systems: heavy metal emissions and loads in the Rhine and Elbe river basins

    NASA Astrophysics Data System (ADS)

    Vink, Rona; Behrendt, Horst

    2002-11-01

    Pollutant transport and management in the Rhine and Elbe basins is still of international concern, since certain target levels set by the international committees for protection of both rivers have not been reached. The analysis of the chain of emissions of point and diffuse sources to river loads will provide policy makers with a tool for effective management of river basins. The analysis of large river basins such as the Elbe and Rhine requires information on the spatial and temporal characteristics of both emissions and physical information of the entire river basin. In this paper, an analysis has been made of heavy metal emissions from various point and diffuse sources in the Rhine and Elbe drainage areas. Different point and diffuse pathways are considered in the model, such as inputs from industry, wastewater treatment plants, urban areas, erosion, groundwater, atmospheric deposition, tile drainage, and runoff. In most cases the measured heavy metal loads at monitoring stations are lower than the sum of the heavy metal emissions. This behaviour in large river systems can largely be explained by retention processes (e.g. sedimentation) and is dependent on the specific runoff of a catchment. Independent of the method used to estimate emissions, the source apportionment analysis of observed loads was used to determine the share of point and diffuse sources in the heavy metal load at a monitoring station by establishing a discharge dependency. The results from both the emission analysis and the source apportionment analysis of observed loads were compared and gave similar results. Between 51% (for Hg) and 74% (for Pb) of the total transport in the Elbe basin is supplied by inputs from diffuse sources. In the Rhine basin diffuse source inputs dominate the total transport and deliver more than 70% of the total transport. The diffuse hydrological pathways with the highest share are erosion and urban areas.

  15. A spatial model to aggregate point-source and nonpoint-source water-quality data for large areas

    USGS Publications Warehouse

    White, D.A.; Smith, R.A.; Price, C.V.; Alexander, R.B.; Robinson, K.W.

    1992-01-01

More objective and consistent methods are needed to assess water quality over large areas. A spatial model that capitalizes on the topologic relationships among spatial entities is described for aggregating pollution sources from upstream drainage areas; it can be implemented on land surfaces having heterogeneous water-pollution effects. An infrastructure of stream networks and drainage basins, derived from 1:250,000-scale digital elevation models, defines the hydrologic system in this spatial model. The spatial relationships between point- and nonpoint-pollution sources and measurement locations are referenced to the hydrologic infrastructure with the aid of a geographic information system. A maximum-branching algorithm has been developed to simulate the effect of distance from a pollutant source to an arbitrary downstream location, a function traditionally employed in deterministic water-quality models. © 1992.
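The distance-dependent aggregation the model performs can be sketched as a walk down a reach network, attenuating each upstream source load exponentially over its flow-path distance; this is not the paper's maximum-branching algorithm itself, and the network, loads, and decay rate below are hypothetical.

```python
import math

# Hypothetical stream network: reach -> downstream reach (None = outlet),
# with a local source load (kg/yr) and reach length (km) for each reach.
downstream = {"A": "C", "B": "C", "C": "D", "D": None}
load_kg_yr = {"A": 100.0, "B": 50.0, "C": 20.0, "D": 5.0}
length_km = {"A": 10.0, "B": 15.0, "C": 8.0, "D": 12.0}
DECAY_PER_KM = 0.02   # assumed first-order in-stream loss rate

def delivered_load(target):
    """Sum every reach's source load, attenuated exponentially over the
    flow-path distance from that reach down to the target reach."""
    total = 0.0
    for reach in load_kg_yr:
        node, dist = reach, 0.0
        while node is not None and node != target:
            dist += length_km[node]
            node = downstream[node]
        if node == target:   # this reach drains to the target
            total += load_kg_yr[reach] * math.exp(-DECAY_PER_KM * dist)
    return total

print(round(delivered_load("D"), 1))  # → 123.4, aggregate load at the outlet
```

In a GIS implementation the `downstream` table would come from the digital-elevation-model stream network, and the decay function would be calibrated rather than assumed.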

  16. Water-quality assessment of the largely urban blue river basin, Metropolitan Kansas City, USA, 1998 to 2007

    USGS Publications Warehouse

    Wilkison, D.H.; Armstrong, D.J.; Hampton, S.A.

    2009-01-01

From 1998 through 2007, over 750 surface-water or bed-sediment samples from the Blue River Basin, a largely urban basin in metropolitan Kansas City, were analyzed for more than 100 anthropogenic compounds. Analytes included nutrients, fecal-indicator bacteria, suspended sediment, pharmaceuticals, and personal care products. Non-point source runoff, hydrologic alterations, and numerous wastewater discharge points resulted in the routine detection of complex mixtures of anthropogenic compounds in samples from basin stream sites. Temporal and spatial variations in the concentrations and loads of nutrients, pharmaceuticals, and organic wastewater compounds were observed, related primarily to a site's proximity to point-source discharges and to stream-flow dynamics. © 2009 ASCE.

  17. Statistical Characterization of the Chandra Source Catalog

    NASA Astrophysics Data System (ADS)

    Primini, Francis A.; Houck, John C.; Davis, John E.; Nowak, Michael A.; Evans, Ian N.; Glotfelty, Kenny J.; Anderson, Craig S.; Bonaventura, Nina R.; Chen, Judy C.; Doe, Stephen M.; Evans, Janet D.; Fabbiano, Giuseppina; Galle, Elizabeth C.; Gibbs, Danny G.; Grier, John D.; Hain, Roger M.; Hall, Diane M.; Harbo, Peter N.; He, Xiangqun Helen; Karovska, Margarita; Kashyap, Vinay L.; Lauer, Jennifer; McCollough, Michael L.; McDowell, Jonathan C.; Miller, Joseph B.; Mitschang, Arik W.; Morgan, Douglas L.; Mossman, Amy E.; Nichols, Joy S.; Plummer, David A.; Refsdal, Brian L.; Rots, Arnold H.; Siemiginowska, Aneta; Sundheim, Beth A.; Tibbetts, Michael S.; Van Stone, David W.; Winkelman, Sherry L.; Zografou, Panagoula

    2011-06-01

    The first release of the Chandra Source Catalog (CSC) contains ~95,000 X-ray sources in a total area of 0.75% of the entire sky, using data from ~3900 separate ACIS observations of a multitude of different types of X-ray sources. In order to maximize the scientific benefit of such a large, heterogeneous data set, careful characterization of the statistical properties of the catalog, i.e., completeness, sensitivity, false source rate, and accuracy of source properties, is required. Characterization efforts of other large Chandra catalogs, such as the ChaMP Point Source Catalog or the 2 Mega-second Deep Field Surveys, while informative, cannot serve this purpose, since the CSC analysis procedures are significantly different and the range of allowable data is much less restrictive. We describe here the characterization process for the CSC. This process includes both a comparison of real CSC results with those of other, deeper Chandra catalogs of the same targets and extensive simulations of blank-sky and point-source populations.

  18. 40 CFR 428.70 - Applicability; description of the large-sized general molded, extruded, and fabricated rubber...

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... RUBBER MANUFACTURING POINT SOURCE CATEGORY Large-Sized General Molded, Extruded, and Fabricated Rubber..., foam rubber backing, rubber cement-dipped goods, and retreaded tires by large-sized plants...

  19. Methane Flux Estimation from Point Sources using GOSAT Target Observation: Detection Limit and Improvements with Next Generation Instruments

    NASA Astrophysics Data System (ADS)

    Kuze, A.; Suto, H.; Kataoka, F.; Shiomi, K.; Kondo, Y.; Crisp, D.; Butz, A.

    2017-12-01

Atmospheric methane (CH4) plays an important role in global radiative forcing of climate, but its emission estimates have larger uncertainties than those of carbon dioxide (CO2). The area of an anthropogenic emission source is usually much smaller than 100 km2. The Thermal And Near infrared Sensor for carbon Observation Fourier-Transform Spectrometer (TANSO-FTS) onboard the Greenhouse gases Observing SATellite (GOSAT) has measured CO2 and CH4 column density using sunlight reflected from the earth's surface. It has an agile pointing system, and its footprint can cover 87 km2 with a single detector. By specifying pointing angles and observation times for every orbit, TANSO-FTS can target various CH4 point sources, together with reference points, every 3 days over a period of years. We selected a reference point that represents the CH4 background density before or after targeting a point source. By combining the satellite-measured enhancement of the CH4 column density with surface-measured wind data or estimates from the Weather Research and Forecasting (WRF) model, we estimated CH4 emission amounts. Here, we selected two sites on the US West Coast, where clear-sky frequency is high and a series of data are available. The natural gas leak at Aliso Canyon showed a large enhancement and its decrease with time since the initial blowout. We present a time series of flux estimates assuming the source is a single point with no influx. The cattle feedlot in Chino, California has a weather station within the TANSO-FTS footprint. The wind speed is monitored continuously, and the wind direction is stable at the time of the GOSAT overpass. The large TANSO-FTS footprint and strong wind decrease the enhancement below the noise level. Weak wind yields measurable CH4 enhancements, but the velocity data have large uncertainties. We show the detection limit of single samples and how to reduce uncertainty using a time series of satellite data. 
We propose that the next generation of instruments for accurate anthropogenic CO2 and CH4 flux estimation have improved spatial resolution (~1 km2) to further enhance column density changes. We also propose adding imaging capability to monitor plume orientation. We will present laboratory model results and a sampling-pattern optimization study that combines local emission source and global survey observations.
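The single-point mass-balance estimate described above amounts to multiplying the column enhancement by the wind speed and an effective cross-wind plume width. A minimal sketch with illustrative values; the enhancement, wind speed, and plume width below are assumptions, not GOSAT retrievals.

```python
# Mass-balance sketch: emission rate from a satellite-measured CH4 column
# enhancement and surface wind, assuming a single point source, steady
# wind, and no inflow. All numeric inputs are illustrative.

M_CH4 = 16.04e-3        # kg/mol, molar mass of methane
AVOGADRO = 6.022e23     # molecules/mol

def point_source_flux_kg_s(delta_column_molec_cm2, wind_m_s, plume_width_m):
    """Q ~ (column enhancement) * (wind speed) * (cross-wind plume width)."""
    # molecules/cm^2 -> kg/m^2 (1 m^2 = 1e4 cm^2)
    enhancement_kg_m2 = delta_column_molec_cm2 * 1e4 / AVOGADRO * M_CH4
    return enhancement_kg_m2 * wind_m_s * plume_width_m

# Illustrative: 2e17 molec/cm^2 enhancement, 3 m/s wind, 1 km plume width
q = point_source_flux_kg_s(2e17, 3.0, 1000.0)
print(f"{q:.2f} kg/s  ({q * 3600:.0f} kg/hr)")
```

The linear dependence on wind speed is why the abstract stresses wind uncertainty: a 30% error in the wind propagates directly into a 30% error in the flux.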

  20. GARLIC, A SHIELDING PROGRAM FOR GAMMA RADIATION FROM LINE- AND CYLINDER- SOURCES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Roos, M.

    1959-06-01

GARLIC is a program for computing the gamma-ray flux or dose rate at a shielded isotropic point detector due to a line source or the line equivalent of a cylindrical source. The source strength distribution along the line must be either uniform or an arbitrary part of the positive half-cycle of a cosine function. The line source can be oriented arbitrarily with respect to the main shield and the detector, except that the detector must not be located on the line source or on its extension. The main shield is a homogeneous plane slab in which scattered radiation is accounted for by multiplying each point element of the line source by a point-source buildup factor inside the integral over the point elements. Between the main shield and the line source, additional shields can be introduced, which are either plane slabs parallel to the main shield or cylindrical rings coaxial with the line source. Scattered radiation in the additional shields can only be accounted for by constant buildup factors outside the integral. GARLIC-xyz is an extended version particularly suited to the frequently met problem of shielding a room containing a large number of line sources in different positions. The program computes the angles and linear dimensions of a problem for GARLIC when the positions of the detector point and the end points of the line source are given as points in an arbitrary rectangular coordinate system. As an example, isodose curves in water are presented for a monoenergetic cosine-distributed line source at several source energies and for an operating fuel element of the Swedish reactor R3. (auth)
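The integral over point elements that GARLIC evaluates can be sketched numerically: each element of the line source contributes an attenuated inverse-square term, and a buildup factor could multiply each element inside the sum, as the record describes. This midpoint-rule sketch assumes a uniform source strength and a single attenuation coefficient along the straight ray, ignoring buildup and slant-path shield geometry.

```python
import math

def line_source_flux(s_l, length_m, det_x_m, det_y_m, mu, n_seg=1000):
    """Uncollided flux at a point detector from a uniform line source
    lying on the x-axis from 0 to length_m.

    Each point element contributes s_l * dl * exp(-mu * r) / (4 pi r^2);
    a point-source buildup factor could multiply each term in the sum."""
    total, dl = 0.0, length_m / n_seg
    for i in range(n_seg):
        x = (i + 0.5) * dl                     # midpoint of this element
        r = math.hypot(det_x_m - x, det_y_m)   # element-to-detector distance
        total += s_l * dl * math.exp(-mu * r) / (4 * math.pi * r**2)
    return total

# Unattenuated check: detector 1 m off the midpoint of a 2 m line has the
# closed-form flux S_l/(4*pi) * 2*atan(L/2d)/d = 1/8 here.
print(line_source_flux(1.0, 2.0, 1.0, 1.0, mu=0.0))
```

With mu > 0 the same routine gives the shielded uncollided flux; GARLIC's refinement is evaluating the shield thickness and buildup along each element's actual slant path.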

  1. Light refocusing with up-scalable resonant waveguide gratings in confocal prolate spheroid arrangements

    NASA Astrophysics Data System (ADS)

    Quaranta, Giorgio; Basset, Guillaume; Benes, Zdenek; Martin, Olivier J. F.; Gallinet, Benjamin

    2018-01-01

Resonant waveguide gratings (RWGs) are thin-film structures in which coupled modes interfere with the diffracted incoming wave and produce strong angular and spectral filtering. The combination of two finite-length, impedance-matched RWGs allows the creation of a passive beam-steering element that is compatible with up-scalable fabrication processes. Here, we propose a design method to create large patterns of such elements able to filter, steer, and focus light from one point source to another. The method is based on ellipsoidal mirrors: a system of confocal prolate spheroids is chosen whose two focal points are the source point and the observation point, respectively. This allows finding the proper orientation and position of each RWG element of the pattern such that the phase is constructively preserved at the observation point. The design techniques presented here could be implemented in a variety of systems where large-scale patterns are needed, such as optical security, multifocal or monochromatic lenses, biosensors, and see-through optical combiners for near-eye displays.

  2. The Development and Application of Spatiotemporal Metrics for the Characterization of Point Source FFCO2 Emissions and Dispersion

    NASA Astrophysics Data System (ADS)

    Roten, D.; Hogue, S.; Spell, P.; Marland, E.; Marland, G.

    2017-12-01

There is an increasing role for high-resolution CO2 emissions inventories across multiple arenas. The breadth of their applicability is apparent from their use in atmospheric CO2 modeling, their potential for validating space-based atmospheric CO2 remote sensing, and their role in the development of climate-change policy. This work focuses on increasing our understanding of the uncertainty in these inventories and its implications for their downstream use. The industrial point sources of emissions (power generating stations, cement manufacturing plants, paper mills, etc.) used in the creation of these inventories often have robust emissions characteristics beyond just their geographic location. Physical parameters of the emission sources, such as the number of exhaust stacks, stack heights, stack diameters, exhaust temperatures, and exhaust velocities, as well as temporal variability and climatic influences, can be important in characterizing emissions. Emissions from large point sources can behave much differently than emissions from areal sources such as automobiles, so for many applications geographic location alone is not an adequate characterization. This work demonstrates the sensitivity of atmospheric models to the physical parameters of large point sources and provides a methodology for quantifying parameter impacts at multiple locations across the United States. The sensitivities highlight the importance of location and timing and point to aspects that can guide efforts to reduce uncertainty in emissions inventories and increase the utility of the models.
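The sensitivity to stack parameters can be illustrated with a textbook Gaussian plume: holding the emission rate fixed and varying only the effective stack height changes the ground-level concentration downwind by an order of magnitude. The dispersion coefficients below are rough illustrative power-law fits for near-neutral conditions, not the models used in this work.

```python
import math

def ground_conc(q_g_s, u_m_s, h_stack_m, x_m, y_m=0.0):
    """Ground-level Gaussian plume concentration (g/m^3) with full ground
    reflection. Dispersion widths use a rough power-law fit for neutral
    (Pasquill D) conditions -- illustrative values only."""
    sigma_y = 0.08 * x_m / math.sqrt(1 + 0.0001 * x_m)
    sigma_z = 0.06 * x_m / math.sqrt(1 + 0.0015 * x_m)
    return (q_g_s / (math.pi * u_m_s * sigma_y * sigma_z)
            * math.exp(-y_m**2 / (2 * sigma_y**2))
            * math.exp(-h_stack_m**2 / (2 * sigma_z**2)))

# Same emission rate and wind, two effective stack heights: the
# concentration 2 km downwind differs by more than a factor of 10,
# which is why stack parameters matter to inventory users.
c_low = ground_conc(q_g_s=1000.0, u_m_s=5.0, h_stack_m=50.0, x_m=2000.0)
c_high = ground_conc(q_g_s=1000.0, u_m_s=5.0, h_stack_m=150.0, x_m=2000.0)
print(c_low, c_high)
```

Plume rise from exhaust temperature and velocity would further raise the effective height, compounding the sensitivity the abstract describes.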

  3. Galaxy evolution and large-scale structure in the far-infrared. I - IRAS pointed observations

    NASA Astrophysics Data System (ADS)

    Lonsdale, Carol J.; Hacking, Perry B.

    1989-04-01

    Redshifts for 66 galaxies were obtained from a sample of 93 60-micron sources detected serendipitously in 22 IRAS deep pointed observations, covering a total area of 18.4 sq deg. The flux density limit of this survey is 150 mJy, 4 times fainter than the IRAS Point Source Catalog (PSC). The luminosity function is similar in shape to those previously published for samples selected from the PSC, with a median redshift of 0.048 for the fainter sample, but shifted to higher space densities. There is evidence that some of the excess number counts in the deeper sample can be explained in terms of a large-scale density enhancement beyond the Pavo-Indus supercluster. In addition, the faintest counts in the new sample confirm the result of Hacking et al. (1989) that faint IRAS 60-micron source counts lie significantly in excess of an extrapolation of the PSC counts assuming no luminosity or density evolution.

  4. Galaxy evolution and large-scale structure in the far-infrared. I. IRAS pointed observations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lonsdale, C.J.; Hacking, P.B.

    1989-04-01

    Redshifts for 66 galaxies were obtained from a sample of 93 60-micron sources detected serendipitously in 22 IRAS deep pointed observations, covering a total area of 18.4 sq deg. The flux density limit of this survey is 150 mJy, 4 times fainter than the IRAS Point Source Catalog (PSC). The luminosity function is similar in shape to those previously published for samples selected from the PSC, with a median redshift of 0.048 for the fainter sample, but shifted to higher space densities. There is evidence that some of the excess number counts in the deeper sample can be explained in terms of a large-scale density enhancement beyond the Pavo-Indus supercluster. In addition, the faintest counts in the new sample confirm the result of Hacking et al. (1989) that faint IRAS 60-micron source counts lie significantly in excess of an extrapolation of the PSC counts assuming no luminosity or density evolution. 81 refs.

  5. Galaxy evolution and large-scale structure in the far-infrared. I - IRAS pointed observations

    NASA Technical Reports Server (NTRS)

    Lonsdale, Carol J.; Hacking, Perry B.

    1989-01-01

    Redshifts for 66 galaxies were obtained from a sample of 93 60-micron sources detected serendipitously in 22 IRAS deep pointed observations, covering a total area of 18.4 sq deg. The flux density limit of this survey is 150 mJy, 4 times fainter than the IRAS Point Source Catalog (PSC). The luminosity function is similar in shape to those previously published for samples selected from the PSC, with a median redshift of 0.048 for the fainter sample, but shifted to higher space densities. There is evidence that some of the excess number counts in the deeper sample can be explained in terms of a large-scale density enhancement beyond the Pavo-Indus supercluster. In addition, the faintest counts in the new sample confirm the result of Hacking et al. (1989) that faint IRAS 60-micron source counts lie significantly in excess of an extrapolation of the PSC counts assuming no luminosity or density evolution.

  6. Processing Uav and LIDAR Point Clouds in Grass GIS

    NASA Astrophysics Data System (ADS)

    Petras, V.; Petrasova, A.; Jeziorska, J.; Mitasova, H.

    2016-06-01

    Today's methods of acquiring Earth surface data, namely lidar and unmanned aerial vehicle (UAV) imagery, non-selectively collect or generate large amounts of points. Point clouds from different sources vary in properties such as the number of returns, density, or quality. We present a set of tools with applications for different types of point clouds obtained by a lidar scanner, the structure-from-motion (SfM) technique, and a low-cost 3D scanner. To take advantage of the vertical structure of multiple-return lidar point clouds, we demonstrate tools to process them using 3D raster techniques, which allow, for example, the development of custom vegetation classification methods. Dense point clouds obtained from UAV imagery, which often contain redundant points, can be decimated using various techniques before further processing. We implemented and compared several decimation techniques with regard to their performance and the resulting digital surface model (DSM). Finally, we describe the processing of a point cloud from a low-cost 3D scanner, namely the Microsoft Kinect, and its application for interaction with physical models. All the presented tools are open source and integrated in GRASS GIS, a multi-purpose open source GIS with remote sensing capabilities. The tools integrate with other open source projects, specifically the Point Data Abstraction Library (PDAL), the Point Cloud Library (PCL), and the OpenKinect libfreenect2 library, to benefit from the open source point cloud ecosystem. The implementation in GRASS GIS ensures long-term maintenance and reproducibility by the scientific community as well as by the original authors themselves.
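One common decimation strategy is grid-based thinning, which keeps a single representative point per cell; GRASS GIS offers several decimation methods, so the sketch below is only an illustration, not the implementation compared in this work:

```python
def grid_decimate(points, cell):
    """Keep one representative point per XY grid cell (simple grid thinning).

    Here the highest point in each cell is kept, as one might do when the
    target product is a digital surface model.
    """
    kept = {}
    for x, y, z in points:
        key = (int(x // cell), int(y // cell))
        if key not in kept or z > kept[key][2]:
            kept[key] = (x, y, z)
    return list(kept.values())

# A dense 100 x 100 grid of points thinned to one point per 1 m cell:
dense = [(i * 0.1, j * 0.1, (i + j) * 0.01) for i in range(100) for j in range(100)]
thinned = grid_decimate(dense, cell=1.0)  # 10 x 10 cells
```

The choice of representative (highest, lowest, first, or an average) is exactly the kind of design decision that changes the resulting DSM.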

  7. Atmospheric inverse modeling via sparse reconstruction

    NASA Astrophysics Data System (ADS)

    Hase, Nils; Miller, Scot M.; Maaß, Peter; Notholt, Justus; Palm, Mathias; Warneke, Thorsten

    2017-10-01

    Many applications in atmospheric science involve ill-posed inverse problems. A crucial component of many inverse problems is the proper formulation of a priori knowledge about the unknown parameters. In most cases, this knowledge is expressed as a Gaussian prior. This formulation often performs well at capturing smoothed, large-scale processes but is often ill-equipped to capture localized structures like large point sources or localized hot spots. Over the last decade, scientists from a diverse array of applied mathematics and engineering fields have developed sparse reconstruction techniques to identify localized structures. In this study, we present a new regularization approach for ill-posed inverse problems in atmospheric science. It is based on Tikhonov regularization with a sparsity constraint and allows bounds on the parameters. We enforce sparsity using a dictionary representation system. We analyze its performance in an atmospheric inverse modeling scenario by estimating anthropogenic US methane (CH4) emissions from simulated atmospheric measurements. Different measures indicate that our sparse reconstruction approach is better able to capture large point sources or localized hot spots than other methods commonly used in atmospheric inversions. It captures the overall signal equally well but adds details on the grid scale. This feature can be of value for any inverse problem with point or spatially discrete sources. We show an example for source estimation of synthetic methane emissions from the Barnett shale formation.
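The sparsity-promoting regularization described above can be illustrated with a minimal iterative soft-thresholding (ISTA) sketch for the generic lasso problem min (1/2)||Ax − b||² + λ||x||₁; note this is a textbook sketch, not the paper's dictionary-based, bounded formulation:

```python
def matvec(a, x):
    return [sum(aij * xj for aij, xj in zip(row, x)) for row in a]

def transpose(a):
    return [list(col) for col in zip(*a)]

def ista(a, b, lam, step, iters=2000):
    """Iterative soft-thresholding for min 0.5*||Ax - b||^2 + lam*||x||_1."""
    at = transpose(a)
    x = [0.0] * len(a[0])
    for _ in range(iters):
        r = [ai - bi for ai, bi in zip(matvec(a, x), b)]
        g = matvec(at, r)  # gradient A^T (Ax - b) of the quadratic term
        x = [xi - step * gi for xi, gi in zip(x, g)]
        # soft-thresholding promotes sparsity (localized "point sources")
        x = [max(abs(xi) - step * lam, 0.0) * (1.0 if xi > 0 else -1.0)
             for xi in x]
    return x

# A sparse "emission" vector with one active point source is recovered:
a = [[1.0, 0.5, 0.2], [0.3, 1.0, 0.4], [0.1, 0.2, 1.0]]
true_x = [0.0, 2.0, 0.0]
b = matvec(a, true_x)
est = ista(a, b, lam=0.1, step=0.2)
```

The l1 penalty drives the inactive components exactly to zero, which is the behaviour a Gaussian prior cannot reproduce.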

  8. Analysis of non-point and point source pollution in China: case study in Shima Watershed in Guangdong Province

    NASA Astrophysics Data System (ADS)

    Fang, Huaiyang; Lu, Qingshui; Gao, Zhiqiang; Shi, Runhe; Gao, Wei

    2013-09-01

    China's economy has grown rapidly since 1978. Rapid economic growth led to fast growth in fertilizer and pesticide consumption, and a significant portion of these fertilizers and pesticides entered the water and degraded water quality. At the same time, rapid economic growth also caused more and more point source pollution to be discharged into the water, and eutrophication has become a major threat to water bodies. Worsening environmental problems forced governments to take measures to control water pollution. We extracted land cover from Landsat TM images, calculated point source pollution with the export coefficient method, and then ran the SWAT model to simulate non-point source pollution. We found that the annual TP load from industrial pollution into rivers is 115.0 t for the entire watershed. Average annual TP loads from individual sub-basins ranged from 0 to 189.4 tons. Higher TP loads from livestock and human activity occur mainly in areas that are far from large towns or cities and where the TP loads from industry are relatively low. The mean annual TP load delivered to the streams was 246.4 tons; the highest TP loads occurred in the northern part of the area, and the lowest mainly in the middle part. Point source pollution therefore accounts for a high proportion of the total in this area, and governments should take measures to control it.

  9. Test method for telescopes using a point source at a finite distance

    NASA Technical Reports Server (NTRS)

    Griner, D. B.; Zissa, D. E.; Korsch, D.

    1985-01-01

    A test method for telescopes that makes use of a focused ring formed by an annular aperture when using a point source at a finite distance is evaluated theoretically and experimentally. The results show that the concept can be applied to near-normal, as well as grazing incidence. It is particularly suited for X-ray telescopes because of their intrinsically narrow annular apertures, and because of the largely reduced diffraction effects.

  10. The VLITE Post-Processing Pipeline

    NASA Astrophysics Data System (ADS)

    Richards, Emily E.; Clarke, Tracy; Peters, Wendy; Polisensky, Emil; Kassim, Namir E.

    2018-01-01

    A post-processing pipeline to adaptively extract and catalog point sources is being developed to enhance the scientific value and accessibility of data products generated by the VLA Low-band Ionosphere and Transient Experiment (VLITE) on the Karl G. Jansky Very Large Array (VLA). In contrast to other radio sky surveys, the commensal observing mode of VLITE results in varying depths, sensitivities, and spatial resolutions across the sky, depending on the configuration of the VLA, the location on the sky, and the time on source specified by the primary observer for their independent science objectives. Previously developed tools and methods for generating source catalogs and survey statistics are therefore not always appropriate for VLITE's diverse and growing set of data. A raw catalog of point sources extracted from every VLITE image will be created from source fit parameters stored in a queryable database. Point sources will be measured using the Python Blob Detector and Source Finder software (PyBDSF; Mohan & Rafferty 2015). Sources in the raw catalog will be associated with previous VLITE detections in a resolution- and sensitivity-dependent manner, and cross-matched to other radio sky surveys to aid in the detection of transient and variable sources. Final data products will include separate, tiered point source catalogs grouped by sensitivity limit and spatial resolution.

  11. Time-Domain Filtering for Spatial Large-Eddy Simulation

    NASA Technical Reports Server (NTRS)

    Pruett, C. David

    1997-01-01

    An approach to large-eddy simulation (LES) is developed whose subgrid-scale model incorporates filtering in the time domain, in contrast to conventional approaches, which exploit spatial filtering. The method is demonstrated in the simulation of a heated, compressible, axisymmetric jet, and results are compared with those obtained from fully resolved direct numerical simulation. The present approach was, in fact, motivated by the jet-flow problem and the desire to manipulate the flow by localized (point) sources for the purposes of noise suppression. Time-domain filtering appears to be more consistent with the modeling of point sources; moreover, time-domain filtering may resolve some fundamental inconsistencies associated with conventional space-filtered LES approaches.
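A minimal sketch of one possible time-domain filter is a causal exponential kernel, ū(t) = ∫ u(s) e^{−(t−s)/Δ}/Δ ds, discretized recursively as an exponential moving average (an illustrative kernel choice, not necessarily the one used in this work):

```python
import math

def exp_time_filter(signal, dt, delta):
    """Causal exponential time filter, discretized as a recursive EMA.

    dt is the sample spacing and delta the filter time scale; only past
    samples contribute, which is what makes the filter causal.
    """
    alpha = dt / (delta + dt)
    filtered, state = [], signal[0]
    for u in signal:
        state = state + alpha * (u - state)
        filtered.append(state)
    return filtered

# High-frequency content is strongly damped while the mean is preserved:
dt = 0.01
sig = [1.0 + math.sin(200.0 * k * dt) for k in range(1000)]
smooth = exp_time_filter(sig, dt, delta=0.1)
```

Because the recursion only uses past values, such a filter can be applied on the fly during a simulation, which is part of the appeal of time-domain filtering for LES.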

  12. Tracking Nitrogen Sources, Transformation, and Transport at a Basin Scale with Complex Plain River Networks.

    PubMed

    Yi, Qitao; Chen, Qiuwen; Hu, Liuming; Shi, Wenqing

    2017-05-16

    This research developed an innovative approach to reveal nitrogen sources, transformation, and transport in large and complex river networks in the Taihu Lake basin using measurements of the dual stable isotopes of nitrate. The spatial patterns of δ15N corresponded to the urbanization level, and the nitrogen cycle was associated with the hydrological regime at the basin level. During the high flow season of summer, nonpoint sources from fertilizer/soils and atmospheric deposition constituted the highest proportion of the total nitrogen load. The point sources from sewage/manure, with high ammonium concentrations and high δ15N and δ18O contents in the form of nitrate, accounted for the largest inputs among all sources during the low flow season of winter. Hot spot areas with heavy point source pollution were identified, and the pollutant transport routes were revealed. Nitrification occurred widely during the warm seasons, with decreased δ18O values, whereas great potential for denitrification existed during the low flow seasons of autumn and spring. The study showed that point source reduction could have effects over the short term; however, long-term efforts to substantially control agricultural nonpoint sources are essential to alleviating eutrophication in the receiving lake, which clarifies the relationship between point and nonpoint source control.
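Source apportionment from isotope data of this kind is often based on a two-endmember mixing model; a minimal sketch with illustrative δ15N endmember values (assumptions for illustration, not values from this study):

```python
def mixing_fraction(delta_mix, delta_a, delta_b):
    """Fraction of source A under two-endmember isotope mixing:
    delta_mix = f * delta_a + (1 - f) * delta_b, solved for f."""
    return (delta_mix - delta_b) / (delta_a - delta_b)

# Illustrative delta-15N values (permil): sewage/manure endmember ~ +12,
# fertilizer/soil endmember ~ +2, measured river nitrate ~ +9
f_sewage = mixing_fraction(delta_mix=9.0, delta_a=12.0, delta_b=2.0)
```

Real dual-isotope studies combine δ15N and δ18O (and often Bayesian mixing models) precisely because a single isotope cannot separate more than two sources or account for fractionation by denitrification.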

  13. Improved moving source photometry with TRIPPy

    NASA Astrophysics Data System (ADS)

    Alexandersen, Mike; Fraser, Wesley Cristopher

    2017-10-01

    Photometry of moving sources is more complicated than for stationary sources, because moving sources trail their signal over more pixels than a point source of the same magnitude. Using a circular aperture of the same size as would be appropriate for point sources can cut out a large amount of flux if a source moves substantially relative to the size of the aperture during the exposure, resulting in underestimated fluxes. Using a large circular aperture can mitigate this issue, but at the cost of a significantly reduced signal-to-noise ratio compared to a point source, as a result of the inclusion of a larger background region within the aperture. Trailed Image Photometry in Python (TRIPPy) solves this problem by using a pill-shaped aperture: the traditional circular aperture is sliced in half perpendicular to the direction of motion, and the two halves are separated by a rectangle as long as the total motion of the source during the exposure. TRIPPy can also calculate the appropriate aperture correction (which depends on both the radius and the trail length of the pill-shaped aperture), and has features for selecting good PSF stars, creating a PSF model (convolved Moffat profile + lookup table), and selecting a custom sky-background area in order to ensure that no other sources contribute to the background estimate. In this poster, we present an overview of TRIPPy's features and demonstrate the improvements over photometry obtained by other methods, with examples from real projects where TRIPPy has been implemented to obtain the best possible photometric measurements of Solar System objects. While TRIPPy has so far mainly been used for Trans-Neptunian Objects, the improvement from the pill-shaped aperture increases with source motion, making TRIPPy highly relevant for asteroid and centaur photometry as well.
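The pill-aperture geometry can be sketched as a rectangle capped by two half-circles; the toy implementation below (a geometric illustration, not TRIPPy's actual code) sums pixel flux inside such an aperture:

```python
import math

def in_pill(x, y, r, trail):
    """True if (x, y) lies in a pill aperture centred on the origin:
    a rectangle of length `trail` capped by two half-circles of radius r."""
    half = trail / 2.0
    if abs(x) <= half:                 # rectangular mid-section
        return abs(y) <= r
    cx = half if x > 0 else -half      # nearest end-cap centre
    return (x - cx) ** 2 + y ** 2 <= r * r

def pill_flux(image, cx, cy, r, trail):
    """Sum pixel values inside the pill aperture (motion along x assumed)."""
    return sum(v for (px, py), v in image.items()
               if in_pill(px - cx, py - cy, r, trail))

# Toy image of uniform unit flux: the summed flux approximates the pill
# area, pi*r^2 + 2*r*trail, up to pixelization error.
image = {(i, j): 1.0 for i in range(40) for j in range(40)}
flux = pill_flux(image, cx=20.0, cy=20.0, r=5.0, trail=10.0)
expected = math.pi * 25.0 + 2.0 * 5.0 * 10.0
```

As the trail length goes to zero the pill reduces to the usual circular aperture, so the trailed and stationary cases are handled by one geometry.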

  14. Stream Kriging: Incremental and recursive ordinary Kriging over spatiotemporal data streams

    NASA Astrophysics Data System (ADS)

    Zhong, Xu; Kealy, Allison; Duckham, Matt

    2016-05-01

    Ordinary Kriging is widely used for geospatial interpolation and estimation. Due to the O(n³) time complexity of solving the system of linear equations, ordinary Kriging for a large set of source points is computationally intensive. Conducting real-time Kriging interpolation over continuously varying spatiotemporal data streams can therefore be especially challenging. This paper develops and tests two new strategies for improving the performance of an ordinary Kriging interpolator adapted to a stream-processing environment. These strategies rely on the expectation that, over time, source data points will frequently refer to the same spatial locations (for example, where static sensor nodes are generating repeated observations of a dynamic field). First, an incremental strategy improves efficiency in cases where a relatively small proportion of previously processed spatial locations are absent from the source points at any given iteration. Second, a recursive strategy improves efficiency in cases where there is substantial overlap between the sets of spatial locations of source points at the current and previous iterations. These two strategies are evaluated in terms of their computational efficiency in comparison to the standard ordinary Kriging algorithm. The results show that they can reduce the time taken to perform the interpolation by up to 90%, and approach an average-case time complexity of O(n²) when most but not all source points refer to the same locations over time. By combining the approaches developed in this paper with existing heuristic ordinary Kriging algorithms, further efficiency gains could potentially be accrued. The work ultimately contributes to the development of online ordinary Kriging interpolation algorithms capable of real-time spatial interpolation with large streaming data sets.
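For reference, a minimal ordinary Kriging sketch: building and solving the (n+1)×(n+1) linear system below is the O(n³) step that the incremental and recursive strategies amortize across iterations (the linear variogram is an illustrative model choice):

```python
import math

def solve(a, b):
    """Gaussian elimination with partial pivoting (small systems only)."""
    n = len(a)
    m = [row[:] + [bi] for row, bi in zip(a, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(col + 1, n):
            f = m[r][col] / m[col][col]
            for c in range(col, n + 1):
                m[r][c] -= f * m[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (m[r][n] - sum(m[r][c] * x[c] for c in range(r + 1, n))) / m[r][r]
    return x

def ordinary_krige(pts, vals, x0, gamma):
    """Ordinary Kriging estimate at x0 with variogram gamma and the
    unbiasedness constraint (weights sum to one via a Lagrange multiplier)."""
    n = len(pts)
    a = [[gamma(pts[i], pts[j]) for j in range(n)] + [1.0] for i in range(n)]
    a.append([1.0] * n + [0.0])
    b = [gamma(p, x0) for p in pts] + [1.0]
    w = solve(a, b)[:n]
    return sum(wi * zi for wi, zi in zip(w, vals))

gamma = lambda p, q: math.hypot(p[0] - q[0], p[1] - q[1])  # linear variogram
pts = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
vals = [1.0, 2.0, 2.0, 3.0]
est = ordinary_krige(pts, vals, (0.5, 0.5), gamma)  # 2.0 by symmetry
```

When the point locations repeat between iterations, the matrix on the left-hand side is unchanged, which is exactly what the incremental and recursive stream strategies exploit.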

  15. Innovations in the Analysis of Chandra-ACIS Observations

    NASA Astrophysics Data System (ADS)

    Broos, Patrick S.; Townsley, Leisa K.; Feigelson, Eric D.; Getman, Konstantin V.; Bauer, Franz E.; Garmire, Gordon P.

    2010-05-01

    As members of the instrument team for the Advanced CCD Imaging Spectrometer (ACIS) on NASA's Chandra X-ray Observatory and as Chandra General Observers, we have developed a wide variety of data analysis methods that we believe are useful to the Chandra community, and have constructed a significant body of publicly available software (the ACIS Extract package) addressing important ACIS data and science analysis tasks. This paper seeks to describe these data analysis methods for two purposes: to document the data analysis work performed in our own science projects and to help other ACIS observers judge whether these methods may be useful in their own projects (regardless of what tools and procedures they choose to implement those methods). The ACIS data analysis recommendations we offer here address much of the workflow in a typical ACIS project, including data preparation, point source detection via both wavelet decomposition and image reconstruction, masking point sources, identification of diffuse structures, event extraction for both point and diffuse sources, merging extractions from multiple observations, nonparametric broadband photometry, analysis of low-count spectra, and automation of these tasks. Many of the innovations presented here arise from several, often interwoven, complications that are found in many Chandra projects: large numbers of point sources (hundreds to several thousand), faint point sources, misaligned multiple observations of an astronomical field, point source crowding, and scientifically relevant diffuse emission.

  16. Extracting spatial information from large aperture exposures of diffuse sources

    NASA Technical Reports Server (NTRS)

    Clarke, J. T.; Moos, H. W.

    1981-01-01

    The spatial properties of large aperture exposures of diffuse emission can be used both to investigate spatial variations in the emission and to filter out camera noise in exposures of weak emission sources. Spatial imaging can be accomplished both parallel and perpendicular to dispersion with a resolution of 5-6 arc sec, and a narrow median filter running perpendicular to dispersion across a diffuse image selectively filters out point source features, such as reseaux marks and fast particle hits. Spatial information derived from observations of solar system objects is presented.

  17. Opening the black box: evaluation of nutrient nonpoint source management for estuarine watersheds

    EPA Science Inventory

    Over the last 40 years, there have been significant improvements in water quality and ecosystem condition in estuaries stressed by nutrient enrichment. However, documented improvements have been largely attributed to reductions in point sources. In contrast, improvement of coasta...

  18. Fast computation of quadrupole and hexadecapole approximations in microlensing with a single point-source evaluation

    NASA Astrophysics Data System (ADS)

    Cassan, Arnaud

    2017-07-01

    The exoplanet detection rate from gravitational microlensing has grown significantly in recent years thanks to a great enhancement of resources and improved observational strategy. Current observatories include ground-based wide-field and/or robotic world-wide networks of telescopes, as well as space-based observatories such as the Spitzer and Kepler/K2 satellites. This results in a large quantity of data to be processed and analysed, which is a challenge for modelling codes because of the complexity of the parameter space to be explored and the intensive computations required to evaluate the models. In this work, I present a method to compute the quadrupole and hexadecapole approximations of the finite-source magnification more efficiently than previously available codes, with routines about six times and four times faster, respectively. The quadrupole takes just about twice the time of a point-source evaluation, which argues for generalizing its use to large portions of the light curves. The corresponding routines are available as open-source python codes.
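The point-source magnification these expansions build on is the standard point-lens result A(u) = (u² + 2)/(u√(u² + 4)); in the sketch below, finite-source effects are approximated by brute-force averaging over the source disc, a slow stand-in for the quadrupole/hexadecapole expansions the paper accelerates (not the paper's method):

```python
import math

def mag_point(u):
    """Point-source point-lens magnification A(u) = (u^2+2) / (u*sqrt(u^2+4))."""
    return (u * u + 2.0) / (u * math.sqrt(u * u + 4.0))

def mag_finite_quadrature(u, rho, n_r=8, n_th=16):
    """Finite-source magnification for a uniform source of radius rho,
    by averaging the point-source magnification over the source disc."""
    total, weight = 0.0, 0.0
    for i in range(n_r):
        r = rho * (i + 0.5) / n_r
        for j in range(n_th):
            th = 2.0 * math.pi * j / n_th
            d = math.hypot(u + r * math.cos(th), r * math.sin(th))
            total += mag_point(d) * r   # r weights the annulus area
            weight += r
    return total / weight

a0 = mag_point(0.5)                     # ~ 2.183
afs = mag_finite_quadrature(0.5, 0.05)  # small rho: close to a0
```

For small rho the disc average differs from A(u) only at the quadrupole order in rho², which is why low-order expansions suffice over most of a light curve.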

  19. Strong ground motion simulation of the 2016 Kumamoto earthquake of April 16 using multiple point sources

    NASA Astrophysics Data System (ADS)

    Nagasaka, Yosuke; Nozu, Atsushi

    2017-02-01

    The pseudo point-source model approximates the rupture process on faults with multiple point sources for simulating strong ground motions. A simulation with this point-source model is conducted by combining a simple source spectrum following the omega-square model with a path spectrum, an empirical site amplification factor, and phase characteristics. Realistic waveforms can be synthesized using the empirical site amplification factor and phase models even though the source model is simple. The Kumamoto earthquake occurred on April 16, 2016, with MJMA 7.3. Many strong motions were recorded at stations around the source region, and some records were considered to be affected by the rupture directivity effect. This earthquake was therefore suitable for investigating the applicability of the pseudo point-source model, the current version of which does not consider rupture directivity. Three subevents (point sources) were located on the fault plane, and the parameters of the simulation were determined. The simulated results were compared with the observed records at K-NET and KiK-net stations. It was found that the synthetic Fourier spectra and velocity waveforms generally explained the characteristics of the observed records, except for underestimation in the low frequency range. Troughs in the observed Fourier spectra were also well reproduced by placing multiple subevents near the hypocenter. The underestimation is presumably due to two reasons: first, the pseudo point-source model targets subevents that generate strong ground motions and does not consider the shallow large slip; second, the current version of the model does not consider the rupture directivity effect. Consequently, strong pulses were not sufficiently reproduced at stations northeast of Subevent 3, such as KMM004, where the effect of rupture directivity was significant, while the amplitude was well reproduced at most other stations. This result indicates the need to improve the pseudo point-source model, for example by introducing an azimuth-dependent corner frequency, so that it can incorporate the effect of rupture directivity.
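The omega-square source spectrum referred to above, combined with a Brune corner frequency, can be sketched as follows (an illustrative sketch following the common convention of M0 in dyne-cm, stress drop in bars, and β in km/s; not the authors' code):

```python
import math

def brune_corner_frequency(m0_dyne_cm, stress_drop_bars, beta_km_s=3.5):
    """Brune corner frequency fc = 4.9e6 * beta * (stress_drop / M0)^(1/3), in Hz."""
    return 4.9e6 * beta_km_s * (stress_drop_bars / m0_dyne_cm) ** (1.0 / 3.0)

def omega_square_spectrum(f, m0_dyne_cm, fc):
    """Omega-square source displacement spectrum, M0 / (1 + (f/fc)^2),
    up to the usual constant radiation/geometry factor (omitted here)."""
    return m0_dyne_cm / (1.0 + (f / fc) ** 2)

# Higher stress drop -> higher corner frequency -> more high-frequency energy
m0 = 10 ** 25  # dyne-cm, roughly Mw 6
fc_low = brune_corner_frequency(m0, 50.0)
fc_high = brune_corner_frequency(m0, 200.0)
```

Making fc depend on azimuth, as the abstract suggests, would be one way to mimic directivity within this otherwise isotropic point-source spectrum.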

  20. A guide to differences between stochastic point-source and stochastic finite-fault simulations

    USGS Publications Warehouse

    Atkinson, G.M.; Assatourians, K.; Boore, D.M.; Campbell, K.; Motazedian, D.

    2009-01-01

    Why do stochastic point-source and finite-fault simulation models not agree on the predicted ground motions for moderate earthquakes at large distances? This question was posed by Ken Campbell, who attempted to reproduce the Atkinson and Boore (2006) ground-motion prediction equations for eastern North America using the stochastic point-source program SMSIM (Boore, 2005) in place of the finite-source stochastic program EXSIM (Motazedian and Atkinson, 2005) that was used by Atkinson and Boore (2006) in their model. His comparisons suggested that a higher stress drop is needed in the context of SMSIM to produce an average match, at larger distances, with the model predictions of Atkinson and Boore (2006) based on EXSIM; this is so even for moderate magnitudes, which should be well-represented by a point-source model. Why? The answer to this question is rooted in significant differences between point-source and finite-source stochastic simulation methodologies, specifically as implemented in SMSIM (Boore, 2005) and EXSIM (Motazedian and Atkinson, 2005) to date. Point-source and finite-fault methodologies differ in general in several important ways: (1) the geometry of the source; (2) the definition and application of duration; and (3) the normalization of finite-source subsource summations. Furthermore, the specific implementation of the methods may differ in their details. The purpose of this article is to provide a brief overview of these differences, their origins, and implications. This sets the stage for a more detailed companion article, "Comparing Stochastic Point-Source and Finite-Source Ground-Motion Simulations: SMSIM and EXSIM," in which Boore (2009) provides modifications and improvements in the implementations of both programs that narrow the gap and result in closer agreement. 
These issues are important because both SMSIM and EXSIM have been widely used in the development of ground-motion prediction equations and in modeling the parameters that control observed ground motions.

  1. A Global Catalogue of Large SO2 Sources and Emissions Derived from the Ozone Monitoring Instrument

    NASA Technical Reports Server (NTRS)

    Fioletov, Vitali E.; McLinden, Chris A.; Krotkov, Nickolay; Li, Can; Joiner, Joanna; Theys, Nicolas; Carn, Simon; Moran, Mike D.

    2016-01-01

    Sulfur dioxide (SO2) measurements from the Ozone Monitoring Instrument (OMI) satellite sensor, processed with the new principal component analysis (PCA) algorithm, were used to detect large point emission sources or clusters of sources. A total of 491 continuously emitting point sources, releasing from about 30 kt yr⁻¹ to more than 4000 kt yr⁻¹ of SO2, were identified and grouped by country and by primary source origin: volcanoes (76 sources), power plants (297), smelters (53), and sources related to the oil and gas industry (65). The sources were identified using different methods, including through OMI measurements themselves applied to a new emission detection algorithm, and their evolution during the 2005-2014 period was traced by estimating annual emissions from each source. For volcanic sources, the study focused on continuous degassing, and emissions from explosive eruptions were excluded. Emissions from degassing volcanic sources were measured, many for the first time, and collectively they account for about 30% of total SO2 emissions estimated from OMI measurements; that fraction has increased in recent years because cumulative global emissions from power plants and smelters are declining while emissions from the oil and gas industry have remained nearly constant. Anthropogenic emissions from the USA declined by 80% over the 2005-2014 period, as did emissions from western and central Europe, whereas emissions from India nearly doubled, and emissions from other large SO2-emitting regions (South Africa, Russia, Mexico, and the Middle East) remained fairly constant. In total, OMI-based estimates account for about half of total reported anthropogenic SO2 emissions; the remaining half is likely related to sources emitting less than 30 kt yr⁻¹ that are not detected by OMI.

  2. NONPOINT SOURCE MODEL CALIBRATION IN HONEY CREEK WATERSHED

    EPA Science Inventory

    The U.S. EPA Non-Point Source Model has been applied and calibrated to a fairly large (187 sq. mi.) agricultural watershed in the Lake Erie Drainage basin of north central Ohio. Hydrologic and chemical routing algorithms have been developed. The model is evaluated for suitability...

  3. Denitrification and dilution along fracture flowpaths influence the recovery of a bedrock aquifer from nitrate contamination.

    PubMed

    Kim, Jonathan J; Comstock, Jeff; Ryan, Peter; Heindel, Craig; Koenigsberger, Stephan

    2016-11-01

    In 2000, elevated nitrate concentrations ranging from 12 to 34 mg/L NO3-N were discovered in groundwater from numerous domestic bedrock wells adjacent to a large dairy farm in central Vermont. Long-term plots and contours of nitrate vs. time for bedrock wells showed "little/no", "moderate", and "large" change patterns that were spatially separable. The metasedimentary bedrock aquifer is strongly anisotropic, and groundwater flow is controlled by fractures, bedding/foliation, and basins and ridges in the bedrock surface. Integration of the nitrate concentration vs. time data with the physical and chemical aquifer characterization suggests two nitrate sources: a point source emanating from a waste ravine and a non-point source that encompasses the surrounding fields. Once removed, the point source of NO3 (manure deposited in a ravine) was exhausted and NO3 dropped from 34 mg/L to <10 mg/L after ~10 years; however, persistence of NO3 in the 3 to 8 mg/L range (background) reflects the long-term flux of nitrates from nutrients applied to the farm fields surrounding the ravine over the years predating and including this study. Inferred groundwater flow rates from the waste ravine, to either the moderate change wells in basin 2 or the shallow bedrock zone beneath the large change wells, are 0.05 m/day, well within published bedrock aquifer flow rates. Enrichment of 15N and 18O in nitrate is consistent with lithotrophic denitrification of NO3 in the presence of dissolved Mn and Fe. Once the ravine point source was removed, denitrification and dilution collectively were responsible for the down-gradient decrease of nitrate in this bedrock aquifer. Denitrification was most influential when NO3-N was >10 mg/L. Our multidisciplinary methods of aquifer characterization are applicable to groundwater contamination in any complexly deformed and metamorphosed bedrock aquifer.

  4. A large point-source outbreak of Salmonella Typhimurium phage type 9 linked to a bakery in Sydney, March 2007.

    PubMed

    Mannes, Trish; Gupta, Leena; Craig, Adam; Rosewell, Alexander; McGuinness, Clancy Aimers; Musto, Jennie; Shadbolt, Craig; Biffin, Brian

    2010-03-01

    This report describes the investigation and public health response to a large point-source outbreak of salmonellosis in Sydney, Australia. The case-series investigation involved telephone interviews with 283 cases or their guardians and active surveillance through hospitals, general practitioners, laboratories, and the public health network. In this outbreak, 319 cases of gastroenteritis were identified, of which 221 (69%) presented to a hospital emergency department and 136 (43%) required hospital admission. The outbreak was unique in its scale and severity, and it stretched the surge capacity of hospital emergency departments. It highlights that foodborne illness outbreaks can cause substantial preventable morbidity and a resultant health service burden, requiring close attention to regulatory and non-regulatory interventions.

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Khan, Rubab, E-mail: rubab@uw.edu

    We present Spitzer IRAC 3.6–8 μm and Multiband Imaging Photometer 24 μm point-source catalogs for M31 and 15 other mostly large, star-forming galaxies at distances of ∼3.5–14 Mpc, including M51, M83, M101, and NGC 6946. These catalogs contain ∼1 million sources, including ∼859,000 in M31 and ∼116,000 in the other galaxies. They were created following the procedures described in Khan et al. through a combination of point-spread function (PSF) fitting and aperture photometry. These data products constitute a resource to improve our understanding of the IR-bright (3.6–24 μm) point-source populations in crowded extragalactic stellar fields and to plan observations with the James Webb Space Telescope.

  6. HerMES: point source catalogues from Herschel-SPIRE observations II

    NASA Astrophysics Data System (ADS)

    Wang, L.; Viero, M.; Clarke, C.; Bock, J.; Buat, V.; Conley, A.; Farrah, D.; Guo, K.; Heinis, S.; Magdis, G.; Marchetti, L.; Marsden, G.; Norberg, P.; Oliver, S. J.; Page, M. J.; Roehlly, Y.; Roseboom, I. G.; Schulz, B.; Smith, A. J.; Vaccari, M.; Zemcov, M.

    2014-11-01

    The Herschel Multi-tiered Extragalactic Survey (HerMES) is the largest Guaranteed Time Key Programme on the Herschel Space Observatory. With a wedding-cake survey strategy, it consists of nested fields of varying depth and area totalling ~380 deg². In this paper, we present deep point source catalogues extracted from Herschel-Spectral and Photometric Imaging Receiver (SPIRE) observations of all HerMES fields, except for the later addition of the 270 deg² HerMES Large-Mode Survey (HeLMS) field. These catalogues constitute the second Data Release (DR2), made in 2013 October. A subset of these catalogues, consisting of bright sources extracted from Herschel-SPIRE observations completed by 2010 May 1 (covering ~74 deg²), was released earlier in the first extensive data release in 2012 March. Two different methods are used to generate the point source catalogues: the SUSSEXTRACTOR point source extractor used in two earlier data releases (EDR and EDR2), and a new source detection and photometry method that combines an iterative source detection algorithm, STARFINDER, and a De-blended SPIRE Photometry algorithm. We use end-to-end Herschel-SPIRE simulations with realistic number counts and clustering properties to characterize basic properties of the point source catalogues, such as completeness, reliability, and photometric and positional accuracy. Over 500,000 catalogue entries in HerMES fields (except HeLMS) are released to the public through the HeDAM (Herschel Database in Marseille) website (http://hedam.lam.fr/HerMES).

  7. A deeper look at the X-ray point source population of NGC 4472

    NASA Astrophysics Data System (ADS)

    Joseph, T. D.; Maccarone, T. J.; Kraft, R. P.; Sivakoff, G. R.

    2017-10-01

    In this paper we discuss the X-ray point source population of NGC 4472, an elliptical galaxy in the Virgo cluster. We used recent deep Chandra data combined with archival Chandra data to obtain a 380 ks exposure time. We find 238 X-ray point sources within 3.7 arcmin of the galaxy centre, with a completeness flux of FX, 0.5-2 keV = 6.3 × 10⁻¹⁶ erg s⁻¹ cm⁻². Most of these sources are expected to be low-mass X-ray binaries. We find that, using data from a single galaxy which is both complete and has a large number of objects (~100) below 10³⁸ erg s⁻¹, the X-ray luminosity function is well fitted with a single power-law model. By cross-matching our X-ray data with both space-based and ground-based optical data for NGC 4472, we find that 80 of the 238 sources are in globular clusters. We compare the red and blue globular cluster subpopulations and find that red clusters are nearly six times more likely to host an X-ray source than blue clusters. We show that there is evidence that these two subpopulations have significantly different X-ray luminosity distributions. Source catalogues for all X-ray point sources, as well as any corresponding optical data for globular cluster sources, are also presented here.

  8. Detection of spatial fluctuations of non-point source fecal pollution in coral reef surrounding waters in southwestern Puerto Rico using PCR-based assays.

    PubMed

    Bonkosky, M; Hernández-Delgado, E A; Sandoz, B; Robledo, I E; Norat-Ramírez, J; Mattei, H

    2009-01-01

    Human fecal contamination of coral reefs is a major cause of concern. Conventional methods used to monitor microbial water quality cannot be used to discriminate between different fecal pollution sources. Fecal coliforms, enterococci, and human-specific Bacteroides (HF183, HF134), general Bacteroides-Prevotella (GB32), and Clostridium coccoides group (CP) 16S rDNA PCR assays were used to test for the presence of non-point source fecal contamination across the southwestern Puerto Rico shelf. Inshore waters were highly turbid, consistently receiving fecal pollution from variable sources, and showing the highest frequency of positive molecular marker signals. Signals were also detected in offshore waters in compliance with existing microbiological quality regulations. Phylogenetic analysis showed that most isolates were of human fecal origin. The geographic extent of non-point source fecal pollution was large and impacted extensive coral reef systems. This could have deleterious long-term impacts on public health, local fisheries, and tourism potential if not adequately addressed.

  9. Correcting STIS CCD Point-Source Spectra for CTE Loss

    NASA Technical Reports Server (NTRS)

    Goudfrooij, Paul; Bohlin, Ralph C.; Maiz-Apellaniz, Jesus

    2006-01-01

    We review the on-orbit spectroscopic observations that are being used to characterize the Charge Transfer Efficiency (CTE) of the STIS CCD in spectroscopic mode. We parameterize the CTE-related loss for spectrophotometry of point sources in terms of dependencies on the brightness of the source, the background level, the signal in the PSF outside the standard extraction box, and the time of observation. Primary constraints on our correction algorithm are provided by measurements of the CTE loss rates for simulated spectra (images of a tungsten lamp taken through slits oriented along the dispersion axis) combined with estimates of CTE losses for actual spectra of spectrophotometric standard stars in the first-order CCD modes. For point-source spectra at the standard reference position at the CCD center, CTE losses as large as 30% are corrected to within approximately 1% RMS after application of the algorithm presented here, leaving the Poisson noise associated with the source detection itself as the dominant contributor to the total flux calibration uncertainty.

  10. Open-Source Automated Mapping Four-Point Probe.

    PubMed

    Chandra, Handy; Allen, Spencer W; Oberloier, Shane W; Bihari, Nupur; Gwamuri, Jephias; Pearce, Joshua M

    2017-01-26

    Scientists have begun using self-replicating rapid prototyper (RepRap) 3-D printers to manufacture open source digital designs of scientific equipment. This approach is refined here to develop a novel instrument capable of performing automated large-area four-point probe measurements. The designs for conversion of a RepRap 3-D printer to a 2-D open source four-point probe (OS4PP) measurement device are detailed for the mechanical and electrical systems. Free and open source software and firmware are developed to operate the tool. The OS4PP was validated against a wide range of discrete resistors and indium tin oxide (ITO) samples of different thicknesses both pre- and post-annealing. The OS4PP was then compared to two commercial proprietary systems. Results for resistors from 10 Ω to 1 MΩ show errors of less than 1% for the OS4PP. The 3-D mapping of sheet resistance of ITO samples successfully demonstrated the automated capability to measure non-uniformities in large-area samples. The results indicate that all measured values are within the same order of magnitude when compared to two proprietary measurement systems. In conclusion, the OS4PP system, which costs less than 70% of manual proprietary systems, is comparable electrically while offering automated 100 micron positional accuracy for measuring sheet resistance over larger areas.
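
    As context for the measurement the OS4PP automates, the standard thin-film conversion from a forced current and measured voltage to sheet resistance can be sketched as follows. The function names and example numbers are illustrative, not taken from the paper; the geometric factor π/ln 2 applies to a collinear, equally spaced probe on a film much larger than the probe spacing and much thinner than it.

```python
import math

def sheet_resistance(voltage_v: float, current_a: float) -> float:
    """Sheet resistance (ohms per square) for a collinear four-point probe
    on a thin, laterally large sample: Rs = (pi / ln 2) * V / I."""
    return (math.pi / math.log(2)) * voltage_v / current_a

def bulk_resistivity(voltage_v: float, current_a: float, thickness_m: float) -> float:
    """Resistivity (ohm*m) when the film thickness is known and much
    smaller than the probe spacing."""
    return sheet_resistance(voltage_v, current_a) * thickness_m

# Illustrative example: 1 mA forced, 10 mV measured on a 100 nm film
rs = sheet_resistance(0.010, 0.001)          # ~45.3 ohms/square
rho = bulk_resistivity(0.010, 0.001, 100e-9)
```

    An automated mapper like the one described would evaluate this conversion at each probe position to build the sheet-resistance map.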

  11. Modeling of Pixelated Detector in SPECT Pinhole Reconstruction.

    PubMed

    Feng, Bing; Zeng, Gengsheng L

    2014-04-10

    A challenge for the pixelated detector is that the detector response of a gamma-ray photon varies with the incident angle and the incident location within a crystal. The normalization map obtained by measuring the flood of a point-source at a large distance can lead to artifacts in reconstructed images. In this work, we investigated a method of generating normalization maps by ray-tracing through the pixelated detector based on the imaging geometry and the photo-peak energy for the specific isotope. The normalization is defined for each pinhole as the normalized detector response for a point-source placed at the focal point of the pinhole. Ray-tracing is used to generate the ideal flood image for a point-source. Each crystal pitch area on the back of the detector is divided into 60 × 60 sub-pixels. Lines are obtained by connecting a point-source to the centers of sub-pixels inside each crystal pitch area. For each line, ray-tracing starts from the entrance point at the detector face and ends at the center of a sub-pixel on the back of the detector. Only the attenuation by NaI(Tl) crystals along each ray is assumed to contribute directly to the flood image. The attenuation by the silica (SiO2) reflector is also included in the ray-tracing. To calculate the normalization for a pinhole, we need to calculate the ideal flood for a point-source at 360 mm distance (where the point-source was placed for the regular flood measurement) and the ideal flood image for the point-source at the pinhole focal point, together with the flood measurement at 360 mm distance. The normalizations are incorporated in the iterative OSEM reconstruction as a component of the projection matrix. Applications to single-pinhole and multi-pinhole imaging showed that this method greatly reduced reconstruction artifacts.
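
    The normalization recipe in the last sentences can be sketched as rescaling the measured 360 mm flood by the ratio of two ray-traced ideal floods. The attenuation coefficients, array shapes, and function names below are illustrative assumptions, not values from the paper.

```python
import numpy as np

def ideal_flood(path_in_crystal_mm, path_in_reflector_mm,
                mu_nai=0.3, mu_sio2=0.05):
    """Ray-traced ideal flood: fraction of photons that survive the SiO2
    reflector and are absorbed in the NaI(Tl) crystal along each ray.
    Path lengths are per-pixel arrays; mu values (1/mm) are illustrative."""
    return np.exp(-mu_sio2 * path_in_reflector_mm) * \
           (1.0 - np.exp(-mu_nai * path_in_crystal_mm))

def pinhole_normalization(measured_flood_360, ideal_flood_360, ideal_flood_focal):
    """Normalization map for one pinhole: rescale the flood measured with a
    point source at 360 mm to what a source at the pinhole focal point
    would produce, then normalize to unit sum."""
    norm = measured_flood_360 * ideal_flood_focal / ideal_flood_360
    return norm / norm.sum()
```

    In an OSEM reconstruction, a map like this would multiply the geometric projector for its pinhole, so that the system matrix reflects the angle- and position-dependent crystal response.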

  12. Real-time Estimation of Fault Rupture Extent for Recent Large Earthquakes

    NASA Astrophysics Data System (ADS)

    Yamada, M.; Mori, J. J.

    2009-12-01

    Current earthquake early warning systems assume point source models for the rupture. However, for large earthquakes the fault rupture length can be of the order of tens to hundreds of kilometers, and predicting ground motion at a site requires approximate knowledge of the rupture geometry. Early warning information based on a point source model may underestimate the ground motion at a site if a station is close to the fault but distant from the epicenter. We developed an empirical function to classify seismic records into near-source (NS) or far-source (FS) records based on past strong-motion records (Yamada et al., 2007). Here, we defined the near-source region as the area within a fault rupture distance of 10 km. Given ground motion records at a station, the probability that the station is located in the near-source region is

    P = 1/(1 + exp(-f)),  f = 6.046 log10(Za) + 7.885 log10(Hv) - 27.091,

    where Za and Hv denote the peak values of the vertical acceleration and horizontal velocity, respectively. Each observation provides the probability that the station is located in the near-source region, so the resolution of the proposed method depends on the station density. The fault rupture location inferred this way is a set of points at the station locations, but for practical purposes a 2-dimensional configuration of the fault is required to compute the ground motion at a site. In this study, we extend the NS/FS classification methodology to characterize 2-dimensional fault geometries and apply it to strong motion data observed in recent large earthquakes. We apply a cosine-shaped smoothing function to the probability distribution of near-source stations, and convert the point fault locations to 2-dimensional fault information. The estimated rupture geometry for the 2007 Niigata-ken Chuetsu-oki earthquake 10 seconds after the origin time is shown in Figure 1.
Furthermore, we illustrate our method with strong motion data of the 2007 Noto-hanto earthquake, 2008 Iwate-Miyagi earthquake, and 2008 Wenchuan earthquake. The on-going rupture extent can be estimated for all datasets as the rupture propagates. For earthquakes with magnitude about 7.0, the determination of the fault parameters converges to the final geometry within 10 seconds.
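
    The empirical NS/FS classifier quoted in this abstract is explicit enough to code directly. A minimal sketch follows; the units of Za and Hv are assumed to match those used in the original regression, which the abstract does not state.

```python
import math

def near_source_probability(za: float, hv: float) -> float:
    """Probability that a station lies in the near-source region
    (fault rupture distance < 10 km), from the empirical function of
    Yamada et al. (2007). za: peak vertical acceleration; hv: peak
    horizontal velocity (units as in the original regression)."""
    f = 6.046 * math.log10(za) + 7.885 * math.log10(hv) - 27.091
    return 1.0 / (1.0 + math.exp(-f))
```

    As the abstract notes, each station contributes only its own probability, so mapping this function over a network yields the point cloud that the cosine-shaped smoothing then converts into a 2-D rupture estimate.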

  13. Effects of Grid Resolution on Modeled Air Pollutant Concentrations Due to Emissions from Large Point Sources: Case Study during KORUS-AQ 2016 Campaign

    NASA Astrophysics Data System (ADS)

    Ju, H.; Bae, C.; Kim, B. U.; Kim, H. C.; Kim, S.

    2017-12-01

    Large point sources in the Chungnam area have received nation-wide attention in South Korea because the area is located southwest of the Seoul Metropolitan Area, whose population is over 22 million, and the summertime prevailing winds in the area are northeastward. Therefore, emissions from the large point sources in the Chungnam area were one of the major observation targets during the KORUS-AQ 2016 campaign, including aircraft measurements. In general, the horizontal grid resolution of Eulerian photochemical models has profound effects on estimated air pollutant concentrations. This is due to the formulation of grid models: emissions in a grid cell are assumed to be well mixed below the planetary boundary layer regardless of grid cell size. In this study, we performed a series of simulations with the Comprehensive Air Quality Model with eXtensions (CAMx). For the 9-km and 3-km simulations, we used meteorological fields obtained from the Weather Research and Forecasting model, while utilizing the "Flexi-nesting" option in CAMx for the 1-km simulation. In "Flexi-nesting" mode, CAMx interpolates or assigns model inputs from the immediate parent grid. We compared modeled concentrations with ground observation data as well as aircraft measurements to quantify variations of model bias and error depending on horizontal grid resolution.

  14. Giant Steps in Cefalù

    NASA Astrophysics Data System (ADS)

    Jeffery, David J.; Mazzali, Paolo A.

    2007-08-01

    Giant steps is a technique to accelerate Monte Carlo radiative transfer in optically-thick cells (which are isotropic and homogeneous in matter properties and into which astrophysical atmospheres are divided) by greatly reducing the number of Monte Carlo steps needed to propagate photon packets through such cells. In an optically-thick cell, packets starting from any point (which can be regarded as a point source) well away from the cell wall act essentially as packets diffusing from the point source in an infinite, isotropic, homogeneous atmosphere. One can replace the many ordinary Monte Carlo steps that a packet diffusing from the point source takes with a single randomly directed giant step whose length is slightly less than the distance from the point source to the nearest cell wall point. The giant step is assigned a time duration equal to the time for the RMS radius of a burst of packets diffusing from the point source to have reached the giant step length. We call assigning giant-step time durations this way RMS-radius (RMSR) synchronization. Propagating packets by a series of giant steps in giant-steps random walks in the interiors of optically-thick cells constitutes the technique of giant steps. Giant steps effectively replaces the exact diffusion treatment of ordinary Monte Carlo radiative transfer in optically-thick cells with an approximate diffusion treatment. In this paper, we describe the basic idea of giant steps and report demonstration giant-steps flux calculations for the grey atmosphere. Speed-up factors of order 100 are obtained relative to ordinary Monte Carlo radiative transfer. In practical applications, speed-up factors of order ten and perhaps more are possible. The speed-up factor is likely to be significantly application-dependent and there is a trade-off between speed-up and accuracy.
This paper and past work suggest that giant-steps error can probably be kept to a few percent by using sufficiently large boundary-layer optical depths while still maintaining large speed-up factors. Thus, giant steps can be characterized as a moderate accuracy radiative transfer technique. For many applications, the loss of some accuracy may be a tolerable price to pay for the speed-ups gained by using giant steps.
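
    A single giant step with RMS-radius synchronization can be sketched as below, assuming 3-D diffusion with coefficient D = c·l/3 for mean free path l, so that the RMS radius satisfies ⟨r²⟩ = 6Dt. The boundary-layer width and the fallback threshold are illustrative choices, not the paper's values.

```python
import math
import random

def giant_step(position, distance_to_wall, mean_free_path,
               c=1.0, boundary_layer=2.0):
    """One giant step: jump the packet in a random direction by slightly
    less than the distance to the nearest cell-wall point, and assign the
    time a diffusing burst needs for its RMS radius to reach that length
    (RMS-radius synchronization). Assumes D = c*l/3 in 3-D, so
    <r^2> = 6 D t  =>  t = step^2 / (2 c l).
    boundary_layer: optical depths kept between the step end and the wall."""
    step = distance_to_wall - boundary_layer * mean_free_path
    if step <= mean_free_path:
        # Too close to the wall: fall back to ordinary Monte Carlo stepping.
        return position, 0.0
    # Isotropic random direction on the unit sphere.
    mu = random.uniform(-1.0, 1.0)
    phi = random.uniform(0.0, 2.0 * math.pi)
    s = math.sqrt(1.0 - mu * mu)
    direction = (s * math.cos(phi), s * math.sin(phi), mu)
    new_pos = tuple(p + step * d for p, d in zip(position, direction))
    t = step * step / (2.0 * c * mean_free_path)
    return new_pos, t
```

    Iterating this move until the packet enters the boundary layer, then switching to ordinary steps, is the giant-steps random walk described above; the boundary-layer depth is the knob that trades speed-up against accuracy.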

  15. Method of Making Large Area Nanostructures

    NASA Technical Reports Server (NTRS)

    Marks, Alvin M.

    1995-01-01

    A method that enables the high-speed formation of nanostructures on large-area surfaces is described. The method uses a super sub-micron beam writer (Supersebter). The Supersebter uses a large-area multi-electrode (Spindt-type emitter) source to produce multiple electron beams, simultaneously scanned to form a pattern on a surface in an electron beam writer. A 100,000 × 100,000 array of electron point sources, demagnified in a long electron beam writer to simultaneously produce 10 billion nano-patterns on a 1 meter squared surface by multi-electron beam impact on a 1 cm squared surface of an insulating material, is proposed.

  16. Strategy for Texture Management in Metals Additive Manufacturing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kirka, Michael M.; Lee, Yousub; Greeley, Duncan A.

    Additive manufacturing (AM) technologies have long been recognized for their ability to fabricate complex geometric components directly from models conceptualized through computers, allowing complicated designs and assemblies to be fabricated at lower cost, with shorter time to market and improved function. Lagging behind the design-complexity aspect is the ability to fully exploit AM processes for control over texture within AM components. Currently, standard heat-fill strategies utilized in AM processes result in largely columnar grain structures. Here, we propose a point heat source fill for the electron beam melting (EBM) process through which the texture in AM materials can be controlled. Using this point heat source strategy, the ability to form either columnar or equiaxed grain structures upon solidification through changes in the process parameters associated with the point heat source fill is demonstrated for the nickel-base superalloy Inconel 718. Mechanically, the material is demonstrated to exhibit either anisotropic properties for the columnar-grained material fabricated using the standard raster scan of the EBM process or isotropic properties for the equiaxed material fabricated using the point heat source fill.

  17. Strategy for Texture Management in Metals Additive Manufacturing

    DOE PAGES

    Kirka, Michael M.; Lee, Yousub; Greeley, Duncan A.; ...

    2017-01-31

    Additive manufacturing (AM) technologies have long been recognized for their ability to fabricate complex geometric components directly from models conceptualized through computers, allowing complicated designs and assemblies to be fabricated at lower cost, with shorter time to market and improved function. Lagging behind the design-complexity aspect is the ability to fully exploit AM processes for control over texture within AM components. Currently, standard heat-fill strategies utilized in AM processes result in largely columnar grain structures. Here, we propose a point heat source fill for the electron beam melting (EBM) process through which the texture in AM materials can be controlled. Using this point heat source strategy, the ability to form either columnar or equiaxed grain structures upon solidification through changes in the process parameters associated with the point heat source fill is demonstrated for the nickel-base superalloy Inconel 718. Mechanically, the material is demonstrated to exhibit either anisotropic properties for the columnar-grained material fabricated using the standard raster scan of the EBM process or isotropic properties for the equiaxed material fabricated using the point heat source fill.

  18. Dissolved organic matter fluorescence at wavelength 275/342 nm as a key indicator for detection of point-source contamination in a large Chinese drinking water lake.

    PubMed

    Zhou, Yongqiang; Jeppesen, Erik; Zhang, Yunlin; Shi, Kun; Liu, Xiaohan; Zhu, Guangwei

    2016-02-01

    Surface drinking water sources have been threatened globally and there have been few attempts to detect point-source contamination in these waters using chromophoric dissolved organic matter (CDOM) fluorescence. To determine the optimal wavelength derived from CDOM fluorescence as an indicator of point-source contamination in drinking waters, a combination of field campaigns in Lake Qiandao and a laboratory wastewater addition experiment was used. Parallel factor (PARAFAC) analysis identified six components, including three humic-like, two tryptophan-like, and one tyrosine-like component. All metrics showed strong correlation with wastewater addition (r² > 0.90, p < 0.0001). Both the field campaigns and the laboratory contamination experiment revealed that CDOM fluorescence at 275/342 nm was the most responsive wavelength to the point-source contamination in the lake. Our results suggest that pollutants in Lake Qiandao had the highest concentrations in the river mouths of upstream inflow tributaries and the single wavelength at 275/342 nm may be adapted for online or in situ fluorescence measurements as an early warning of contamination events. This study demonstrates the potential utility of CDOM fluorescence to monitor water quality in surface drinking water sources. Copyright © 2015 Elsevier Ltd. All rights reserved.

  19. Search for Gamma-Ray Emission from Local Primordial Black Holes with the Fermi Large Area Telescope

    NASA Astrophysics Data System (ADS)

    Ackermann, M.; Atwood, W. B.; Baldini, L.; Ballet, J.; Barbiellini, G.; Bastieri, D.; Bellazzini, R.; Berenji, B.; Bissaldi, E.; Blandford, R. D.; Bloom, E. D.; Bonino, R.; Bottacini, E.; Bregeon, J.; Bruel, P.; Buehler, R.; Cameron, R. A.; Caputo, R.; Caraveo, P. A.; Cavazzuti, E.; Charles, E.; Chekhtman, A.; Cheung, C. C.; Chiaro, G.; Ciprini, S.; Cohen-Tanugi, J.; Conrad, J.; Costantin, D.; D’Ammando, F.; de Palma, F.; Digel, S. W.; Di Lalla, N.; Di Mauro, M.; Di Venere, L.; Favuzzi, C.; Fegan, S. J.; Focke, W. B.; Franckowiak, A.; Fukazawa, Y.; Funk, S.; Fusco, P.; Gargano, F.; Gasparrini, D.; Giglietto, N.; Giordano, F.; Giroletti, M.; Green, D.; Grenier, I. A.; Guillemot, L.; Guiriec, S.; Horan, D.; Jóhannesson, G.; Johnson, C.; Kensei, S.; Kocevski, D.; Kuss, M.; Larsson, S.; Latronico, L.; Li, J.; Longo, F.; Loparco, F.; Lovellette, M. N.; Lubrano, P.; Magill, J. D.; Maldera, S.; Malyshev, D.; Manfreda, A.; Mazziotta, M. N.; McEnery, J. E.; Meyer, M.; Michelson, P. F.; Mitthumsiri, W.; Mizuno, T.; Monzani, M. E.; Moretti, E.; Morselli, A.; Moskalenko, I. V.; Negro, M.; Nuss, E.; Ojha, R.; Omodei, N.; Orienti, M.; Orlando, E.; Ormes, J. F.; Palatiello, M.; Paliya, V. S.; Paneque, D.; Persic, M.; Pesce-Rollins, M.; Piron, F.; Principe, G.; Rainò, S.; Rando, R.; Razzano, M.; Razzaque, S.; Reimer, A.; Reimer, O.; Ritz, S.; Sánchez-Conde, M.; Sgrò, C.; Siskind, E. J.; Spada, F.; Spandre, G.; Spinelli, P.; Suson, D. J.; Tajima, H.; Thayer, J. G.; Thayer, J. B.; Torres, D. F.; Tosti, G.; Troja, E.; Valverde, J.; Vianello, G.; Wood, K.; Wood, M.; Zaharijas, G.

    2018-04-01

    Black holes with masses below approximately 10¹⁵ g are expected to emit gamma-rays with energies above a few tens of MeV, which can be detected by the Fermi Large Area Telescope (LAT). Although black holes with these masses cannot be formed as a result of stellar evolution, they may have formed in the early universe and are therefore called primordial black holes (PBHs). Previous searches for PBHs have focused on either short-timescale bursts or the contribution of PBHs to the isotropic gamma-ray emission. We show that, in cases of individual PBHs, the Fermi-LAT is most sensitive to PBHs with temperatures above approximately 16 GeV and masses of 6 × 10¹¹ g, which it can detect out to a distance of about 0.03 pc. These PBHs have a remaining lifetime of months to years at the start of the Fermi mission. They would appear as potentially moving point sources with gamma-ray emission that becomes spectrally harder and brighter with time until the PBH completely evaporates. In this paper, we develop a new algorithm to detect the proper motion of gamma-ray point sources, and apply it to 318 unassociated point sources at high galactic latitude in the third Fermi-LAT source catalog. None of the unassociated point sources with spectra consistent with PBH evaporation show significant proper motion. Using the nondetection of PBH candidates, we derive a 99% confidence limit on the PBH evaporation rate in the vicinity of Earth, ρ̇_PBH < 7.2 × 10³ pc⁻³ yr⁻¹. This limit is similar to the limits obtained with ground-based gamma-ray observatories.
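
    The quoted mass-temperature pairing can be sanity-checked against the standard Hawking temperature scaling, T ≈ 1.06 GeV × (10¹³ g / M). This is a one-line sketch; the normalization constant is the commonly used value, not a number taken from this abstract.

```python
def hawking_temperature_gev(mass_g: float) -> float:
    """Hawking temperature in GeV for a black hole of the given mass in
    grams, using the standard scaling T ~ 1.06 GeV * (1e13 g / M)."""
    return 1.06 * (1.0e13 / mass_g)

# A 6e11 g PBH comes out near 17.7 GeV, consistent with the
# "approximately 16 GeV" sensitivity threshold quoted above.
t_pbh = hawking_temperature_gev(6.0e11)
```
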

  20. Search for Gamma-Ray Emission from Local Primordial Black Holes with the Fermi Large Area Telescope

    DOE PAGES

    Ackermann, M.; Atwood, W. B.; Baldini, L.; ...

    2018-04-10

    Black holes with masses below approximately 10¹⁵ g are expected to emit gamma-rays with energies above a few tens of MeV, which can be detected by the Fermi Large Area Telescope (LAT). Although black holes with these masses cannot be formed as a result of stellar evolution, they may have formed in the early universe and are therefore called primordial black holes (PBHs). Previous searches for PBHs have focused on either short-timescale bursts or the contribution of PBHs to the isotropic gamma-ray emission. We show that, in cases of individual PBHs, the Fermi-LAT is most sensitive to PBHs with temperatures above approximately 16 GeV and masses of 6 × 10¹¹ g, which it can detect out to a distance of about 0.03 pc. These PBHs have a remaining lifetime of months to years at the start of the Fermi mission. They would appear as potentially moving point sources with gamma-ray emission that becomes spectrally harder and brighter with time until the PBH completely evaporates. In this paper, we develop a new algorithm to detect the proper motion of gamma-ray point sources, and apply it to 318 unassociated point sources at high galactic latitude in the third Fermi-LAT source catalog. None of the unassociated point sources with spectra consistent with PBH evaporation show significant proper motion. Finally, using the nondetection of PBH candidates, we derive a 99% confidence limit on the PBH evaporation rate in the vicinity of Earth, $${\dot{\rho }}_{\mathrm{PBH}}\lt 7.2\times {10}^{3}\ {\mathrm{pc}}^{-3}\,{\mathrm{yr}}^{-1}$$. This limit is similar to the limits obtained with ground-based gamma-ray observatories.

  1. Search for Gamma-Ray Emission from Local Primordial Black Holes with the Fermi Large Area Telescope

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ackermann, M.; Atwood, W. B.; Baldini, L.

    Black holes with masses below approximately 10¹⁵ g are expected to emit gamma-rays with energies above a few tens of MeV, which can be detected by the Fermi Large Area Telescope (LAT). Although black holes with these masses cannot be formed as a result of stellar evolution, they may have formed in the early universe and are therefore called primordial black holes (PBHs). Previous searches for PBHs have focused on either short-timescale bursts or the contribution of PBHs to the isotropic gamma-ray emission. We show that, in cases of individual PBHs, the Fermi-LAT is most sensitive to PBHs with temperatures above approximately 16 GeV and masses of 6 × 10¹¹ g, which it can detect out to a distance of about 0.03 pc. These PBHs have a remaining lifetime of months to years at the start of the Fermi mission. They would appear as potentially moving point sources with gamma-ray emission that becomes spectrally harder and brighter with time until the PBH completely evaporates. In this paper, we develop a new algorithm to detect the proper motion of gamma-ray point sources, and apply it to 318 unassociated point sources at high galactic latitude in the third Fermi-LAT source catalog. None of the unassociated point sources with spectra consistent with PBH evaporation show significant proper motion. Finally, using the nondetection of PBH candidates, we derive a 99% confidence limit on the PBH evaporation rate in the vicinity of Earth, $${\dot{\rho }}_{\mathrm{PBH}}\lt 7.2\times {10}^{3}\ {\mathrm{pc}}^{-3}\,{\mathrm{yr}}^{-1}$$. This limit is similar to the limits obtained with ground-based gamma-ray observatories.

  2. Speciated atmospheric mercury and its potential source in Guiyang, China

    NASA Astrophysics Data System (ADS)

    Fu, Xuewu; Feng, Xinbin; Qiu, Guangle; Shang, Lihai; Zhang, Hui

    2011-08-01

    Speciated atmospheric mercury (Hg), including gaseous elemental mercury (GEM), particulate Hg (PHg), and reactive gaseous Hg (RGM), was continuously measured at an urban site in Guiyang city, southwest China, from August to December 2009. The average concentrations of GEM, PHg, and RGM were 9.72 ± 10.2 ng m⁻³, 368 ± 676 pg m⁻³, and 35.7 ± 43.9 pg m⁻³, respectively, all highly elevated compared to observations at urban sites in Europe and North America. GEM and PHg were characterized by similar monthly and diurnal patterns, with elevated levels in cold months and at nighttime, respectively. In contrast, RGM did not exhibit clear monthly or diurnal variations. The variations of GEM, PHg, and RGM indicate that the sampling site was significantly impacted by sources in the city municipal area. Source identification implied that both residential coal burning and large point sources were responsible for the elevated GEM and PHg concentrations, whereas point sources were the major contributors to elevated RGM concentrations. Point sources played different roles in regulating GEM, PHg, and RGM concentrations. Aside from residential emissions, PHg levels were mostly affected by small-scale coal combustion boilers situated to the east of the sampling site, which were poorly equipped with or lacked particulate control devices; point sources situated to the east, southeast, and southwest of the sampling site played an important role in the distribution of atmospheric GEM and RGM.

  3. Correlation between Grade Point Averages and Student Evaluation of Teaching Scores: Taking a Closer Look

    ERIC Educational Resources Information Center

    Griffin, Tyler J.; Hilton, John, III.; Plummer, Kenneth; Barret, Devynne

    2014-01-01

    One of the most contentious potential sources of bias is whether instructors who give higher grades receive higher ratings from students. We examined the grade point averages (GPAs) and student ratings across 2073 general education religion courses at a large private university. A moderate correlation was found between GPAs and student evaluations…

  4. Open-Source Automated Mapping Four-Point Probe

    PubMed Central

    Chandra, Handy; Allen, Spencer W.; Oberloier, Shane W.; Bihari, Nupur; Gwamuri, Jephias; Pearce, Joshua M.

    2017-01-01

    Scientists have begun using self-replicating rapid prototyper (RepRap) 3-D printers to manufacture open source digital designs of scientific equipment. This approach is refined here to develop a novel instrument capable of performing automated large-area four-point probe measurements. The designs for conversion of a RepRap 3-D printer to a 2-D open source four-point probe (OS4PP) measurement device are detailed for the mechanical and electrical systems. Free and open source software and firmware are developed to operate the tool. The OS4PP was validated against a wide range of discrete resistors and indium tin oxide (ITO) samples of different thicknesses both pre- and post-annealing. The OS4PP was then compared to two commercial proprietary systems. Results for resistors from 10 Ω to 1 MΩ show errors of less than 1% for the OS4PP. The 3-D mapping of sheet resistance of ITO samples successfully demonstrated the automated capability to measure non-uniformities in large-area samples. The results indicate that all measured values are within the same order of magnitude when compared to two proprietary measurement systems. In conclusion, the OS4PP system, which costs less than 70% of manual proprietary systems, is comparable electrically while offering automated 100 micron positional accuracy for measuring sheet resistance over larger areas. PMID:28772471

  5. VizieR Online Data Catalog: First Fermi-LAT Inner Galaxy point source catalog (Ajello+, 2016)

    NASA Astrophysics Data System (ADS)

    Ajello, M.; Albert, A.; Atwood, W. B.; Barbiellini, G.; Bastieri, D.; Bechtol, K.; Bellazzini, R.; Bissaldi, E.; Blandford, R. D.; Bloom, E. D.; Bonino, R.; Bottacini, E.; Brandt, T. J.; Bregeon, J.; Bruel, P.; Buehler, R.; Buson, S.; Caliandro, G. A.; Cameron, R. A.; Caputo, R.; Caragiulo, M.; Caraveo, P. A.; Cecchi, C.; Chekhtman, A.; Chiang, J.; Chiaro, G.; Ciprini, S.; Cohen-Tanugi, J.; Cominsky, L. R.; Conrad, J.; Cutini, S.; D'Ammando, F.; de Angelis, A.; de Palma, F.; Desiante, R.; di Venere, L.; Drell, P. S.; Favuzzi, C.; Ferrara, E. C.; Fusco, P.; Gargano, F.; Gasparrini, D.; Giglietto, N.; Giommi, P.; Giordano, F.; Giroletti, M.; Glanzman, T.; Godfrey, G.; Gomez-Vargas, G. A.; Grenier, I. A.; Guiriec, S.; Gustafsson, M.; Harding, A. K.; Hewitt, J. W.; Hill, A. B.; Horan, D.; Jogler, T.; Johannesson, G.; Johnson, A. S.; Kamae, T.; Karwin, C.; Knodlseder, J.; Kuss, M.; Larsson, S.; Latronico, L.; Li, J.; Li, L.; Longo, F.; Loparco, F.; Lovellette, M. N.; Lubrano, P.; Magill, J.; Maldera, S.; Malyshev, D.; Manfreda, A.; Mayer, M.; Mazziotta, M. N.; Michelson, P. F.; Mitthumsiri, W.; Mizuno, T.; Moiseev, A. A.; Monzani, M. E.; Morselli, A.; Moskalenko, I. V.; Murgia, S.; Nuss, E.; Ohno, M.; Ohsugi, T.; Omodei, N.; Orlando, E.; Ormes, J. F.; Paneque, D.; Pesce-Rollins, M.; Piron, F.; Pivato, G.; Porter, T. A.; Raino, S.; Rando, R.; Razzano, M.; Reimer, A.; Reimer, O.; Ritz, S.; Sanchez-Conde, M.; Parkinson, P. M. S.; Sgro, C.; Siskind, E. J.; Smith, D. A.; Spada, F.; Spandre, G.; Spinelli, P.; Suson, D. J.; Tajima, H.; Takahashi, H.; Thayer, J. B.; Torres, D. F.; Tosti, G.; Troja, E.; Uchiyama, Y.; Vianello, G.; Winer, B. L.; Wood, K. S.; Zaharijas, G.; Zimmer, S.

    2018-01-01

    The Fermi Large Area Telescope (LAT) has provided the most detailed view to date of the emission toward the Galactic center (GC) in high-energy γ-rays. This paper describes the analysis of data taken during the first 62 months of the mission in the energy range 1-100GeV from a 15°x15° region about the direction of the GC. Specialized interstellar emission models (IEMs) are constructed to enable the separation of the γ-ray emissions produced by cosmic ray particles interacting with the interstellar gas and radiation fields in the Milky Way into that from the inner ~1kpc surrounding the GC, and that from the rest of the Galaxy. A catalog of point sources for the 15°x15° region is self-consistently constructed using these IEMs: the First Fermi-LAT Inner Galaxy Point Source Catalog (1FIG). The spatial locations, fluxes, and spectral properties of the 1FIG sources are presented, and compared with γ-ray point sources over the same region taken from existing catalogs. After subtracting the interstellar emission and point-source contributions a residual is found. If templates that peak toward the GC are used to model the positive residual the agreement with the data improves, but none of the additional templates tried account for all of its spatial structure. The spectrum of the positive residual modeled with these templates has a strong dependence on the choice of IEM. (2 data files).

  6. Mapping algorithm for freeform construction using non-ideal light sources

    NASA Astrophysics Data System (ADS)

    Li, Chen; Michaelis, D.; Schreiber, P.; Dick, L.; Bräuer, A.

    2015-09-01

    Using conventional mapping algorithms for the construction of illumination freeform optics, arbitrary target patterns can be obtained for idealized sources, e.g., collimated light or point sources. Each freeform surface element generates an image point at the target, and the light intensity of an image point corresponds to the area of the freeform surface element that generates it. For sources with a pronounced extension and ray divergence, e.g., an LED at a small source-to-freeform distance, the image points are blurred, and the blur patterns may differ from point to point. Moreover, owing to Fresnel losses and vignetting, the relationship between the light intensity of the image points and the area of the freeform surface elements becomes complicated. These individual light distributions of each freeform element are taken into account in the proposed mapping algorithm. To this end, a steepest-descent procedure is used to adapt the mapping goal: a structured target pattern for the optical system with an ideal source is computed by applying the corresponding linear optimization matrices. Weighting and smoothing factors are included in the procedure to enforce certain edge conditions and to ensure the manufacturability of the freeform surface. The linear optimization matrices, which are the light distribution patterns of the individual freeform surface elements, are obtained by conventional raytracing with a realistic source. Nontrivial source geometries, such as LED irregularities due to bonding or source fine structures, and complex ray-divergence behavior can easily be considered; Fresnel losses, vignetting, and even stray light are also taken into account. After several optimization iterations with the realistic source, the initial mapping goal is achieved by the optical system that provides the structured target pattern with an ideal source. The algorithm is applied to several design examples. A few simple tasks are presented to discuss the capabilities and limitations of this method. A homogeneous LED-illumination system design is also presented in which, despite a strongly tilted incident direction, a homogeneous distribution is achieved with a rather compact optical system and a short working distance using a relatively large LED source. It is shown that the light distribution patterns of the freeform surface elements can differ significantly from one another. The generation of a structured target pattern using the weighting and smoothing factors is discussed. Finally, freeform designs for much more complex sources, such as clusters of LEDs, are presented.
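
    The optimization step described above, adapting the mapping goal from the per-element light distributions obtained by raytracing, can be illustrated with a toy steepest-descent fit (a sketch only, not the authors' implementation; the matrix A of per-element target-plane patterns is assumed given):

```python
import numpy as np

def adapt_weights(A, target, steps=2000, lr=1e-2):
    """Projected steepest descent on ||A w - target||^2 with w >= 0.
    A[:, j] is the raytraced target-plane pattern of freeform element j
    (the 'linear optimization matrix' of the abstract); w[j] scales the
    contribution of element j to the target pattern."""
    w = np.full(A.shape[1], target.sum() / max(A.sum(), 1e-12))
    for _ in range(steps):
        grad = A.T @ (A @ w - target)      # gradient of the squared error
        w = np.clip(w - lr * grad, 0.0, None)  # keep fluxes non-negative
    return w

# Toy example: two blurred element patterns; the target mixes them 2:1
A = np.array([[0.8, 0.1],
              [0.2, 0.9]])
target = 2.0 * A[:, 0] + 1.0 * A[:, 1]
w = adapt_weights(A, target)
```

    In the real problem, A comes from raytracing each freeform element with the realistic source, and additional weighting and smoothing terms constrain the surface; plain non-negative least squares suffices here to show the idea.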

  7. Evaluating Air-Quality Models: Review and Outlook.

    NASA Astrophysics Data System (ADS)

    Weil, J. C.; Sykes, R. I.; Venkatram, A.

    1992-10-01

    Over the past decade, much attention has been devoted to the evaluation of air-quality models, with emphasis on model performance in predicting the high concentrations that are important in air-quality regulations. This paper stems from our belief that this practice needs to be expanded to 1) evaluate model physics and 2) deal with the large natural or stochastic variability in concentration. The variability is represented by the root-mean-square fluctuating concentration (σc) about the mean concentration (C) over an ensemble, i.e., a given set of meteorological, source, and other conditions. Most air-quality models used in applications predict C, whereas observations are individual realizations drawn from an ensemble. For σc ≳ C, large residuals exist between predicted and observed concentrations, which confuse model evaluations. This paper addresses ways of evaluating model physics in light of the large σc; the focus is on elevated point-source models. Evaluation of model physics requires the separation of the mean model error, the difference between the predicted and observed C, from the natural variability. A residual analysis is shown to be an effective way of doing this. Several examples demonstrate the usefulness of residuals, as well as correlation analyses and laboratory data, in judging model physics. In general, σc models and predictions of the probability distribution of the fluctuating concentration (c), p(c), are in the developmental stage, with laboratory data playing an important role. Laboratory data from point-source plumes in a convection tank show that p(c) approximates a self-similar distribution along the plume center plane, a useful result in a residual analysis. At present, there is one model, ARAP, that predicts C, σc, and p(c) for point-source plumes. This model is more computationally demanding than other dispersion models (for C only) and must be demonstrated as a practical tool. However, it predicts an important quantity for applications: the uncertainty in the very high and infrequent concentrations. The uncertainty is large and is needed in evaluating operational performance and in predicting the attainment of air-quality standards.
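
    The residual analysis advocated above, separating mean model error from stochastic scatter, can be sketched with synthetic data (a toy illustration; the sign convention and numbers are assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic ensemble: the model predicts the ensemble-mean C exactly,
# while each observation is a single realization with rms fluctuation
# sigma_c about that mean.
C_pred = np.full(400, 10.0)          # predicted mean concentration
sigma_c = 8.0                        # rms fluctuating concentration
C_obs = C_pred + sigma_c * rng.standard_normal(400)

residuals = C_obs - C_pred
mean_error = residuals.mean()        # estimate of the mean model error
scatter = residuals.std(ddof=1)      # approaches sigma_c for a good model
```

    A persistently nonzero mean_error then points to a deficiency in the model physics, while scatter of order sigma_c is expected even from a perfect mean-concentration model.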

  8. NuSTAR view of the central region of M31

    NASA Astrophysics Data System (ADS)

    Stiele, H.; Kong, A. K. H.

    2018-04-01

    Our neighbouring large spiral galaxy, the Andromeda galaxy (M31 or NGC 224), is an ideal target for studying the X-ray source population of a nearby galaxy. NuSTAR observed the central region of M31 in 2015, allowing the population of X-ray point sources to be studied at energies above 10 keV. Based on the source catalogue of the large XMM-Newton survey of M31, we identified counterparts to the XMM-Newton sources in the NuSTAR data. The NuSTAR data only contain sources of a brightness comparable to (or even greater than) that of the selected sources detected in the XMM-Newton data. We investigate hardness ratios, spectra, and long-term light curves of individual sources obtained from the NuSTAR data. Based on our spectral studies, we suggest four sources as possible X-ray binary candidates. The long-term light curves of the seven sources that have been observed more than once show low (but significant) variability.

  9. Demonstration of Technologies for Remote and in Situ Sensing of Atmospheric Methane Abundances - a Controlled Release Experiment

    NASA Astrophysics Data System (ADS)

    Aubrey, A. D.; Thorpe, A. K.; Christensen, L. E.; Dinardo, S.; Frankenberg, C.; Rahn, T. A.; Dubey, M.

    2013-12-01

    It is critical to constrain both natural and anthropogenic sources of methane to better predict the impact on global climate change. Critical technologies for this assessment include those that can detect methane point and concentrated diffuse sources over large spatial scales. Airborne spectrometers can potentially fill this gap for large scale remote sensing of methane while in situ sensors, both ground-based and mounted on aerial platforms, can monitor and quantify at small to medium spatial scales. The Jet Propulsion Laboratory (JPL) and collaborators recently conducted a field test located near Casper, WY, at the Rocky Mountain Oilfield Test Center (RMOTC). These tests were focused on demonstrating the performance of remote and in situ sensors for quantification of point-sourced methane. A series of three controlled release points were setup at RMOTC and over the course of six experiment days, the point source flux rates were varied from 50 LPM to 2400 LPM (liters per minute). During these releases, in situ sensors measured real-time methane concentration from field towers (downwind from the release point) and using a small Unmanned Aerial System (sUAS) to characterize spatiotemporal variability of the plume structure. Concurrent with these methane point source controlled releases, airborne sensor overflights were conducted using three aircraft. The NASA Carbon in Arctic Reservoirs Vulnerability Experiment (CARVE) participated with a payload consisting of a Fourier Transform Spectrometer (FTS) and an in situ methane sensor. Two imaging spectrometers provided assessment of optical and thermal infrared detection of methane plumes. The AVIRIS-next generation (AVIRIS-ng) sensor has been demonstrated for detection of atmospheric methane in the short wave infrared region, specifically using the absorption features at ~2.3 μm. 
Detection of methane in the thermal infrared region was evaluated by flying the Hyperspectral Thermal Emission Spectrometer (HyTES), whose retrievals interrogate spectral features in the 7.5 to 8.5 μm region. Here we discuss preliminary results from the JPL activities during the RMOTC controlled release experiment, including capabilities of airborne sensors for total columnar atmospheric methane detection and comparison to results from ground measurements and dispersion models. Potential application areas for these remote sensing technologies include assessment of anthropogenic and natural methane sources over wide spatial scales that represent significant unconstrained factors in the global methane budget.
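
    Relating a measured downwind concentration to a point-source flux rate, the core of the controlled-release comparison, is commonly done with a Gaussian plume model; a textbook sketch (not the dispersion models used in the experiment; all values hypothetical):

```python
import math

def gaussian_plume(q_kg_s, u_m_s, y_m, z_m, h_m, sigma_y, sigma_z):
    """Concentration (kg/m^3) from a continuous point source of strength
    q at release height h, for a receptor offset y crosswind and z above
    ground; sigma_y/sigma_z are the plume widths (m) evaluated at the
    receptor's downwind distance. Includes reflection from the ground."""
    lateral = math.exp(-y_m**2 / (2.0 * sigma_y**2))
    vertical = (math.exp(-(z_m - h_m)**2 / (2.0 * sigma_z**2))
                + math.exp(-(z_m + h_m)**2 / (2.0 * sigma_z**2)))
    return q_kg_s / (2.0 * math.pi * u_m_s * sigma_y * sigma_z) * lateral * vertical

# 1 g/s release at 2 m height, 3 m/s wind, receptor on the plume centerline
c = gaussian_plume(1.0e-3, 3.0, 0.0, 2.0, 2.0, 10.0, 5.0)
```

    Inverting such a relation for q, given measured concentrations and meteorology, is what turns tower or airborne concentration data into an emission-rate estimate.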

  10. Galactic Starburst NGC 3603 from X-Rays to Radio

    NASA Technical Reports Server (NTRS)

    Moffat, A. F. J.; Corcoran, M. F.; Stevens, I. R.; Skalkowski, G.; Marchenko, S. V.; Muecke, A.; Ptak, A.; Koribalski, B. S.; Brenneman, L.; Mushotzky, R.; hide

    2002-01-01

    NGC 3603 is the most massive and luminous visible starburst region in the Galaxy. We present the first Chandra/ACIS-I X-ray image and spectra of this dense, exotic object, accompanied by a deep cm-wavelength ATCA radio image at ≲1″ spatial resolution, and HST/ground-based optical data. At the S/N > 3 level, Chandra detects several hundred X-ray point sources (compared to the 3 distinct sources seen by ROSAT). At least 40 of these sources are definitely associated with optically identified cluster O- and WR-type members, but most are not. A diffuse X-ray component is also seen out to approximately 2′ (4 pc) from the center, probably arising mainly from the large number of merging/colliding hot stellar winds and/or numerous faint cluster sources. The point-source X-ray fluxes generally increase with increasing bolometric brightness of the member O/WR stars, but with very large scatter. Some exceptionally bright stellar X-ray sources may be colliding-wind binaries. The radio image shows (1) two resolved sources, one definitely non-thermal, in the cluster core near where the X-ray/optically brightest stars with the strongest stellar winds are located; (2) emission from all three known proplyd-like objects (with thermal and non-thermal components); and (3) many thermal sources in the peripheral regions of triggered star formation. Overall, NGC 3603 appears to be a somewhat younger and hotter, scaled-down version of typical starbursts found in other galaxies.

  11. Using SPARROW to Model Total Nitrogen Sources, and Transport in Rivers and Streams of California and Adjacent States, U.S.A

    NASA Astrophysics Data System (ADS)

    Saleh, D.; Domagalski, J. L.

    2012-12-01

    Sources and factors affecting the transport of total nitrogen are being evaluated for a study area that covers most of California and some areas in Oregon and Nevada, using the SPARROW model (SPAtially Referenced Regression On Watershed attributes) developed by the U.S. Geological Survey. Mass loads of total nitrogen calculated for monitoring sites at stream gauging stations are regressed against factors affecting nitrogen transport, including fertilizer use, recharge, atmospheric deposition, and stream characteristics, to understand how total nitrogen is transported under average conditions. SPARROW models have been used successfully in other parts of the country to understand how nutrients are transported and how management strategies can be formulated, such as with Total Maximum Daily Load (TMDL) assessments. Fertilizer use, atmospheric deposition, and climatic data were obtained for 2002, and loads for that year were calculated for monitored streams and point sources (mostly wastewater treatment plants). The stream loads were calculated using the adjusted maximum likelihood estimation (AMLE) method. River discharge and nitrogen concentrations were de-trended in these calculations in order to eliminate the effect of temporal changes on stream load. Effluent discharge information, as well as total nitrogen concentrations from point sources, was obtained from USEPA databases and from facility records. The model indicates that atmospheric deposition and fertilizer use account for a large percentage of the total nitrogen load in many of the larger watersheds throughout the study area. Point sources, on the other hand, are generally localized around large cities, are considered insignificant sources, and account for a small percentage of the total nitrogen loads throughout the study area.
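
    The regression idea at the heart of the study, explaining monitored stream loads by catchment source terms, can be sketched with synthetic data (a deliberately linearized toy; SPARROW itself is a nonlinear, spatially referenced model, and every coefficient and value here is invented):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200  # synthetic monitored catchments

# Hypothetical nitrogen inputs per catchment (kg/yr)
fertilizer = rng.uniform(1e4, 1e6, n)
deposition = rng.uniform(1e3, 1e5, n)
point_src = rng.uniform(1e3, 1e5, n)

# Synthetic delivered load: fixed delivery fractions plus lognormal noise
load = (0.12 * fertilizer + 0.30 * deposition + 0.90 * point_src) \
       * rng.lognormal(0.0, 0.02, n)

# Least-squares estimate of the delivery fractions from the "monitoring" data
X = np.column_stack([fertilizer, deposition, point_src])
coef, *_ = np.linalg.lstsq(X, load, rcond=None)
```

    Real applications add in-stream delivery and decay terms and use AMLE-based loads at the gauges, but the regression skeleton, loads explained by weighted source inputs, is the same.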

  12. Incompatible Land Uses and the Topology of Cumulative Risk

    NASA Astrophysics Data System (ADS)

    Lejano, Raul P.; Smith, C. Scott

    2006-02-01

    The extensive literature on environmental justice has, by now, well defined the essential ingredients of cumulative risk, namely, incompatible land uses and vulnerability. Most problematic is the case when risk is produced by a large aggregation of small sources of air toxics. In this article, we test these notions in an area of Southern California, Southeast Los Angeles (SELA), which has come to be known as Asthmatown. Developing a rapid risk mapping protocol, we scan the neighborhood for small potential sources of air toxics and find, literally, hundreds of small point sources within a 2-mile radius, interspersed with residences. We also map the estimated cancer risks and noncancer hazard indices across the landscape. We find that, indeed, such large aggregations of even small, nondominant sources of air toxics can produce markedly elevated levels of risk. In this study, the risk profiles show additional cancer risks of up to 800 in a million and noncancer hazard indices of up to 200 in SELA due to the agglomeration of small point sources. This is significant (for example, estimates of the average regional point-source-related cancer risk range from 125 to 200 in a million). Most importantly, if we were to read the risk contours as if they were geological structures, we would observe not only a handful of distinct peaks, but a general “mountain range” running all throughout the study area, which underscores the ubiquity of risk in SELA. Just as cumulative risk has deeply embedded itself into the fabric of the place, so, too, must intervention seek to embed strategies into the institutions and practices of SELA. This has implications for advocacy, as seen in a recently initiated participatory action research project aimed at building health research capacities into the community in keeping with an ethic of care.
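
    The arithmetic behind the cumulative figures quoted above is addition across sources; a minimal sketch (function names and values are illustrative, not the study's protocol):

```python
def cumulative_cancer_risk(source_risks):
    """Cumulative lifetime cancer risk at a receptor under the standard
    additive treatment of independent carcinogenic sources."""
    return sum(source_risks)

def hazard_index(hazard_quotients):
    """Noncancer hazard index: the sum of per-source hazard quotients;
    values above 1 flag potential concern."""
    return sum(hazard_quotients)

# Toy neighborhood: 200 small point sources, each adding a 4-in-a-million
# cancer risk, together reach 800 in a million
total_risk = cumulative_cancer_risk([4e-6] * 200)
```

    The point of the aggregation is visible immediately: no single source dominates, yet the sum far exceeds typical regional point-source risk levels.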

  13. Handheld low-temperature plasma probe for portable "point-and-shoot" ambient ionization mass spectrometry.

    PubMed

    Wiley, Joshua S; Shelley, Jacob T; Cooks, R Graham

    2013-07-16

    We describe a handheld, wireless low-temperature plasma (LTP) ambient ionization source and its performance on a benchtop and a miniature mass spectrometer. The source, which is inexpensive to build and operate, is battery-powered and utilizes miniature helium cylinders or air as the discharge gas. Comparison of a conventional, large-scale LTP source against the handheld LTP source, which uses less helium and power than the large-scale version, revealed that the handheld source had similar or slightly better analytical performance. Another advantage of the handheld LTP source is the ability to quickly interrogate a gaseous, liquid, or solid sample without requiring any setup time. A small, 7.4-V Li-polymer battery is able to sustain plasma for 2 h continuously, while the miniature helium cylinder supplies gas flow for approximately 8 continuous hours. Long-distance ion transfer was achieved for distances up to 1 m.

  14. THE CHANDRA SURVEY OF THE COSMOS FIELD. II. SOURCE DETECTION AND PHOTOMETRY

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Puccetti, S.; Vignali, C.; Cappelluti, N.

    2009-12-01

    The Chandra COSMOS Survey (C-COSMOS) is a large, 1.8 Ms, Chandra program that covers the central contiguous ~0.92 deg² of the COSMOS field. C-COSMOS is the result of a complex tiling, with every position being observed in up to six overlapping pointings (four overlapping pointings in most of the central ~0.45 deg² area with the best exposure, and two overlapping pointings in most of the surrounding area, covering an additional ~0.47 deg²). Therefore, the full exploitation of the C-COSMOS data requires a dedicated and accurate analysis focused on three main issues: (1) maximizing the sensitivity when the point-spread function (PSF) changes strongly among different observations of the same source (from ~1 arcsec up to ~10 arcsec half-power radius); (2) resolving close pairs; and (3) obtaining the best source localization and count rate. We present here our treatment of four key analysis items: source detection, localization, photometry, and survey sensitivity. Our final procedure consists of two steps: (1) a wavelet detection algorithm to find source candidates and (2) a maximum likelihood PSF fitting algorithm to evaluate the source count rates and the probability that each source candidate is a fluctuation of the background. We discuss the main characteristics of this procedure, which was the result of detailed comparisons between different detection algorithms and photometry tools, calibrated with extensive and dedicated simulations.
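
    The "probability that a source candidate is a fluctuation of the background" in step (2) is, at its core, a Poisson tail probability; a minimal sketch (not the C-COSMOS maximum likelihood PSF-fitting code; the counts and background level are hypothetical):

```python
from math import exp

def p_background_fluctuation(counts, expected_bg):
    """P(N >= counts) for a Poisson background with mean expected_bg,
    i.e., the chance that background alone yields at least the observed
    counts in the detection cell."""
    if counts <= 0:
        return 1.0
    term = exp(-expected_bg)   # P(N = 0)
    cdf = term
    for k in range(1, counts):
        term *= expected_bg / k   # P(N = k) by recurrence
        cdf += term
    return 1.0 - cdf

# 12 counts where ~3 background counts are expected: unlikely to be noise
p = p_background_fluctuation(12, 3.0)
```

    Candidates whose probability falls below a chosen threshold survive into the catalog; the survey replaces the fixed cell here with a full maximum likelihood fit of the position-dependent PSF.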

  15. Precision blackbody sources for radiometric standards.

    PubMed

    Sapritsky, V I; Khlevnoy, B B; Khromchenko, V B; Lisiansky, B E; Mekhontsev, S N; Melenevsky, U A; Morozova, S P; Prokhorov, A V; Samoilov, L N; Shapoval, V I; Sudarev, K A; Zelener, M F

    1997-08-01

    The precision blackbody sources developed at the All-Russian Institute for Optical and Physical Measurements (Moscow, Russia) and their characteristics are analyzed. The precision high-temperature graphite blackbody BB22p, large-area high-temperature pyrolytic graphite blackbody BB3200pg, middle-temperature graphite blackbody BB2000, low-temperature blackbody BB300, and gallium fixed-point blackbody BB29gl and their characteristics are described.

  16. Design and Evaluation of Large-Aperture Gallium Fixed-Point Blackbody

    NASA Astrophysics Data System (ADS)

    Khromchenko, V. B.; Mekhontsev, S. N.; Hanssen, L. M.

    2009-02-01

    To complement existing water bath blackbodies that now serve as NIST primary standard sources in the temperature range from 15 °C to 75 °C, a gallium fixed-point blackbody has been recently built. The main objectives of the project included creating an extended-area radiation source with a target emissivity of 0.9999 capable of operating either inside a cryo-vacuum chamber or in a standard laboratory environment. A minimum aperture diameter of 45 mm is necessary for the calibration of radiometers with a collimated input geometry or large spot size. This article describes the design and performance evaluation of the gallium fixed-point blackbody, including the calculation and measurements of directional effective emissivity, estimates of uncertainty due to the temperature drop across the interface between the pure metal and radiating surfaces, as well as the radiometrically obtained spatial uniformity of the radiance temperature and the melting plateau stability. Another important test is the measurement of the cavity reflectance, which was achieved by using total integrated scatter measurements at a laser wavelength of 10.6 μm. The result allows one to predict the performance under the low-background conditions of a cryo-chamber. Finally, results of the spectral radiance comparison with the NIST water-bath blackbody are provided. The experimental results are in good agreement with predicted values and demonstrate the potential of our approach. It is anticipated that, after completion of the characterization, a similar source operating at the water triple point will be constructed.

  17. Error Estimation and Compensation in Reduced Dynamic Models of Large Space Structures

    DTIC Science & Technology

    1987-04-23

    [Front matter and figure-list fragments: contract F33615-84-C-3219 (AFWAL/FIBRA); legible figure titles include modes of the full model, a comparison of various reduced models, driving-point mobilities for the wing tip (Z55) and wing-root trailing edge (Z19), AMI improvement, and frequency-domain solutions.]

  18. Independent evaluation of point source fossil fuel CO2 emissions to better than 10%

    PubMed Central

    Turnbull, Jocelyn Christine; Keller, Elizabeth D.; Norris, Margaret W.; Wiltshire, Rachael M.

    2016-01-01

    Independent estimates of fossil fuel CO2 (CO2ff) emissions are key to ensuring that emission reductions and regulations are effective and provide needed transparency and trust. Point source emissions are a key target because a small number of power plants represent a large portion of total global emissions. Currently, emission rates are known only from self-reported data. Atmospheric observations have the potential to meet the need for independent evaluation, but useful results from this method have been elusive, due to challenges in distinguishing CO2ff emissions from the large and varying CO2 background and in relating atmospheric observations to emission flux rates with high accuracy. Here we use time-integrated observations of the radiocarbon content of CO2 (14CO2) to quantify the recently added CO2ff mole fraction at surface sites surrounding a point source. We demonstrate that both fast-growing plant material (grass) and CO2 collected by absorption into sodium hydroxide solution provide excellent time-integrated records of atmospheric 14CO2. These time-integrated samples allow us to evaluate emissions over a period of days to weeks with only a modest number of measurements. Applying the same time integration in an atmospheric transport model eliminates the need to resolve highly variable short-term turbulence. Together these techniques allow us to independently evaluate point source CO2ff emission rates from atmospheric observations with uncertainties of better than 10%. This uncertainty represents an improvement by a factor of 2 over current bottom-up inventory estimates and previous atmospheric observation estimates and allows reliable independent evaluation of emissions. PMID:27573818
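
    The 14C-based recovery of the fossil fuel contribution rests on a simple mass balance, since fossil carbon is entirely free of 14C (Δ14C = -1000 permil); a sketch of that relation (a common simplified form with small correction terms omitted; the sample values are hypothetical):

```python
def co2ff_mole_fraction(co2_obs_ppm, d14c_obs, d14c_bg, d14c_ff=-1000.0):
    """Recently added fossil-fuel CO2 (ppm) from the 14C depletion of a
    sample relative to background:
        CO2ff = CO2_obs * (D_bg - D_obs) / (D_bg - D_ff),
    with Delta14C values in permil and pure fossil carbon at -1000."""
    return co2_obs_ppm * (d14c_bg - d14c_obs) / (d14c_bg - d14c_ff)

# 410 ppm observed, background Delta14C of 20 permil, sample depleted
# to 10 permil by local fossil emissions
ff = co2ff_mole_fraction(410.0, 10.0, 20.0)   # ~4 ppm of fossil CO2
```

    Time-integrated sampling (grass or NaOH absorption) supplies d14c_obs averaged over days to weeks, matching the time integration applied in the atmospheric transport model.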

  19. Independent evaluation of point source fossil fuel CO2 emissions to better than 10%.

    PubMed

    Turnbull, Jocelyn Christine; Keller, Elizabeth D; Norris, Margaret W; Wiltshire, Rachael M

    2016-09-13

    Independent estimates of fossil fuel CO2 (CO2ff) emissions are key to ensuring that emission reductions and regulations are effective and provide needed transparency and trust. Point source emissions are a key target because a small number of power plants represent a large portion of total global emissions. Currently, emission rates are known only from self-reported data. Atmospheric observations have the potential to meet the need for independent evaluation, but useful results from this method have been elusive, due to challenges in distinguishing CO2ff emissions from the large and varying CO2 background and in relating atmospheric observations to emission flux rates with high accuracy. Here we use time-integrated observations of the radiocarbon content of CO2 ((14)CO2) to quantify the recently added CO2ff mole fraction at surface sites surrounding a point source. We demonstrate that both fast-growing plant material (grass) and CO2 collected by absorption into sodium hydroxide solution provide excellent time-integrated records of atmospheric (14)CO2. These time-integrated samples allow us to evaluate emissions over a period of days to weeks with only a modest number of measurements. Applying the same time integration in an atmospheric transport model eliminates the need to resolve highly variable short-term turbulence. Together these techniques allow us to independently evaluate point source CO2ff emission rates from atmospheric observations with uncertainties of better than 10%. This uncertainty represents an improvement by a factor of 2 over current bottom-up inventory estimates and previous atmospheric observation estimates and allows reliable independent evaluation of emissions.

  20. Search for Extended Sources in the Galactic Plane Using Six Years of Fermi-Large Area Telescope Pass 8 Data above 10 GeV

    DOE PAGES

    Ackermann, M.; Ajello, M.; Baldini, L.; ...

    2017-07-10

    The spatial extension of a γ-ray source is an essential ingredient to determine its spectral properties, as well as its potential multiwavelength counterpart. The capability to spatially resolve γ-ray sources is greatly improved by the newly delivered Fermi-Large Area Telescope (LAT) Pass 8 event-level analysis, which provides a greater acceptance and an improved point-spread function, two crucial factors for the detection of extended sources. Here, we present a complete search for extended sources located within 7° from the Galactic plane, using 6 yr of Fermi-LAT data above 10 GeV. We find 46 extended sources and provide their morphological and spectral characteristics. As a result, this constitutes the first catalog of hard Fermi-LAT extended sources, named the Fermi Galactic Extended Source Catalog, which allows a thorough study of the properties of the Galactic plane in the sub-TeV domain.
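
    Deciding whether a source is extended rather than point-like is commonly done with a likelihood-ratio test in Fermi-LAT extended-source analyses; a minimal sketch of that statistic (the log-likelihood values below are hypothetical):

```python
import math

def ts_extension(loglike_extended, loglike_point):
    """Test statistic for spatial extension,
    TS_ext = 2 * (lnL_extended - lnL_point);
    larger values favor the extended hypothesis."""
    return 2.0 * (loglike_extended - loglike_point)

def approx_significance(ts):
    """Rough Gaussian significance for one extra free parameter
    (chi-square with 1 dof): sigma ~ sqrt(TS)."""
    return math.sqrt(max(ts, 0.0))

# Hypothetical fit results for one candidate source
ts = ts_extension(-10050.0, -10082.0)
```

    An improved PSF sharpens both fits, which is why Pass 8 makes this comparison so much more decisive for marginally extended sources.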

  1. Search for Extended Sources in the Galactic Plane Using Six Years of Fermi -Large Area Telescope Pass 8 Data above 10 GeV

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ackermann, M.; Buehler, R.; Ajello, M.

    The spatial extension of a γ-ray source is an essential ingredient to determine its spectral properties, as well as its potential multiwavelength counterpart. The capability to spatially resolve γ-ray sources is greatly improved by the newly delivered Fermi-Large Area Telescope (LAT) Pass 8 event-level analysis, which provides a greater acceptance and an improved point-spread function, two crucial factors for the detection of extended sources. Here, we present a complete search for extended sources located within 7° from the Galactic plane, using 6 yr of Fermi-LAT data above 10 GeV. We find 46 extended sources and provide their morphological and spectral characteristics. This constitutes the first catalog of hard Fermi-LAT extended sources, named the Fermi Galactic Extended Source Catalog, which allows a thorough study of the properties of the Galactic plane in the sub-TeV domain.

  2. Search for Extended Sources in the Galactic Plane Using Six Years of Fermi-Large Area Telescope Pass 8 Data above 10 GeV

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ackermann, M.; Ajello, M.; Baldini, L.

    The spatial extension of a γ-ray source is an essential ingredient to determine its spectral properties, as well as its potential multiwavelength counterpart. The capability to spatially resolve γ-ray sources is greatly improved by the newly delivered Fermi-Large Area Telescope (LAT) Pass 8 event-level analysis, which provides a greater acceptance and an improved point-spread function, two crucial factors for the detection of extended sources. Here, we present a complete search for extended sources located within 7° from the Galactic plane, using 6 yr of Fermi-LAT data above 10 GeV. We find 46 extended sources and provide their morphological and spectral characteristics. As a result, this constitutes the first catalog of hard Fermi-LAT extended sources, named the Fermi Galactic Extended Source Catalog, which allows a thorough study of the properties of the Galactic plane in the sub-TeV domain.

  3. Search for Extended Sources in the Galactic Plane Using Six Years of Fermi-Large Area Telescope Pass 8 Data above 10 GeV

    NASA Astrophysics Data System (ADS)

    Ackermann, M.; Ajello, M.; Baldini, L.; Ballet, J.; Barbiellini, G.; Bastieri, D.; Bellazzini, R.; Bissaldi, E.; Bloom, E. D.; Bonino, R.; Bottacini, E.; Brandt, T. J.; Bregeon, J.; Bruel, P.; Buehler, R.; Cameron, R. A.; Caragiulo, M.; Caraveo, P. A.; Castro, D.; Cavazzuti, E.; Cecchi, C.; Charles, E.; Chekhtman, A.; Cheung, C. C.; Chiaro, G.; Ciprini, S.; Cohen, J. M.; Costantin, D.; Costanza, F.; Cutini, S.; D'Ammando, F.; de Palma, F.; Desiante, R.; Digel, S. W.; Di Lalla, N.; Di Mauro, M.; Di Venere, L.; Favuzzi, C.; Fegan, S. J.; Ferrara, E. C.; Franckowiak, A.; Fukazawa, Y.; Funk, S.; Fusco, P.; Gargano, F.; Gasparrini, D.; Giglietto, N.; Giordano, F.; Giroletti, M.; Green, D.; Grenier, I. A.; Grondin, M.-H.; Guillemot, L.; Guiriec, S.; Harding, A. K.; Hays, E.; Hewitt, J. W.; Horan, D.; Hou, X.; Jóhannesson, G.; Kamae, T.; Kuss, M.; La Mura, G.; Larsson, S.; Lemoine-Goumard, M.; Li, J.; Longo, F.; Loparco, F.; Lubrano, P.; Magill, J. D.; Maldera, S.; Malyshev, D.; Manfreda, A.; Mazziotta, M. N.; Michelson, P. F.; Mitthumsiri, W.; Mizuno, T.; Monzani, M. E.; Morselli, A.; Moskalenko, I. V.; Negro, M.; Nuss, E.; Ohsugi, T.; Omodei, N.; Orienti, M.; Orlando, E.; Ormes, J. F.; Paliya, V. S.; Paneque, D.; Perkins, J. S.; Persic, M.; Pesce-Rollins, M.; Petrosian, V.; Piron, F.; Porter, T. A.; Principe, G.; Rainò, S.; Rando, R.; Razzano, M.; Razzaque, S.; Reimer, A.; Reimer, O.; Reposeur, T.; Sgrò, C.; Simone, D.; Siskind, E. J.; Spada, F.; Spandre, G.; Spinelli, P.; Suson, D. J.; Tak, D.; Thayer, J. B.; Thompson, D. J.; Torres, D. F.; Tosti, G.; Troja, E.; Vianello, G.; Wood, K. S.; Wood, M.

    2017-07-01

    The spatial extension of a γ-ray source is an essential ingredient to determine its spectral properties, as well as its potential multiwavelength counterpart. The capability to spatially resolve γ-ray sources is greatly improved by the newly delivered Fermi-Large Area Telescope (LAT) Pass 8 event-level analysis, which provides a greater acceptance and an improved point-spread function, two crucial factors for the detection of extended sources. Here, we present a complete search for extended sources located within 7° from the Galactic plane, using 6 yr of Fermi-LAT data above 10 GeV. We find 46 extended sources and provide their morphological and spectral characteristics. This constitutes the first catalog of hard Fermi-LAT extended sources, named the Fermi Galactic Extended Source Catalog, which allows a thorough study of the properties of the Galactic plane in the sub-TeV domain.

  4. THE POPULATION OF COMPACT RADIO SOURCES IN THE ORION NEBULA CLUSTER

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Forbrich, J.; Meingast, S.; Rivilla, V. M.

    We present a deep centimeter-wavelength catalog of the Orion Nebula Cluster (ONC), based on a 30 hr single-pointing observation with the Karl G. Jansky Very Large Array in its high-resolution A-configuration using two 1 GHz bands centered at 4.7 and 7.3 GHz. A total of 556 compact sources were detected in a map with a nominal rms noise of 3 μJy beam⁻¹, limited by complex source structure and the primary beam response. Compared to previous catalogs, our detections increase the sample of known compact radio sources in the ONC by more than a factor of seven. The new data show complex emission on a wide range of spatial scales. Following a preliminary correction for the wideband primary-beam response, we determine radio spectral indices for 170 sources whose index uncertainties are less than ±0.5. We compare the radio to the X-ray and near-infrared point-source populations, noting similarities and differences.

  5. Volatile organic compound emissions from the oil and natural gas industry in the Uinta Basin, Utah: point sources compared to ambient air composition

    NASA Astrophysics Data System (ADS)

    Warneke, C.; Geiger, F.; Edwards, P. M.; Dube, W.; Pétron, G.; Kofler, J.; Zahn, A.; Brown, S. S.; Graus, M.; Gilman, J.; Lerner, B.; Peischl, J.; Ryerson, T. B.; de Gouw, J. A.; Roberts, J. M.

    2014-05-01

    The emissions of volatile organic compounds (VOCs) associated with oil and natural gas production in the Uinta Basin, Utah were measured at a ground site in Horse Pool and from a NOAA mobile laboratory with PTR-MS instruments. The VOC compositions in the vicinity of individual gas and oil wells and other point sources such as evaporation ponds, compressor stations and injection wells are compared to the measurements at Horse Pool. High mixing ratios of aromatics, alkanes, cycloalkanes and methanol were observed for extended periods of time and short-term spikes caused by local point sources. The mixing ratios during the time the mobile laboratory spent on the well pads were averaged. High mixing ratios were found close to all point sources, but gas wells using dry-gas collection, which means dehydration happens at the well, were clearly associated with higher mixing ratios than other wells. Another large source was the flowback pond near a recently hydraulically re-fractured gas well. The comparison of the VOC composition of the emissions from the oil and natural gas wells showed that wet gas collection wells compared well with the majority of the data at Horse Pool and that oil wells compared well with the rest of the ground site data. Oil wells on average emit heavier compounds than gas wells. The mobile laboratory measurements confirm the results from an emissions inventory: the main VOC source categories from individual point sources are dehydrators, oil and condensate tank flashing and pneumatic devices and pumps. Raw natural gas is emitted from the pneumatic devices and pumps and heavier VOC mixes from the tank flashings.

  6. Large-Eddy Simulation of Chemically Reactive Pollutant Transport from a Point Source in Urban Area

    NASA Astrophysics Data System (ADS)

    Du, Tangzheng; Liu, Chun-Ho

    2013-04-01

    Most air pollutants are chemically reactive, so using an inert scalar as the tracer in pollutant dispersion modelling would often overlook their impact on urban inhabitants. In this study, large-eddy simulation (LES) is used to examine the plume dispersion of chemically reactive pollutants in a hypothetical atmospheric boundary layer (ABL) in neutral stratification. The irreversible chemistry mechanism of ozone (O3) titration is integrated into the LES model. Nitric oxide (NO) is emitted from an elevated point source in a rectangular spatial domain doped with O3. The LES results compare well with wind-tunnel results available in the literature. Afterwards, the LES model is applied to idealized two-dimensional (2D) street canyons of unity aspect ratio to study the behaviour of chemically reactive plumes over idealized urban roughness. The relations among the various reaction and turbulence time scales, together with the associated dimensionless numbers, are then analysed.
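As a rough companion to the time-scale analysis described in this abstract, the competition between O3-titration chemistry and turbulent mixing is commonly characterized by a Damköhler number. The sketch below is illustrative only: the rate constant, background O3 concentration, and turbulence time scale are generic assumptions, not values from the study.

```python
# Illustrative sketch (not from the paper): compare the chemical time scale of
# the O3 titration reaction NO + O3 -> NO2 + O2 with a turbulence time scale.
# The rate constant, background O3, and turbulence time are assumed values.
k_no_o3 = 1.9e-14        # cm^3 molecule^-1 s^-1, approximate at ~298 K
o3_background = 1.0e12   # molecules cm^-3 (~40 ppb near the surface), assumed

tau_chem = 1.0 / (k_no_o3 * o3_background)  # chemical time scale, seconds
tau_turb = 60.0                             # assumed turbulence integral time, s

damkohler = tau_turb / tau_chem  # Da ~ 1: chemistry and mixing compete
print(f"tau_chem ~ {tau_chem:.0f} s, Da ~ {damkohler:.2f}")
```

With these assumed values the two time scales are comparable (Da near unity), which is exactly the regime where an inert-tracer treatment is least defensible.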

  7. Contaminant transport from point source on water surface in open channel flow with bed absorption

    NASA Astrophysics Data System (ADS)

    Guo, Jinlan; Wu, Xudong; Jiang, Weiquan; Chen, Guoqian

    2018-06-01

    Studying solute dispersion in channel flows is of significance for environmental and industrial applications. The two-dimensional concentration distribution for the typical case of a point-source release on the free water surface in a channel flow with bed absorption is presented by means of Chatwin's long-time asymptotic technique. Five basic characteristics of Taylor dispersion and the vertical mean concentration distribution, with skewness and kurtosis modifications, are also analyzed. The results reveal that bed absorption affects both the longitudinal and vertical concentration distributions and causes the contaminant cloud to concentrate in the upper layer. Additionally, the cross-sectional concentration distribution approaches a Gaussian distribution at large time, which is unaffected by the bed absorption. The vertical concentration distribution is found to be nonuniform even at large time. The obtained results are essential for practical applications subject to strict environmental standards.

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Hongfen, E-mail: wanghongfen11@163.com; Wang, Zhiqi; Chen, Shougang

    Molybdenum carbides with surfactants as carbon sources were prepared by carbothermal reduction of the appropriate precursors (molybdenum oxides deposited on surfactant micelles) at 1023 K under hydrogen gas. The carburized products were characterized using scanning electron microscopy (SEM), X-ray diffraction and BET surface area measurements. From the SEM images, hollow microspherical and rod-like molybdenum carbides were observed. X-ray diffraction patterns showed that the annealing time of carburization had a large effect on the conversion of molybdenum oxides to molybdenum carbides, and BET surface area measurements indicated that the choice of carbon source led to large differences in the specific surface areas of the molybdenum carbides. - Graphical abstract: Molybdenum carbides having hollow microspherical and hollow rod-like morphologies that are different from the conventional monodispersed platelet-like morphologies. Highlights: ► Molybdenum carbides were prepared using surfactants as carbon sources. ► The kinds of surfactants affected the morphologies of molybdenum carbides. ► The time of heat preservation at 1023 K affected the carburization process. ► Molybdenum carbides with hollow structures had larger specific surface areas.

  9. Gamma-ray blazars: the combined AGILE and MAGIC views

    NASA Astrophysics Data System (ADS)

    Persic, M.; De Angelis, A.; Longo, F.; Tavani, M.

    The large FOV of the AGILE Gamma-Ray Imaging Detector (GRID), 2.5 sr, will allow the whole sky to be surveyed once every 10 days in the 30 MeV - 50 GeV energy band down to 0.05 Crab Units. This gives the opportunity of performing the first flux-limited, high-energy γ-ray all-sky survey. The high Galactic latitude point-source population is expected to be largely dominated by blazars. Several tens of blazars are expected to be detected by AGILE (e.g., Costamante & Ghisellini 2002), about half of which are accessible to the ground-based MAGIC Cherenkov telescope. The latter can then carry out pointed observations of this subset of AGILE sources in the 50 GeV - 10 TeV band. Given the comparable sensitivities of AGILE/GRID and MAGIC in adjacent energy bands where the emitted radiation is produced by the same (e.g., SSC) mechanism, we expect that most of these sources can be detected by MAGIC. We expect this broadband γ-ray strategy to enable discovery by MAGIC of 10-15 previously unknown TeV blazars.

  10. DEVELOPMENT OF THE MODEL OF GALACTIC INTERSTELLAR EMISSION FOR STANDARD POINT-SOURCE ANALYSIS OF FERMI LARGE AREA TELESCOPE DATA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Acero, F.; Ballet, J.; Ackermann, M.

    2016-04-01

    Most of the celestial γ rays detected by the Large Area Telescope (LAT) on board the Fermi Gamma-ray Space Telescope originate from the interstellar medium when energetic cosmic rays interact with interstellar nucleons and photons. Conventional point-source and extended-source studies rely on the modeling of this diffuse emission for accurate characterization. Here, we describe the development of the Galactic Interstellar Emission Model (GIEM), which is the standard adopted by the LAT Collaboration and is publicly available. This model is based on a linear combination of maps for interstellar gas column density in Galactocentric annuli and for the inverse-Compton emission produced in the Galaxy. In the GIEM, we also include large-scale structures like Loop I and the Fermi bubbles. The measured gas emissivity spectra confirm that the cosmic-ray proton density decreases with Galactocentric distance beyond 5 kpc from the Galactic Center. The measurements also suggest a softening of the proton spectrum with Galactocentric distance. We observe that the Fermi bubbles have boundaries with a shape similar to a catenary at latitudes below 20° and we observe an enhanced emission toward their base extending in the north and south Galactic directions and located within ∼4° of the Galactic Center.

  11. Development of the Model of Galactic Interstellar Emission for Standard Point-Source Analysis of Fermi Large Area Telescope Data

    NASA Technical Reports Server (NTRS)

    Acero, F.; Ackermann, M.; Ajello, M.; Albert, A.; Baldini, L.; Ballet, J.; Barbiellini, G.; Bastieri, D.; Bellazzini, R.; Brandt, T. J.; hide

    2016-01-01

    Most of the celestial gamma rays detected by the Large Area Telescope (LAT) on board the Fermi Gamma-ray Space Telescope originate from the interstellar medium when energetic cosmic rays interact with interstellar nucleons and photons. Conventional point-source and extended-source studies rely on the modeling of this diffuse emission for accurate characterization. Here, we describe the development of the Galactic Interstellar Emission Model (GIEM), which is the standard adopted by the LAT Collaboration and is publicly available. This model is based on a linear combination of maps for interstellar gas column density in Galactocentric annuli and for the inverse-Compton emission produced in the Galaxy. In the GIEM, we also include large-scale structures like Loop I and the Fermi bubbles. The measured gas emissivity spectra confirm that the cosmic-ray proton density decreases with Galactocentric distance beyond 5 kpc from the Galactic Center. The measurements also suggest a softening of the proton spectrum with Galactocentric distance. We observe that the Fermi bubbles have boundaries with a shape similar to a catenary at latitudes below 20° and we observe an enhanced emission toward their base extending in the north and south Galactic directions and located within approximately 4° of the Galactic Center.

  12. A framework for emissions source apportionment in industrial areas: MM5/CALPUFF in a near-field application.

    PubMed

    Ghannam, K; El-Fadel, M

    2013-02-01

    This paper examines the relative source contributions to ground-level concentrations of carbon monoxide (CO), nitrogen dioxide (NO2), and PM10 (particulate matter with an aerodynamic diameter < 10 μm) in a coastal urban area due to emissions from an industrial complex with multiple stacks, quarrying activities, and a nearby highway. For this purpose, an inventory of CO, oxides of nitrogen (NOx), and PM10 emissions was coupled with the non-steady-state Mesoscale Model 5/California Puff (MM5/CALPUFF) dispersion modeling system to simulate individual source contributions over several spatial and temporal scales. Because the contribution of a particular source to ground-level concentrations can be evaluated by simulating either that source's emissions alone or total emissions excluding that source, a set of emission sensitivity simulations was designed to examine whether CALPUFF maintains a linear relationship between emission rates and predicted concentrations in cases where emitted plumes overlap and chemical transformations are simulated. Source apportionment revealed that ground-level releases (i.e., the highway and quarries) extending over large areas dominated the contribution to exposure levels over elevated point sources, despite the fact that cumulative emissions from the point sources are higher. Sensitivity analysis indicated that chemical transformations of NOx are insignificant, possibly due to short-range plume transport, with CALPUFF exhibiting a linear response to changes in emission rate. The paper points to the significance of ground-level emissions in contributing to urban air pollution exposure and questions the viability of the prevailing paradigm of point-source emission reduction, especially since the incremental improvement in air quality associated with this common abatement strategy may not deliver the desired benefit of lower exposure, despite costly emissions capping.
The application of atmospheric dispersion models for source apportionment helps identify major contributors to regional air pollution. In industrial urban areas where multiple sources with different geometries contribute to emissions, ground-level releases extending over large areas, such as roads and quarries, often dominate the contribution to ground-level air pollution. Industrial emissions released at elevated stack heights may experience significant dilution, resulting in a minor contribution to exposure at ground level. In such contexts, emission reduction, which is invariably the abatement strategy targeting industries, at a significant investment in control equipment or process change, may yield minimal return on investment in terms of air quality improvement at sensitive receptors.

  13. Method and apparatus for millimeter-wave detection of thermal waves for materials evaluation

    DOEpatents

    Gopalsami, Nachappa; Raptis, Apostolos C.

    1991-01-01

    A method and apparatus for generating thermal waves in a sample and for measuring thermal inhomogeneities at subsurface levels using millimeter-wave radiometry. An intensity modulated heating source is oriented toward a narrow spot on the surface of a material sample and thermal radiation in a narrow volume of material around the spot is monitored using a millimeter-wave radiometer; the radiometer scans the sample point-by-point and a computer stores and displays in-phase and quadrature phase components of thermal radiations for each point on the scan. Alternatively, an intensity modulated heating source is oriented toward a relatively large surface area in a material sample and variations in thermal radiation within the full field of an antenna array are obtained using an aperture synthesis radiometer technique.

  14. Uncertainty in gridded CO2 emissions estimates

    DOE PAGES

    Hogue, Susannah; Marland, Eric; Andres, Robert J.; ...

    2016-05-19

    We are interested in the spatial distribution of fossil-fuel-related emissions of CO2 for both geochemical and geopolitical reasons, but it is important to understand the uncertainty that exists in spatially explicit emissions estimates. Working from one of the widely used gridded data sets of CO2 emissions, we examine the elements of uncertainty, focusing on gridded data for the United States at the scale of 1° latitude by 1° longitude. Uncertainty is introduced in the magnitude of total United States emissions, the magnitude and location of large point sources, the magnitude and distribution of non-point sources, and from the use of proxy data to characterize emissions. For the United States, we develop estimates of the contribution of each component of uncertainty. At 1° resolution, in most grid cells, the largest contribution to uncertainty comes from how well the distribution of the proxy (in this case population density) represents the distribution of emissions. In other grid cells, the magnitude and location of large point sources make the major contribution to uncertainty. Uncertainty in population density can be important where a large gradient in population density occurs near a grid cell boundary. Uncertainty is strongly scale-dependent, increasing as grid size decreases. In conclusion, uncertainty for our data set with 1° grid cells for the United States is typically on the order of ±150%, but this is perhaps not excessive in a data set where emissions per grid cell vary over 8 orders of magnitude.
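If the uncertainty components enumerated in this abstract (national total, large point sources, proxy distribution) were treated as independent, they could be combined in quadrature. The component magnitudes in the sketch below are illustrative assumptions only, chosen to show how a total on the order of ±150% can arise; they are not values from the study.

```python
import math

# Illustrative only: combine independent fractional uncertainty components
# in quadrature. The magnitudes are assumptions, not values from the study.
components = {
    "national emissions total": 0.05,
    "large point sources": 0.80,
    "proxy (population density) distribution": 1.20,
}
total = math.sqrt(sum(v ** 2 for v in components.values()))
print(f"combined uncertainty ~ +/-{100 * total:.0f}%")
```

Note how the quadrature sum is dominated by the largest component, consistent with the abstract's finding that the proxy distribution usually drives grid-cell uncertainty.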

  15. A comparison of skyshine computational methods.

    PubMed

    Hertel, Nolan E; Sweezy, Jeremy E; Shultis, J Kenneth; Warkentin, J Karl; Rose, Zachary J

    2005-01-01

    A variety of methods employing radiation transport and point-kernel codes have been used to model two skyshine problems. The first problem is a 1 MeV point source of photons on the surface of the earth inside a 2 m tall and 1 m radius silo having black walls. The skyshine radiation downfield from the point source was estimated with and without a 30-cm-thick concrete lid on the silo. The second benchmark problem is to estimate the skyshine radiation downfield from 12 cylindrical canisters emplaced in a low-level radioactive waste trench. The canisters are filled with ion-exchange resin with a representative radionuclide loading, largely 60Co, 134Cs and 137Cs. The solution methods include use of the MCNP code to solve the problem by directly employing variance reduction techniques, the single-scatter point kernel code GGG-GP, the QADMOD-GP point kernel code, the COHORT Monte Carlo code, the NAC International version of the SKYSHINE-III code, the KSU hybrid method and the associated KSU skyshine codes.
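The point-kernel codes named in this abstract all build on the uncollided point-source kernel. A minimal sketch of that building block (ignoring buildup and the air scatter that the skyshine codes do model) follows; the source strength, attenuation coefficient, and distance are assumed illustrative values.

```python
import math

# Illustrative sketch of the bare point-kernel building block used by codes
# like QADMOD-GP: uncollided photon fluence rate from an isotropic point
# source through air. All numbers below are assumed, not from the benchmark;
# real skyshine calculations add buildup factors and scattered components.
source_rate = 3.7e10   # photons/s (a 1 Ci-equivalent emission rate, assumed)
mu_air = 8.0e-5        # cm^-1, approximate linear attenuation of air at 1 MeV
distance = 1.0e4       # cm (100 m)

fluence_rate = (source_rate * math.exp(-mu_air * distance)
                / (4.0 * math.pi * distance ** 2))
print(f"uncollided fluence rate ~ {fluence_rate:.1f} photons cm^-2 s^-1")
```

The skyshine problem is precisely the part this kernel omits: radiation that reaches the detector only after scattering in the air above the shield, which is why the benchmark compares Monte Carlo, single-scatter, and hybrid treatments.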

  16. Hemispherical breathing mode speaker using a dielectric elastomer actuator.

    PubMed

    Hosoya, Naoki; Baba, Shun; Maeda, Shingo

    2015-10-01

    Although indoor acoustic characteristics should ideally be assessed by measuring the reverberation time using a point sound source, a regular polyhedron loudspeaker, which has multiple loudspeakers on a chassis, is typically used. However, such a configuration is not a point sound source if the size of the loudspeaker is large relative to the target sound field. This study investigates a small lightweight loudspeaker using a dielectric elastomer actuator vibrating in the breathing mode (the pulsating mode such as the expansion and contraction of a balloon). Acoustic testing with regard to repeatability, sound pressure, vibration mode profiles, and acoustic radiation patterns indicate that dielectric elastomer loudspeakers may be feasible.

  17. Luminosity limits for liquid argon calorimetry

    NASA Astrophysics Data System (ADS)

    Rutherfoord, J.; Walker, R. B.

    2012-12-01

    We have irradiated liquid argon ionization chambers with betas using high-activity Strontium-90 sources. The radiation environment is comparable to that in the liquid argon calorimeters which are part of the ATLAS detector installed at CERN's Large Hadron Collider. We measure the ionization current over a wide range of applied potential for two different source activities and for three different chamber gaps. These studies provide operating experience at exceptionally high ionization rates. We can operate these chambers either in the normal mode or in the space-charge limited regime and thereby determine the transition point between the two. From the transition point we indirectly extract the positive argon ion mobility.

  18. The brightest high-latitude 12-micron IRAS sources

    NASA Technical Reports Server (NTRS)

    Hacking, P.; Beichman, C.; Chester, T.; Neugebauer, G.; Emerson, J.

    1985-01-01

    The Infrared Astronomical Satellite (IRAS) Point Source catalog was searched for sources brighter than 28 Jy (0 mag) at 12 microns with absolute galactic latitude greater than 30 deg, excluding the Large Magellanic Cloud. The search resulted in 269 sources, two of which are the galaxies NGC 1068 and M82. The remaining 267 sources are identified with, or have infrared color indices consistent with, late-type stars, some of which show evidence of circumstellar dust shells. Seven sources are previously uncataloged stars. K and M stars without circumstellar dust shells, M stars with circumstellar dust shells, and carbon stars occupy well-defined regions of infrared color-color diagrams.

  19. FAST's Discovery of a New Millisecond Pulsar (MSP) toward the Fermi-LAT unassociated source 3FGL J0318.1+0252

    NASA Astrophysics Data System (ADS)

    Wang, Pei; Li, Di; Zhu, Weiwei; Zhang, Chengmin; Yan, Jun; Hou, Xian; Clark, Colin J.; Saz Parkinson, Pablo M.; Michelson, Peter F.; Ferrara, Elizabeth C.; Thompson, David J.; Smith, David A.; Ray, Paul S.; Kerr, Matthew; Shen, Zhiqiang; Wang, Na; Fermi-LAT Collaboration

    2018-04-01

    The Five hundred-meter Aperture Spherical radio Telescope (FAST), operated by the National Astronomical Observatories, Chinese Academy of Sciences, has discovered a radio millisecond pulsar (MSP) coincident with the unassociated gamma-ray source 3FGL J0318.1+0252 (Acero et al. 2015 ApJS, 218, 23), also known as FL8Y J0318.2+0254 in the recently released Fermi Large Area Telescope (LAT) 8-year Point Source List (FL8Y).

  20. Spitzer Photometry of Approximately 1 Million Stars in M31 and 15 Other Galaxies

    NASA Technical Reports Server (NTRS)

    Khan, Rubab

    2017-01-01

    We present Spitzer IRAC 3.6-8 micrometer and Multiband Imaging Photometer 24 micrometer point-source catalogs for M31 and 15 other mostly large, star-forming galaxies at distances of approximately 3.5-14 Mpc, including M51, M83, M101, and NGC 6946. These catalogs contain approximately 1 million sources, including approximately 859,000 in M31 and approximately 116,000 in the other galaxies. They were created following the procedures described in Khan et al. through a combination of point-spread function (PSF) fitting and aperture photometry. These data products constitute a resource to improve our understanding of the IR-bright (3.6-24 micrometer) point-source populations in crowded extragalactic stellar fields and to plan observations with the James Webb Space Telescope.

  1. An efficient method for removing point sources from full-sky radio interferometric maps

    NASA Astrophysics Data System (ADS)

    Berger, Philippe; Oppermann, Niels; Pen, Ue-Li; Shaw, J. Richard

    2017-12-01

    A new generation of wide-field radio interferometers designed for 21-cm surveys is being built as drift scan instruments allowing them to observe large fractions of the sky. With large numbers of antennas and frequency channels, the enormous instantaneous data rates of these telescopes require novel, efficient, data management and analysis techniques. The m-mode formalism exploits the periodicity of such data with the sidereal day, combined with the assumption of statistical isotropy of the sky, to achieve large computational savings and render optimal analysis methods computationally tractable. We present an extension to that work that allows us to adopt a more realistic sky model and treat objects such as bright point sources. We develop a linear procedure for deconvolving maps, using a Wiener filter reconstruction technique, which simultaneously allows filtering of these unwanted components. We construct an algorithm, based on the Sherman-Morrison-Woodbury formula, to efficiently invert the data covariance matrix, as required for any optimal signal-to-noise ratio weighting. The performance of our algorithm is demonstrated using simulations of a cylindrical transit telescope.
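The Sherman-Morrison-Woodbury identity invoked in this abstract lets one invert a covariance of the form A + UCV (a cheap-to-invert part plus a low-rank update from, e.g., a handful of bright point sources) at the cost of only a small k×k inversion. A minimal numpy sketch, with sizes that are purely illustrative:

```python
import numpy as np

# Minimal sketch of the Sherman-Morrison-Woodbury identity:
#   (A + U C V)^-1 = A^-1 - A^-1 U (C^-1 + V A^-1 U)^-1 V A^-1
# Sizes here are illustrative, not those of a real transit-telescope analysis.
rng = np.random.default_rng(0)
n, k = 200, 5  # n: data dimension; k: number of low-rank (point-source) modes

A = np.diag(rng.uniform(1.0, 2.0, n))  # cheap-to-invert diagonal covariance
U = rng.normal(size=(n, k))
C = np.eye(k)
V = U.T

A_inv = np.diag(1.0 / np.diag(A))      # O(n) inversion of the diagonal part
inner = np.linalg.inv(np.linalg.inv(C) + V @ A_inv @ U)  # only k x k
smw_inv = A_inv - A_inv @ U @ inner @ V @ A_inv

# Agrees with the direct (expensive) n x n inversion
direct_inv = np.linalg.inv(A + U @ C @ V)
print(np.allclose(smw_inv, direct_inv))
```

The payoff is that the expensive n×n inversion never happens: only the diagonal part and a k×k system are inverted, which is what makes optimal signal-to-noise weighting tractable at survey scale.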

  2. The Massive Star-Forming Regions Omnibus X-Ray Catalog

    NASA Astrophysics Data System (ADS)

    Townsley, Leisa K.; Broos, Patrick S.; Garmire, Gordon P.; Bouwman, Jeroen; Povich, Matthew S.; Feigelson, Eric D.; Getman, Konstantin V.; Kuhn, Michael A.

    2014-07-01

    We present the Massive Star-forming Regions (MSFRs) Omnibus X-ray Catalog (MOXC), a compendium of X-ray point sources from Chandra/ACIS observations of a selection of MSFRs across the Galaxy, plus 30 Doradus in the Large Magellanic Cloud. MOXC consists of 20,623 X-ray point sources from 12 MSFRs with distances ranging from 1.7 kpc to 50 kpc. Additionally, we show the morphology of the unresolved X-ray emission that remains after the cataloged X-ray point sources are excised from the ACIS data, in the context of Spitzer and WISE observations that trace the bubbles, ionization fronts, and photon-dominated regions that characterize MSFRs. In previous work, we have found that this unresolved X-ray emission is dominated by hot plasma from massive star wind shocks. This diffuse X-ray emission is found in every MOXC MSFR, clearly demonstrating that massive star feedback (and the several-million-degree plasmas that it generates) is an integral component of MSFR physics.

  3. Inverse compton light source: a compact design proposal

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Deitrick, Kirsten Elizabeth

    In the last decade, there has been an increasing demand for a compact Inverse Compton Light Source (ICLS) capable of producing high-quality X-rays by colliding an electron beam with a high-quality laser. Only in recent years have both SRF and laser technology advanced enough that compact sources can approach the quality found at large installations such as the Advanced Photon Source at Argonne National Laboratory. Previously, X-ray sources offered either high flux and brilliance at a large facility or many orders of magnitude less when produced by a bremsstrahlung source. A recent compact source was constructed by Lyncean Technologies using a storage ring to produce the electron beam used to scatter the incident laser beam. By instead using a linear accelerator system for the electron beam, a significant increase in X-ray beam quality is possible, though even subsequent designs also featuring a storage ring offer improvement. Preceding the linear accelerator with an SRF reentrant gun allows for an extremely small transverse emittance, increasing the brilliance of the resulting X-ray source. In order to achieve sufficiently small emittances, optimization was done regarding both the geometry of the gun and the initial electron bunch distribution produced off the cathode. Using double-spoke SRF cavities to comprise the linear accelerator allows an electron beam of reasonable size to be focused at the interaction point, while preserving the low emittance generated by the gun. An aggressive final-focusing section following the electron beam's exit from the accelerator produces the small spot size at the interaction point, resulting in an X-ray beam of high flux and brilliance. Taking all of these advancements together, a world-class compact X-ray source has been designed.
It is anticipated that this source would far outperform conventional bremsstrahlung sources and many other compact ICLSs, while coming closer than ever before to the performance levels found at large facilities. The design process, including the development between subsequent iterations, is presented here in detail, with the simulation results for this X-ray source.

  4. Performance testing of 3D point cloud software

    NASA Astrophysics Data System (ADS)

    Varela-González, M.; González-Jorge, H.; Riveiro, B.; Arias, P.

    2013-10-01

    LiDAR systems have been used widely in recent years for many applications in the engineering field: civil engineering, cultural heritage, mining, industry and environmental engineering. One of the most important limitations of this technology is the large computational requirement involved in data processing, especially for large mobile LiDAR datasets. Several software solutions for data management are available on the market, including open-source suites; however, users often lack methodologies to verify their performance properly. In this work a methodology for LiDAR software performance testing is presented and four different suites are studied: QT Modeler, VR Mesh, AutoCAD 3D Civil and the Point Cloud Library running in software developed at the University of Vigo (SITEGI). The software based on the Point Cloud Library shows better results in point-cloud loading time and CPU usage. However, it is not as strong as the commercial suites in the working set and commit size tests.

  5. Is the gamma-ray source 3FGL J2212.5+0703 a dark matter subhalo?

    NASA Astrophysics Data System (ADS)

    Bertoni, Bridget; Hooper, Dan; Linden, Tim

    2016-05-01

    In a previous paper, we pointed out that the gamma-ray source 3FGL J2212.5+0703 shows evidence of being spatially extended. If a gamma-ray source without detectable emission at other wavelengths were unambiguously determined to be spatially extended, it could not be explained by known astrophysics, and would constitute a smoking gun for dark matter particles annihilating in a nearby subhalo. With this prospect in mind, we scrutinize the gamma-ray emission from this source, finding that it prefers a spatially extended profile over that of a single point-like source with 5.1σ statistical significance. We also use a large sample of active galactic nuclei and other known gamma-ray sources as a control group, confirming, as expected, that statistically significant extension is rare among such objects. We argue that the most likely (non-dark matter) explanation for this apparent extension is a pair of bright gamma-ray sources that serendipitously lie very close to each other, and estimate that there is a chance probability of ~2% that such a pair would exist somewhere on the sky. In the case of 3FGL J2212.5+0703, we test an alternative model that includes a second gamma-ray point source at the position of the radio source BZQ J2212+0646, and find that the addition of this source alongside a point source at the position of 3FGL J2212.5+0703 yields a fit of comparable quality to that obtained for a single extended source. If 3FGL J2212.5+0703 is a dark matter subhalo, it would imply that dark matter particles have a mass of ~18-33 GeV and an annihilation cross section on the order of σv ~ 10⁻²⁶ cm³/s (for the representative case of annihilations to bb̄), similar to the values required to generate the Galactic Center gamma-ray excess.

  6. Is the gamma-ray source 3FGL J2212.5+0703 a dark matter subhalo?

    DOE PAGES

    Bertoni, Bridget; Hooper, Dan; Linden, Tim

    2016-05-23

In a previous study, we pointed out that the gamma-ray source 3FGL J2212.5+0703 shows evidence of being spatially extended. If a gamma-ray source without detectable emission at other wavelengths were unambiguously determined to be spatially extended, it could not be explained by known astrophysics, and would constitute a smoking gun for dark matter particles annihilating in a nearby subhalo. With this prospect in mind, we scrutinize the gamma-ray emission from this source, finding that it prefers a spatially extended profile over that of a single point-like source with 5.1σ statistical significance. We also use a large sample of active galactic nuclei and other known gamma-ray sources as a control group, confirming, as expected, that statistically significant extension is rare among such objects. We argue that the most likely (non-dark matter) explanation for this apparent extension is a pair of bright gamma-ray sources that serendipitously lie very close to each other, and estimate that there is a chance probability of ~2% that such a pair would exist somewhere on the sky. In the case of 3FGL J2212.5+0703, we test an alternative model that includes a second gamma-ray point source at the position of the radio source BZQ J2212+0646, and find that the addition of this source alongside a point source at the position of 3FGL J2212.5+0703 yields a fit of comparable quality to that obtained for a single extended source. If 3FGL J2212.5+0703 is a dark matter subhalo, it would imply that dark matter particles have a mass of ~18-33 GeV and an annihilation cross section on the order of σv ~ 10⁻²⁶ cm³/s (for the representative case of annihilations to bb̄), similar to the values required to generate the Galactic Center gamma-ray excess.

  7. Is the gamma-ray source 3FGL J2212.5+0703 a dark matter subhalo?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bertoni, Bridget; Hooper, Dan; Linden, Tim

In a previous study, we pointed out that the gamma-ray source 3FGL J2212.5+0703 shows evidence of being spatially extended. If a gamma-ray source without detectable emission at other wavelengths were unambiguously determined to be spatially extended, it could not be explained by known astrophysics, and would constitute a smoking gun for dark matter particles annihilating in a nearby subhalo. With this prospect in mind, we scrutinize the gamma-ray emission from this source, finding that it prefers a spatially extended profile over that of a single point-like source with 5.1σ statistical significance. We also use a large sample of active galactic nuclei and other known gamma-ray sources as a control group, confirming, as expected, that statistically significant extension is rare among such objects. We argue that the most likely (non-dark matter) explanation for this apparent extension is a pair of bright gamma-ray sources that serendipitously lie very close to each other, and estimate that there is a chance probability of ~2% that such a pair would exist somewhere on the sky. In the case of 3FGL J2212.5+0703, we test an alternative model that includes a second gamma-ray point source at the position of the radio source BZQ J2212+0646, and find that the addition of this source alongside a point source at the position of 3FGL J2212.5+0703 yields a fit of comparable quality to that obtained for a single extended source. If 3FGL J2212.5+0703 is a dark matter subhalo, it would imply that dark matter particles have a mass of ~18-33 GeV and an annihilation cross section on the order of σv ~ 10⁻²⁶ cm³/s (for the representative case of annihilations to bb̄), similar to the values required to generate the Galactic Center gamma-ray excess.

  8. High-energy neutrinos from FR0 radio galaxies?

    NASA Astrophysics Data System (ADS)

    Tavecchio, F.; Righi, C.; Capetti, A.; Grandi, P.; Ghisellini, G.

    2018-04-01

The sources responsible for the emission of the high-energy (≳100 TeV) neutrinos detected by IceCube are still unknown. Among the possible candidates, active galactic nuclei with relativistic jets are often examined, since the outflowing plasma seems to offer the ideal environment to accelerate the required parent high-energy cosmic rays. The non-detection of single point sources (or, almost equivalently, the absence among the IceCube events of multiplets originating from the same sky position) constrains the cosmic density and the neutrino output of these sources, pointing to a numerous population of faint sources. Here we explore the possibility that FR0 radio galaxies, the population of compact sources recently identified in large radio and optical surveys and representing the bulk of the radio-loud AGN population, are suitable candidates for neutrino emission. Modelling the spectral energy distribution of an FR0 radio galaxy recently associated with a γ-ray source detected by the Large Area Telescope onboard Fermi, we derive the physical parameters of its jet, in particular the power it carries. We consider the possible mechanisms of neutrino production, concluding that pγ reactions in the jet between protons and ambient radiation are too inefficient to sustain the required output. We propose an alternative scenario, in which protons accelerated in the jet escape from it and diffuse in the host galaxy, producing neutrinos as a result of pp scattering with the interstellar gas, in strict analogy with the processes taking place in star-forming galaxies.

  9. A method on error analysis for large-aperture optical telescope control system

    NASA Astrophysics Data System (ADS)

    Su, Yanrui; Wang, Qiang; Yan, Fabao; Liu, Xiang; Huang, Yongmei

    2016-10-01

For a large-aperture optical telescope, the elevation axis exhibits arcsecond-level jitters under different working speeds, especially in low-speed mode, during acquisition, tracking and pointing, whereas the azimuth axis does not. The jitters are closely related to the working speed of the elevation axis, reducing the accuracy and low-speed stability of the telescope. By collecting a large amount of measured data on the elevation axis, we analyze the jitters in the time, frequency and space domains respectively. The relation between the jitter points, the drive speed of the elevation axis and the corresponding space angle leads to the conclusion that the jitters behave as a periodic disturbance in the space domain, with a period in space angle of approximately 79.1″. We then simulated, analyzed and compared the influence of candidate disturbance sources, such as PWM power-stage output disturbance, torque (acceleration) disturbance, speed-feedback disturbance and position-feedback disturbance, on the elevation axis, and found that the spatially periodic disturbance persists in the elevation performance. This led us to infer that the problem may lie in the angle-measurement unit. The telescope employs a 24-bit photoelectric encoder, whose grating angular resolution is calculated as 79.1016″, the angle corresponding to one period of the subdivision signal in the encoder system. This value is approximately equal to the spatial frequency of the jitters. Therefore, the elevation performance of the telescope is affected by subdivision errors, whose period is identical to the encoder grating angular period. Through comprehensive consideration and mathematical analysis, the DC component of the subdivision error is determined to be the cause of the jitters, which is verified in practical engineering. The method of analyzing error sources in the time, frequency and space domains respectively provides valuable guidance for finding disturbance sources in large-aperture optical telescope control systems.
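The 79.1″ figure can be checked arithmetically. The sketch below assumes (a reading of the abstract, not stated explicitly in it) a grating of 2¹⁴ lines per revolution subdivided electronically by a factor of 2¹⁰ to reach the 24-bit output:

```python
ARCSEC_PER_REV = 360 * 3600        # 1,296,000 arcseconds in a full revolution
GRATING_BITS = 14                  # assumed: 2**14 grating lines per revolution
SUBDIVISION_BITS = 10              # assumed: electronic subdivision to 24-bit output

# one grating line = one period of the subdivision signal
grating_period = ARCSEC_PER_REV / 2 ** GRATING_BITS
lsb = ARCSEC_PER_REV / 2 ** (GRATING_BITS + SUBDIVISION_BITS)

print(f"subdivision-signal period: {grating_period:.4f} arcsec")  # 79.1016, the jitter spacing
print(f"24-bit LSB resolution:     {lsb:.6f} arcsec")
```

The grating period, 1,296,000 / 16,384 = 79.1016″, matches the observed spatial period of the jitters, which is what points the analysis toward the subdivision error.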

  10. NuSTAR Hard X-Ray Survey of the Galactic Center Region. II. X-Ray Point Sources

    NASA Technical Reports Server (NTRS)

    Hong, Jaesub; Mori, Kaya; Hailey, Charles J.; Nynka, Melania; Zhang, Shou; Gotthelf, Eric; Fornasini, Francesca M.; Krivonos, Roman; Bauer, Franz; Perez, Kerstin; hide

    2016-01-01

We present the first survey results of hard X-ray point sources in the Galactic Center (GC) region by NuSTAR. We have discovered 70 hard (3-79 keV) X-ray point sources in a 0.6 deg² region around Sgr A* with a total exposure of 1.7 Ms, and 7 sources in the Sgr B2 field with 300 ks. We identify clear Chandra counterparts for 58 NuSTAR sources and assign candidate counterparts for the remaining 19. The NuSTAR survey reaches X-ray luminosities of ≈4×10³² and ≈8×10³² erg/s at the GC (8 kpc) in the 3-10 and 10-40 keV bands, respectively. The source list includes three persistent luminous X-ray binaries (XBs) and the likely run-away pulsar called the Cannonball. New source-detection significance maps reveal a cluster of hard (>10 keV) X-ray sources near the Sgr A diffuse complex with no clear soft X-ray counterparts. The severe extinction observed in the Chandra spectra indicates that all the NuSTAR sources are in the central bulge or are of extragalactic origin. Spectral analysis of relatively bright NuSTAR sources suggests that magnetic cataclysmic variables constitute a large fraction (>40%-60%). Both spectral analysis and logN-logS distributions of the NuSTAR sources indicate that the X-ray spectra of the NuSTAR sources should have kT > 20 keV on average for a single-temperature thermal plasma model or an average photon index of Γ = 1.5-2 for a power-law model. These findings suggest that the GC X-ray source population may contain a larger fraction of XBs with high plasma temperatures than the field population.

  11. Brook Trout Back in Aaron Run

    EPA Pesticide Factsheets

    Following a series of acid mine drainage (AMD) projects funded largely by EPA’s Clean Water Act Section 319 non-point source program, the pH level in Aaron Run is meeting Maryland’s water quality standard – and the brook trout are back.

  12. LMC stellar X-ray sources observed with ROSAT. 1: X-ray data and search for optical counterparts

    NASA Technical Reports Server (NTRS)

    Schmidtke, P. C.; Cowley, A. P.; Frattare, L. M.; Mcgrath, T. K.

    1994-01-01

Observations of Einstein Large Magellanic Cloud (LMC) X-ray point sources have been made with ROSAT's High-Resolution Imager to obtain accurate positions from which to search for optical counterparts. This paper is the first in a series reporting results of the ROSAT observations and subsequent optical observations. It includes the X-ray positions and fluxes, information about variability, optical finding charts for each source, a list of identified counterparts, and information about candidates which have been observed spectroscopically in each of the fields. Sixteen point sources were measured at a greater than 3 sigma level, while 15 other sources were either extended or less significant detections. About 50% of the sources are serendipitous detections (not found in previous surveys). More than half of the X-ray sources are variable. Sixteen of the sources have been optically identified or confirmed: six with foreground cool stars, four with Seyfert galaxies, two with supernova remnants (SNRs) in the LMC, and four with peculiar hot LMC stars. Presumably the latter are all binaries, although only one (CAL 83) has been previously studied in detail.

  13. Compliance Groundwater Monitoring of Nonpoint Sources - Emerging Approaches

    NASA Astrophysics Data System (ADS)

    Harter, T.

    2008-12-01

Groundwater monitoring networks are typically designed for regulatory compliance of discharges from industrial sites. There, the quality of first-encountered (shallow-most) groundwater is of key importance. Network design criteria have been developed for purposes of determining whether an actual or potential, permitted or incidental waste discharge has had or will have a degrading effect on groundwater quality. The fundamental underlying paradigm is that such a discharge (if it occurs) will form a distinct contamination plume. Networks that guide (post-contamination) mitigation efforts are designed to capture the shape and dynamics of existing, finite-scale plumes. In general, these networks extend over areas of less than one to ten hectares. In recent years, regulatory programs such as the EU Nitrate Directive and the U.S. Clean Water Act have forced regulatory agencies to also control groundwater contamination from non-incidental, recharging, non-point sources, particularly agricultural sources (fertilizer, pesticides, animal waste application, biosolids application). Sources and contamination from these sources can stretch over several tens, hundreds, or even thousands of square kilometers with no distinct plumes. A key question in implementing monitoring programs at the local, regional, and national level is whether groundwater monitoring can be effectively used as a landowner compliance tool, as is currently done at point-source sites. We compare the efficiency of such traditional site-specific compliance networks in nonpoint-source regulation with various designs of regional nonpoint-source monitoring networks that could be used for compliance monitoring. We discuss advantages and disadvantages of the site vs. regional monitoring approaches with respect to effectively protecting groundwater resources impacted by nonpoint sources. Site networks provide a tool to enforce compliance by an individual landowner, but the nonpoint-source character of the contamination and its typically large spatial extent require extensive networks at an individual site to accurately and fairly monitor individual compliance. In contrast, regional networks seemingly fail to hold individual landowners accountable, but they can effectively monitor large-scale impacts and water-quality trends, and thus inform regulatory programs that enforce management practices tied to nonpoint-source pollution. Regional monitoring networks for compliance purposes can face significant implementation challenges due to a regulatory and legal landscape that is structured exclusively around point sources and individual liability, and due to the non-intensive nature of a regional monitoring program (lack of control of hot spots; lack of accountability of individual landowners).

  14. A New Source Biasing Approach in ADVANTG

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bevill, Aaron M; Mosher, Scott W

    2012-01-01

The ADVANTG code has been developed at Oak Ridge National Laboratory to generate biased sources and weight-window maps for MCNP using the CADIS and FW-CADIS methods. In preparation for an upcoming RSICC release, a new approach for generating a biased source has been developed. This improvement streamlines user input and improves reliability. Previous versions of ADVANTG generated the biased source from ADVANTG input, writing an entirely new general fixed-source definition (SDEF). Because volumetric sources were translated into SDEF format as a finite set of points, the user had to perform a convergence study to determine whether the number of source points used accurately represented the source region. Further, the large number of points that must be written in SDEF format made the MCNP input and output files excessively long and difficult to debug. ADVANTG now reads SDEF-format distributions and generates corresponding source-biasing cards, eliminating the need for a convergence study. Many problems of interest use complicated source regions that are defined using cell rejection. In cell rejection, the source distribution in space is defined using an arbitrarily complex cell and a simple bounding region. Source positions are sampled within the bounding region but accepted only if they fall within the cell; otherwise, the position is resampled entirely. When biasing in space is applied to sources that use rejection sampling, current versions of MCNP do not account for the rejection in setting the source weight of histories, resulting in an 'unfair game'. This problem was circumvented in previous versions of ADVANTG by translating volumetric sources into a finite set of points, which does not alter the mean history weight (w̄). To use biasing parameters without otherwise modifying the original cell-rejection SDEF-format source, ADVANTG users now apply a correction factor for w̄ in post-processing. A stratified-random sampling approach in ADVANTG is under development to automatically report the correction factor with estimated uncertainty. This study demonstrates the use of ADVANTG's new source-biasing method, including the application of w̄.
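The cell-rejection idea lends itself to a small numerical illustration. The sketch below is a simplified stand-in (a hypothetical spherical cell in a bounding cube), not ADVANTG's actual implementation; it estimates the acceptance fraction of a cell-rejection source by stratified random sampling, the kind of quantity that feeds the post-processing correction for the mean history weight w̄:

```python
import random

def accept(x, y, z, radius=1.0):
    """Hypothetical cell test: point inside a sphere of given radius."""
    return x * x + y * y + z * z <= radius * radius

def stratified_acceptance(n_per_axis=20, half_width=1.0, seed=42):
    """Stratify the bounding cube [-hw, hw]^3 into n^3 sub-cells, one sample per cell."""
    rng = random.Random(seed)
    n, h = n_per_axis, 2.0 * half_width / n_per_axis
    hits = 0
    for i in range(n):
        for j in range(n):
            for k in range(n):
                x = -half_width + (i + rng.random()) * h
                y = -half_width + (j + rng.random()) * h
                z = -half_width + (k + rng.random()) * h
                hits += accept(x, y, z)
    return hits / n ** 3

frac = stratified_acceptance()      # analytic value: (4/3)*pi/8 ~ 0.5236
```

Stratifying the bounding region rather than sampling it uniformly reduces the variance of the estimate, which is presumably why a stratified scheme was chosen for reporting the correction factor with an uncertainty.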

  15. Application of point-diffraction interferometry to testing infrared imaging systems

    NASA Astrophysics Data System (ADS)

    Smartt, Raymond N.; Paez, Gonzalo

    2004-11-01

    Point-diffraction interferometry has found wide applications spanning much of the electromagnetic spectrum, including both near- and far-infrared wavelengths. Any telescopic, spectroscopic or other imaging system that converts an incident plane or spherical wavefront into an accessible point-like image can be tested at an intermediate image plane or at the principal image plane, in situ. Angular field performance can be similarly tested with inclined incident wavefronts. Any spatially coherent source can be used, but because of the available flux, it is most convenient to use a laser source. The simplicity of the test setup can allow testing of even large and complex fully-assembled systems. While purely reflective IR systems can be conveniently tested at visible wavelengths (apart from filters), catadioptric systems could be evaluated using an appropriate source and an IRPDI, with an imaging and recording system. PDI operating principles are briefly reviewed, and some more recent developments and interesting applications briefly discussed. Alternative approaches and recommended procedures for testing IR imaging systems, including the thermal IR, are suggested. An example of applying point-diffraction interferometry to testing a relatively low angular-resolution, optically complex IR telescopic system is presented.

  16. Fermi-Lat Observations of High-Energy Gamma-Ray Emission Toward the Galactic Center

    NASA Technical Reports Server (NTRS)

    Ajello, M.; Albert, A.; Atwood, W.B.; Barbiellini, G.; Bastieri, D.; Bechtol, K.; Bellazzini, R.; Bissaldi, E.; Blandford, R. D.; Brandt, T. J.; hide

    2016-01-01

The Fermi Large Area Telescope (LAT) has provided the most detailed view to date of the emission toward the Galactic center (GC) in high-energy gamma-rays. This paper describes the analysis of data taken during the first 62 months of the mission in the energy range 1-100 GeV from a 15° × 15° region about the direction of the GC. Specialized interstellar emission models (IEMs) are constructed to enable the separation of the gamma-ray emission produced by cosmic-ray particles interacting with the interstellar gas and radiation fields in the Milky Way into that from the inner 1 kpc surrounding the GC and that from the rest of the Galaxy. A catalog of point sources for the 15° × 15° region is self-consistently constructed using these IEMs: the First Fermi-LAT Inner Galaxy Point Source Catalog (1FIG). The spatial locations, fluxes, and spectral properties of the 1FIG sources are presented and compared with gamma-ray point sources over the same region taken from existing catalogs. After subtracting the interstellar emission and point-source contributions, a residual is found. If templates that peak toward the GC are used to model the positive residual, the agreement with the data improves, but none of the additional templates tried accounts for all of its spatial structure. The spectrum of the positive residual modeled with these templates has a strong dependence on the choice of IEM.

  17. Phase 3 experiments of the JAERI/USDOE collaborative program on fusion blanket neutronics. Volume 1: Experiment

    NASA Astrophysics Data System (ADS)

    Oyama, Yukio; Konno, Chikara; Ikeda, Yujiro; Maekawa, Fujio; Kosako, Kazuaki; Nakamura, Tomoo; Maekawa, Hiroshi; Youssef, Mahmoud Z.; Kumar, Anil; Abdou, Mohamed A.

    1994-02-01

A pseudo-line source has been realized by using an accelerator-based D-T point neutron source. The pseudo-line source is obtained by time-averaging a continuously moving point source or by superposition of finely distributed point sources. The line source is utilized for fusion blanket neutronics experiments with an annular geometry so as to simulate a part of a tokamak reactor. The source neutron characteristics were measured for the two operational modes of the line source, continuous and step-wise, with activation foils and NE213 detectors, respectively. In order to provide a source condition for the subsequent calculational analysis of the annular blanket experiment, the neutron source characteristics were calculated by a Monte Carlo code. The reliability of the Monte Carlo calculation was confirmed by comparison with the measured source characteristics. The annular blanket system was rectangular with an inner cavity. The annular blanket consisted of a 15 mm-thick first wall (SS304) and a 406 mm-thick breeder zone with Li2O inside and Li2CO3 outside. The line source was produced at the center of the inner cavity by moving the annular blanket system over a span of 2 m. Three annular blanket configurations were examined: the reference blanket, the blanket covered with 25 mm-thick graphite armor, and the armor blanket with a large opening. The neutronics parameters of tritium production rate, neutron spectrum and activation reaction rate were measured with specially developed techniques such as a multi-detector data-acquisition system, a spectrum weighting-function method and a ramp-controlled high-voltage system. The present experiment provides unique data for a more advanced benchmark to test the reliability of neutronics design calculations for a realistic tokamak reactor.
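The superposition idea can be checked numerically: the flux at a detector from N evenly spaced unit-total-yield point sources converges to that of a continuous line source as N grows. Only the 2 m span comes from the abstract; the detector distance below is an assumed illustration value:

```python
import math

L = 2.0    # span of the line source, m (the 2 m travel quoted in the abstract)
D = 0.5    # perpendicular detector distance, m (assumed for illustration)

def flux_points(n):
    """Unit-total-yield flux at the detector from n evenly spaced point sources."""
    xs = [-L / 2 + (i + 0.5) * L / n for i in range(n)]
    return sum(1.0 / (4.0 * math.pi * (x * x + D * D)) for x in xs) / n

def flux_line():
    """Analytic unit-yield flux from a uniform continuous line source."""
    return (math.atan(L / (2 * D)) - math.atan(-L / (2 * D))) / (4.0 * math.pi * D * L)

# flux_points(n) converges to flux_line() as the point sources become finely distributed
```

This is the geometric content of the pseudo-line source: a finely stepped (or time-averaged moving) point source is indistinguishable, in flux, from a true line source once the spacing is small compared with the source-detector distance.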

  18. Assimilation of concentration measurements for retrieving multiple point releases in atmosphere: A least-squares approach to inverse modelling

    NASA Astrophysics Data System (ADS)

    Singh, Sarvesh Kumar; Rani, Raj

    2015-10-01

The study addresses the identification of multiple point sources, emitting the same tracer, from a limited set of their merged concentration measurements. The identification, here, refers to the estimation of the locations and strengths of a known number of simultaneous point releases. The source-receptor relationship is described in the framework of adjoint modelling by using an analytical Gaussian dispersion model. A least-squares minimization framework, free from any initialization of the release parameters (locations and strengths), is presented to estimate the release parameters. This utilizes the distributed source information observable from the given monitoring design and number of measurements. The technique leads to an exact retrieval of the true release parameters when measurements are noise-free and exactly described by the dispersion model. The inversion algorithm is evaluated using real data from multiple (two, three and four) releases conducted during the Fusion Field Trials in September 2007 at Dugway Proving Ground, Utah. The release locations are retrieved, on average, within 25-45 m of the true sources, with the distance from retrieved to true source ranging from 0 to 130 m. The release strengths are also estimated within a factor of three of the true release rates. The average deviations in the retrieved source locations are relatively large in the two-release trials in comparison to the three- and four-release trials.
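As a much-simplified illustration of the least-squares framework (not the authors' adjoint formulation; the wind speed, dispersion coefficients and receptor layout below are assumed), the sketch retrieves the location and strength of a single ground-level release from synthetic Gaussian-plume measurements, using a grid search over candidate locations with a closed-form least-squares strength at each:

```python
import math

U = 3.0  # wind speed along +x, m/s (assumed)

def plume(q, xs, ys, xr, yr, a=0.22, b=0.9):
    """Ground-level Gaussian plume concentration at receptor (xr, yr)
    from a source (xs, ys) of strength q; sigma_y = a*dx**b (assumed)."""
    dx, dy = xr - xs, yr - ys
    if dx <= 0:
        return 0.0  # receptor upwind of the source sees nothing
    sy = a * dx ** b
    return q / (2.0 * math.pi * U * sy * sy) * math.exp(-dy * dy / (2.0 * sy * sy))

# synthetic experiment: true source at (12, 4), strength 50
receptors = [(100 + 40 * i, 20 * j) for i in range(4) for j in range(-2, 3)]
obs = [plume(50.0, 12.0, 4.0, xr, yr) for xr, yr in receptors]

best = None
for xs in [2 * i for i in range(25)]:               # candidate x locations
    for ys in [2 * j for j in range(-10, 11)]:      # candidate y locations
        m = [plume(1.0, xs, ys, xr, yr) for xr, yr in receptors]  # unit-strength model
        mm = sum(v * v for v in m)
        if mm == 0.0:
            continue
        q = sum(c * v for c, v in zip(obs, m)) / mm  # closed-form LS strength
        res = sum((c - q * v) ** 2 for c, v in zip(obs, m))
        if best is None or res < best[0]:
            best = (res, xs, ys, q)

res, xs_hat, ys_hat, q_hat = best
```

Because the model is linear in the strength, the strength can be solved analytically at each candidate location, which is what makes the search free of any initialization, echoing the initialization-free property claimed above. Extending to multiple simultaneous releases turns the inner step into a small linear least-squares solve.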

  19. Dust Storm over the Middle East: Retrieval Approach, Source Identification, and Trend Analysis

    NASA Astrophysics Data System (ADS)

    Moridnejad, A.; Karimi, N.; Ariya, P. A.

    2014-12-01

The Middle East region has been considered to be responsible for approximately 25% of the Earth's global emissions of dust particles. By developing the Middle East Dust Index (MEDI) and applying it to 70 dust storms characterized on MODIS images during the period between 2001 and 2012, we herein present a new high-resolution mapping of the major atmospheric dust source points in this region. To assist environmental managers and decision makers in taking proper and prioritized measures, we then categorize the identified sources in terms of intensity, based on indices extracted for the Deep Blue algorithm, and also utilize a frequency-of-occurrence approach to find the most sensitive sources. In the next step, by implementing spectral mixture analysis on Landsat TM images (1984 and 2012), a novel desertification map is presented. The aim is to understand how human perturbations and land-use change have influenced the dust storm points in the region. Preliminary results of this study indicate for the first time that ca. 39% of all detected source points are located in this newly anthropogenically desertified area. A large number of low-frequency sources are located within or close to the newly desertified areas. These severely desertified regions require immediate concern at a global scale. During the next six months, further research will be performed to confirm these preliminary results.

  20. Locating single-point sources from arrival times containing large picking errors (LPEs): the virtual field optimization method (VFOM)

    NASA Astrophysics Data System (ADS)

    Li, Xi-Bing; Wang, Ze-Wei; Dong, Long-Jun

    2016-01-01

Microseismic monitoring systems using local location techniques tend to be timely, automatic and stable. One basic requirement of these systems is the automatic picking of arrival times. However, arrival times generated by automated techniques always contain large picking errors (LPEs), which may make the location solution unreliable and cause the integrated system to be unstable. To overcome the LPE issue, we propose the virtual field optimization method (VFOM) for locating single-point sources. In contrast to existing approaches, the VFOM optimizes a continuous, virtually established objective function to search the space for the common intersection of the hyperboloids determined by sensor pairs, rather than for the least residual between the model-calculated and measured arrivals. The results of numerical examples and in-situ blasts show that the VFOM can obtain more precise and stable solutions than traditional methods when the input data contain LPEs. Furthermore, we discuss the impact of LPEs on objective functions to determine the LPE-tolerant mechanism, velocity sensitivity and stopping criteria of the VFOM. The proposed method is also capable of locating acoustic sources using passive techniques such as passive sonar detection and acoustic emission.
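A two-dimensional analogue of the idea can be sketched as follows (sensor layout, velocity and picking error are all assumed; the real VFOM works with 3-D hyperboloids): each sensor pair defines a hyperbola |p − sᵢ| − |p − sⱼ| = v(tᵢ − tⱼ), and a continuous objective sums the squared hyperbola residuals, which is then minimized over a grid:

```python
import math
from itertools import combinations

V = 3000.0                                   # assumed medium velocity, m/s
sensors = [(0.0, 0.0), (400.0, 0.0), (0.0, 400.0), (400.0, 400.0), (200.0, -100.0)]
true_src = (150.0, 230.0)

def dist(p, s):
    return math.hypot(p[0] - s[0], p[1] - s[1])

arrivals = [dist(true_src, s) / V for s in sensors]
arrivals[2] += 0.005                         # inject one large picking error (5 ms)

def objective(p):
    """Virtual field: sum of squared hyperbola residuals over all sensor pairs."""
    return sum(((dist(p, sensors[i]) - dist(p, sensors[j]))
                - V * (arrivals[i] - arrivals[j])) ** 2
               for i, j in combinations(range(len(sensors)), 2))

# coarse grid search over the continuous objective (a gradient method would also work)
best = min(((objective((float(x), float(y))), x, y)
            for x in range(0, 401, 5) for y in range(0, 401, 5)),
           key=lambda t: t[0])
_, x_hat, y_hat = best
```

Because every pair contributes, a single corrupted pick only perturbs the minimum of the smooth objective instead of steering the whole solution, which is the LPE tolerance the abstract describes.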

  1. Utilization of lunar materials and expertise for large scale operations in space: Abstracts. [lunar bases and space industrialization

    NASA Technical Reports Server (NTRS)

    Criswell, D. R. (Editor)

    1976-01-01

    The practicality of exploiting the moon, not only as a source of materials for large habitable structures at Lagrangian points, but also as a base for colonization is discussed in abstracts of papers presented at a special session on lunar utilization. Questions and answers which followed each presentation are included after the appropriate abstract. Author and subject indexes are provided.

  2. High frequency seismic signal generated by landslides on complex topographies: from point source to spatially distributed sources

    NASA Astrophysics Data System (ADS)

    Mangeney, A.; Kuehnert, J.; Capdeville, Y.; Durand, V.; Stutzmann, E.; Kone, E. H.; Sethi, S.

    2017-12-01

During their flow along the topography, landslides generate seismic waves in a wide frequency range. These so-called landquakes can be recorded at very large distances (a few hundred km for large landslides). The recorded signals depend on the landslide seismic source and on the seismic wave propagation. If the wave propagation is well understood, the seismic signals can be inverted for the seismic source and thus used to get information on the landslide properties and dynamics. Analysis and modeling of long-period seismic signals (10-150 s) have helped in this way to discriminate between different landslide scenarios and to constrain rheological parameters (e.g. Favreau et al., 2010). This was possible because topography poorly affects wave propagation at these long periods and the landslide seismic source can be approximated as a point source. In the near field and at higher frequencies (>1 Hz), the spatial extent of the source has to be taken into account, and the influence of the topography on the recorded seismic signal should be quantified in order to extract information on the landslide properties and dynamics. The characteristic signature of distributed sources and varying topographies is studied as a function of frequency and recording distance. The time-dependent spatial distribution of the forces applied to the ground by the landslide is obtained using granular-flow numerical modeling on 3D topography. The generated seismic waves are simulated using the spectral element method. The simulated seismic signal is compared to observed seismic data from rockfalls at the Dolomieu Crater of Piton de la Fournaise (La Réunion).
Favreau, P., Mangeney, A., Lucas, A., Crosta, G., and Bouchut, F. (2010). Numerical modeling of landquakes. Geophysical Research Letters, 37(15):1-5.

  3. Improved bioluminescence and fluorescence reconstruction algorithms using diffuse optical tomography, normalized data, and optimized selection of the permissible source region

    PubMed Central

    Naser, Mohamed A.; Patterson, Michael S.

    2011-01-01

Reconstruction algorithms are presented for two-step solutions of the bioluminescence tomography (BLT) and fluorescence tomography (FT) problems. In the first step, a continuous-wave (cw) diffuse optical tomography (DOT) algorithm is used to reconstruct the tissue optical properties, assuming known anatomical information provided by x-ray computed tomography or other methods. Minimization problems are formed based on L1-norm objective functions, where normalized values for the light fluence rates and the corresponding Green's functions are used. Then an iterative minimization solution shrinks the permissible regions where the sources are allowed by selecting points with higher probability to contribute to the source distribution. Throughout this process the permissible region shrinks from the entire object to just a few points. The optimum reconstructed bioluminescence and fluorescence distributions are chosen to be the results of the iteration corresponding to the permissible region where the objective function has its global minimum. This provides efficient BLT and FT reconstruction algorithms without the need for a priori information about the bioluminescence sources or the fluorophore concentration. Multiple small sources and large distributed sources can be reconstructed with good accuracy for the location and the total source power for BLT, and for the total number of fluorophore molecules for FT. For non-uniformly distributed sources, the size and magnitude become degenerate due to the degrees of freedom available for possible solutions. However, increasing the number of data points by increasing the number of excitation sources can improve the accuracy of reconstruction for non-uniform fluorophore distributions. PMID:21326647

  4. Integrating Low-Cost Mems Accelerometer Mini-Arrays (mama) in Earthquake Early Warning Systems

    NASA Astrophysics Data System (ADS)

    Nof, R. N.; Chung, A. I.; Rademacher, H.; Allen, R. M.

    2016-12-01

    Current operational Earthquake Early Warning Systems (EEWS) acquire data with networks of single seismic stations and compute source parameters assuming earthquakes to be point sources. For large events, the point-source assumption leads to an underestimation of magnitude, and the use of single stations leads to large uncertainties in the locations of events outside the network. We propose the use of mini-arrays to improve EEWS. Mini-arrays have the potential to: (a) estimate reliable hypocentral locations by beamforming (FK-analysis) techniques; (b) characterize the rupture dimensions and account for finite-source effects, leading to more reliable estimates for large magnitudes. Previously, the high price of multiple seismometers has made creating arrays cost-prohibitive. However, we propose setting up mini-arrays of a new seismometer, based on a low-cost (<$150), high-performance MEMS accelerometer, around conventional seismic stations. The expected benefits of such an approach include decreased alert times, improved real-time shaking predictions, and fewer false alarms. We use low-resolution 14-bit Quake Catcher Network (QCN) data collected during the Rapid Aftershock Mobilization Program (RAMP) in Christchurch, NZ following the M7.1 Darfield earthquake in September 2010. As the QCN network was so dense, we were able to use a small sub-array of up to ten sensors spread over a maximum area of 1.7 x 2.2 km² to demonstrate our approach and to solve for the back-azimuth (BAZ) of two events (Mw4.7 and Mw5.1) with less than ±10° error. We will also present details, benchmarks, and real-time measurements of the new 24-bit device.
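The FK/beamforming idea behind point (a) can be sketched as a delay-and-sum grid search over trial back-azimuths: the BAZ whose plane-wave delays best align the traces maximizes the stacked beam power. The station geometry, slowness, and sampling below are invented for illustration; a real EEWS implementation would use calibrated array responses and a full FK analysis.

```python
import math

# Hedged sketch of delay-and-sum back-azimuth (BAZ) estimation for a
# small mini-array, assuming a plane wave with known apparent slowness
# (s/km). All numerical values here are illustrative assumptions.
def beam_power(waveforms, coords, baz_deg, slowness, dt):
    """Stack traces with plane-wave delays for one trial BAZ; return power."""
    baz = math.radians(baz_deg)
    # Slowness vector of a wave arriving FROM the trial back-azimuth.
    sx, sy = -slowness * math.sin(baz), -slowness * math.cos(baz)
    n = len(waveforms[0])
    stack = [0.0] * n
    for (x, y), trace in zip(coords, waveforms):
        shift = int(round((sx * x + sy * y) / dt))  # delay in samples
        for i in range(n):
            j = i + shift
            if 0 <= j < n:
                stack[i] += trace[j]
    return sum(v * v for v in stack)

def estimate_baz(waveforms, coords, slowness, dt):
    """Grid-search the back-azimuth that maximizes stacked beam power."""
    return max(range(0, 360, 2),
               key=lambda b: beam_power(waveforms, coords, b, slowness, dt))
```

With only a handful of stations, as in the QCN sub-arrays, the power maximum is broad, which is consistent with the ±10° BAZ errors reported above.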

  5. ON THE 2012 OCTOBER 23 CIRCULAR RIBBON FLARE: EMISSION FEATURES AND MAGNETIC TOPOLOGY

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Kai; Guo, Yang; Ding, M. D., E-mail: guoyang@nju.edu.cn, E-mail: dmd@nju.edu.cn

    2015-06-20

    Circular ribbon flares are usually related to spine-fan type magnetic topology containing null points. In this paper, we investigate an X-class circular ribbon flare on 2012 October 23, using the multiwavelength data from the Solar Dynamics Observatory, Hinode, and RHESSI. In Ca ii H emission, the flare showed three ribbons with two highly elongated ones inside and outside a quasi-circular one, respectively. A hot channel was displayed in the extreme-ultraviolet emissions, implying the existence of a magnetic flux rope. Two hard X-ray (HXR) sources in the 12–25 keV energy band were located at the footpoints of this hot channel. Using a nonlinear force-free magnetic field extrapolation, we identify three topological structures: (1) a three-dimensional null point, (2) a flux rope below the fan of the null point, and (3) a large-scale quasi-separatrix layer (QSL) induced by the quadrupolar-like magnetic field of the active region. We find that the null point is embedded within the large-scale QSL. In our case, all three identified topological structures must be considered to explain all the emission features associated with the observed flare. Besides, the HXR sources are regarded as the consequence of the reconnection within or near the border of the flux rope.

  6. GLAST and Ground-Based Gamma-Ray Astronomy

    NASA Technical Reports Server (NTRS)

    McEnery, Julie

    2008-01-01

    The launch of the Gamma-ray Large Area Space Telescope together with the advent of a new generation of ground-based gamma-ray detectors such as VERITAS, HESS, MAGIC and CANGAROO, will usher in a new era of high-energy gamma-ray astrophysics. GLAST and the ground based gamma-ray observatories will provide highly complementary capabilities for spectral, temporal and spatial studies of high energy gamma-ray sources. Joint observations will cover a huge energy range, from 20 MeV to over 20 TeV. The LAT will survey the entire sky every three hours, allowing it both to perform uniform, long-term monitoring of variable sources and to detect flaring sources promptly. Both functions complement the high-sensitivity pointed observations provided by ground-based detectors. Finally, the large field of view of GLAST will allow a study of gamma-ray emission on large angular scales and identify interesting regions of the sky for deeper studies at higher energies. In this poster, we will discuss the science returns that might result from joint GLAST/ground-based gamma-ray observations and illustrate them with detailed source simulations.

  7. From the volcano effect to banding: a minimal model for bacterial behavioral transitions near chemoattractant sources.

    PubMed

    Javens, Gregory; Jashnsaz, Hossein; Pressé, Steve

    2018-04-30

    Sharp chemoattractant (CA) gradient variations near food sources may give rise to dramatic behavioral changes of bacteria neighboring these sources. For instance, marine bacteria exhibiting run-reverse motility are known to form distinct bands around patches (large sources) of chemoattractant such as nutrient-soaked beads while run-and-tumble bacteria have been predicted to exhibit a 'volcano effect' (spherical shell-shaped density) around a small (point) source of food. Here we provide the first minimal model of banding for run-reverse bacteria and show that, while banding and the volcano effect may appear superficially similar, they are different physical effects manifested under different source emission rate (and thus effective source size). More specifically, while the volcano effect is known to arise around point sources from a bacterium's temporal differentiation of signal (and corresponding finite integration time), this effect alone is insufficient to account for banding around larger patches as bacteria would otherwise cluster around the patch without forming bands at some fixed radial distance. In particular, our model demonstrates that banding emerges from the interplay of run-reverse motility and saturation of the bacterium's chemoreceptors to CA molecules and our model furthermore predicts that run-reverse bacteria susceptible to banding behavior should also exhibit a volcano effect around sources with smaller emission rates.

  8. Tapering the sky response for angular power spectrum estimation from low-frequency radio-interferometric data.

    PubMed

    Choudhuri, Samir; Bharadwaj, Somnath; Roy, Nirupam; Ghosh, Abhik; Ali, Sk Saiyad

    2016-06-11

    It is important to correctly subtract point sources from radio-interferometric data in order to measure the power spectrum of diffuse radiation like the Galactic synchrotron or the Epoch of Reionization 21-cm signal. It is computationally very expensive and challenging to image a very large area and accurately subtract all the point sources from the image. The problem is particularly severe at the sidelobes and the outer parts of the main lobe, where the antenna response is highly frequency dependent and the calibration also differs from that of the phase centre. Here, we show that it is possible to overcome this problem by tapering the sky response. Using simulated 150 MHz observations, we demonstrate that it is possible to suppress the contribution due to point sources from the outer parts by using the Tapered Gridded Estimator to measure the angular power spectrum Cℓ of the sky signal. We also show from the simulation that this method can self-consistently compute the noise bias and accurately subtract it to provide an unbiased estimation of Cℓ.

  9. MANAGEMENT OF DIFFUSE POLLUTION IN AGRICULTURAL WATERSHEDS: LESSONS FROM THE MINNESOTA RIVER BASIN. (R825290)

    EPA Science Inventory

    Abstract

    The Minnesota River (Minnesota, USA) receives large non-point source pollutant loads. Complex interactions between agricultural, state agency, environmental groups, and issues of scale make watershed management difficult. Subdividing the basin's 12 major water...

  10. ESTIMATING DIFFUSE STORMWATER NUTRIENT LOADS FROM SUBURBAN LANDSCAPES TO THE NAVESINK ESTUARY, NEW JERSEY

    EPA Science Inventory

    Hitherto, stormwater runoff from suburban land-uses has been largely unregulated and designated as a non-point source. Phase II of the Clean Water Act will require permits under the National Pollutant Discharge Elimination System for stormwater discharges from municipal separate ...

  11. ESTIMATING DIFFUSE STORMWATER NUTRIENT LOADS FROM SUBURBAN LANDSCAPES IN THE NAVESINK ESTUARY, NEW JERSEY

    EPA Science Inventory

    Hitherto, stormwater runoff from suburban land-uses has been largely unregulated and designated as a non-point source. Phase II of the Clean Water Act now requires permits under the National Pollutant Discharge Elimination System for stormwater discharges from municipal separate...

  12. Resolving the Extragalactic γ-Ray Background above 50 GeV with the Fermi Large Area Telescope.

    PubMed

    Ackermann, M; Ajello, M; Albert, A; Atwood, W B; Baldini, L; Ballet, J; Barbiellini, G; Bastieri, D; Bechtol, K; Bellazzini, R; Bissaldi, E; Blandford, R D; Bloom, E D; Bonino, R; Bregeon, J; Britto, R J; Bruel, P; Buehler, R; Caliandro, G A; Cameron, R A; Caragiulo, M; Caraveo, P A; Cavazzuti, E; Cecchi, C; Charles, E; Chekhtman, A; Chiang, J; Chiaro, G; Ciprini, S; Cohen-Tanugi, J; Cominsky, L R; Costanza, F; Cutini, S; D'Ammando, F; de Angelis, A; de Palma, F; Desiante, R; Digel, S W; Di Mauro, M; Di Venere, L; Domínguez, A; Drell, P S; Favuzzi, C; Fegan, S J; Ferrara, E C; Franckowiak, A; Fukazawa, Y; Funk, S; Fusco, P; Gargano, F; Gasparrini, D; Giglietto, N; Giommi, P; Giordano, F; Giroletti, M; Godfrey, G; Green, D; Grenier, I A; Guiriec, S; Hays, E; Horan, D; Iafrate, G; Jogler, T; Jóhannesson, G; Kuss, M; La Mura, G; Larsson, S; Latronico, L; Li, J; Li, L; Longo, F; Loparco, F; Lott, B; Lovellette, M N; Lubrano, P; Madejski, G M; Magill, J; Maldera, S; Manfreda, A; Mayer, M; Mazziotta, M N; Michelson, P F; Mitthumsiri, W; Mizuno, T; Moiseev, A A; Monzani, M E; Morselli, A; Moskalenko, I V; Murgia, S; Negro, M; Nuss, E; Ohsugi, T; Okada, C; Omodei, N; Orlando, E; Ormes, J F; Paneque, D; Perkins, J S; Pesce-Rollins, M; Petrosian, V; Piron, F; Pivato, G; Porter, T A; Rainò, S; Rando, R; Razzano, M; Razzaque, S; Reimer, A; Reimer, O; Reposeur, T; Romani, R W; Sánchez-Conde, M; Schmid, J; Schulz, A; Sgrò, C; Simone, D; Siskind, E J; Spada, F; Spandre, G; Spinelli, P; Suson, D J; Takahashi, H; Thayer, J B; Tibaldo, L; Torres, D F; Troja, E; Vianello, G; Yassine, M; Zimmer, S

    2016-04-15

    The Fermi Large Area Telescope (LAT) Collaboration has recently released a catalog of 360 sources detected above 50 GeV (2FHL). This catalog was obtained using 80 months of data re-processed with Pass 8, the newest event-level analysis, which significantly improves the acceptance and angular resolution of the instrument. Most of the 2FHL sources at high Galactic latitude are blazars. Using detailed Monte Carlo simulations, we measure, for the first time, the source count distribution, dN/dS, of extragalactic γ-ray sources at E>50  GeV and find that it is compatible with a Euclidean distribution down to the lowest measured source flux in the 2FHL (∼8×10^{-12}  ph cm^{-2} s^{-1}). We employ a one-point photon fluctuation analysis to constrain the behavior of dN/dS below the source detection threshold. Overall, the source count distribution is constrained over three decades in flux and found compatible with a broken power law with a break flux, S_{b}, in the range [8×10^{-12},1.5×10^{-11}]  ph cm^{-2} s^{-1} and power-law indices below and above the break of α_{2}∈[1.60,1.75] and α_{1}=2.49±0.12, respectively. Integration of dN/dS shows that point sources account for at least 86_{-14}^{+16}% of the total extragalactic γ-ray background. The simple form of the derived source count distribution is consistent with a single population (i.e., blazars) dominating the source counts to the minimum flux explored by this analysis. We estimate the density of sources detectable in blind surveys that will be performed in the coming years by the Cherenkov Telescope Array.
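The resolved fraction quoted above follows from integrating S·dN/dS over the broken power law. A minimal sketch, using the published break flux and indices but with an arbitrary normalization and illustrative integration limits (both are assumptions, not values from the paper):

```python
import math

# Broken power law dN/dS as constrained by the 2FHL analysis.
# A is an arbitrary example normalization; Sb, a1, a2 are the
# published best-fit break flux and slopes above/below it.
def dnds(S, A=1.0, Sb=1.0e-11, a1=2.49, a2=1.675):
    """Sources per unit flux at flux S (ph cm^-2 s^-1)."""
    if S >= Sb:
        return A * (S / Sb) ** (-a1)
    return A * (S / Sb) ** (-a2)

def flux_integral(Smin, Smax, n=100000):
    """Integrate S * dN/dS over [Smin, Smax], midpoint rule in log S."""
    total = 0.0
    lo, hi = math.log(Smin), math.log(Smax)
    dlog = (hi - lo) / n
    for i in range(n):
        S = math.exp(lo + (i + 0.5) * dlog)
        total += S * dnds(S) * S * dlog   # dS = S dlogS
    return total

# Fraction of the integrated flux contributed by sources above the
# ~8e-12 detection threshold, relative to an assumed lower cutoff:
frac = flux_integral(8e-12, 1e-9) / flux_integral(1e-13, 1e-9)
```

Because the slope below the break is flatter than 2, the integral converges at the faint end, which is why the analysis can bound the point-source contribution to the total background.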

  13. Resolving the Extragalactic γ -Ray Background above 50 GeV with the Fermi Large Area Telescope

    DOE PAGES

    Ackermann, M.; Ajello, M.; Albert, A.; ...

    2016-04-14

    The Fermi Large Area Telescope (LAT) Collaboration has recently released a catalog of 360 sources detected above 50 GeV (2FHL). This catalog was obtained using 80 months of data re-processed with Pass 8, the newest event-level analysis, which significantly improves the acceptance and angular resolution of the instrument. Most of the 2FHL sources at high Galactic latitude are blazars. In this paper, using detailed Monte Carlo simulations, we measure, for the first time, the source count distribution, dN/dS, of extragalactic γ-ray sources at E > 50 GeV and find that it is compatible with a Euclidean distribution down to the lowest measured source flux in the 2FHL (~8 × 10^-12 ph cm^-2 s^-1). We employ a one-point photon fluctuation analysis to constrain the behavior of dN/dS below the source detection threshold. Overall, the source count distribution is constrained over three decades in flux and found compatible with a broken power law with a break flux, S_b, in the range [8 × 10^-12, 1.5 × 10^-11] ph cm^-2 s^-1 and power-law indices below and above the break of α_2 ∈ [1.60, 1.75] and α_1 = 2.49 ± 0.12, respectively. Integration of dN/dS shows that point sources account for at least 86^{+16}_{-14}% of the total extragalactic γ-ray background. The simple form of the derived source count distribution is consistent with a single population (i.e., blazars) dominating the source counts to the minimum flux explored by this analysis. Finally, we estimate the density of sources detectable in blind surveys that will be performed in the coming years by the Cherenkov Telescope Array.

  14. Numerical simulation of seismic wave propagation from land-excited large volume air-gun source

    NASA Astrophysics Data System (ADS)

    Cao, W.; Zhang, W.

    2017-12-01

    The land-excited large volume air-gun source can be used to study regional underground structures and to detect temporal velocity changes. The air-gun source is characterized by rich low-frequency energy (from bubble oscillation, 2-8 Hz) and high repeatability. It can be excited in rivers, reservoirs, or man-made pools. Numerical simulation of the seismic wave propagation from the air-gun source helps us understand the energy partitioning and the characteristics of the waveform records at stations. However, the effective energy recorded at a distant station comes from the process of bubble oscillation, which cannot be approximated by a single point source. We propose a method to simulate the seismic wave propagation from the land-excited large volume air-gun source by the finite difference method. The process can be divided into three parts: bubble oscillation and source coupling, solid-fluid coupling, and propagation in the solid medium. For the first part, the wavelet of the bubble oscillation can be simulated by a bubble model. We use a wave injection method, combining the bubble wavelet with the elastic wave equation, to achieve the source coupling. Then, the solid-fluid boundary condition is implemented along the water bottom. The last part is the seismic wave propagation in the solid medium, which can be readily implemented by the finite difference method. Our method yields accurate waveforms for the land-excited large volume air-gun source. Based on this forward modeling technology, we analyze the energy of the excited P wave and of the converted S wave for different water body shapes. We study two land-excited large volume air-gun fields, one in Binchuan, Yunnan, and the other in Hutubi, Xinjiang. The station in Binchuan, Yunnan is located in a large irregular reservoir, and the waveform records have a clear S wave; the station in Hutubi, Xinjiang is located in a small man-made pool, and the waveform records have a very weak S wave. A better understanding of the characteristics of the land-excited large volume air-gun source can help make better use of it.
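The last stage of the workflow, propagation in the medium by finite differences, can be sketched in one dimension. This is a minimal illustration, not the authors' code: the bubble-oscillation wavelet is replaced by a generic Ricker wavelet, and the solid-fluid coupling step is omitted; all grid and medium parameters are assumptions.

```python
import math

def ricker(t, f0=5.0, t0=0.3):
    """Ricker wavelet standing in for the bubble-oscillation source."""
    a = (math.pi * f0 * (t - t0)) ** 2
    return (1.0 - 2.0 * a) * math.exp(-a)

def propagate(nx=300, nt=1000, dx=10.0, dt=0.001, c=2000.0, src=50, rec=150):
    """Second-order finite-difference solution of u_tt = c^2 u_xx with a
    point source injected at grid index src; returns the trace at rec."""
    r2 = (c * dt / dx) ** 2        # squared Courant number (0.04: stable)
    u_prev = [0.0] * nx
    u = [0.0] * nx
    seis = []
    for it in range(nt):
        u_next = [0.0] * nx
        for i in range(1, nx - 1):
            u_next[i] = (2.0 * u[i] - u_prev[i]
                         + r2 * (u[i + 1] - 2.0 * u[i] + u[i - 1]))
        u_next[src] += dt * dt * ricker(it * dt)   # inject source wavelet
        u_prev, u = u, u_next
        seis.append(u[rec])
    return seis
```

Injecting the source wavelet into the update, as in the last line of the loop, is the simplest form of the wave injection idea described above; the full method would instead inject the bubble wavelet across a coupling surface and honor the solid-fluid boundary condition.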

  15. Energy Spectral Behaviors of Communication Networks of Open-Source Communities

    PubMed Central

    Yang, Jianmei; Yang, Huijie; Liao, Hao; Wang, Jiangtao; Zeng, Jinqun

    2015-01-01

    Large-scale online collaborative production in open-source communities must be accompanied by large-scale communication. The production activities of open-source communities, and especially their communication activities, have attracted increasing attention. Taking the CodePlex C# community as an example, this paper constructs complex network models of the community's communication structure over 12 periods based on real data. It then discusses the basic concepts of quantum mappings of complex networks, pointing out that the purpose of the mapping is to study the structures of complex networks following the way quantum mechanics studies the structures of large molecules. Finally, following this idea, it analyzes and compares the fractal features of the spectra under different quantum mappings of the networks, and concludes that the communication structures of the community exhibit multiple self-similarity and criticality. In addition, this paper discusses the insights offered by different quantum mappings, and the conditions for applying them, in revealing the characteristics of the structures. The proposed quantum mapping method can also be applied to structural studies of other large-scale organizations. PMID:26047331

  16. SIFT optimization and automation for matching images from multiple temporal sources

    NASA Astrophysics Data System (ADS)

    Castillo-Carrión, Sebastián; Guerrero-Ginel, José-Emilio

    2017-05-01

    Scale Invariant Feature Transform (SIFT) was applied to extract tie-points from multiple-source images. Although SIFT is reported to perform reliably under widely different radiometric and geometric conditions, using the default input parameters resulted in too few points being found. We found that the best solution was to focus on large features, as these are more robust and less prone to scene changes over time; this constitutes a first approach to automating mapping applications such as geometric correction, creation of orthophotos, and generation of 3D models. The optimization of five key SIFT parameters is proposed as a way of increasing the number of correct matches; the performance of SIFT is explored over different images and parameter values, finding optimal values that are then corroborated using independent validation imagery. The results show that the optimization model improves the performance of SIFT in correlating multitemporal images captured from different sources.
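The parameter-optimization step can be sketched as an exhaustive grid search over SIFT's key parameters. The harness below is generic: `run_matcher` is a hypothetical stand-in for a pipeline that would, for example, wrap OpenCV's `cv2.SIFT_create` plus ratio-test matching and geometric validation, and the parameter ranges are illustrative rather than the optimized values from the paper.

```python
import itertools

# Hedged sketch of the parameter-optimization idea: grid-search
# SIFT-style parameters to maximize the number of validated tie-point
# matches. run_matcher(params) -> score is supplied by the caller.
def optimize_params(run_matcher, grid):
    """Return (best_params, best_score) over the Cartesian product of grid."""
    best, best_score = None, -1
    keys = sorted(grid)
    for values in itertools.product(*(grid[k] for k in keys)):
        params = dict(zip(keys, values))
        score = run_matcher(params)
        if score > best_score:
            best, best_score = params, score
    return best, best_score

# Example grid over five key parameters (names follow OpenCV's SIFT;
# the ranges are assumptions for illustration):
grid = {
    "nfeatures": [0, 2000],
    "nOctaveLayers": [3, 4, 5],
    "contrastThreshold": [0.02, 0.04],
    "edgeThreshold": [10, 20],
    "sigma": [1.2, 1.6],
}
```

Because each scoring run involves full detection and matching, in practice the grid is kept small or searched coarsely first and then refined around the best region.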

  17. Optical design for CETUS: a wide-field 1.5m aperture UV payload being studied for a NASA probe class mission study

    NASA Astrophysics Data System (ADS)

    Woodruff, Robert A.; Hull, Tony; Heap, Sara R.; Danchi, William; Kendrick, Stephen E.; Purves, Lloyd

    2017-09-01

    We are developing a NASA Headquarters selected Probe-class mission concept called the Cosmic Evolution Through UV Spectroscopy (CETUS) mission, which includes a 1.5-m aperture diameter large field-of-view (FOV) telescope optimized for UV imaging, multi-object spectroscopy, and point-source spectroscopy. The optical system includes a Three Mirror Anastigmatic (TMA) telescope that simultaneously feeds three separate scientific instruments: the near-UV (NUV) Multi-Object Spectrograph (MOS) with a next-generation Micro-Shutter Array (MSA); the two-channel camera covering the far-UV (FUV) and NUV spectrum; and the point-source spectrograph covering the FUV and NUV region with selectable R~40,000 echelle modes and R~2,000 first-order modes. The optical system includes fine guidance sensors, wavefront sensing, and spectral and flat-field in-flight calibration sources. This paper will describe the current optical design of CETUS.

  18. Optical design for CETUS: a wide-field 1.5m aperture UV payload being studied for a NASA probe class mission study

    NASA Astrophysics Data System (ADS)

    Woodruff, Robert

    2018-01-01

    We are developing a NASA Headquarters selected Probe-class mission concept called the Cosmic Evolution Through UV Spectroscopy (CETUS) mission, which includes a 1.5-m aperture diameter large field-of-view (FOV) telescope optimized for UV imaging, multi-object spectroscopy, and point-source spectroscopy. The optical system includes a Three Mirror Anastigmatic (TMA) telescope that simultaneously feeds three separate scientific instruments: the near-UV (NUV) Multi-Object Spectrograph (MOS) with a next-generation Micro-Shutter Array (MSA); the two-channel camera covering the far-UV (FUV) and NUV spectrum; and the point-source spectrograph covering the FUV and NUV region with selectable R~ 40,000 echelle modes and R~ 2,000 first order modes. The optical system includes fine guidance sensors, wavefront sensing, and spectral and flat-field in-flight calibration sources. This paper will describe the current optical design of CETUS.

  19. Estimating global and North American methane emissions with high spatial resolution using GOSAT satellite data

    NASA Astrophysics Data System (ADS)

    Turner, A. J.; Jacob, D. J.; Wecht, K. J.; Maasakkers, J. D.; Biraud, S. C.; Boesch, H.; Bowman, K. W.; Deutscher, N. M.; Dubey, M. K.; Griffith, D. W. T.; Hase, F.; Kuze, A.; Notholt, J.; Ohyama, H.; Parker, R.; Payne, V. H.; Sussmann, R.; Velazco, V. A.; Warneke, T.; Wennberg, P. O.; Wunch, D.

    2015-02-01

    We use 2009-2011 space-borne methane observations from the Greenhouse Gases Observing SATellite (GOSAT) to constrain global and North American inversions of methane emissions with 4° × 5° and up to 50 km × 50 km spatial resolution, respectively. The GOSAT data are first evaluated with atmospheric methane observations from surface networks (NOAA, TCCON) and aircraft (NOAA/DOE, HIPPO), using the GEOS-Chem chemical transport model as a platform to facilitate comparison of GOSAT with in situ data. This identifies a high-latitude bias between the GOSAT data and GEOS-Chem that we correct via quadratic regression. The surface and aircraft data are subsequently used for independent evaluation of the methane source inversions. Our global adjoint-based inversion yields a total methane source of 539 Tg a-1 and points to a large East Asian overestimate in the EDGARv4.2 inventory used as a prior. Results serve as dynamic boundary conditions for an analytical inversion of North American methane emissions using radial basis functions to achieve high resolution of large sources and provide full error characterization. We infer a US anthropogenic methane source of 40.2-42.7 Tg a-1, as compared to 24.9-27.0 Tg a-1 in the EDGAR and EPA bottom-up inventories, and 30.0-44.5 Tg a-1 in recent inverse studies. Our estimate is supported by independent surface and aircraft data and by previous inverse studies for California. We find that the emissions are highest in the South-Central US, the Central Valley of California, and Florida wetlands; large isolated point sources such as the US Four Corners also contribute. We attribute 29-44% of US anthropogenic methane emissions to livestock, 22-31% to oil/gas, 20% to landfills/waste water, and 11-15% to coal, with an additional 9.0-10.1 Tg a-1 source from wetlands.

  20. A new catalogue of ultraluminous X-ray sources (and more!)

    NASA Astrophysics Data System (ADS)

    Roberts, T.; Earnshaw, H.; Walton, D.; Middleton, M.; Mateos, S.

    2017-10-01

    Many of the critical issues of ultraluminous X-ray source (ULX) science - for example the prevalence of IMBH and/or ULX pulsar candidates within the wider ULX population - can only be addressed by studying statistical samples of ULXs. Similarly, characterising the range of properties displayed by ULXs, and so understanding their accretion physics, requires large samples of objects. To this end, we introduce a new catalogue of 376 ultraluminous X-ray sources and 1092 less luminous point X-ray sources associated with nearby galaxies, derived from the 3XMM-DR4 catalogue. We highlight applications of this catalogue, for example the identification of new IMBH candidates from the most luminous ULXs; and examining the physics of objects at the Eddington threshold, where their luminosities of ˜ 10^{39} erg s^{-1} indicate their accretion rates are ˜ Eddington. We also show how the catalogue can be used to start to examine a wider range of lower luminosity (sub-ULX) point sources in star forming galaxies than previously accessible through spectral stacking, and argue why this is important for galaxy formation in the high redshift Universe.

  1. Gaia Data Release 1. Pre-processing and source list creation

    NASA Astrophysics Data System (ADS)

    Fabricius, C.; Bastian, U.; Portell, J.; Castañeda, J.; Davidson, M.; Hambly, N. C.; Clotet, M.; Biermann, M.; Mora, A.; Busonero, D.; Riva, A.; Brown, A. G. A.; Smart, R.; Lammers, U.; Torra, J.; Drimmel, R.; Gracia, G.; Löffler, W.; Spagna, A.; Lindegren, L.; Klioner, S.; Andrei, A.; Bach, N.; Bramante, L.; Brüsemeister, T.; Busso, G.; Carrasco, J. M.; Gai, M.; Garralda, N.; González-Vidal, J. J.; Guerra, R.; Hauser, M.; Jordan, S.; Jordi, C.; Lenhardt, H.; Mignard, F.; Messineo, R.; Mulone, A.; Serraller, I.; Stampa, U.; Tanga, P.; van Elteren, A.; van Reeven, W.; Voss, H.; Abbas, U.; Allasia, W.; Altmann, M.; Anton, S.; Barache, C.; Becciani, U.; Berthier, J.; Bianchi, L.; Bombrun, A.; Bouquillon, S.; Bourda, G.; Bucciarelli, B.; Butkevich, A.; Buzzi, R.; Cancelliere, R.; Carlucci, T.; Charlot, P.; Collins, R.; Comoretto, G.; Cross, N.; Crosta, M.; de Felice, F.; Fienga, A.; Figueras, F.; Fraile, E.; Geyer, R.; Hernandez, J.; Hobbs, D.; Hofmann, W.; Liao, S.; Licata, E.; Martino, M.; McMillan, P. J.; Michalik, D.; Morbidelli, R.; Parsons, P.; Pecoraro, M.; Ramos-Lerate, M.; Sarasso, M.; Siddiqui, H.; Steele, I.; Steidelmüller, H.; Taris, F.; Vecchiato, A.; Abreu, A.; Anglada, E.; Boudreault, S.; Cropper, M.; Holl, B.; Cheek, N.; Crowley, C.; Fleitas, J. M.; Hutton, A.; Osinde, J.; Rowell, N.; Salguero, E.; Utrilla, E.; Blagorodnova, N.; Soffel, M.; Osorio, J.; Vicente, D.; Cambras, J.; Bernstein, H.-H.

    2016-11-01

    Context. The first data release from the Gaia mission contains accurate positions and magnitudes for more than a billion sources, and proper motions and parallaxes for the majority of the 2.5 million Hipparcos and Tycho-2 stars. Aims: We describe three essential elements of the initial data treatment leading to this catalogue: the image analysis, the construction of a source list, and the near real-time monitoring of the payload health. We also discuss some weak points that set limitations for the attainable precision at the present stage of the mission. Methods: Image parameters for point sources are derived from one-dimensional scans, using a maximum likelihood method, under the assumption of a line spread function constant in time, and a complete modelling of bias and background. These conditions are, however, not completely fulfilled. The Gaia source list is built starting from a large ground-based catalogue, but even so a significant number of new entries have been added, and a large number have been removed. The autonomous onboard star image detection will pick up many spurious images, especially around bright sources, and such unwanted detections must be identified. Another key step of the source list creation consists in arranging the more than 10^10 individual detections in spatially isolated groups that can be analysed individually. Results: Complete software systems have been built for the Gaia initial data treatment, which manage approximately 50 million focal plane transits daily, giving transit times and fluxes for 500 million individual CCD images to the astrometric and photometric processing chains. The software also carries out a successful and detailed daily monitoring of Gaia health.

  2. Teleseismic Body Wave Analysis for the 27 September 2003 Altai, Earthquake (Mw7.4) and Large Aftershocks

    NASA Astrophysics Data System (ADS)

    Gomez-Gonzalez, J. M.; Mellors, R.

    2007-05-01

    We investigate the kinematics of the rupture process of the September 27, 2003, Mw7.3 Altai earthquake and its associated large aftershocks. This is the largest earthquake to strike the Altai mountains in the last 50 years, and it provides important constraints on the ongoing tectonics. The fault plane solution obtained by teleseismic body waveform modeling indicates a predominantly strike-slip event (strike=130, dip=75, rake=170). The scalar moment for the main shock ranges from 0.688 to 1.196E+20 N m, with a source duration of about 20 to 42 s and an average centroid depth of 10 km. The source duration would indicate a fault length of about 130-270 km. The main shock was followed closely by two aftershocks (Mw5.7, Mw6.4) that occurred the same day; another aftershock (Mw6.7) occurred on 1 October 2003. We also modeled the second aftershock (Mw6.4) to assess geometric similarities in their respective rupture processes. This aftershock occurred spatially very close to the mainshock and has a similar fault plane solution (strike=128, dip=71, rake=154) and centroid depth (13 km). Several local conditions, such as the crustal model and fault geometry, affect the correct estimation of some source parameters. We perform a sensitivity evaluation of several parameters, including centroid depth, scalar moment, and source duration, based on point-source and finite-source modeling. The point-source approximation results are the starting parameters for the finite-source exploration. We evaluate the various reported parameters to discard poorly constrained models. In addition, deformation data acquired by InSAR are also included in the analysis.

  3. Adaptive Cross-correlation Algorithm and Experiment of Extended Scene Shack-Hartmann Wavefront Sensing

    NASA Technical Reports Server (NTRS)

    Sidick, Erkin; Morgan, Rhonda M.; Green, Joseph J.; Ohara, Catherine M.; Redding, David C.

    2007-01-01

    We have developed a new adaptive cross-correlation (ACC) algorithm to estimate with high accuracy shifts as large as several pixels between two extended-scene images captured by a Shack-Hartmann wavefront sensor (SH-WFS). It determines the positions of all of the extended-scene image cells relative to a reference cell using an FFT-based iterative image-shifting algorithm. It works with both point-source spot images and extended-scene images. We have also set up a testbed for the extended-scene SH-WFS, and tested the ACC algorithm with measured data from both point-source and extended-scene images. In this paper we describe our algorithm and present our experimental results.
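The FFT-based shift estimation at the core of such algorithms can be sketched with phase correlation, which recovers an integer translation between two image cells. This is a minimal sketch only; the subpixel refinement and iterative re-shifting of the full ACC algorithm are omitted.

```python
import numpy as np

def estimate_shift(ref, img):
    """Phase correlation: return (dy, dx) such that img equals ref
    cyclically shifted (rolled) by (dy, dx)."""
    F1, F2 = np.fft.fft2(ref), np.fft.fft2(img)
    cross = F1 * np.conj(F2)
    cross /= np.abs(cross) + 1e-12          # keep only phase information
    corr = np.fft.ifft2(cross).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap peak indices into the signed range [-N/2, N/2)
    h, w = corr.shape
    if dy > h // 2:
        dy -= h
    if dx > w // 2:
        dx -= w
    return int(-dy), int(-dx)
```

Normalizing the cross-power spectrum to unit magnitude keeps only the phase, which makes the correlation peak sharp even when the two cells differ in brightness or contrast.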

  4. Taking the Radio Blinders Off of M83: A Wide Spectrum Analysis of the Historical Point Source Population

    NASA Astrophysics Data System (ADS)

    Stockdale, Christopher; Keefe, Clayton; Nichols, Michael; Rujevcan, Colton; Blair, William P.; Cowan, John J.; Godfrey, Leith; Miller-Jones, James; Kuntz, K. D.; Long, Knox S.; Maddox, Larry A.; Plucinsky, Paul P.; Pritchard, Tyler A.; Soria, Roberto; Whitmore, Bradley C.; Winkler, P. Frank

    2015-01-01

    We present low frequency observations of the grand design spiral galaxy, M83, using the C and L bands of the Karl G. Jansky Very Large Array (VLA). With recent optical (HST) and X-ray (Chandra) observations and utilizing the newly expanded bandwidth of the VLA, we are exploring the radio spectral properties of the historical radio point sources in M83. These observations allow us to probe the evolution of supernova remnants (SNRs) and to find previously undiscovered SNRs. These observations represent the fourth epoch of deep VLA observations of M83. The National Radio Astronomy Observatory is a facility of the National Science Foundation operated under cooperative agreement by Associated Universities.

  5. Six New Millisecond Pulsars From Arecibo Searches Of Fermi Gamma-Ray Sources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cromartie, H. T.; Camilo, F.; Kerr, M.

    2016-02-25

We have discovered six radio millisecond pulsars (MSPs) in a search with the Arecibo telescope of 34 unidentified gamma-ray sources from the Fermi Large Area Telescope (LAT) 4-year point source catalog. Among the 34 sources, we also detected two MSPs previously discovered elsewhere. Each source was observed at a center frequency of 327 MHz, typically at three epochs with individual integration times of 15 minutes. The new MSP spin periods range from 1.99 to 4.66 ms. Five of the six pulsars are in interacting compact binaries (period ≤ 8.1 hr), while the sixth is a more typical neutron star-white dwarf binary with an 83-day orbital period. This is a higher proportion of interacting binaries than for equivalent Fermi-LAT searches elsewhere. The reason is that Arecibo’s large gain afforded us the opportunity to limit integration times to 15 minutes, which significantly increased our sensitivity to these highly accelerated systems. Seventeen of the remaining 26 gamma-ray sources are still categorized as strong MSP candidates, and will be re-searched.

  7. Reducing Phosphorus Runoff from Biosolids with Water Treatment Residuals

    USDA-ARS?s Scientific Manuscript database

    A large fraction of the biosolids produced in the U.S. are placed in landfills or incinerated to avoid potential water quality problems associated with non-point source phosphorus (P) runoff. The objective of this study was to determine the effect of various chemical amendments on P runoff from bi...

  8. A superconducting large-angle magnetic suspension

    NASA Technical Reports Server (NTRS)

    Downer, James R.; Anastas, George V., Jr.; Bushko, Dariusz A.; Flynn, Frederick J.; Goldie, James H.; Gondhalekar, Vijay; Hawkey, Timothy J.; Hockney, Richard L.; Torti, Richard P.

    1992-01-01

SatCon Technology Corporation has completed a Small Business Innovation Research (SBIR) Phase 2 program to develop a Superconducting Large-Angle Magnetic Suspension (LAMS) for the NASA Langley Research Center. The Superconducting LAMS was a hardware demonstration of the control technology required to develop an advanced momentum exchange effector. The Phase 2 research was directed toward demonstrating the key technology required for the advanced-concept CMG: the controller. The Phase 2 hardware consists of a superconducting solenoid (the 'source coil') suspended within an array of nonsuperconducting coils ('control coils'), a five-degree-of-freedom position-sensing system, switching power amplifiers, and a digital control system. The results demonstrated the feasibility of suspending the source coil. Gimballing (pointing the axis of the source coil) was demonstrated over a limited range. With further development of the rotation sensing system, enhanced angular freedom should be possible.

  10. The Chandra Source Catalog : Automated Source Correlation

    NASA Astrophysics Data System (ADS)

    Hain, Roger; Evans, I. N.; Evans, J. D.; Glotfelty, K. J.; Anderson, C. S.; Bonaventura, N. R.; Chen, J. C.; Davis, J. E.; Doe, S. M.; Fabbiano, G.; Galle, E.; Gibbs, D. G.; Grier, J. D.; Hall, D. M.; Harbo, P. N.; He, X.; Houck, J. C.; Karovska, M.; Lauer, J.; McCollough, M. L.; McDowell, J. C.; Miller, J. B.; Mitschang, A. W.; Morgan, D. L.; Nichols, J. S.; Nowak, M. A.; Plummer, D. A.; Primini, F. A.; Refsdal, B. L.; Rots, A. H.; Siemiginowska, A. L.; Sundheim, B. A.; Tibbetts, M. S.; Van Stone, D. W.; Winkelman, S. L.; Zografou, P.

    2009-01-01

Chandra Source Catalog (CSC) master source pipeline processing seeks to automatically detect sources and compute their properties. Since Chandra is a pointed mission and not a sky survey, different sky regions are observed a varying number of times, at different orientations, resolutions, and other heterogeneous conditions. While this provides an opportunity to collect data from a potentially large number of observing passes, it also creates challenges in determining the best way to combine different detection results for the most accurate characterization of the detected sources. The CSC master source pipeline correlates data from multiple observations by updating existing cataloged source information with new data from the same sky region as they become available. This process sometimes leads to relatively straightforward conclusions, such as when single sources from two observations are similar in size and position. Other observation results require more logic to combine, such as one observation finding a single, large source and another identifying multiple, smaller sources at the same position. We present examples of different overlapping source detections processed in the current version of the CSC master source pipeline. We explain how they are resolved into entries in the master source database, and examine the challenges of computing source properties for the same source detected multiple times. Future enhancements are also discussed. This work is supported by NASA contract NAS8-03060 (CXC).
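The basic idea of updating a master source list with new detections can be sketched very loosely as a positional cross-match: a detection within a matching radius of an existing entry refines that entry, otherwise it becomes a new source. The real CSC pipeline logic is far richer (size comparison, overlapping detections, per-source errors); the radius, fields, and flat coordinates here are invented for illustration.

```python
import math

def update_master(master, detection, radius=2.0):
    """master: list of dicts with 'ra'/'dec' (flat toy coords) and 'n_obs'."""
    for src in master:
        if math.hypot(src["ra"] - detection["ra"], src["dec"] - detection["dec"]) < radius:
            # Merge: average the position, count the extra observation.
            n = src["n_obs"]
            src["ra"] = (src["ra"] * n + detection["ra"]) / (n + 1)
            src["dec"] = (src["dec"] * n + detection["dec"]) / (n + 1)
            src["n_obs"] = n + 1
            return master
    master.append({**detection, "n_obs": 1})   # no match: new master entry
    return master

master = []
for det in [{"ra": 10.0, "dec": 5.0}, {"ra": 10.5, "dec": 5.2}, {"ra": 40.0, "dec": -3.0}]:
    master = update_master(master, det)
```

After the loop the first two detections have merged into one entry observed twice, and the third stands alone.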

  11. Density and fluence dependence of lithium cell damage and recovery characteristics

    NASA Technical Reports Server (NTRS)

    Faith, T. J.

    1971-01-01

    Experimental results on lithium-containing solar cells point toward the lithium donor density gradient dN sub L/dw as being the crucial parameter in the prediction of cell behavior after irradiation by electrons. Recovery measurements on a large number of oxygen-rich and oxygen-lean lithium cells have confirmed that cell recovery speed is directly proportional to the value of the lithium gradient for electron fluences. Gradient measurements have also been correlated with lithium diffusion schedules. Results have shown that long diffusion times (25 h) with a paint-on source result in large cell-to-cell variations in gradient, probably due to a loss of the lithium source with time.

  12. Power-output regularization in global sound equalization.

    PubMed

    Stefanakis, Nick; Sarris, John; Cambourakis, George; Jacobsen, Finn

    2008-01-01

    The purpose of equalization in room acoustics is to compensate for the undesired modification that an enclosure introduces to signals such as audio or speech. In this work, equalization in a large part of the volume of a room is addressed. The multiple point method is employed with an acoustic power-output penalty term instead of the traditional quadratic source effort penalty term. Simulation results demonstrate that this technique gives a smoother decline of the reproduction performance away from the control points.
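The multiple point method with a quadratic penalty reduces to a regularized least-squares solve for the source strengths. A minimal sketch with a synthetic complex transfer matrix H and target field d (both invented here): W = I reproduces the traditional source-effort penalty, and substituting a Hermitian radiation matrix for W gives a power-output-style penalty of the kind the abstract describes.

```python
import numpy as np

# min_q ||H q - d||^2 + beta * q^H W q  ->  (H^H H + beta W) q = H^H d
rng = np.random.default_rng(1)
M, S = 12, 4                       # control points, sources (toy sizes)
H = rng.normal(size=(M, S)) + 1j * rng.normal(size=(M, S))
d = rng.normal(size=M) + 1j * rng.normal(size=M)
beta = 0.1                         # regularization weight
W = np.eye(S)                      # identity = classic source-effort penalty

q = np.linalg.solve(H.conj().T @ H + beta * W, H.conj().T @ d)
residual = np.linalg.norm(H @ q - d)   # reproduction error at the control points
```

Raising beta trades reproduction accuracy at the control points against the penalized quantity (effort or radiated power), which is what shapes how performance declines away from the control points.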

  13. THE CHANDRA COSMOS SURVEY. I. OVERVIEW AND POINT SOURCE CATALOG

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Elvis, Martin; Civano, Francesca; Aldcroft, T. L.

    2009-09-01

The Chandra COSMOS Survey (C-COSMOS) is a large, 1.8 Ms, Chandra program that has imaged the central 0.5 deg² of the COSMOS field (centered at 10h, +02°) with an effective exposure of ~160 ks, and an outer 0.4 deg² area with an effective exposure of ~80 ks. The limiting source detection depths are 1.9 × 10⁻¹⁶ erg cm⁻² s⁻¹ in the soft (0.5-2 keV) band, 7.3 × 10⁻¹⁶ erg cm⁻² s⁻¹ in the hard (2-10 keV) band, and 5.7 × 10⁻¹⁶ erg cm⁻² s⁻¹ in the full (0.5-10 keV) band. Here we describe the strategy, design, and execution of the C-COSMOS survey, and present the catalog of 1761 point sources detected at a probability of being spurious of <2 × 10⁻⁵ (1655 in the full, 1340 in the soft, and 1017 in the hard bands). By using a grid of 36 heavily (~50%) overlapping pointing positions with the ACIS-I imager, a remarkably uniform (±12%) exposure across the inner 0.5 deg² field was obtained, leading to a sharply defined lower flux limit. The widely different point-spread functions obtained in each exposure at each point in the field required a novel source detection method, because of the overlapping tiling strategy, which is described in a companion paper. This method produced reliable sources down to 7-12 counts, as verified by the resulting logN-logS curve, with subarcsecond positions, enabling optical and infrared identifications of virtually all sources, as reported in a second companion paper. The full catalog is described here in detail and is available online.

  14. Facilitating open global data use in earthquake source modelling to improve geodetic and seismological approaches

    NASA Astrophysics Data System (ADS)

    Sudhaus, Henriette; Heimann, Sebastian; Steinberg, Andreas; Isken, Marius; Vasyura-Bathke, Hannes

    2017-04-01

    In the last few years impressive achievements have been made in improving inferences about earthquake sources by using InSAR (Interferometric Synthetic Aperture Radar) data. Several factors aided these developments. The open data basis of earthquake observations has expanded vastly with the two powerful Sentinel-1 SAR sensors up in space. Increasing computer power allows processing of large data sets for more detailed source models. Moreover, data inversion approaches for earthquake source inferences are becoming more advanced. By now data error propagation is widely implemented and the estimation of model uncertainties is a regular feature of reported optimum earthquake source models. Also, more regularly InSAR-derived surface displacements and seismological waveforms are combined, which requires finite rupture models instead of point-source approximations and layered medium models instead of homogeneous half-spaces. In other words the disciplinary differences in geodetic and seismological earthquake source modelling shrink towards common source-medium descriptions and a source near-field/far-field data point of view. We explore and facilitate the combination of InSAR-derived near-field static surface displacement maps and dynamic far-field seismological waveform data for global earthquake source inferences. We join in the community efforts with the particular goal to improve crustal earthquake source inferences in generally not well instrumented areas, where often only the global backbone observations of earthquakes are available provided by seismological broadband sensor networks and, since recently, by Sentinel-1 SAR acquisitions. We present our work on modelling standards for the combination of static and dynamic surface displacements in the source's near-field and far-field, e.g. on data and prediction error estimations as well as model uncertainty estimation. 
Rectangular dislocations and moment-tensor point sources are replaced by simple planar finite rupture models. 1-D layered medium models are implemented for both near- and far-field data predictions. A highlight of our approach is a weak dependence on earthquake bulletin information: hypocenter locations and source origin times are relatively free source model parameters. We present this harmonized source modelling environment based on example earthquake studies, e.g. the 2010 Haiti earthquake, the 2009 L'Aquila earthquake and others. We discuss the benefit of combined-data non-linear modelling on the resolution of first-order rupture parameters, e.g. location, size, orientation, mechanism, moment/slip and rupture propagation. The presented studies apply our newly developed software tools, which build on the open-source seismological software toolbox pyrocko (www.pyrocko.org) in the form of modules. We aim to facilitate a better exploitation of open global data sets for a wide community studying tectonics, but the tools are also applicable to a large range of regional to local earthquake studies. Our developments therefore ensure great flexibility in the parametrization of medium models (e.g. 1-D to 3-D medium models), source models (e.g. explosion sources, full moment tensor sources, heterogeneous slip models, etc.) and of the predicted data (e.g. (high-rate) GPS, strong motion, tilt). This work is conducted within the project "Bridging Geodesy and Seismology" (www.bridges.uni-kiel.de), funded by the German Research Foundation DFG through an Emmy Noether grant.

  15. NOx Emissions from Large Point Sources: Variability in Ozone Production, Resulting Health Damages and Economic Costs

    NASA Astrophysics Data System (ADS)

    Mauzerall, D. L.; Sultan, B.; Kim, N.; Bradford, D.

    2004-12-01

    We present a proof-of-concept analysis of the measurement of the health damage of ozone (O3) produced from nitrogen oxides (NOx = NO + NO2) emitted by individual large point sources in the eastern United States. We use a regional atmospheric model of the eastern United States, the Comprehensive Air Quality Model with Extensions (CAMx), to quantify the variable impact that a fixed quantity of NOx emitted from individual sources can have on the downwind concentration of surface O3, depending on temperature and local biogenic hydrocarbon emissions. We also examine the dependence of resulting ozone-related health damages on the size of the exposed population. The investigation is relevant to the increasingly widely used "cap and trade" approach to NOx regulation, which presumes that shifts of emissions over time and space, holding the total fixed over the course of the summer O3 season, will have minimal effect on the environmental outcome. By contrast, we show that a shift of a unit of NOx emissions from one place or time to another could result in large changes in the health effects due to ozone formation and exposure. We indicate how the type of modeling carried out here might be used to attach externality-correcting prices to emissions. Charging emitters fees that are commensurate with the damage caused by their NOx emissions would create an incentive for emitters to reduce emissions at times and in locations where they cause the largest damage.

  16. Ground Motion Simulation for a Large Active Fault System using Empirical Green's Function Method and the Strong Motion Prediction Recipe - a Case Study of the Noubi Fault Zone -

    NASA Astrophysics Data System (ADS)

    Kuriyama, M.; Kumamoto, T.; Fujita, M.

    2005-12-01

The 1995 Hyogo-ken Nambu Earthquake near Kobe, Japan, spurred research on strong motion prediction. To mitigate damage caused by large earthquakes, a highly precise method of predicting future strong motion waveforms is required. In this study, we applied the empirical Green's function method to forward modeling in order to simulate strong ground motion in the Noubi Fault zone and examine issues related to strong motion prediction for large faults. Source models for the scenario earthquakes were constructed using the recipe of strong motion prediction (Irikura and Miyake, 2001; Irikura et al., 2003). To calculate the asperity area ratio of a large fault zone, the results of a scaling model, a scaling model with 22% asperity by area, and a cascade model were compared, and several rupture starting points and segmentation parameters were examined for certain cases. A small earthquake (Mw 4.6) that occurred in northern Fukui Prefecture in 2004 was used as the empirical Green's function, and the source spectrum of this small event was found to agree with the omega-square scaling law. The Nukumi, Neodani, and Umehara segments of the 1891 Noubi Earthquake were targeted in the present study. The positions of the asperity areas and rupture starting points were based on the horizontal displacement distributions reported by Matsuda (1974) and the fault branching pattern and rupture direction model proposed by Nakata and Goto (1998). Asymmetry in the damage maps for the Noubi Earthquake was then examined. We compared the maximum horizontal velocities for cases with different rupture starting points. In one case, rupture started at the center of the Nukumi Fault, while in another, it started on the southeastern edge of the Umehara Fault; the scaling model showed an approximately 2.1-fold difference between these two cases at observation point FKI005 of K-NET.
This difference is considered to reflect the directivity effect associated with the direction of rupture propagation. Moreover, it was found that the horizontal velocities obtained by assuming the cascade model were underestimated by more than one standard deviation relative to the empirical relation of Si and Midorikawa (1999). The scaling and cascade models showed an approximately 6.4-fold difference for the case in which the rupture started along the southeastern edge of the Umehara Fault, at observation point GIF020. This difference is significantly larger than the effect of different rupture starting points, and shows that it is important to base scenario earthquake assumptions on active fault datasets before establishing the source characterization model. The distribution map of seismic intensity for the 1891 Noubi Earthquake also suggests that the synthetic waveforms in the southeastern Noubi Fault zone may be underestimated. Our results indicate that outer fault parameters (e.g., seismic moment) related to the construction of scenario earthquakes influence strong motion prediction more than inner fault parameters such as the rupture starting point. Based on these methods, we will predict strong motion for the approximately 140 to 150 km long Itoigawa-Shizuoka Tectonic Line.

  17. Fermi-LAT Observations of High-Energy Gamma-Ray Emission Toward the Galactic Center

    DOE PAGES

    Ajello, M.

    2016-02-26

The Fermi Large Area Telescope (LAT) has provided the most detailed view to date of the emission towards the Galactic centre (GC) in high-energy γ-rays. This paper describes the analysis of data taken during the first 62 months of the mission in the energy range 1-100 GeV from a 15° × 15° region about the direction of the GC, and implications for the interstellar emissions produced by cosmic ray (CR) particles interacting with the gas and radiation fields in the inner Galaxy and for the point sources detected. Specialised interstellar emission models (IEMs) are constructed that enable separation of the γ-ray emission from the inner ~1 kpc about the GC from the fore- and background emission from the Galaxy. Based on these models, the interstellar emission from CR electrons interacting with the interstellar radiation field via the inverse Compton (IC) process and CR nuclei inelastically scattering off the gas producing γ-rays via π⁰ decays from the inner ~1 kpc is determined. The IC contribution is found to be dominant in the region and strongly enhanced compared to previous studies. A catalog of point sources for the 15° × 15° region is self-consistently constructed using these IEMs: the First Fermi-LAT Inner Galaxy point source Catalog (1FIG). The spatial locations, fluxes, and spectral properties of the 1FIG sources are presented, and compared with γ-ray point sources over the same region taken from existing catalogs, including the Third Fermi-LAT Source Catalog (3FGL). In general, the spatial density of 1FIG sources differs from those in the 3FGL, which is attributed to the different treatments of the interstellar emission and energy ranges used by the respective analyses. Three 1FIG sources are found to spatially overlap with supernova remnants (SNRs) listed in Green's SNR catalog; these SNRs have not previously been associated with high-energy γ-ray sources. Most 3FGL sources with known multi-wavelength counterparts are also found.
However, the majority of 1FIG point sources are unassociated. After subtracting the interstellar emission and point-source contributions from the data, a residual is found that is a sub-dominant fraction of the total flux. However, it is brighter than the γ-ray emission associated with interstellar gas in the inner ~1 kpc derived for the IEMs used in this paper, and comparable to the integrated brightness of the point sources in the region for energies ≳ 3 GeV. If spatial templates that peak toward the GC are used to model the positive residual and included in the total model for the 15° × 15° region, the agreement with the data improves, but they do not account for all the residual structure. The spectrum of the positive residual modelled with these templates has a strong dependence on the choice of IEM.

  18. An Enhanced Method for Scheduling Observations of Large Sky Error Regions for Finding Optical Counterparts to Transients

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rana, Javed; Singhal, Akshat; Gadre, Bhooshan

    2017-04-01

The discovery and subsequent study of optical counterparts to transient sources is crucial for their complete astrophysical understanding. Various gamma-ray burst (GRB) detectors, and more notably the ground-based gravitational wave detectors, typically have large uncertainties in the sky positions of detected sources. Searching these large sky regions spanning hundreds of square degrees is a formidable challenge for most ground-based optical telescopes, which can usually image less than tens of square degrees of the sky in a single night. We present algorithms for better scheduling of such follow-up observations in order to maximize the probability of imaging the optical counterpart, based on the all-sky probability distribution of the source position. We incorporate realistic observing constraints such as the diurnal cycle, telescope pointing limitations, available observing time, and the rising/setting of the target at the observatory’s location. We use simulations to demonstrate that our proposed algorithms outperform the default greedy observing schedule used by many observatories. Our algorithms are applicable for follow-up of other transient sources with large positional uncertainties, such as Fermi-detected GRBs, and can easily be adapted for scheduling radio or space-based X-ray follow-up.
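The baseline "greedy" schedule that the proposed algorithms are compared against can be sketched in a few lines: in each observing slot, image the visible sky tile carrying the highest remaining localization probability. The tile names, probabilities, and visibility windows below are invented for illustration; the paper's algorithms add constraints (slew, setting times, total time budget) that this sketch omits.

```python
def greedy_schedule(tiles, visibility, n_slots):
    """tiles: {name: containment probability}; visibility: {name: set of usable slots}."""
    remaining = dict(tiles)
    plan = []
    for slot in range(n_slots):
        visible = [t for t in remaining if slot in visibility[t]]
        if not visible:
            plan.append(None)          # nothing observable this slot
            continue
        best = max(visible, key=remaining.get)
        plan.append(best)
        del remaining[best]            # each tile imaged at most once

    return plan

# Toy error region: four tiles, three observing slots.
tiles = {"A": 0.30, "B": 0.25, "C": 0.20, "D": 0.10}
visibility = {"A": {1, 2}, "B": {0, 1}, "C": {0}, "D": {2}}
print(greedy_schedule(tiles, visibility, 3))
```

Note the failure mode the paper exploits: greedy takes B in slot 0 and A in slot 1, so tile C (only visible in slot 0) is never imaged, even though a schedule C, B-or-A, A-or-D captures more total probability.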

  19. Lessons Learned from OMI Observations of Point Source SO2 Pollution

    NASA Technical Reports Server (NTRS)

    Krotkov, N.; Fioletov, V.; McLinden, Chris

    2011-01-01

The Ozone Monitoring Instrument (OMI) on NASA's Aura satellite makes global daily measurements of the total column of sulfur dioxide (SO2), a short-lived trace gas produced by fossil fuel combustion, smelting, and volcanoes. Although anthropogenic SO2 signals may not be detectable in a single OMI pixel, it is possible to see the source and determine its exact location by averaging a large number of individual measurements. We describe new techniques for spatial and temporal averaging that have been applied to the OMI SO2 data to determine the spatial distributions or "fingerprints" of SO2 burdens from the top 100 pollution sources in North America. The technique requires averaging of several years of OMI daily measurements to observe SO2 pollution from typical anthropogenic sources. We found that the largest point sources of SO2 in the U.S. produce elevated SO2 values over a relatively small area, within a 20-30 km radius. Therefore, one needs higher than OMI spatial resolution to monitor typical SO2 sources. The TROPOMI instrument on the ESA Sentinel-5 Precursor mission will have improved ground resolution (approximately 7 km at nadir) but is limited to one measurement per day. A pointable geostationary UVB spectrometer with variable spatial resolution and flexible sampling frequency could potentially achieve the goal of daily monitoring of SO2 point sources and resolve downwind plumes. This concept of taking measurements at high frequency to enhance weak signals needs to be demonstrated with a GEOCAPE precursor mission before 2020, which will help formulate GEOCAPE measurement requirements.
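The statistical leverage behind the averaging technique is simply that the noise on a mean of N independent measurements shrinks as 1/√N, so a per-pixel signal far below the single-day noise floor emerges after stacking years of overpasses. A toy numerical illustration, with invented signal and noise values that are not OMI specifications:

```python
import numpy as np

rng = np.random.default_rng(2)
true_column = 0.3        # weak point-source SO2 enhancement (arbitrary units)
noise_sigma = 1.5        # single-pixel, single-day retrieval noise
days = rng.normal(true_column, noise_sigma, size=2000)  # ~several years of overpasses

daily_snr = true_column / noise_sigma                       # << 1: invisible on any one day
stacked_snr = true_column / (noise_sigma / np.sqrt(days.size))  # clear detection after stacking
```

The same reasoning motivates the geostationary concept in the abstract: taking many measurements per day multiplies N without waiting years.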

  20. A Search to Uncover the Infrared Excess (IRXS) Sources in the Spitzer Enhanced Imaging Products (SEIP) Catalog

    NASA Astrophysics Data System (ADS)

    Rowe, Jamie Lynn; Duranko, Gary; Gorjian, Varoujan; Lineberger, Howard; Orr, Laura; Adewole, Ayomikun; Bradford, Eric; Douglas, Alea; Kohl, Steven; Larson, Lillia; Lascola, Gus; Orr, Quinton; Scott, Mekai; Walston, Joseph; Wang, Xian

    2018-01-01

The Spitzer Enhanced Imaging Products catalog (SEIP) is a collection of nearly 42 million point sources obtained by the Spitzer Space Telescope during its 5+ year cryogenic mission. Strasburger et al. (2014) isolated sources with a signal-to-noise ratio (SNR) >10 in five infrared (IR) wavelength channels (3.6, 4.5, 5.8, 8 and 24 microns) to begin a search for sources with infrared excess (IRXS). They found 76 objects that were never catalogued before. Based on this success, we intend to dig deeper into the catalog in an attempt to find more IRXS sources, specifically by lowering the SNR threshold on the 3.6, 4.5, and 24 micron channels. The ultimate goal is to use this large sample to seek rare astrophysical sources that are transitional in nature and evolutionarily very important. Our filtering of the database at SNR > 5 yielded 461,000 sources. This was further evaluated and reduced to only the most interesting based on source location on a [3.6]-[4.5] vs [4.5]-[24] color-color diagram. We chose a sample of 985 extreme IRXS sources for further inspection. All of these candidate sources were visually inspected and cross-referenced against known sources in existing databases, resulting in a list of highly reliable IRXS sources. These sources will prove important in the study of galaxy and stellar evolution, and will serve as a starting point for further investigation.
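The filtering described above, an SNR cut followed by a color-color cut, can be sketched as below. The field names, magnitudes, and the cut value are assumptions for illustration, not the SEIP schema or the authors' actual selection criteria.

```python
def select_irxs(sources, snr_min=5.0, color_cut=2.0):
    """Keep sources above snr_min in all three channels with a large [4.5]-[24] color."""
    keep = []
    for s in sources:
        if min(s["snr_3.6"], s["snr_4.5"], s["snr_24"]) < snr_min:
            continue                       # fails the SNR > 5 cut
        c1 = s["m_3.6"] - s["m_4.5"]       # [3.6]-[4.5] color
        c2 = s["m_4.5"] - s["m_24"]        # [4.5]-[24] color
        if c2 > color_cut:                 # large mid-IR excess candidate
            keep.append((s["id"], c1, c2))
    return keep

# Two toy catalog rows: one bright IR-excess source, one that fails the SNR cut.
catalog = [
    {"id": 1, "snr_3.6": 12, "snr_4.5": 9, "snr_24": 6,
     "m_3.6": 14.1, "m_4.5": 13.9, "m_24": 10.2},
    {"id": 2, "snr_3.6": 30, "snr_4.5": 25, "snr_24": 4,
     "m_3.6": 12.0, "m_4.5": 11.9, "m_24": 11.0},
]
print(select_irxs(catalog))
```

Sources surviving both cuts would then go to the visual-inspection and cross-referencing stages the abstract describes.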

  1. VizieR Online Data Catalog: GUViCS. Ultraviolet Source Catalogs (Voyer+, 2014)

    NASA Astrophysics Data System (ADS)

    Voyer, E. N.; Boselli, A.; Boissier, S.; Heinis, S.; Cortese, L.; Ferrarese, L.; Cote, P.; Cuillandre, J.-C.; Gwyn, S. D. J.; Peng, E. W.; Zhang, H.; Liu, C.

    2014-07-01

    These catalogs are based on GALEX NUV and FUV source detections in and behind the Virgo Cluster. The detections are split into catalogs of extended sources and point-like sources. The UV Virgo Cluster Extended Source catalog (UV_VES.fit) provides the deepest and most extensive UV photometric data of extended galaxies in Virgo to date. If certain data is not available for a given source then a null value is entered (e.g. -999, -99). UV point-like sources are matched with SDSS, NGVS, and NED and the relevant photometry and further data from these databases/catalogs are provided in this compilation of catalogs. The primary GUViCS UV Virgo Cluster Point-Like Source catalog is UV_VPS.fit. This catalog provides the most useful GALEX pipeline NUV and FUV photometric parameters, and categorizes sources as stars, Virgo members, and background sources, when possible. It also provides identifiers for optical matches in the SDSS and NED, and indicates if a match exists in the NGVS, only if GUViCS-optical matches are one-to-one. NED spectroscopic redshifts are also listed for GUViCS-NED one-to-one matches. If certain data is not available for a given source a null value is entered. Additionally, the catalog is useful for quick access to optical data on one-to-one GUViCS-SDSS matches.The only parameter available in the catalog for UV sources that have multiple SDSS matches is the total number of multiple matches, i.e. SDSSNUMMTCHS. Multiple GUViCS sources matched to the same SDSS source are also flagged given a total number of matches, SDSSNUMMTCHS, of one. All other fields for multiple matches are set to a null value of -99. In order to obtain full optical SDSS data for multiply matched UV sources in both scenarios, the user can cross-correlate the GUViCS ID of the sources of interest with the full GUViCS-SDSS matched catalog in GUV_SDSS.fit. 
The GUViCS-SDSS matched catalog, GUV_SDSS.fit, provides the most relevant SDSS data on all GUViCS-SDSS matches, including one-to-one matches and multiply matched sources. The catalog gives full SDSS identification information, complete SDSS photometric measurements in multiple aperture types, and complete redshift information (photometric and spectroscopic). It is ideal for large statistical studies of galaxy populations at multiple wavelengths in the background of the Virgo Cluster. The catalog can also be used as a starting point to study and search for previously unknown UV-bright point-like objects within the Virgo Cluster. If certain data is not available for a given source that field is given a null value. (6 data files).

  2. An improved instrument for the in vivo detection of lead in bone.

    PubMed Central

    Gordon, C L; Chettle, D R; Webber, C E

    1993-01-01

An improved instrument for the fluorescence excitation measurement of concentrations of lead in bone has been developed. This is based on a large area high purity germanium detector and a point source of ¹⁰⁹Cd. The source is positioned in a tungsten shield at the centre of the detector face such that 88 keV photons cannot enter the detector directly. In vivo measurements are calibrated with plaster of Paris phantoms. Occupationally non-exposed men show a minimum detectable concentration of about 6 micrograms/g bone mineral. Measurements of tibia lead concentrations in 30 non-occupationally exposed men between the ages of 23 and 73 showed an annual increment of 0.46 microgram/g bone mineral/year. The mean deviation from the regression of tibia lead upon age was 3.5 micrograms/g bone mineral. Tibia lead concentration in one subject with a history of exposure to lead was 69.6 (SD 3.5) micrograms/g bone mineral. The improved precision of the point source large detector system means that greater confidence can be placed on the results of in vivo measurements of lead concentration. This will allow studies of the natural history of non-occupational lead accumulation in normal subjects and should permit investigations of the efficacy of therapeutic interventions in subjects poisoned with lead. PMID:8343425

  3. Study on Huizhou architecture of point cloud registration based on optimized ICP algorithm

    NASA Astrophysics Data System (ADS)

    Zhang, Runmei; Wu, Yulu; Zhang, Guangbin; Zhou, Wei; Tao, Yuqian

    2018-03-01

Current point cloud registration software has high hardware requirements, involves a heavy workload and multiple interactive definitions, and the source code of the packages with better processing results is not open. This paper therefore proposes a two-step registration method based on a normal-vector distribution feature and the iterative closest point (ICP) algorithm. The method combines the fast point feature histogram (FPFH) algorithm with a model of the adjacency region of the point cloud and the distribution of its normal vectors: a local coordinate system is set up for each key point and a transformation matrix is obtained to finish rough registration, after which the rough registration results of the two stations are accurately registered using the ICP algorithm. Experimental results show that, compared with the traditional ICP algorithm, the method used in this paper has clear time and precision advantages for large point clouds.
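The fine stage of the scheme above is standard point-to-point ICP refinement. As an illustrative sketch only (not the authors' implementation, which adds the FPFH-based coarse stage first), a minimal ICP iteration in NumPy alternates nearest-neighbour correspondence search with a closed-form Kabsch/SVD rigid-transform fit:

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t with dst ~ src @ R.T + t (Kabsch)."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:      # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cd - R @ cs

def icp(src, dst, iters=50, tol=1e-10):
    """Point-to-point ICP refinement; returns aligned copy of src and final RMS error."""
    cur = src.copy()
    prev_err, err = np.inf, np.inf
    for _ in range(iters):
        # nearest-neighbour correspondences (brute force; a k-d tree scales better)
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        err = np.sqrt(d2.min(axis=1).mean())
        if prev_err - err < tol:
            break
        prev_err = err
        nn = dst[d2.argmin(axis=1)]
        R, t = best_rigid_transform(cur, nn)
        cur = cur @ R.T + t
    return cur, err
```

Given a reasonable initial alignment (which the coarse stage is meant to supply), each Kabsch step is optimal for the current correspondences, so the RMS error decreases monotonically.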

  4. The Pearson-Readhead Survey of Compact Extragalactic Radio Sources from Space. I. The Images

    NASA Astrophysics Data System (ADS)

    Lister, M. L.; Tingay, S. J.; Murphy, D. W.; Piner, B. G.; Jones, D. L.; Preston, R. A.

    2001-06-01

    We present images from a space-VLBI survey using the facilities of the VLBI Space Observatory Programme (VSOP), drawing our sample from the well-studied Pearson-Readhead survey of extragalactic radio sources. Our survey has taken advantage of long space-VLBI baselines and large arrays of ground antennas, such as the Very Long Baseline Array and European VLBI Network, to obtain high-resolution images of 27 active galactic nuclei and to measure the core brightness temperatures of these sources more accurately than is possible from the ground. A detailed analysis of the source properties is given in accompanying papers. We have also performed an extensive series of simulations to investigate the errors in VSOP images caused by the relatively large holes in the (u,v)-plane when sources are observed near the orbit normal direction. We find that while the nominal dynamic range (defined as the ratio of map peak to off-source error) often exceeds 1000:1, the true dynamic range (map peak to on-source error) is only about 30:1 for relatively complex core-jet sources. For sources dominated by a strong point source, this value rises to approximately 100:1. We find the true dynamic range to be a relatively weak function of the difference in position angle (P.A.) between the jet P.A. and u-v coverage major axis P.A. For regions with low signal-to-noise ratios, typically located down the jet away from the core, large errors can occur, causing spurious features in VSOP images that should be interpreted with caution.

  5. Theory of two-point correlations of jet noise

    NASA Technical Reports Server (NTRS)

    Ribner, H. S.

    1976-01-01

    A large body of careful experimental measurements of two-point correlations of far field jet noise was carried out. The model of jet-noise generation is an approximate version of an earlier work of Ribner, based on the foundations of Lighthill. The model incorporates isotropic turbulence superimposed on a specified mean shear flow, with assumed space-time velocity correlations, but with source convection neglected. The particular vehicle is the Proudman format, and the previous work (mean-square pressure) is extended to display the two-point space-time correlations of pressure. The shape of polar plots of correlation is found to derive from two main factors: (1) the noncompactness of the source region, which allows differences in travel times to the two microphones - the dominant effect; (2) the directivities of the constituent quadrupoles - a weak effect. The noncompactness effect causes the directional lobes in a polar plot to have pointed tips (cusps) and to be especially narrow in the plane of the jet axis. In these respects, and in the quantitative shapes of the normalized correlation curves, results of the theory show generally good agreement with Maestrello's experimental measurements.

  6. Progress in the Development of a High Power Helicon Plasma Source for the Materials Plasma Exposure Experiment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Goulding, Richard Howell; Caughman, John B.; Rapp, Juergen

Proto-MPEX is a linear plasma device being used to study a novel RF source concept for the planned Material Plasma Exposure eXperiment (MPEX), which will address plasma-materials interaction (PMI) for nuclear fusion reactors. Plasmas are produced using a large diameter helicon source operating at a frequency of 13.56 MHz at power levels up to 120 kW. In recent experiments the helicon source has produced deuterium plasmas with densities up to ~6 × 1019 m–3 measured at a location 2 m downstream from the antenna and 0.4 m from the target. Previous plasma production experiments on Proto-MPEX have generated lower density plasmas with hollow electron temperature profiles and target power deposition peaked far off axis. The latest experiments have produced flat Te profiles with a large portion of the power deposited on the target near the axis. This and other evidence points to the excitation of a helicon mode in this case.

  7. From the volcano effect to banding: a minimal model for bacterial behavioral transitions near chemoattractant sources

    NASA Astrophysics Data System (ADS)

    Javens, Gregory; Jashnsaz, Hossein; Pressé, Steve

    2018-07-01

    Sharp chemoattractant (CA) gradient variations near food sources may give rise to dramatic behavioral changes of bacteria neighboring these sources. For instance, marine bacteria exhibiting run-reverse motility are known to form distinct bands around patches (large sources) of chemoattractant such as nutrient-soaked beads while run-and-tumble bacteria have been predicted to exhibit a ‘volcano effect’ (spherical shell-shaped density) around a small (point) source of food. Here we provide the first minimal model of banding for run-reverse bacteria and show that, while banding and the volcano effect may appear superficially similar, they are different physical effects manifested under different source emission rate (and thus effective source size). More specifically, while the volcano effect is known to arise around point sources from a bacterium’s temporal differentiation of signal (and corresponding finite integration time), this effect alone is insufficient to account for banding around larger patches as bacteria would otherwise cluster around the patch without forming bands at some fixed radial distance. In particular, our model demonstrates that banding emerges from the interplay of run-reverse motility and saturation of the bacterium’s chemoreceptors to CA molecules and our model furthermore predicts that run-reverse bacteria susceptible to banding behavior should also exhibit a volcano effect around sources with smaller emission rates.

  8. Innovative design of parabolic reflector light guiding structure

    NASA Astrophysics Data System (ADS)

    Whang, Allen J.; Tso, Chun-Hsien; Chen, Yi-Yung

    2008-02-01

Due to the idea of everlasting green architecture, it is of increasing importance to guide natural light indoors. The advantages are multifold: a better color rendering index, excellent energy savings from an environmental viewpoint, improved human health, etc. Our aim is to design an innovative structure that converts outdoor sunlight impinging on large surfaces into near-linear beam sources, and then converts these beams into near-point sources that enter indoor spaces and can be used as interior lighting sources. We do not involve any opto-electrical transformation; the light is guided into the building to perform the illumination as well as the imaging function. Non-imaging optics, well known from its application to solar concentrators, provides structures that fulfill our needs and that can also be used as energy collectors in solar energy devices. Here, we have designed a pair of large and small parabolic reflectors that can be used to collect daylight and reduce the beam area from large to small. We then designed a light-guide system that uses these parabolic reflectors to guide the collected light, improving the performance of converting a large surface source into a near-linear source over a larger collection area.

  9. 75 FR 3183 - Approval and Promulgation of Air Quality Implementation Plan: Kentucky; Approval Section 110(a)(1...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-01-20

...) Federal motor vehicle control program; (2) fleet turnover of automobiles; (3) low Reid vapor pressure of... vehicles standard; (6) large nonroad diesel engines rule; (7) nonroad spark ignition engines and recreational engines standard; (8) point source emission reductions; (9) Air Products and Chemicals -21-157...

  10. Wind Power: A Turning Point. Worldwatch Paper 45.

    ERIC Educational Resources Information Center

    Flavin, Christopher

    Recent studies have shown wind power to be an eminently practical and potentially substantial source of electricity and direct mechanical power. Wind machines range from simple water-pumping devices made of wood and cloth to large electricity producing turbines with fiberglass blades nearly 300 feet long. Wind is in effect a form of solar…

  11. Determining volume sensitive waters in Beaufort County, SC tidal creeks

    Treesearch

    Andrew Tweel; Denise Sanger; Anne Blair; John Leffler

    2016-01-01

    Non-point source pollution from stormwater runoff associated with large-scale land use changes threatens the integrity of ecologically and economically valuable estuarine ecosystems. Beaufort County, SC implemented volume-based stormwater regulations on the rationale that if volume discharge is controlled, contaminant loading will also be controlled.

  12. The suite of small-angle neutron scattering instruments at Oak Ridge National Laboratory

    DOE PAGES

    Heller, William T.; Cuneo, Matthew J.; Debeer-Schmitt, Lisa M.; ...

    2018-02-21

Oak Ridge National Laboratory is home to the High Flux Isotope Reactor (HFIR), a high-flux research reactor, and the Spallation Neutron Source (SNS), the world's most intense source of pulsed neutron beams. The unique co-localization of these two sources provided an opportunity to develop a suite of complementary small-angle neutron scattering instruments for studies of large-scale structures: the GP-SANS and Bio-SANS instruments at the HFIR and the EQ-SANS and TOF-USANS instruments at the SNS. This article provides an overview of the capabilities of the suite of instruments, with specific emphasis on how they complement each other. A description of the plans for future developments, including greater integration of the suite into a single point of entry for neutron scattering studies of large-scale structures, is also provided.

  13. The suite of small-angle neutron scattering instruments at Oak Ridge National Laboratory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Heller, William T.; Cuneo, Matthew J.; Debeer-Schmitt, Lisa M.

Oak Ridge National Laboratory is home to the High Flux Isotope Reactor (HFIR), a high-flux research reactor, and the Spallation Neutron Source (SNS), the world's most intense source of pulsed neutron beams. The unique co-localization of these two sources provided an opportunity to develop a suite of complementary small-angle neutron scattering instruments for studies of large-scale structures: the GP-SANS and Bio-SANS instruments at the HFIR and the EQ-SANS and TOF-USANS instruments at the SNS. This article provides an overview of the capabilities of the suite of instruments, with specific emphasis on how they complement each other. A description of the plans for future developments, including greater integration of the suite into a single point of entry for neutron scattering studies of large-scale structures, is also provided.

  14. A simplified approach to analyze the effectiveness of NO2 and SO2 emission reduction of coal-fired power plant from OMI retrievals

    NASA Astrophysics Data System (ADS)

    Bai, Yang; Wu, Lixin; Zhou, Yuan; Li, Ding

    2017-04-01

Nitrogen oxides (NOX) and sulfur dioxide (SO2) emitted from coal combustion, which are oxidized quickly in the atmosphere resulting in secondary aerosol formation and acid deposition, are the main sources of China's regional fog-haze pollution. Extensive literature has quantitatively estimated the lifetimes and emissions of NO2 and SO2 for large point sources such as coal-fired power plants and cities using satellite measurements. However, few of these methods are suitable for sources located in a heterogeneously polluted background. In this work, we present a simplified emission effective radius extraction model for point sources to study the NO2 and SO2 reduction trend in China with complex polluted sources. First, to find the time range during which actual emissions could be derived from satellite observations, the spatial distribution characteristics of mean daily, monthly, seasonal and annual concentrations of OMI NO2 and SO2 around a single power plant were analyzed and compared. Then, a 100 km × 100 km geographical grid with a 1 km step was established around the source, and the mean concentration of all satellite pixels covering each grid point was calculated by the area-weighted pixel-averaging approach. The emission effective radius is defined by the concentration gradient values near the power plant. Finally, the developed model is employed to investigate the characteristics and evolution of NO2 and SO2 emissions and to verify the effectiveness of flue gas desulfurization (FGD) and selective catalytic reduction (SCR) devices applied in coal-fired power plants over the 10-year period from 2006 to 2015. It can be observed that the spatial distribution pattern of NO2 and SO2 concentrations in the vicinity of a large coal-burning source is affected not only by the emissions of the source itself, but also by pollutant transport and diffusion driven by meteorological factors in different seasons.
Our proposed model can be used to identify the effective operation time of the FGD and SCR equipment in coal-fired power plants.
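The grid-averaging and effective-radius steps described above can be sketched in a few lines. This is a hypothetical illustration only: the synthetic Gaussian plume and the gradient cutoff `grad_tol` are our own assumptions, not the paper's model parameters.

```python
import numpy as np

def radial_profile(conc, center, dr=1.0):
    """Mean concentration in annular bins of width `dr` around `center` (row, col)."""
    y, x = np.indices(conc.shape)
    r = np.hypot(y - center[0], x - center[1])
    nbins = int(r.max() / dr)
    which = (r / dr).astype(int)
    prof = np.array([conc[which == k].mean() for k in range(nbins)])
    radii = (np.arange(nbins) + 0.5) * dr
    return radii, prof

def effective_radius(radii, prof, grad_tol):
    """First radius at which |dC/dr| drops below `grad_tol` (an assumed cutoff;
    the paper defines its radius from concentration gradients near the plant)."""
    grad = np.gradient(prof, radii)
    below = np.abs(grad) < grad_tol
    return radii[np.argmax(below)] if below.any() else radii[-1]
```

On a plume that decays smoothly away from the source, the profile falls monotonically and the returned radius marks where the concentration gradient flattens out into the background.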

  15. Development of the Model of Galactic Interstellar Emission for Standard Point-Source Analysis of Fermi Large Area Telescope Data

    DOE PAGES

    Acero, F.

    2016-04-22

Most of the celestial γ rays detected by the Large Area Telescope (LAT) aboard the Fermi Gamma-ray Space Telescope originate from the interstellar medium when energetic cosmic rays interact with interstellar nucleons and photons. Conventional point and extended source studies rely on the modeling of this diffuse emission for accurate characterization. We describe here the development of the Galactic Interstellar Emission Model (GIEM) that is the standard adopted by the LAT Collaboration and is publicly available. The model is based on a linear combination of maps for interstellar gas column density in Galactocentric annuli and for the inverse Compton emission produced in the Galaxy. We also include in the GIEM large-scale structures like Loop I and the Fermi bubbles. The measured gas emissivity spectra confirm that the cosmic-ray proton density decreases with Galactocentric distance beyond 5 kpc from the Galactic Center. The measurements also suggest a softening of the proton spectrum with Galactocentric distance. We observe that the Fermi bubbles have boundaries with a shape similar to a catenary at latitudes below 20° and we observe an enhanced emission toward their base extending in the North and South Galactic direction and located within ~4° of the Galactic Center.

  16. A Point Kinetics Model for Estimating Neutron Multiplication of Bare Uranium Metal in Tagged Neutron Measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tweardy, Matthew C.; McConchie, Seth; Hayward, Jason P.

An extension of the point kinetics model is developed in this paper to describe the neutron multiplicity response of a bare uranium object under interrogation by an associated particle imaging deuterium-tritium (D-T) measurement system. This extended model is used to estimate the total neutron multiplication of the uranium. Both MCNPX-PoliMi simulations and data from active interrogation measurements of highly enriched and depleted uranium geometries are used to evaluate the potential of this method and to identify the sources of systematic error. The detection efficiency correction for measured coincidence response is identified as a large source of systematic error. If the detection process is not considered, results suggest that the method can estimate total multiplication to within 13% of the simulated value. Values for multiplicity constants in the point kinetics equations are sensitive to enrichment due to (n, xn) interactions by D-T neutrons and can introduce another significant source of systematic bias. This can theoretically be corrected if isotopic composition is known a priori. Finally, the spatial dependence of multiplication is also suspected of introducing further systematic bias for high multiplication uranium objects.

  17. A Point Kinetics Model for Estimating Neutron Multiplication of Bare Uranium Metal in Tagged Neutron Measurements

    DOE PAGES

    Tweardy, Matthew C.; McConchie, Seth; Hayward, Jason P.

    2017-06-13

An extension of the point kinetics model is developed in this paper to describe the neutron multiplicity response of a bare uranium object under interrogation by an associated particle imaging deuterium-tritium (D-T) measurement system. This extended model is used to estimate the total neutron multiplication of the uranium. Both MCNPX-PoliMi simulations and data from active interrogation measurements of highly enriched and depleted uranium geometries are used to evaluate the potential of this method and to identify the sources of systematic error. The detection efficiency correction for measured coincidence response is identified as a large source of systematic error. If the detection process is not considered, results suggest that the method can estimate total multiplication to within 13% of the simulated value. Values for multiplicity constants in the point kinetics equations are sensitive to enrichment due to (n, xn) interactions by D-T neutrons and can introduce another significant source of systematic bias. This can theoretically be corrected if isotopic composition is known a priori. Finally, the spatial dependence of multiplication is also suspected of introducing further systematic bias for high multiplication uranium objects.

  18. A fast and fully automatic registration approach based on point features for multi-source remote-sensing images

    NASA Astrophysics Data System (ADS)

    Yu, Le; Zhang, Dengrong; Holden, Eun-Jung

    2008-07-01

Automatic registration of multi-source remote-sensing images is a difficult task as it must deal with the varying illuminations and resolutions of the images, different perspectives and the local deformations within the images. This paper proposes a fully automatic and fast non-rigid image registration technique that addresses those issues. The proposed technique performs a pre-registration process that coarsely aligns the input image to the reference image by automatically detecting their matching points using the scale invariant feature transform (SIFT) method and an affine transformation model. Once the coarse registration is completed, it performs a fine-scale registration process based on a piecewise linear transformation technique using feature points that are detected by the Harris corner detector. The registration process first finds tie point pairs between the input and the reference image by detecting Harris corners and applying a cross-matching strategy based on a wavelet pyramid for fast searching. Tie point pairs with large errors are pruned in an error-checking step. The input image is then rectified using triangulated irregular networks (TINs) to deal with irregular local deformations caused by the fluctuation of the terrain. For each triangular facet of the TIN, affine transformations are estimated and applied for rectification. Experiments with Quickbird, SPOT5, SPOT4, and TM remote-sensing images of the Hangzhou area in China demonstrate the efficiency and the accuracy of the proposed technique for multi-source remote-sensing image registration.
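The coarse pre-registration step above fits an affine model to SIFT matching points. A minimal least-squares version of that fit (a sketch under our own naming, not the paper's code) solves for the 2×2 linear part and the translation from three or more non-collinear tie-point pairs:

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares 2-D affine transform mapping src (N,2) onto dst (N,2).

    Solves dst ~ src @ A.T + b for the 2x2 matrix A and offset b,
    given at least three non-collinear tie-point pairs.
    """
    n = len(src)
    # design matrix with rows [x, y, 1]; x' and y' are solved independently
    G = np.hstack([src, np.ones((n, 1))])
    params, *_ = np.linalg.lstsq(G, dst, rcond=None)  # shape (3, 2)
    A = params[:2].T   # linear part (rotation/scale/shear)
    b = params[2]      # translation
    return A, b

def apply_affine(A, b, pts):
    """Apply the fitted affine transform to an (N,2) array of points."""
    return pts @ A.T + b
```

With more than three pairs the fit is over-determined, which is exactly why the paper prunes large-error tie points first: a single bad match biases the least-squares solution.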

  19. Comparing stochastic point-source and finite-source ground-motion simulations: SMSIM and EXSIM

    USGS Publications Warehouse

    Boore, D.M.

    2009-01-01

    Comparisons of ground motions from two widely used point-source and finite-source ground-motion simulation programs (SMSIM and EXSIM) show that the following simple modifications in EXSIM will produce agreement in the motions from a small earthquake at a large distance for the two programs: (1) base the scaling of high frequencies on the integral of the squared Fourier acceleration spectrum; (2) do not truncate the time series from each subfault; (3) use the inverse of the subfault corner frequency for the duration of motions from each subfault; and (4) use a filter function to boost spectral amplitudes at frequencies near and less than the subfault corner frequencies. In addition, for SMSIM an effective distance is defined that accounts for geometrical spreading and anelastic attenuation from various parts of a finite fault. With these modifications, the Fourier and response spectra from SMSIM and EXSIM are similar to one another, even close to a large earthquake (M 7), when the motions are averaged over a random distribution of hypocenters. The modifications to EXSIM remove most of the differences in the Fourier spectra from simulations using pulsing and static subfaults; they also essentially eliminate any dependence of the EXSIM simulations on the number of subfaults. Simulations with the revised programs suggest that the results of Atkinson and Boore (2006), computed using an average stress parameter of 140 bars and the original version of EXSIM, are consistent with the revised EXSIM with a stress parameter near 250 bars.

  20. Y-MP floating point and Cholesky factorization

    NASA Technical Reports Server (NTRS)

    Carter, Russell

    1991-01-01

The floating point arithmetics implemented in the Cray 2 and Cray Y-MP computer systems are nearly identical, but large scale computations performed on the two systems have exhibited significant differences in accuracy. The difference in accuracy is analyzed for the Cholesky factorization algorithm, and the source of the difference is found to be the subtract magnitude operation of the Cray Y-MP. Results from numerical experiments for a range of problem sizes are presented, along with an efficient method for improving the accuracy of the factorization obtained on the Y-MP.
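The kind of accuracy comparison described here amounts to computing the factor L and measuring how far L·Lᵀ departs from A under a given arithmetic. The following small sketch of an unblocked Cholesky factorization and its relative residual is illustrative only; it runs in IEEE double precision, not Cray arithmetic, so it shows the measurement, not the Y-MP effect itself:

```python
import numpy as np

def cholesky_lower(A):
    """Textbook (unblocked) Cholesky factorization A = L @ L.T for SPD A."""
    n = A.shape[0]
    L = np.zeros_like(A, dtype=float)
    for j in range(n):
        # diagonal entry: subtract the squared row computed so far -- the
        # subtract-magnitude step whose chopped rounding the paper analyzes
        d = A[j, j] - L[j, :j] @ L[j, :j]
        L[j, j] = np.sqrt(d)
        for i in range(j + 1, n):
            L[i, j] = (A[i, j] - L[i, :j] @ L[j, :j]) / L[j, j]
    return L

def factorization_error(A):
    """Relative residual ||A - L L^T|| / ||A||, a simple accuracy measure."""
    L = cholesky_lower(A)
    return np.linalg.norm(A - L @ L.T) / np.linalg.norm(A)
```

Comparing this residual across machines (or across rounding modes) for growing problem sizes is the experiment the paper reports; in correctly rounded IEEE arithmetic the relative residual stays near machine epsilon.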

  1. A NEW RESULT ON THE ORIGIN OF THE EXTRAGALACTIC GAMMA-RAY BACKGROUND

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhou Ming; Wang Jiancheng, E-mail: mzhou@ynao.ac.cn

    2013-06-01

    In this paper, we repeatedly use the method of image stacking to study the origin of the extragalactic gamma-ray background (EGB) at GeV bands, and find that the Faint Images of the Radio Sky at Twenty centimeters (FIRST) sources undetected by the Large Area Telescope on the Fermi Gamma-ray Space Telescope can contribute about (56 {+-} 6)% of the EGB. Because FIRST is a flux-limited sample of radio sources with incompleteness at the faint limit, we consider that point sources, including blazars, non-blazar active galactic nuclei, and starburst galaxies, could produce a much larger fraction of the EGB.

  2. Radial Distribution of X-Ray Point Sources Near the Galactic Center

    NASA Astrophysics Data System (ADS)

    Hong, Jae Sub; van den Berg, Maureen; Grindlay, Jonathan E.; Laycock, Silas

    2009-11-01

We present the log N-log S and spatial distributions of X-ray point sources in seven Galactic bulge (GB) fields within 4° from the Galactic center (GC). We compare the properties of 1159 X-ray point sources discovered in our deep (100 ks) Chandra observations of three low extinction Window fields near the GC with the X-ray sources in the other GB fields centered around Sgr B2, Sgr C, the Arches Cluster, and Sgr A* using Chandra archival data. To reduce the systematic errors induced by the uncertain X-ray spectra of the sources coupled with field- and distance-dependent extinction, we classify the X-ray sources using quantile analysis and estimate their fluxes accordingly. The result indicates that the GB X-ray population is highly concentrated at the center, more heavily than the stellar distribution models. It extends out to more than 1.4° from the GC, and the projected density follows an empirical radial relation inversely proportional to the offset from the GC. We also compare the total X-ray and infrared surface brightness using the Chandra and Spitzer observations of the regions. The radial distribution of the total infrared surface brightness from the 3.6 μm band images appears to resemble the radial distribution of the X-ray point sources better than that predicted by the stellar distribution models. Assuming a simple power-law model for the X-ray spectra, the spectra appear intrinsically harder the closer they are to the GC, but adding an iron emission line at 6.7 keV to the model allows the spectra of the GB X-ray sources to be largely consistent across the region. This implies that the majority of these GB X-ray sources can be of the same or similar type. Their X-ray luminosity and spectral properties support the idea that the most likely candidate is magnetic cataclysmic variables (CVs), primarily intermediate polars (IPs).
Their observed number density is also consistent with the majority being IPs, provided the relative CV to star density in the GB is not smaller than the value in the local solar neighborhood.

  3. Realization of the Gallium Triple Point at NMIJ/AIST

    NASA Astrophysics Data System (ADS)

    Nakano, T.; Tamura, O.; Sakurai, H.

    2008-02-01

The triple point of gallium has been realized by a calorimetric method using capsule-type standard platinum resistance thermometers (CSPRTs) and a small glass cell containing about 97 mmol (6.8 g) of gallium with a nominal purity of 99.99999%. The melting curve shows a very flat and relatively linear dependence on 1/F in the region from 1/F = 1 to 1/F = 20 with a narrow width of the melting curve within 0.1 mK. Also, a large gallium triple-point cell was fabricated for the calibration of client-owned CSPRTs. The gallium triple-point cell consists of a PTFE crucible and a PTFE cap with a re-entrant well and a small vent. The PTFE cell contains 780 g of gallium from the same source as used for the small glass cell. The PTFE cell is completely covered by a stainless-steel jacket with a valve to enable evacuation of the cell. The melting curve of the large cell shows a flat plateau that remains within 0.03 mK over 10 days and that is reproducible within 0.05 mK over 8 months. The calibrated value of a CSPRT obtained using the large cell agrees with that obtained using the small glass cell within the uncertainties of the calibrations.

  4. Placebo effects in trials evaluating 12 selected minimally invasive interventions: a systematic review and meta-analysis

    PubMed Central

    Holtedahl, Robin; Brox, Jens Ivar; Tjomsland, Ole

    2015-01-01

    Objectives To analyse the impact of placebo effects on outcome in trials of selected minimally invasive procedures and to assess reported adverse events in both trial arms. Design A systematic review and meta-analysis. Data sources and study selection We searched MEDLINE and Cochrane library to identify systematic reviews of musculoskeletal, neurological and cardiac conditions published between January 2009 and January 2014 comparing selected minimally invasive with placebo (sham) procedures. We searched MEDLINE for additional randomised controlled trials published between January 2000 and January 2014. Data synthesis Effect sizes (ES) in the active and placebo arms in the trials’ primary and pooled secondary end points were calculated. Linear regression was used to analyse the association between end points in the active and sham groups. Reported adverse events in both trial arms were registered. Results We included 21 trials involving 2519 adult participants. For primary end points, there was a large clinical effect (ES≥0.8) after active treatment in 12 trials and after sham procedures in 11 trials. For secondary end points, 7 and 5 trials showed a large clinical effect. Three trials showed a moderate difference in ES between active treatment and sham on primary end points (ES ≥0.5) but no trials reported a large difference. No trials showed large or moderate differences in ES on pooled secondary end points. Regression analysis of end points in active treatment and sham arms estimated an R2 of 0.78 for primary and 0.84 for secondary end points. Adverse events after sham were in most cases minor and of short duration. Conclusions The generally small differences in ES between active treatment and sham suggest that non-specific mechanisms, including placebo, are major predictors of the observed effects. Adverse events related to sham procedures were mainly minor and short-lived. 
Ethical arguments frequently raised against sham-controlled trials were generally not substantiated. PMID:25636794

  5. The effect of baryons in the cosmological lensing PDFs

    NASA Astrophysics Data System (ADS)

    Castro, Tiago; Quartin, Miguel; Giocoli, Carlo; Borgani, Stefano; Dolag, Klaus

    2018-07-01

    Observational cosmology is passing through a unique moment of grandeur with the amount of quality data growing fast. However, in order to better take advantage of this moment, data analysis tools have to keep up the pace. Understanding the effect of baryonic matter on the large-scale structure is one of the challenges to be faced in cosmology. In this work, we have thoroughly studied the effect of baryonic physics on different lensing statistics. Making use of the Magneticum Pathfinder suite of simulations, we show that the influence of luminous matter on the 1-point lensing statistics of point sources is significant, enhancing the probability of magnified objects with μ > 3 by a factor of 2 and the occurrence of multiple images by a factor of 5-500, depending on the source redshift and size. We also discuss the dependence of the lensing statistics on the angular resolution of sources. Our results and methodology were carefully tested to guarantee that our uncertainties are much smaller than the effects here presented.

  6. The effect of baryons in the cosmological lensing PDFs

    NASA Astrophysics Data System (ADS)

    Castro, Tiago; Quartin, Miguel; Giocoli, Carlo; Borgani, Stefano; Dolag, Klaus

    2018-05-01

Observational cosmology is passing through a unique moment of grandeur with the amount of quality data growing fast. However, in order to better take advantage of this moment, data analysis tools have to keep up the pace. Understanding the effect of baryonic matter on the large-scale structure is one of the challenges to be faced in cosmology. In this work, we have thoroughly studied the effect of baryonic physics on different lensing statistics. Making use of the Magneticum Pathfinder suite of simulations, we show that the influence of luminous matter on the 1-point lensing statistics of point sources is significant, enhancing the probability of magnified objects with μ > 3 by a factor of 2 and the occurrence of multiple images by a factor of 5-500, depending on the source redshift and size. We also discuss the dependence of the lensing statistics on the angular resolution of sources. Our results and methodology were carefully tested in order to guarantee that our uncertainties are much smaller than the effects here presented.

  7. Characteristics of large three-dimensional heaps of particles produced by ballistic deposition from extended sources

    NASA Astrophysics Data System (ADS)

    Topic, Nikola; Gallas, Jason A. C.; Pöschel, Thorsten

    2013-11-01

    This paper reports a detailed numerical investigation of the geometrical and structural properties of three-dimensional heaps of particles. Our goal is the characterization of very large heaps produced by ballistic deposition from extended circular dropping areas. First, we provide an in-depth study of the formation of monodisperse heaps of particles. We find very large heaps to contain three new geometrical characteristics: they may display two external angles of repose, one internal angle of repose, and four distinct packing fraction (density) regions. Such features are found to be directly connected with the size of the dropping zone. We derive a differential equation describing the boundary of an unexpected triangular packing fraction zone formed under the dropping area. We investigate the impact that noise during the deposition has on the final heap structure. In addition, we perform two complementary experiments designed to test the robustness of the novel features found. The first experiment considers changes due to polydispersity. The second checks what happens when letting the extended dropping zone become a point-like source of particles, the more common type of source.

  8. An Evolving Choice in a Diverse Water Market: A Quality Comparison of Sachet Water with Community and Household Water Sources in Ghana.

    PubMed

    Guzmán, Danice; Stoler, Justin

    2018-06-11

    Packaged water, particularly bagged sachet water, has become an important drinking water source in West Africa as local governments struggle to provide safe drinking water supplies. In Ghana, sachet water has become an important primary water source in urban centers, and a growing literature has explored various dimensions of this industry, including product quality. There is very little data on sachet water quality outside of large urban centers, where smaller markets often mean less producer competition and less government regulation. This study analyzes the microbiological quality of sachet water alongside samples of other common water sources at point-of-collection (POC) and point-of-use (POU) in 42 rural, peri-urban, and small-town Ghanaian communities using the IDEXX Colilert®-18. Levels of coliform bacteria and Escherichia coli detected in sachet water samples were statistically significantly lower than levels detected in all other water sources at POU, including public taps and standpipes, and statistically similar or significantly lower at POC. In diverse waterscapes where households regularly patch together their water supply from different sources, sachet water appears to be an evolving alternative for safe drinking water despite many caveats, including higher unit costs and limited opportunities to recycle the plastic packaging.

  9. Understanding tungsten divertor sourcing and SOL transport using multiple poloidally-localized sources in DIII-D ELM-y H-mode discharges

    NASA Astrophysics Data System (ADS)

    Unterberg, E. A.; Donovan, D.; Barton, J.; Wampler, W. R.; Abrams, T.; Thomas, D. M.; Petrie, T.; Guo, H. Y.; Stangeby, P. G.; Elder, J. D.; Rudakov, D.; Grierson, B.; Victor, B.

    2017-10-01

    Experiments using metal inserts with novel isotopically-enriched tungsten coatings at the outer divertor strike point (OSP) have provided unique insight into the ELM-induced sourcing, main-SOL transport, and core accumulation control mechanisms of W for a range of operating conditions. This experimental approach has used a multi-head, dual-facing collector probe (CP) at the outboard midplane, as well as W-I and core W spectroscopy. Using the CP system, the total amount of W deposited relative to source measurements shows a clear dependence on ELM size, ELM frequency, and strike point location, with large ELMs depositing significantly more W on the CP from the far-SOL source. Additionally, high-spatial-resolution (∼1 mm), ELM-resolved spectroscopic measurements of W sourcing indicate shifts in the peak erosion rate. Furthermore, high-performance discharges with rapid ELMs show core W concentrations of a few 10⁻⁵, and the CP deposition profile indicates W is predominantly transported to the midplane from the OSP rather than from the far-SOL region. The low central W concentration is shown to be due to flattening of the main plasma density profile, presumably by on-axis electron cyclotron heating. Work supported under USDOE Cooperative Agreement DE-FC02-04ER54698.

  10. Myocardial Drug Distribution Generated from Local Epicardial Application: Potential Impact of Cardiac Capillary Perfusion in a Swine Model Using Epinephrine

    PubMed Central

    Maslov, Mikhail Y.; Edelman, Elazer R.; Pezone, Matthew J.; Wei, Abraham E.; Wakim, Matthew G.; Murray, Michael R.; Tsukada, Hisashi; Gerogiannis, Iraklis S.; Groothuis, Adam; Lovich, Mark A.

    2014-01-01

    Prior studies in small mammals have shown that local epicardial application of inotropic compounds drives myocardial contractility without systemic side effects. Myocardial capillary blood flow, however, may be more significant in larger species than in small animals. We hypothesized that bulk perfusion in capillary beds of the large mammalian heart enhances drug distribution after local release, but also clears more drug from the tissue target than in small animals. Epicardial (EC) drug releasing systems were used to apply epinephrine to the anterior surface of the left heart of swine in either point-sourced or distributed configurations. Following local application or intravenous (IV) infusion at the same dose rates, hemodynamic responses, epinephrine levels in the coronary sinus and systemic circulation, and drug deposition across the ventricular wall, around the circumference and down the axis, were measured. EC delivery via point-source release generated transmural epinephrine gradients directly beneath the site of application extending into the middle third of the myocardial thickness. Gradients in drug deposition were also observed down the length of the heart and around the circumference toward the lateral wall, but not the interventricular septum. These gradients extended further than might be predicted from simple diffusion. The circumferential distribution following local epinephrine delivery from a distributed source to the entire anterior wall drove drug toward the inferior wall, further than with point-source release, but again, not to the septum. This augmented drug distribution away from the release source, down the axis of the left ventricle, and selectively towards the left heart follows the direction of capillary perfusion away from the anterior descending and circumflex arteries, suggesting a role for the coronary circulation in determining local drug deposition and clearance. 
The dominant role of the coronary vasculature is further suggested by the elevated drug levels in the coronary sinus effluent. Indeed, plasma levels, hemodynamic responses, and myocardial deposition remote from the point of release were similar following local EC or IV delivery. Therefore, the coronary vasculature shapes the pharmacokinetics of local myocardial delivery of small catecholamine drugs in large animal models. Optimal design of epicardial drug delivery systems must consider the underlying bulk capillary perfusion currents within the tissue to deliver drug to tissue targets and may favor therapeutic molecules with better potential retention in myocardial tissue. PMID:25234821

  11. NOx emissions from large point sources: variability in ozone production, resulting health damages and economic costs

    NASA Astrophysics Data System (ADS)

    Mauzerall, Denise L.; Sultan, Babar; Kim, Namsoug; Bradford, David F.

    We present a proof-of-concept analysis of the measurement of the health damage of ozone (O3) produced from nitrogen oxides (NOx = NO + NO2) emitted by individual large point sources in the eastern United States. We use a regional atmospheric model of the eastern United States, the Comprehensive Air quality Model with Extensions (CAMx), to quantify the variable impact that a fixed quantity of NOx emitted from individual sources can have on the downwind concentration of surface O3, depending on temperature and local biogenic hydrocarbon emissions. We also examine the dependence of resulting O3-related health damages on the size of the exposed population. The investigation is relevant to the increasingly widely used "cap and trade" approach to NOx regulation, which presumes that shifts of emissions over time and space, holding the total fixed over the course of the summer O3 season, will have minimal effect on the environmental outcome. By contrast, we show that a shift of a unit of NOx emissions from one place or time to another could result in large changes in resulting health effects due to O3 formation and exposure. We indicate how the type of modeling carried out here might be used to attach externality-correcting prices to emissions. Charging emitters fees that are commensurate with the damage caused by their NOx emissions would create an incentive for emitters to reduce emissions at times and in locations where they cause the largest damage.

  12. Export of microplastics from land to sea. A modelling approach.

    PubMed

    Siegfried, Max; Koelmans, Albert A; Besseling, Ellen; Kroeze, Carolien

    2017-12-15

    Quantifying the transport of plastic debris from river to sea is crucial for assessing the risks of plastic debris to human health and the environment. We present a global modelling approach to analyse the composition and quantity of point-source microplastic fluxes from European rivers to the sea. The model accounts for different types and sources of microplastics entering river systems via point sources. We combine information on these sources with information on sewage management and plastic retention during river transport for the largest European rivers. Sources of microplastics include personal care products, laundry, household dust and tyre and road wear particles (TRWP). Most of the modelled microplastics exported by rivers to seas are synthetic polymers from TRWP (42%) and plastic-based textiles abraded during laundry (29%). Smaller sources are synthetic polymers and plastic fibres in household dust (19%) and microbeads in personal care products (10%). Microplastic export differs largely among European rivers, as a result of differences in socio-economic development and technological status of sewage treatment facilities. About two-thirds of the microplastics modelled in this study flow into the Mediterranean and Black Sea. This can be explained by the relatively low microplastic removal efficiency of sewage treatment plants in the river basins draining into these two seas. Sewage treatment is generally more efficient in river basins draining into the North Sea, the Baltic Sea and the Atlantic Ocean. We use our model to explore future trends up to the year 2050. Our scenarios indicate that in the future river export of microplastics may increase in some river basins, but decrease in others. Remarkably, for many basins we calculate a reduction in river export of microplastics from point-sources, mainly due to an anticipated improvement in sewage treatment. Copyright © 2017 Elsevier Ltd. All rights reserved.
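The source-to-sea bookkeeping described above can be illustrated with a small sketch: point-source loads are summed, reduced by sewage-treatment removal, and then by in-river retention. All function names and numbers below are invented for illustration; the actual model is far more detailed.

```python
# Hypothetical sketch of basin-scale microplastic export: sum the point-source
# loads, apply sewage-treatment removal, then apply in-river retention.
# All names and values are illustrative, not from the paper.

def basin_export(loads_t_per_yr, sewage_removal_frac, river_retention_frac):
    """Return microplastic export to sea in tonnes per year."""
    to_river = sum(loads_t_per_yr.values()) * (1.0 - sewage_removal_frac)
    return to_river * (1.0 - river_retention_frac)

# Invented loads roughly mirroring the source shares quoted above (t/yr):
loads = {"tyre_road_wear": 42.0, "laundry_fibres": 29.0,
         "household_dust": 19.0, "microbeads": 10.0}

# Better sewage treatment sharply cuts export for the same loads:
poor = basin_export(loads, sewage_removal_frac=0.4, river_retention_frac=0.2)
good = basin_export(loads, sewage_removal_frac=0.9, river_retention_frac=0.2)
assert good < poor
```

The comparison mirrors the paper's point that treatment efficiency, not just the loads themselves, drives the differences between European basins.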

  13. Civilization, Big History, and Human Survival

    ERIC Educational Resources Information Center

    Rodrigue, Barry H.

    2010-01-01

    A problem that history teachers in the United States face is that they lack an appropriate reference point from which to address many of today's global issues. The source of this problem is an antiquated model of society, still taught in the universities, that largely reflects the society that existed a century ago. For the last decade, the author…

  14. Study of Amorphous Ferrimagnet Fe0.66Er0.19B0.15 by Means of Monochromatic Circularly Polarised Source

    NASA Astrophysics Data System (ADS)

    Kalska, B.; Szymański, K.; Dobrzyński, L.; Satuła, D.; Wäppling, R.; Broddefalk, A.; Nordblad, P.

    2002-06-01

    Properties of amorphous alloy Fe0.66Er0.19B0.15 are reported. A reorientation of the Fe and Er magnetic moments during sample cooling through the compensation point in a large magnetic field is found by means of monochromatic circularly polarised radiation.

  15. Application of a water quality model in the White Cart water catchment, Glasgow, UK.

    PubMed

    Liu, S; Tucker, P; Mansell, M; Hursthouse, A

    2003-03-01

    Water quality models of urban systems have previously focused on point source (sewerage system) inputs. Little attention has been given to diffuse inputs, and research into diffuse pollution has been largely confined to agricultural sources. This paper reports on new research that is aimed at integrating diffuse inputs into an urban water quality model. An integrated model is introduced that is made up of four modules: hydrology, contaminant point sources, nutrient cycling and leaching. The hydrology module, T&T, consists of a TOPMODEL (a TOPography-based hydrological MODEL), which simulates runoff from pervious areas, and a two-tank model, which simulates runoff from impervious urban areas. Linked into the two-tank model, the contaminant point source module simulates the overflow from the sewerage system in heavy rain. The widely known SOILN (SOIL Nitrate model) is the basis of the nitrogen cycle module. Finally, the leaching module consists of two functions: the production function and the transfer function. The production function is based on SLIM (Solute Leaching Intermediate Model), while the transfer function is based on the 'flushing hypothesis', which postulates a relationship between contaminant concentrations in the receiving water course and the extent to which the catchment is saturated. This paper outlines the modelling methodology and the model structures that have been developed. An application of this model in the White Cart catchment (Glasgow) is also included.

  16. Isolating intrinsic noise sources in a stochastic genetic switch.

    PubMed

    Newby, Jay M

    2012-01-01

    The stochastic mutual repressor model is analysed using perturbation methods. This simple model of a gene circuit consists of two genes and three promoter states. Either of the two protein products can dimerize, forming a repressor molecule that binds to the promoter of the other gene. When the repressor is bound to a promoter, the corresponding gene is not transcribed and no protein is produced. Either one of the promoters can be repressed at any given time or both can be unrepressed, leaving three possible promoter states. This model is analysed in its bistable regime, in which the deterministic limit exhibits two stable fixed points and an unstable saddle, and the case of small noise is considered. On small timescales, the stochastic process fluctuates near one of the stable fixed points, and on large timescales, a metastable transition can occur, where fluctuations drive the system past the unstable saddle to the other stable fixed point. To explore how different intrinsic noise sources affect these transitions, fluctuations in protein production and degradation are eliminated, leaving fluctuations in the promoter state as the only source of noise in the system. The process without protein noise is then compared to the process with weak protein noise using perturbation methods and Monte Carlo simulations. It is found that some significant differences in the random process emerge when the intrinsic noise source is removed.
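The three-promoter-state structure described above (both genes unrepressed, promoter 1 repressed, or promoter 2 repressed) can be sketched as a continuous-time Markov chain simulated Gillespie-style. The transition rates below are hypothetical placeholders, not the paper's parameters, and protein dynamics are omitted entirely, as in the paper's promoter-noise-only limit.

```python
import random

# Illustrative continuous-time Markov chain over the three promoter states
# described above. Rates are invented placeholders, not the paper's values.
STATES = ("unrepressed", "p1_repressed", "p2_repressed")
RATES = {  # state -> {next_state: transition rate}
    "unrepressed": {"p1_repressed": 1.0, "p2_repressed": 1.0},
    "p1_repressed": {"unrepressed": 0.5},
    "p2_repressed": {"unrepressed": 0.5},
}

def simulate_promoter(t_end, seed=1):
    """Gillespie-style simulation; returns the time spent in each state."""
    rng = random.Random(seed)
    state, t = "unrepressed", 0.0
    occupancy = dict.fromkeys(STATES, 0.0)
    while t < t_end:
        total = sum(RATES[state].values())
        dt = rng.expovariate(total)          # waiting time to next jump
        occupancy[state] += min(dt, t_end - t)
        t += dt
        r = rng.uniform(0.0, total)          # pick next state by its rate
        for nxt, rate in RATES[state].items():
            r -= rate
            if r <= 0:
                state = nxt
                break
    return occupancy

occ = simulate_promoter(t_end=1000.0)
assert abs(sum(occ.values()) - 1000.0) < 1e-6  # occupancies partition the time
```

With these placeholder rates the repressed states are stickier than the unrepressed one, so most of the simulated time accumulates in the two repressed states.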

  17. A Method for Harmonic Sources Detection based on Harmonic Distortion Power Rate

    NASA Astrophysics Data System (ADS)

    Lin, Ruixing; Xu, Lin; Zheng, Xian

    2018-03-01

    Harmonic source detection at the point of common coupling is an essential step for harmonic contribution determination and harmonic mitigation. A harmonic distortion power rate index is proposed in this paper for harmonic source location, based on IEEE Std 1459-2010. A method based only on harmonic distortion power is not suitable when the background harmonic is large. To solve this problem, a threshold is determined from prior information: when the harmonic distortion power is larger than the threshold, the customer side is considered the main harmonic source; otherwise, the utility side is. A simple model of a public power system was built in MATLAB/Simulink, and field test results for typical harmonic loads verified the effectiveness of the proposed method.
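The threshold rule described in the abstract reduces to a simple comparison at the point of common coupling. The sketch below is a minimal illustration of that decision rule; the function name and the example threshold are invented, and deriving the threshold from prior measurements is the substantive part the paper addresses.

```python
# Hypothetical sketch of the decision rule described above: compare the
# harmonic distortion power measured at the point of common coupling (PCC)
# against a threshold derived from prior (background-harmonic) information.

def dominant_harmonic_source(distortion_power_w: float, threshold_w: float) -> str:
    """Attribute the dominant harmonic source at the PCC.

    If the measured harmonic distortion power exceeds the prior-information
    threshold, the customer side is taken as the main harmonic source;
    otherwise the utility (background) side is.
    """
    return "customer" if distortion_power_w > threshold_w else "utility"

# Example with an invented 50 W threshold estimated from prior data:
assert dominant_harmonic_source(120.0, 50.0) == "customer"
assert dominant_harmonic_source(30.0, 50.0) == "utility"
```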

  18. The Chandra Source Catalog: Statistical Characterization

    NASA Astrophysics Data System (ADS)

    Primini, Francis A.; Nowak, M. A.; Houck, J. C.; Davis, J. E.; Glotfelty, K. J.; Karovska, M.; Anderson, C. S.; Bonaventura, N. R.; Chen, J. C.; Doe, S. M.; Evans, I. N.; Evans, J. D.; Fabbiano, G.; Galle, E. C.; Gibbs, D. G., II; Grier, J. D.; Hain, R.; Hall, D. M.; Harbo, P. N.; He, X.; Lauer, J.; McCollough, M. L.; McDowell, J. C.; Miller, J. B.; Mitschang, A. W.; Morgan, D. L.; Nichols, J. S.; Plummer, D. A.; Refsdal, B. L.; Rots, A. H.; Siemiginowska, A. L.; Sundheim, B. A.; Tibbetts, M. S.; van Stone, D. W.; Winkelman, S. L.; Zografou, P.

    2009-09-01

    The Chandra Source Catalog (CSC) will ultimately contain more than ∼250,000 X-ray sources in a total area of ∼1% of the entire sky, using data from ∼10,000 separate ACIS and HRC observations of a multitude of different types of X-ray sources (see Evans et al., this conference). In order to maximize the scientific benefit of such a large, heterogeneous dataset, careful characterization of the statistical properties of the catalog, i.e., completeness, sensitivity, false source rate, and accuracy of source properties, is required. Our characterization efforts include both extensive simulations of blank-sky and point source datasets, and detailed comparisons of CSC results with those of other X-ray and optical catalogs. We present here a summary of our characterization results for CSC Release 1 and preliminary plans for future releases. This work is supported by NASA contract NAS8-03060 (CXC).

  19. Estimating global and North American methane emissions with high spatial resolution using GOSAT satellite data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Turner, A. J.; Jacob, D. J.; Wecht, K. J.

    2015-02-18

    We use 2009–2011 space-borne methane observations from the Greenhouse Gases Observing SATellite (GOSAT) to constrain global and North American inversions of methane emissions with 4° × 5° and up to 50 km × 50 km spatial resolution, respectively. The GOSAT data are first evaluated with atmospheric methane observations from surface networks (NOAA, TCCON) and aircraft (NOAA/DOE, HIPPO), using the GEOS-Chem chemical transport model as a platform to facilitate comparison of GOSAT with in situ data. This identifies a high-latitude bias between the GOSAT data and GEOS-Chem that we correct via quadratic regression. The surface and aircraft data are subsequently used for independent evaluation of the methane source inversions. Our global adjoint-based inversion yields a total methane source of 539 Tg a⁻¹ and points to a large East Asian overestimate in the EDGARv4.2 inventory used as a prior. Results serve as dynamic boundary conditions for an analytical inversion of North American methane emissions using radial basis functions to achieve high resolution of large sources and provide full error characterization. We infer a US anthropogenic methane source of 40.2–42.7 Tg a⁻¹, as compared to 24.9–27.0 Tg a⁻¹ in the EDGAR and EPA bottom-up inventories, and 30.0–44.5 Tg a⁻¹ in recent inverse studies. Our estimate is supported by independent surface and aircraft data and by previous inverse studies for California. We find that the emissions are highest in the South-Central US, the Central Valley of California, and Florida wetlands; large isolated point sources, such as the US Four Corners, also contribute. We attribute 29–44% of US anthropogenic methane emissions to livestock, 22–31% to oil/gas, 20% to landfills/waste water, and 11–15% to coal, with an additional 9.0–10.1 Tg a⁻¹ source from wetlands.
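The quadratic-regression bias correction mentioned above can be sketched in a few lines: fit a quadratic in latitude to the satellite-minus-model difference and subtract the fitted curve from the retrievals. This is our own minimal illustration with synthetic numbers, not the authors' code, and the choice of latitude as the regressor is an assumption based on the abstract's description of a high-latitude bias.

```python
import numpy as np

# Illustrative sketch: remove a latitude-dependent bias between satellite
# methane retrievals and model columns by fitting a quadratic in latitude
# to the difference and subtracting it. Values are synthetic.

def quadratic_bias_correction(lat_deg, satellite_ppb, model_ppb):
    """Fit bias = a*lat**2 + b*lat + c and return corrected satellite values."""
    bias = satellite_ppb - model_ppb
    coeffs = np.polyfit(lat_deg, bias, deg=2)        # [a, b, c]
    return satellite_ppb - np.polyval(coeffs, lat_deg)

# Synthetic check: a purely quadratic injected bias is removed almost exactly.
lat = np.linspace(-60.0, 80.0, 50)
model = np.full_like(lat, 1800.0)                    # ppb, flat "truth"
sat = model + 0.002 * lat**2 - 0.1 * lat + 5.0       # injected quadratic bias
corrected = quadratic_bias_correction(lat, sat, model)
assert np.allclose(corrected, model, atol=1e-6)
```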

  20. The Large Quasar Reference Frame (LQRF). An Optical Representation of the ICRS

    DTIC Science & Technology

    2009-10-01

    faint regimes, both the 2MASS and the preliminary northernmost UCAC2 positions are shown to be of astrometry consistent with the UCAC2 main catalog, and the ... is used. 2.7. 2MASS. The Two Micron All-Sky Survey point source catalog (Cutri et al. 2003), hereafter 2MASS, derives from a uniform scan of the ... 17.1, H = 16.4, and K = 15.3. The 2MASS contains the positions of 470,992,970 sources, but no proper motions. The astrometry is referred to the

  1. Impact of structural and autocyclic basin-floor topography on the depositional evolution of the deep-water Valparaiso forearc basin, central Chile

    USGS Publications Warehouse

    Laursen, J.; Normark, W.R.

    2003-01-01

    The Valparaiso Basin constitutes a unique and prominent deep-water forearc basin underlying a 40-km by 60-km mid-slope terrace at 2.5-km water depth on the central Chile margin. Seismic-reflection data, collected as part of the CONDOR investigation, image a 3-3.5-km thick sediment succession that fills a smoothly sagged, margin-parallel, elongated trough at the base of the upper slope. In response to underthrusting of the Juan Fernández Ridge on the Nazca plate, the basin fill is increasingly deformed in the seaward direction above seaward-vergent outer forearc compressional highs. Syn-depositional growth of a large, margin-parallel monoclinal high in conjunction with sagging of the inner trough of the basin created stratal geometries similar to those observed in forearc basins bordered by large accretionary prisms. Margin-parallel compressional ridges diverted turbidity currents along the basin axis and exerted a direct control on sediment depositional processes. As structural depressions became buried, transverse input from point sources on the adjacent upper slope formed complex fan systems with sediment waves characterising the overbank environment, common on many Pleistocene turbidite systems. Mass failure as a result of local topographic inversion formed a prominent mass-flow deposit, and ultimately resulted in canyon formation and hence a new focused point source feeding the basin. The Valparaiso Basin is presently filled to the spill point of the outer forearc highs, causing headward erosion of incipient canyons into the basin fill and allowing bypass of sediment to the Chile Trench. 
Age estimates that are constrained by subduction-related syn-depositional deformation of the upper 700-800 m of the basin fill suggest that glacio-eustatic sea-level lowstands, in conjunction with accelerated denudation rates, within the past 350 ka may have contributed to the increase in simultaneously active point sources along the upper slope as well as an increased complexity of proximal depositional facies.

  2. Radiation boundary condition and anisotropy correction for finite difference solutions of the Helmholtz equation

    NASA Technical Reports Server (NTRS)

    Tam, Christopher K. W.; Webb, Jay C.

    1994-01-01

    In this paper finite-difference solutions of the Helmholtz equation in an open domain are considered. By using a second-order central difference scheme and the Bayliss-Turkel radiation boundary condition, reasonably accurate solutions can be obtained when the number of grid points per acoustic wavelength used is large. However, when a smaller number of grid points per wavelength is used, excessive reflections occur which tend to overwhelm the computed solutions. Excessive reflections are due to the incompatibility between the governing finite difference equation and the Bayliss-Turkel radiation boundary condition. The Bayliss-Turkel radiation boundary condition was developed from the asymptotic solution of the partial differential equation. To obtain compatibility, the radiation boundary condition should be constructed from the asymptotic solution of the finite difference equation instead. Examples are provided using the improved radiation boundary condition based on the asymptotic solution of the governing finite difference equation. The computed results are free of reflections even when only five grid points per wavelength are used. The improved radiation boundary condition has also been tested for problems with complex acoustic sources and sources embedded in a uniform mean flow. The present method of developing a radiation boundary condition is also applicable to higher order finite difference schemes. In all these cases no reflected waves could be detected. The use of finite difference approximation inevitably introduces anisotropy into the governing field equation. The effect of anisotropy is to distort the directional distribution of the amplitude and phase of the computed solution. It can be quite large when the number of grid points per wavelength used in the computation is small. A way to correct this effect is proposed. The correction factor developed from the asymptotic solutions is source independent and, hence, can be determined once and for all. 
The effectiveness of the correction factor in providing improvements to the computed solution is demonstrated in this paper.
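The sensitivity to grid points per wavelength discussed above comes from the dispersion of the central difference scheme: a plane wave with wavenumber k sees the modified wavenumber k_mod = (2/h)·sin(kh/2) rather than k itself. The sketch below is our own illustration of that standard result, not code from the paper.

```python
import math

# For the second-order central difference, a plane wave of wavenumber k is
# propagated with the modified wavenumber k_mod = (2/h) * sin(k*h/2).
# With N grid points per wavelength, k*h = 2*pi/N, so the relative
# dispersion error is |sin(pi/N)/(pi/N) - 1|.

def dispersion_error(points_per_wavelength: float) -> float:
    theta = math.pi / points_per_wavelength   # = k*h/2
    return abs(math.sin(theta) / theta - 1.0)

# The error shrinks rapidly as the grid is refined:
for n in (5, 10, 20):
    print(n, dispersion_error(n))
```

At five points per wavelength the phase error is already around 6%, which is consistent with the paper's observation that boundary conditions derived from the continuous equation become incompatible with the discrete solution on coarse grids.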

  3. Site correction of a high-frequency strong-ground-motion simulation based on an empirical transfer function

    NASA Astrophysics Data System (ADS)

    Huang, Jyun-Yan; Wen, Kuo-Liang; Lin, Che-Min; Kuo, Chun-Hsiang; Chen, Chun-Te; Chang, Shuen-Chiang

    2017-05-01

    In this study, an empirical transfer function (ETF), which is the spectrum difference in Fourier amplitude spectra between observed strong ground motion and synthetic motion obtained by a stochastic point-source simulation technique, is constructed for the Taipei Basin, Taiwan. The basis stochastic point-source simulations can be treated as reference rock site conditions in order to consider site effects. The parameters of the stochastic point-source approach related to source and path effects are collected from previous well-verified studies. A database of shallow, small-magnitude earthquakes is selected to construct the ETFs so that the point-source approach for synthetic motions might be more widely applicable. The high-frequency synthetic motion obtained from the ETF procedure is site-corrected in the strong site-response area of the Taipei Basin. The site-response characteristics of the ETF show similar responses as in previous studies, which indicates that the base synthetic model is suitable for the reference rock conditions in the Taipei Basin. The dominant frequency contour corresponds to the shape of the bottom of the geological basement (the top of the Tertiary period), which is the Sungshan formation. Two clear high-amplification areas are identified in the deepest region of the Sungshan formation, as shown by an amplification contour of 0.5 Hz. Meanwhile, a high-amplification area was shifted to the basin's edge, as shown by an amplification contour of 2.0 Hz. Three target earthquakes with different kinds of source conditions, including shallow small-magnitude events, shallow and relatively large-magnitude events, and deep small-magnitude events relative to the ETF database, are tested to verify site correction. The results indicate that ETF-based site correction is effective for shallow earthquakes, even those with higher magnitudes, but is not suitable for deep earthquakes. 
Finally, one of the most significant shallow large-magnitude earthquakes (the 1999 Chi-Chi earthquake in Taiwan) is verified in this study. A finite fault stochastic simulation technique is applied, owing to the complexity of the fault rupture process for the Chi-Chi earthquake, and the ETF-based site-correction function is multiplied in to obtain a precise simulation of high-frequency (up to 10 Hz) strong motions. The high-frequency prediction shows good agreement in both the time and frequency domains, and its prediction level is the same as that of the site-corrected ground motion prediction equation.
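The ETF construction described above amounts to taking the ratio of observed to synthetic Fourier amplitude spectra and multiplying it back onto a synthetic spectrum. The sketch below is a deliberately simplified illustration of that idea (amplitude-only correction, toy signals); the real procedure involves event selection, smoothing, and averaging over records.

```python
import numpy as np

# Minimal sketch of the empirical-transfer-function (ETF) idea: the ETF is
# the ratio of observed to synthetic Fourier amplitude spectra, and site
# correction multiplies a new synthetic spectrum by that ratio (phase is
# left unchanged). Function names and the toy data are our own.

def empirical_transfer_function(observed, synthetic):
    return np.abs(np.fft.rfft(observed)) / np.abs(np.fft.rfft(synthetic))

def apply_site_correction(synthetic, etf):
    spec = np.fft.rfft(synthetic)
    return np.fft.irfft(spec * etf, n=len(synthetic))

# Toy check: if the "observed" record is the synthetic doubled, the ETF is
# 2 at every frequency and the correction recovers the observed record.
rng = np.random.default_rng(0)
syn = rng.standard_normal(256)
obs = 2.0 * syn
etf = empirical_transfer_function(obs, syn)
assert np.allclose(etf, 2.0)
assert np.allclose(apply_site_correction(syn, etf), obs)
```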

  4. [Numerical simulation study of SOA in Pearl River Delta region].

    PubMed

    Cheng, Yan-li; Li, Tian-tian; Bai, Yu-hua; Li, Jin-long; Liu, Zhao-rong; Wang, Xue-song

    2009-12-01

    Secondary organic aerosol (SOA) is an important component of atmospheric particle pollution; determining the status and sources of SOA pollution is therefore a prerequisite for understanding the occurrence, development, and influencing factors of atmospheric particle pollution. Based on the pollution sources and meteorological data of the Pearl River Delta region, this study used a two-dimensional model coupled with an SOA module to simulate the status and sources of SOA pollution at the regional scale. The results show that the generation of SOA presents obvious characteristics of photochemical reaction, with the highest concentrations appearing at about 14:00. SOA concentrations are high in some areas of Guangzhou and Dongguan with large pollution-source emissions, and also in some areas of Zhongshan, Zhuhai, and Jiangmen that lie downwind of Guangzhou and Dongguan. Contribution ratios of the main pollution sources to SOA are: biogenic sources 72.6%, mobile sources 30.7%, point sources 12%, solvent and oil paint sources 12%, and surface sources less than 5%.

  5. Statistical signatures of a targeted search by bacteria

    NASA Astrophysics Data System (ADS)

    Jashnsaz, Hossein; Anderson, Gregory G.; Pressé, Steve

    2017-12-01

    Chemoattractant gradients are rarely well-controlled in nature and recent attention has turned to bacterial chemotaxis toward typical bacterial food sources such as food patches or even bacterial prey. In environments with localized food sources reminiscent of a bacterium’s natural habitat, striking phenomena—such as the volcano effect or banding—have been predicted or expected to emerge from chemotactic models. However, in practice, from limited bacterial trajectory data it is difficult to distinguish targeted searches from an untargeted search strategy for food sources. Here we use a theoretical model to identify statistical signatures of a targeted search toward point food sources, such as prey. Our model is constructed on the basis that bacteria use temporal comparisons to bias their random walk, exhibit finite memory and are subject to random (Brownian) motion as well as signaling noise. The advantage with using a stochastic model-based approach is that a stochastic model may be parametrized from individual stochastic bacterial trajectories but may then be used to generate a very large number of simulated trajectories to explore average behaviors obtained from stochastic search strategies. For example, our model predicts that a bacterium’s diffusion coefficient increases as it approaches the point source and that, in the presence of multiple sources, bacteria may take substantially longer to locate their first source giving the impression of an untargeted search strategy.
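A toy version of the ingredients named above (temporal comparisons, finite memory, Brownian motion) can be simulated as a 1-D run-and-tumble walker near a point source. Everything below is an invented illustration with placeholder parameters, not the authors' model: the walker compares the current attractant reading with its previous one (one-step memory) and tumbles less often when the signal is improving.

```python
import math
import random

# Toy 1-D run-and-tumble walker in the spirit of the model described above.
# All parameters are invented placeholders.

def concentration(x, source=0.0, width=20.0):
    """Gaussian attractant profile around a point source."""
    return math.exp(-((x - source) ** 2) / (2.0 * width**2))

def run_and_tumble(steps=5000, start=50.0, seed=2):
    rng = random.Random(seed)
    x, direction = start, -1.0
    prev = concentration(x)
    for _ in range(steps):
        x += direction * 0.1 + rng.gauss(0.0, 0.05)  # run step + Brownian noise
        cur = concentration(x)
        # Temporal comparison with one-step memory: tumble rarely while the
        # signal improves, often while it worsens.
        p_tumble = 0.05 if cur > prev else 0.25
        if rng.random() < p_tumble:
            direction = -direction
        prev = cur
    return x

# The biased walker tends to end up near the source at x = 0, far closer
# than its starting point.
final_x = run_and_tumble()
```

Because the model is fully parametrized, one can generate many such trajectories cheaply, which is exactly the advantage of the stochastic model-based approach the abstract argues for.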

  6. National Emissions Inventory (NEI), County-Level, US, 2008, 2011, 2014, EPA OAR, OAPQS

    EPA Pesticide Factsheets

    This US EPA Office of Air and Radiation, Office of Air Quality Planning and Standards, Air Quality Assessment Division, Air Quality Analysis Group (OAR, OAQPS, AQAD, AQAG) web service contains the following layers created from the 2008, 2011 and 2014 National Emissions Inventory (NEI): Carbon Monoxide (CO), Lead, Ammonia (NH3), Nitrogen Oxides (NOx), Particulate Matter 10 (PM10), Particulate Matter 2.5 (PM2.5), Sulfur Dioxide (SO2), Volatile Organic Compounds (VOC). Each of these layers contains county-level emissions for 2008, 2011, and 2014. Layers are drawn at all scales. The National Emission Inventory (NEI) is a comprehensive and detailed estimate of air emissions of criteria pollutants, criteria precursors, and hazardous air pollutants from air emissions sources. The NEI is released every three years based primarily upon data provided by State, Local, and Tribal air agencies for sources in their jurisdictions and supplemented by data developed by the US EPA. The NEI is built using the Emissions Inventory System (EIS), first to collect the data from State, Local, and Tribal air agencies and then to blend that data with other data sources. NEI point sources include emissions estimates for larger sources that are located at a fixed, stationary location. Point sources in the NEI include large industrial facilities and electric power plants, airports, and smaller industrial, non-industrial and commercial facilities. A small number of portable sources such as s

  7. Restoration of the ASCA Source Position Accuracy

    NASA Astrophysics Data System (ADS)

    Gotthelf, E. V.; Ueda, Y.; Fujimoto, R.; Kii, T.; Yamaoka, K.

    2000-11-01

We present a calibration of the absolute pointing accuracy of the Advanced Satellite for Cosmology and Astrophysics (ASCA) which allows us to compensate for a large error (up to 1') in the derived source coordinates. We parameterize a temperature-dependent deviation of the attitude solution which is responsible for this error. By analyzing ASCA coordinates of 100 bright active galactic nuclei, we show that it is possible to reduce the uncertainty in the sky position for any given observation by a factor of 4. The revised 90% error circle radius is then 12", consistent with preflight specifications, effectively restoring the full ASCA pointing accuracy. Herein, we derive an algorithm which compensates for this attitude error and present an internet-based table to be used to correct post facto the coordinates of all ASCA observations. While the above error circle is strictly applicable to data taken with the on-board Solid-state Imaging Spectrometers (SISs), similar coordinate corrections are derived for data obtained with the Gas Imaging Spectrometers (GISs), which, however, have additional instrumental uncertainties. The 90% error circle radius for the central 20' diameter of the GIS is 24". The large reduction in the error circle area for the two instruments offers the opportunity to greatly enhance the search for X-ray counterparts at other wavelengths. This has important implications for current and future ASCA source catalogs and surveys.

  8. [A landscape ecological approach for urban non-point source pollution control].

    PubMed

    Guo, Qinghai; Ma, Keming; Zhao, Jingzhu; Yang, Liu; Yin, Chengqing

    2005-05-01

Urban non-point source pollution is a new problem that has emerged with rapid urbanization. The particular character of urban land use and the increase of impervious surface area make urban non-point source pollution differ from agricultural non-point source pollution and more difficult to control. Best Management Practices (BMPs) are the effective practices commonly applied to control urban non-point source pollution, mainly adopting local remediation practices to control the pollutants in surface runoff. Because of the close relationship between urban land use patterns and non-point source pollution, it would be rational to combine landscape ecological planning with local BMPs to control urban non-point source pollution. This requires, first, analyzing and evaluating the influence of landscape structure on water bodies, pollution sources, and pollutant removal processes, in order to define the relationships between landscape spatial pattern and non-point source pollution and to identify the key polluted areas; and second, adjusting existing landscape structures or adding new landscape elements to form new landscape patterns, combining landscape planning and management by applying BMPs within planning, so as to improve urban landscape heterogeneity and control urban non-point source pollution.

  9. Deep JVLA Imaging of GOODS-N at 20 cm

    NASA Astrophysics Data System (ADS)

    Owen, Frazer N.

    2018-04-01

New wideband continuum observations in the 1–2 GHz band of the GOODS-N field using NSF’s Karl G. Jansky Very Large Array (VLA) are presented. The best image, with an effective frequency of 1525 MHz, reaches an rms noise in the field center of 2.2 μJy with 1.″6 resolution. A catalog of 795 sources is presented, covering a radius of 9 arcminutes centered near the nominal center of the GOODS-N field, very near the nominal VLA pointing center for the observations. Optical/NIR identifications and redshift estimates, both from ground-based and HST observations, are discussed. Using these optical/NIR data, it is most likely that fewer than 2% of the sources without confusion problems lack a correct identification. A large subset of the detected sources have radio sizes >1″. It is shown that the radio orientations for such sources correlate well with the HST source orientations, especially for z < 1. This suggests that at least a large subset of the 10 kpc-scale disks of luminous infrared/ultraluminous infrared galaxies (LIRG/ULIRG) have strong star formation, not just in the nucleus. For the half of the objects with z > 1, the sample must be some mixture of objects with very high star formation rates, typically 300 M⊙ yr^-1 assuming pure star formation, and an active galactic nucleus (AGN) or a mixed AGN/star formation population.

  10. Characterization of mercury contamination in the Androscoggin River, Coos County, New Hampshire

    USGS Publications Warehouse

    Chalmers, Ann; Marvin-DiPasquale, Mark C.; Degnan, James R.; Coles, James; Agee, Jennifer L.; Luce, Darryl

    2013-01-01

Concentrations of total mercury (THg) and MeHg in sediment, pore water, and biota in the Androscoggin River were elevated downstream from the former chloralkali facility compared with upstream reference sites. Sequential extraction of surface sediment showed a distinct difference in Hg speciation upstream compared with downstream of the contamination site. An upstream site was dominated by potassium hydroxide-extractable forms (for example, organic-Hg or particle-bound Hg(II)), whereas sites downstream from the point source were dominated by more chemically recalcitrant forms (largely concentrated nitric acid-extractable), indicative of elemental mercury or mercurous chloride. At all sites, only a minor fraction (less than 0.1 percent) of THg existed in chemically labile forms (for example, water extractable or weak acid extractable). All metrics indicated that a greater percentage of mercury at an upstream site was available for Hg(II)-methylation compared with sites downstream from the point source, but the absolute concentration of bioavailable Hg(II) was greater downstream from the point source. In addition, the concentration of tin-reducible inorganic reactive mercury, a surrogate measure of bioavailable Hg(II), generally increased with distance downstream from the point source. Whereas concentrations of mercury species on a sediment-dry-weight basis generally reflected the relative location of the sample to the point source, river-reach integrated mercury-species inventories and MeHg production potential (MPP) rates reflected the amount of fine-grained sediment in a given reach. THg concentrations in biota were significantly higher downstream from the point source compared with upstream reference sites for smallmouth bass, white sucker, crayfish, oligochaetes, bat fur, nestling tree swallow blood and feathers, adult tree swallow blood, and tree swallow eggs. As with tin-reducible inorganic reactive mercury, THg in smallmouth bass also increased with distance downstream from the point source. Toxicity tests and invertebrate community assessments suggested that invertebrates were not impaired at the current (2009 and 2010) levels of mercury contamination downstream from the point source. Concentrations of THg and MeHg in most water and sediment samples from the Androscoggin River were below U.S. Environmental Protection Agency (USEPA), Canadian Council of Ministers of the Environment, and probable effects level guidelines. Surface-water and sediment samples from the Androscoggin River had similar THg concentrations but lower MeHg concentrations compared with other rivers in the region. Concentrations of THg in fish tissue were all above regional and USEPA guidelines. Moreover, median THg concentrations in smallmouth bass from the Androscoggin River were significantly higher than those reported in regional surveys of rivers and streams nationwide and in the northeastern United States and Canada. The higher concentrations of mercury in smallmouth bass suggest conditions may be more favorable for Hg(II)-methylation and bioaccumulation in the Androscoggin River compared with many other rivers in the United States and Canada.

  11. Experimental Evaluation of the "Polished Panel Optical Receiver" Concept on the Deep Space Network's 34 Meter Antenna

    NASA Technical Reports Server (NTRS)

    Vilnrotter, Victor A.

    2012-01-01

The potential development of large aperture ground-based "photon bucket" optical receivers for deep space communications has received considerable attention recently. One approach currently under investigation proposes to polish the aluminum reflector panels of 34-meter microwave antennas to high reflectance, and to accept the relatively large spot size generated by even state-of-the-art polished aluminum panels. Here we describe the experimental effort currently underway at the Deep Space Network (DSN) Goldstone Communications Complex in California to test and verify these concepts in a realistic operational environment. A custom designed aluminum panel has been mounted on the 34 meter research antenna at Deep-Space Station 13 (DSS-13), and a remotely controlled CCD camera with a large CCD sensor in a weather-proof container has been installed next to the subreflector, pointed directly at the custom polished panel. Using the planet Jupiter as an optical point source, the point-spread function (PSF) generated by the polished panel has been characterized, the array data have been processed to determine the center of the intensity distribution, and the expected communications performance of the proposed polished panel optical receiver has been evaluated.
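The step of processing the camera's array data to find the center of the intensity distribution is, at its simplest, an intensity-weighted centroid; this sketch assumes a background-subtracted frame and is not the project's actual pipeline.

```python
import numpy as np

def centroid(image):
    """Intensity-weighted centroid (row, col) of a CCD frame: the basic
    operation for locating the center of a measured PSF. Background
    subtraction and windowing are omitted in this sketch."""
    img = np.asarray(image, dtype=float)
    rows, cols = np.indices(img.shape)   # pixel coordinate grids
    total = img.sum()
    return (rows * img).sum() / total, (cols * img).sum() / total
```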

  12. A Deep XMM-Newton Survey of M33: Point-source Catalog, Source Detection, and Characterization of Overlapping Fields

    NASA Astrophysics Data System (ADS)

    Williams, Benjamin F.; Wold, Brian; Haberl, Frank; Garofali, Kristen; Blair, William P.; Gaetz, Terrance J.; Kuntz, K. D.; Long, Knox S.; Pannuti, Thomas G.; Pietsch, Wolfgang; Plucinsky, Paul P.; Winkler, P. Frank

    2015-05-01

We have obtained a deep 8-field XMM-Newton mosaic of M33 covering the galaxy out to the D25 isophote and beyond, to a limiting 0.2-4.5 keV unabsorbed flux of 5 × 10^-16 erg cm^-2 s^-1 (L > 4 × 10^34 erg s^-1 at the distance of M33). These data allow complete coverage of the galaxy with high sensitivity to soft sources such as diffuse hot gas and supernova remnants (SNRs). Here, we describe the methods we used to identify and characterize 1296 point sources in the 8 fields. We compare our resulting source catalog to the literature, note variable sources, construct hardness ratios, classify soft sources, analyze the source density profile, and measure the X-ray luminosity function (XLF). As a result of the large effective area of XMM-Newton below 1 keV, the survey contains many new soft X-ray sources. The radial source density profile and XLF for the sources suggest that only ~15% of the 391 bright sources with L > 3.6 × 10^35 erg s^-1 are likely to be associated with M33, and more than a third of these are known SNRs. The log(N)-log(S) distribution, when corrected for background contamination, is a relatively flat power law with a differential index of 1.5, which suggests that many of the other M33 sources may be high-mass X-ray binaries. Finally, we note the discovery of an interesting new transient X-ray source, which we are unable to classify.
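Two of the quantities used above can be sketched directly: a band hardness ratio and the slope of the cumulative log(N)-log(S) relation. The synthetic fluxes are drawn from a power law with differential index 1.5, so the cumulative counts fall as S^-0.5; this is an illustration, not the paper's fitting procedure.

```python
import numpy as np

def hardness_ratio(soft, hard):
    """Standard band-count hardness ratio HR = (H - S) / (H + S)."""
    soft, hard = np.asarray(soft, float), np.asarray(hard, float)
    return (hard - soft) / (hard + soft)

def cumulative_logn_logs_slope(fluxes):
    """Least-squares slope of log10 N(>S) versus log10 S; a differential
    index of 1.5 corresponds to a cumulative slope of -0.5."""
    s = np.sort(np.asarray(fluxes, float))
    n_gt = np.arange(s.size, 0, -1)        # N(>=S) at each sorted flux
    return np.polyfit(np.log10(s), np.log10(n_gt), 1)[0]

# Synthetic check: inverse-transform sample N(>S) ∝ S^-0.5 above S = 1.
rng = np.random.default_rng(0)
fluxes = rng.uniform(1e-9, 1.0, size=20000) ** -2.0
slope = cumulative_logn_logs_slope(fluxes)   # close to -0.5
```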

  13. New concept for in-line OLED manufacturing

    NASA Astrophysics Data System (ADS)

    Hoffmann, U.; Landgraf, H.; Campo, M.; Keller, S.; Koening, M.

    2011-03-01

A new concept for a vertical in-line deposition machine for large-area white OLED production has been developed. The concept targets manufacturing on large substrates (>= Gen 4, 750 x 920 mm2) using linear deposition sources, achieving a total material utilization of >= 50 % and takt times down to 80 seconds. The continuously improved linear evaporation sources for the organic material achieve a thickness uniformity on Gen 4 substrates of better than +/- 3 % and stable deposition rates from less than 0.1 nm·m/min up to more than 100 nm·m/min. For lithium fluoride, but also for other high-evaporation-temperature materials like magnesium or silver, a linear source with uniformity better than +/- 3 % has been developed. For aluminum we integrated a vertically oriented point source using wire feed to achieve high (> 150 nm·m/min) and stable deposition rates. The machine concept includes a new vertical vacuum handling and alignment system for Gen 4 shadow masks. A complete alignment cycle for the mask can be done in less than one minute, achieving alignment accuracy in the range of several tens of μm.

  14. Preview of the BATSE Earth Occultation Catalog of Low Energy Gamma Ray Sources

    NASA Technical Reports Server (NTRS)

    Harmon, B. A.; Wilson, C. A.; Fishman, G. J.; McCollough, M. L.; Robinson, C. R.; Sahi, M.; Paciesas, W. S.; Zhang, S. N.

    1999-01-01

The Burst and Transient Source Experiment (BATSE) aboard the Compton Gamma Ray Observatory (CGRO) has been detecting and monitoring point sources in the high-energy sky since 1991. Although BATSE is best known for gamma-ray bursts, it also monitors the sky for longer-lived sources of radiation. Using the Earth occultation technique to extract flux information, a catalog is being prepared of about 150 sources with potential emission in the large area detectors (20-1000 keV). The catalog will contain light curves, representative spectra, and parametric data for black hole and neutron star binaries, active galaxies, and supernova remnants. In this preview, we present light curves for persistent and transient sources, and also show examples of the type of information that can be obtained from the BATSE Earth occultation database. Options for making the data easily accessible as an "on line" WWW document are being explored.
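The Earth occultation technique mentioned above extracts a source's flux from the step in detector count rate as the source sets behind (or rises from) the Earth's limb. A toy version, with invented rates, times, and noise level rather than BATSE data:

```python
import numpy as np

# The source flux is the height of the step in count rate at the (known)
# occultation time, estimated by differencing mean rates on either side.
rng = np.random.default_rng(3)
t = np.arange(200)                                   # time bins
rate = 100.0 + rng.normal(scale=1.0, size=t.size)    # background, counts/s
rate[t < 120] += 7.5                                 # source visible, then occulted
source_rate = rate[t < 120].mean() - rate[t >= 120].mean()  # step height
```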

  15. Occurrence, spatial distribution, and ecological risks of typical hydroxylated polybrominated diphenyl ethers in surface sediments from a large freshwater lake of China.

    PubMed

    Liu, Dan; Wu, Sheng-Min; Zhang, Qin; Guo, Min; Cheng, Jie; Zhang, Sheng-Hu; Yao, Cheng; Chen, Jian-Qiu

    2017-02-01

Hydroxylated polybrominated diphenyl ethers (OH-PBDEs) have been frequently observed in marine aquatic environments; however, little information is available on the occurrence of these compounds in freshwater aquatic environments, including freshwater lakes. In this study, we investigated the occurrence and spatial distribution of typical OH-PBDEs, including 2'-OH-BDE-68, 3-OH-BDE-47, 5-OH-BDE-47, and 6-OH-BDE-47, in surface sediments of Taihu Lake. 3-OH-BDE-47 was the predominant congener, followed by 5-OH-BDE-47, 2'-OH-BDE-68, and 6-OH-BDE-47. Distributions of these compounds differed drastically among sampling sites, which may result from differences in nearby point sources, such as the discharge of industrial wastewater and e-waste leachate. The positive correlation between ∑OH-PBDEs and total organic carbon (TOC) was moderate (r = 0.485, p < 0.05) once sites S3 and S15 were excluded due to point-source pollution, suggesting that OH-PBDE concentrations were controlled by sediment TOC content as well as other factors. The pairwise correlations between the concentrations of these compounds suggest that these compounds may have similar input sources and environmental behavior. The target compounds in the sediments of Taihu Lake pose low risks to aquatic organisms. Results show that OH-PBDEs in Taihu Lake are largely dependent on pollution sources. Because of bioaccumulation and subsequent harmful effects on aquatic organisms, the concentrations of OH-PBDEs in freshwater ecosystems are of environmental concern.
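The reported TOC correlation can be illustrated with a plain Pearson calculation in which suspected point-source sites are excluded before computing r; the data below are invented purely for illustration.

```python
import math

def pearson_r(xs, ys):
    """Plain Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / math.sqrt(sxx * syy)

# Hypothetical sediment TOC (%) and total OH-PBDE levels; the last two
# entries mimic point-source-affected sites that break the TOC trend.
toc = [0.5, 0.8, 1.1, 1.4, 1.8, 2.2, 0.6, 0.7]
ohp = [1.0, 1.5, 2.2, 2.6, 3.5, 4.1, 9.0, 8.5]
r_all = pearson_r(toc, ohp)             # point-source sites dilute the trend
r_excl = pearson_r(toc[:-2], ohp[:-2])  # excluding them restores it
```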

  16. Managing commercial and light-industrial discharges to POTWs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fink, R.G.

    1993-02-01

Discharging commercial and light-industrial wastewater to a publicly owned treatment works (POTW) is risky business. Pretreating wastewater using traditional methods may leave a wastestream's originator vulnerable to fines, civil and criminal punishment, cleanup costs, and cease-and-desist orders. EPA has tightened regulations applying to discharges from POTWs, which, in turn, are looking to industrial and commercial discharge sources to determine responsibility for toxic contaminants. Although EPA in the past focused on large point sources of contamination, the Agency has shifted its emphasis to smaller and more diverse nonpoint sources. One result is that POTWs no longer act as buffers for light-industrial and commercial wastewater dischargers.

  17. Regional W-Phase Source Inversion for Moderate to Large Earthquakes in China and Neighboring Areas

    NASA Astrophysics Data System (ADS)

    Zhao, Xu; Duputel, Zacharie; Yao, Zhenxing

    2017-12-01

Earthquake source characterization has been significantly sped up in the last decade with the development of rapid inversion techniques in seismology. Among these techniques, the W-phase source inversion method quickly provides point source parameters of large earthquakes using very long period seismic waves recorded at teleseismic distances. Although the W-phase method was initially developed to work at global scale (within 20 to 30 min after the origin time), faster results can be obtained when seismological data are available at regional distances (i.e., Δ ≤ 12°). In this study, we assess the use and reliability of regional W-phase source estimates in China and neighboring areas. Our implementation uses broadband records from the Chinese network supplemented by global seismological stations installed in the region. Using this data set and minor modifications to the W-phase algorithm, we show that reliable solutions can be retrieved automatically within 4 to 7 min after the earthquake origin time. Moreover, the method yields stable results down to Mw = 5.0 events, which is well below the size of earthquakes that are rapidly characterized using W-phase inversions at teleseismic distances.
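The inversion at the heart of such methods is linear: for a fixed location and centroid time, long-period waveforms are linear in the six moment-tensor elements, so a least-squares solve recovers them. A generic sketch with a random stand-in for the Green's functions, not real Earth responses, and purely illustrative numbers:

```python
import numpy as np

# Toy linear point-source inversion: displacement d = G m is solved for
# the moment-tensor elements m by least squares.
rng = np.random.default_rng(42)
n_samples, n_elems = 600, 6                    # waveform samples, MT elements
G = rng.normal(size=(n_samples, n_elems))      # stand-in "Green's functions"
m_true = np.array([1.0, -0.5, -0.5, 0.3, 0.1, -0.2])  # hypothetical source
d = G @ m_true + rng.normal(scale=0.01, size=n_samples)  # noisy "data"
m_est, *_ = np.linalg.lstsq(G, d, rcond=None)  # recovered source parameters
```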

  18. The FIRST Survey: Faint Images of the Radio Sky at Twenty Centimeters

    NASA Astrophysics Data System (ADS)

    Becker, Robert H.; White, Richard L.; Helfand, David J.

    1995-09-01

The FIRST survey to produce Faint Images of the Radio Sky at Twenty centimeters is now underway using the NRAO Very Large Array. We describe here the scientific motivation for a large-area sky survey at radio frequencies which has a sensitivity and angular resolution comparable to the Palomar Observatory Sky Survey, and we recount the history that led to the current survey project. The technical design of the survey is covered in detail, including a description and justification of the grid pattern chosen, the rationale behind the integration time and angular resolution selected, and a summary of the other considerations which informed our planning for the project. A comprehensive description of the automated data analysis pipeline we have developed is presented. We also report here the results of the first year of FIRST observations. A total of 144 hr of time in 1993 April and May was used for a variety of tests, as well as to cover an initial strip of the survey extending between 07h 15m and 16h 30m in a 2°.8 wide declination zone passing through the local zenith (28.2° < δ < 31.0°). A total of 2153 individual pointings yielded an image database containing 1039 merged images 46'.5 × 34'.5 in extent with 1".8 pixels and a typical rms of 0.13 mJy. A catalog derived from this 300 deg2 region contains 28,000 radio sources. We have performed extensive tests on the images and source list in order to establish the photometric and astrometric accuracy of these data products. We find systematic astrometric errors of < 0".05; individual sources down to the 1 mJy survey flux density threshold have 90% confidence error circles with radii of < 1". CLEAN bias introduces a systematic underestimate of point-source flux densities of ~0.25 mJy; the bias is more severe for extended sources. Nonetheless, a comparison with a published deep survey field demonstrates that we successfully detect 39/49 sources with integrated flux densities greater than 0.75 mJy, including 19 of 20 sources above 2.0 mJy; the sources not detected are known to be very extended and so have surface brightnesses well below our threshold. With 480 hr of observing time committed for each of the next three B-configuration periods, FIRST will complete nearly one-half of its goal of covering the 10,000 deg2 of the north Galactic cap scheduled for inclusion in the Sloan Digital Sky Survey. All of the FIRST data (raw visibilities, self-calibrated UV data sets, individual pointing maps, final merged images, source catalogs, and individual source images) are being placed in the public domain as soon as they are verified; all of the 1993 data are now available through the NRAO and/or the STScI archive. We conclude with a brief summary of the scientific significance of FIRST, which represents an improvement by a factor of 50 in both angular resolution and sensitivity over the best available large-area radio surveys.
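Two of the catalog-level numbers above lend themselves to small worked examples: adding back the ~0.25 mJy CLEAN bias for point sources, and converting a 90% error-circle radius into an equivalent Gaussian positional sigma. These are hedged sketches of standard conversions, not code from the survey pipeline.

```python
import math

def debias_flux(peak_mjy, clean_bias_mjy=0.25):
    """Add back the ~0.25 mJy systematic CLEAN-bias underestimate quoted
    for point sources; extended sources are biased more severely, so a
    constant correction is only appropriate for unresolved objects."""
    return peak_mjy + clean_bias_mjy

def sigma_from_r90(r90_arcsec):
    """1-D positional sigma implied by a 90% confidence error-circle
    radius, assuming circular 2-D Gaussian errors (Rayleigh radial
    distribution): 0.9 = 1 - exp(-r90^2 / (2 sigma^2))."""
    return r90_arcsec / math.sqrt(2.0 * math.log(10.0))
```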

  19. An analytically soluble problem in fully nonlinear statistical gravitational lensing

    NASA Technical Reports Server (NTRS)

    Schneider, P.

    1987-01-01

The amplification probability distribution p(I)dI for a point source behind a random star field which acts as the deflector exhibits an I^-3 behavior for large amplification, as can be shown from the universality of the lens equation near critical lines. In this paper it is shown that the amplitude of the I^-3 tail can be derived exactly for arbitrary mass distribution of the stars, surface mass density of stars and smoothly distributed matter, and large-scale shear. This is then compared with the corresponding linear result.
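The I^-3 tail can be checked numerically in the simplest special case: a single point lens with the source placed uniformly in the source plane, for which the cumulative counts N(>A) fall as A^-2, i.e. p(A) ∝ A^-3. This Monte Carlo sketch illustrates the universal tail only; it is not the paper's exact amplitude calculation for random star fields.

```python
import numpy as np

# With u the impact parameter in Einstein radii, a point lens amplifies by
# A(u) = (u^2 + 2) / (u * sqrt(u^2 + 4)); for small u, A ~ 1/u, and a
# uniform source distribution gives N(>A) ∝ u(A)^2 ∝ A^-2.
rng = np.random.default_rng(7)
u = 2.0 * np.sqrt(rng.uniform(size=500_000))   # uniform over a disk, radius 2
A = (u**2 + 2.0) / (u * np.sqrt(u**2 + 4.0))
tail = np.sort(A[A > 5.0])                     # keep only high amplifications
n_gt = np.arange(tail.size, 0, -1)             # cumulative counts N(>A)
cum_slope = np.polyfit(np.log10(tail), np.log10(n_gt), 1)[0]  # near -2
```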

  20. A programmable metasurface with dynamic polarization, scattering and focusing control

    NASA Astrophysics Data System (ADS)

    Yang, Huanhuan; Cao, Xiangyu; Yang, Fan; Gao, Jun; Xu, Shenheng; Li, Maokun; Chen, Xibi; Zhao, Yi; Zheng, Yuejun; Li, Sijia

    2016-10-01

Diverse electromagnetic (EM) responses of a programmable metasurface of relatively large scale have been investigated, where multiple functionalities are obtained on the same surface. Each unit cell in the metasurface is integrated with one PIN diode, and thus a binary coded phase is realized for a single polarization. Exploiting this anisotropic characteristic, reconfigurable polarization conversion is presented first. Then the dynamic scattering performance for two kinds of sources, i.e. a plane wave and a point source, is carefully elaborated. To tailor the scattering properties, a genetic algorithm, naturally suited to binary coding, is coupled with the scattering pattern analysis to optimize the coding matrix. In addition, an inverse fast Fourier transform (IFFT) technique is introduced to expedite the optimization process for a large metasurface. Since the coding control of each unit cell allows local and direct modulation of the EM wave, various EM phenomena including anomalous reflection, diffusion, beam steering and beam forming are successfully demonstrated by both simulations and experiments. It is worth pointing out that a real-time switch among these functionalities is also achieved by using a field-programmable gate array (FPGA). All the results suggest that the proposed programmable metasurface has great potential for future applications.
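The 1-bit coding-to-scattering-pattern relationship that the IFFT step exploits can be sketched with a plain 2-D FFT array factor; element pattern, spacing, and mutual coupling are ignored, and the 16 x 16 coding matrices below are illustrative.

```python
import numpy as np

def array_factor(coding, n_fft=64):
    """Far-field array-factor magnitude of a 1-bit coded metasurface:
    the 0/1 coding matrix maps to a 0/pi phase aperture, and a
    zero-padded 2-D FFT gives the angular scattering pattern."""
    aperture = np.exp(1j * np.pi * np.asarray(coding, dtype=float))
    return np.abs(np.fft.fftshift(np.fft.fft2(aperture, s=(n_fft, n_fft))))

rng = np.random.default_rng(0)
uniform_code = np.zeros((16, 16))                 # all-0 coding: specular beam
random_code = rng.integers(0, 2, size=(16, 16))   # random coding: diffusion
peak_uniform = array_factor(uniform_code).max()
peak_random = array_factor(random_code).max()
```

The random coding spreads energy over many directions, so its peak array factor sits far below the specular peak of the uniform coding: the scattering-reduction effect that the genetic algorithm then optimizes over coding matrices.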

  1. A programmable metasurface with dynamic polarization, scattering and focusing control

    PubMed Central

    Yang, Huanhuan; Cao, Xiangyu; Yang, Fan; Gao, Jun; Xu, Shenheng; Li, Maokun; Chen, Xibi; Zhao, Yi; Zheng, Yuejun; Li, Sijia

    2016-01-01

Diverse electromagnetic (EM) responses of a programmable metasurface of relatively large scale have been investigated, where multiple functionalities are obtained on the same surface. Each unit cell in the metasurface is integrated with one PIN diode, and thus a binary coded phase is realized for a single polarization. Exploiting this anisotropic characteristic, reconfigurable polarization conversion is presented first. Then the dynamic scattering performance for two kinds of sources, i.e. a plane wave and a point source, is carefully elaborated. To tailor the scattering properties, a genetic algorithm, naturally suited to binary coding, is coupled with the scattering pattern analysis to optimize the coding matrix. In addition, an inverse fast Fourier transform (IFFT) technique is introduced to expedite the optimization process for a large metasurface. Since the coding control of each unit cell allows local and direct modulation of the EM wave, various EM phenomena including anomalous reflection, diffusion, beam steering and beam forming are successfully demonstrated by both simulations and experiments. It is worth pointing out that a real-time switch among these functionalities is also achieved by using a field-programmable gate array (FPGA). All the results suggest that the proposed programmable metasurface has great potential for future applications. PMID:27774997

  2. A programmable metasurface with dynamic polarization, scattering and focusing control.

    PubMed

    Yang, Huanhuan; Cao, Xiangyu; Yang, Fan; Gao, Jun; Xu, Shenheng; Li, Maokun; Chen, Xibi; Zhao, Yi; Zheng, Yuejun; Li, Sijia

    2016-10-24

Diverse electromagnetic (EM) responses of a programmable metasurface of relatively large scale have been investigated, where multiple functionalities are obtained on the same surface. Each unit cell in the metasurface is integrated with one PIN diode, and thus a binary coded phase is realized for a single polarization. Exploiting this anisotropic characteristic, reconfigurable polarization conversion is presented first. Then the dynamic scattering performance for two kinds of sources, i.e. a plane wave and a point source, is carefully elaborated. To tailor the scattering properties, a genetic algorithm, naturally suited to binary coding, is coupled with the scattering pattern analysis to optimize the coding matrix. In addition, an inverse fast Fourier transform (IFFT) technique is introduced to expedite the optimization process for a large metasurface. Since the coding control of each unit cell allows local and direct modulation of the EM wave, various EM phenomena including anomalous reflection, diffusion, beam steering and beam forming are successfully demonstrated by both simulations and experiments. It is worth pointing out that a real-time switch among these functionalities is also achieved by using a field-programmable gate array (FPGA). All the results suggest that the proposed programmable metasurface has great potential for future applications.

  3. A Method for Identifying Pollution Sources of Heavy Metals and PAH for a Risk-Based Management of a Mediterranean Harbour

    PubMed Central

    Moranda, Arianna

    2017-01-01

A procedure for assessing harbour pollution by heavy metals and PAH and identifying the possible sources of contamination is proposed. The procedure is based on a ratio-matching method applied to the results of principal component analysis (PCA), and it allows discrimination between point and nonpoint sources. The approach can be adopted when many sources of pollution, both internal and outside but close to the harbour, can contribute within a very narrow coastal ecosystem, and it was used to identify the possible point sources of contamination in a Mediterranean harbour (Port of Vado, Savona, Italy). 235 sediment samples were collected at 81 sampling points during four monitoring campaigns, and 28 chemicals were searched for within the collected samples. PCA of the total samples allowed the assessment of 8 main possible point sources, while the refining ratio-matching identified 1 sampling point as a possible PAH source, 2 sampling points as Cd point sources, and 3 sampling points as C > 12 point sources. By map analysis it was possible to identify two internal sources of pollution directly related to terminal activities. This study is the continuation of a previous work aimed at assessing Savona-Vado Harbour pollution levels, and it suggests strategies to regulate the harbour activities. PMID:29270328

  4. A Method for Identifying Pollution Sources of Heavy Metals and PAH for a Risk-Based Management of a Mediterranean Harbour.

    PubMed

    Paladino, Ombretta; Moranda, Arianna; Seyedsalehi, Mahdi

    2017-01-01

A procedure for assessing harbour pollution by heavy metals and PAH and identifying the possible sources of contamination is proposed. The procedure is based on a ratio-matching method applied to the results of principal component analysis (PCA), and it allows discrimination between point and nonpoint sources. The approach can be adopted when many sources of pollution, both internal and outside but close to the harbour, can contribute within a very narrow coastal ecosystem, and it was used to identify the possible point sources of contamination in a Mediterranean harbour (Port of Vado, Savona, Italy). 235 sediment samples were collected at 81 sampling points during four monitoring campaigns, and 28 chemicals were searched for within the collected samples. PCA of the total samples allowed the assessment of 8 main possible point sources, while the refining ratio-matching identified 1 sampling point as a possible PAH source, 2 sampling points as Cd point sources, and 3 sampling points as C > 12 point sources. By map analysis it was possible to identify two internal sources of pollution directly related to terminal activities. This study is the continuation of a previous work aimed at assessing Savona-Vado Harbour pollution levels, and it suggests strategies to regulate the harbour activities.

  5. Combinational concentration gradient confinement through stagnation flow.

    PubMed

    Alicia, Toh G G; Yang, Chun; Wang, Zhiping; Nguyen, Nam-Trung

    2016-01-21

Concentration gradient generation in microfluidics is typically constrained by two conflicting mass transport requirements: short characteristic times (τ) allow precise temporal control of concentration gradients, but only at the expense of high flow rates and hence high flow shear stresses (σ). To decouple these limitations, here we propose the use of stagnation flows to confine concentration gradients within the large velocity gradients that surround the stagnation point. We developed a modified cross-slot (MCS) device capable of feeding binary and combinational concentration sources in stagnation flows. We show that across the velocity well, source-sink pairs can form permanent concentration gradients. As source-sink concentration pairs are continuously supplied to the MCS, a permanently stable concentration gradient can be generated. Tuning the flow rates directly controls the velocity gradients, and hence the stagnation point location, allowing the confined concentration gradient to be focused. In addition, the flow rate ratio within the MCS rapidly controls (τ ∼ 50 ms) the location of the stagnation point and the confined combinational concentration gradients at low flow shear (0.2 Pa < σ < 2.9 Pa). The MCS device described in this study establishes a method for using stagnation flows to rapidly generate and position low-shear combinational concentration gradients for shear-sensitive biological assays.

  6. Assessment of ground-water contamination in the alluvial aquifer near West Point, Kentucky

    USGS Publications Warehouse

    Lyverse, M.A.; Unthank, M.D.

    1988-01-01

Well inventories, water level measurements, groundwater quality samples, surface geophysical techniques (specifically, electromagnetic techniques), and test drilling were used to investigate the extent and sources of groundwater contamination in the alluvial aquifer near West Point, Kentucky. This aquifer serves as the principal source of drinking water for over 50,000 people. Groundwater flow in the alluvial aquifer is generally unconfined and moves in a northerly direction toward the Ohio River. Two large public supply well fields and numerous domestic wells are located in this natural flow path. High concentrations of chloride in groundwater have resulted in the abandonment of several public supply wells in the West Point area. Chloride concentrations in water samples collected for this study were as high as 11,000 mg/L. Electromagnetic techniques indicated, and test drilling later confirmed, that the source of chloride in well waters was probably improperly plugged or unplugged, abandoned oil and gas exploration wells. The potential for chloride contamination of wells exists in the study area and is related to proximity to improperly abandoned oil and gas exploration wells and to gradients established by drawdowns associated with pumped wells. Periodic use of surface geophysical methods, in combination with added observation wells, could monitor significant changes in groundwater quality related to chloride contamination. (USGS)

  7. Application of an integrated Weather Research and Forecasting (WRF)/CALPUFF modeling tool for source apportionment of atmospheric pollutants for air quality management: A case study in the urban area of Benxi, China.

    PubMed

    Wu, Hao; Zhang, Yan; Yu, Qi; Ma, Weichun

    2018-04-01

    In this study, the authors endeavored to develop an effective framework for improving local urban air quality on meso-micro scales in cities in China that are experiencing rapid urbanization. Within this framework, the integrated Weather Research and Forecasting (WRF)/CALPUFF modeling system was applied to simulate the concentration distributions of typical pollutants (particulate matter with an aerodynamic diameter <10 μm [PM10], sulfur dioxide [SO2], and nitrogen oxides [NOx]) in the urban area of Benxi. Statistical analyses were performed to verify the credibility of this simulation, including the meteorological fields and concentration fields. The sources were then categorized using two different classification methods (the district-based and type-based methods), and the contributions to the pollutant concentrations from each source category were computed to provide a basis for appropriate control measures. The statistical indexes showed that CALMET had sufficient ability to predict the meteorological conditions, such as the wind fields and temperatures, which provided meteorological data for the subsequent CALPUFF run. The simulated concentrations from CALPUFF showed considerable agreement with the observed values but were generally underestimated. The spatial-temporal concentration pattern revealed that the maximum concentrations tended to appear in the urban centers and during the winter. In terms of their contributions to pollutant concentrations, the districts of Xihu, Pingshan, and Mingshan all affected the urban air quality to different degrees. According to the type-based classification, which categorized the pollution sources as belonging to the Bengang Group, large point sources, small point sources, and area sources, the source apportionment showed that the Bengang Group, the large point sources, and the area sources had considerable impacts on urban air quality. 
Finally, combined with the industrial characteristics, detailed control measures were proposed with which local policy makers could improve the urban air quality in Benxi. In summary, the results of this study show that this framework, built on the integrated WRF/CALPUFF system and the source apportionment of atmospheric pollutants, can credibly support effective improvement of urban air quality on meso-micro scales in Chinese cities. The modeling tool characterizes the meteorological fields, concentration fields, and source apportionment of pollutants in the target area, and the impacts of the classified sources, considered together with the industrial characteristics, can inform more effective control measures. The technical framework developed in this study, particularly the source apportionment, could provide data and technical support for policy makers assessing air pollution at the city scale in China and elsewhere.
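    The apportionment step described above reduces, at each receptor, to expressing each source category's modeled contribution as a share of the summed contribution. A minimal sketch, with hypothetical numbers rather than results from the study:

    ```python
    # Minimal sketch of type-based source apportionment: given modeled
    # contributions of each source category to the concentration at a
    # receptor (hypothetical numbers, not CALPUFF results from the study),
    # express each category as a percentage of the summed contribution.
    contributions_ugm3 = {           # assumed per-category outputs, ug/m3
        "Bengang Group": 18.0,
        "large point sources": 12.0,
        "small point sources": 4.0,
        "area sources": 16.0,
    }

    total = sum(contributions_ugm3.values())
    shares = {k: round(100.0 * v / total, 1)
              for k, v in contributions_ugm3.items()}
    print(shares)   # each category's percent share of the modeled total
    ```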

  8. Herschel Key Program Heritage: a Far-Infrared Source Catalog for the Magellanic Clouds

    NASA Astrophysics Data System (ADS)

    Seale, Jonathan P.; Meixner, Margaret; Sewiło, Marta; Babler, Brian; Engelbracht, Charles W.; Gordon, Karl; Hony, Sacha; Misselt, Karl; Montiel, Edward; Okumura, Koryo; Panuzzo, Pasquale; Roman-Duval, Julia; Sauvage, Marc; Boyer, Martha L.; Chen, C.-H. Rosie; Indebetouw, Remy; Matsuura, Mikako; Oliveira, Joana M.; Srinivasan, Sundar; van Loon, Jacco Th.; Whitney, Barbara; Woods, Paul M.

    2014-12-01

    Observations from the HERschel Inventory of the Agents of Galaxy Evolution (HERITAGE) have been used to identify dusty populations of sources in the Large and Small Magellanic Clouds (LMC and SMC). We conducted the study using the HERITAGE catalogs of point sources available from the Herschel Science Center from both the Photodetector Array Camera and Spectrometer (PACS; 100 and 160 μm) and Spectral and Photometric Imaging Receiver (SPIRE; 250, 350, and 500 μm) cameras. These catalogs are matched to each other to create a Herschel band-merged catalog and then further matched to archival Spitzer IRAC and MIPS catalogs from the Spitzer Surveying the Agents of Galaxy Evolution (SAGE) and SAGE-SMC surveys to create single mid- to far-infrared (far-IR) point source catalogs that span the wavelength range from 3.6 to 500 μm. There are 35,322 unique sources in the LMC and 7503 in the SMC. To be bright in the FIR, a source must be very dusty, and so the sources in the HERITAGE catalogs represent the dustiest populations of sources. The brightest HERITAGE sources are dominated by young stellar objects (YSOs), and the dimmest by background galaxies. We identify the sources most likely to be background galaxies by first considering their morphology (distant galaxies are point-like at the resolution of Herschel) and then comparing the flux distribution to that of the Herschel Astrophysical Terahertz Large Area Survey (ATLAS) survey of galaxies. We find a total of 9745 background galaxy candidates in the LMC HERITAGE images and 5111 in the SMC images, in agreement with the number predicted by extrapolating from the ATLAS flux distribution. The majority of the Magellanic Cloud-residing sources are either very young, embedded forming stars or dusty clumps of the interstellar medium. Using the presence of 24 μm emission as a tracer of star formation, we identify 3518 YSO candidates in the LMC and 663 in the SMC. 
There are far fewer far-IR bright YSOs in the SMC than the LMC due to both the SMC's smaller size and its lower dust content. The YSO candidate lists may be contaminated at low flux levels by background galaxies, and so we differentiate between sources with a high (“probable”) and moderate (“possible”) likelihood of being a YSO. There are 2493/425 probable YSO candidates in the LMC/SMC. Approximately 73% of the Herschel YSO candidates are newly identified in the LMC, and 35% in the SMC. We further identify a small population of dusty objects in the late stages of stellar evolution, including extreme asymptotic giant branch (AGB) and post-AGB stars, planetary nebulae, and supernova remnants. These populations are identified by matching the HERITAGE catalogs to lists of previously identified objects in the literature. Approximately half of the LMC sources and one quarter of the SMC sources are too faint to obtain accurate far-IR photometry and are unclassified.
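    The band-merging described above amounts to associating detections across catalogs by position. A simplified sketch, using nearest-neighbor matching within a fixed radius (the real merging uses band-dependent criteria; the catalogs and 3-arcsec radius here are illustrative assumptions):

    ```python
    import math

    # Simplified sketch of band-merging two point-source catalogs by
    # nearest-neighbor matching within a fixed radius. Positions are
    # hypothetical (RA, Dec) pairs in degrees.
    def ang_sep_arcsec(ra1, dec1, ra2, dec2):
        """Small-angle separation in arcsec between two sky positions."""
        dra = (ra1 - ra2) * math.cos(math.radians(0.5 * (dec1 + dec2)))
        ddec = dec1 - dec2
        return 3600.0 * math.hypot(dra, ddec)

    def match(cat_a, cat_b, radius_arcsec=3.0):
        """Pairs (i, j): each cat_a source matched to its nearest cat_b
        source when that neighbor lies within the matching radius."""
        pairs = []
        for i, (ra, dec) in enumerate(cat_a):
            best = min(range(len(cat_b)),
                       key=lambda j: ang_sep_arcsec(ra, dec, *cat_b[j]))
            if ang_sep_arcsec(ra, dec, *cat_b[best]) <= radius_arcsec:
                pairs.append((i, best))
        return pairs

    pacs  = [(80.0000, -69.0000), (80.0100, -69.0100)]   # hypothetical
    spire = [(80.0002, -69.0001), (81.0000, -68.0000)]
    print(match(pacs, spire))   # only the first source finds a counterpart
    ```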

  9. Open-source point-of-care electronic medical records for use in resource-limited settings: systematic review and questionnaire surveys

    PubMed Central

    Bru, Juan; Berger, Christopher A

    2012-01-01

    Background Point-of-care electronic medical records (EMRs) are a key tool to manage chronic illness. Several EMRs have been developed for use in treating HIV and tuberculosis, but their applicability to primary care, technical requirements and clinical functionalities are largely unknown. Objectives This study aimed to address the needs of clinicians from resource-limited settings without reliable internet access who are considering adopting an open-source EMR. Study eligibility criteria Open-source point-of-care EMRs suitable for use in areas without reliable internet access. Study appraisal and synthesis methods The authors conducted a comprehensive search of all open-source EMRs suitable for sites without reliable internet access. The authors surveyed clinician users and technical implementers from a single site and technical developers of each software product. The authors evaluated availability, cost and technical requirements. Results The hardware and software for all six systems are readily available, but the systems vary considerably in proprietary components, installation requirements and customisability. Limitations This study relied solely on self-report from informants who developed and who actively use the included products. Conclusions and implications of key findings Clinical functionalities vary greatly among the systems, and none of the systems yet meets minimum requirements for effective implementation in a primary care resource-limited setting. The safe prescribing of medications is a particular concern with current tools. The dearth of fully functional EMR systems indicates a need for a greater emphasis by global funding agencies to move beyond disease-specific EMR systems and develop a universal open-source health informatics platform. PMID:22763661

  10. High Attenuation Rate for Shallow, Small Earthquakes in Japan

    NASA Astrophysics Data System (ADS)

    Si, Hongjun; Koketsu, Kazuki; Miyake, Hiroe

    2017-09-01

    We compared the attenuation characteristics of peak ground accelerations (PGAs) and velocities (PGVs) of strong motion from shallow, small earthquakes that occurred in Japan with those predicted by the equations of Si and Midorikawa (J Struct Constr Eng 523:63-70, 1999). The observed PGAs and PGVs at stations far from the seismic source decayed more rapidly than the predicted ones. The same tendencies have been reported for deep, moderate, and large earthquakes, but not for shallow, moderate, and large earthquakes. This indicates that the peak values of ground motion from shallow, small earthquakes attenuate more steeply than those from shallow, moderate or large earthquakes. To investigate the reason for this difference, we numerically simulated strong ground motion for point sources of Mw 4 and Mw 6 earthquakes using a 2D finite difference method. The analyses of the synthetic waveforms suggested that the above differences are caused by surface waves, which are predominant at stations far from the seismic source for shallow, moderate earthquakes but not for shallow, small earthquakes. Thus, although loss due to reflection at the boundaries of the discontinuous Earth structure occurs in all shallow earthquakes, the apparent attenuation rate for a moderate or large earthquake is essentially the same as that of body waves propagating in a homogeneous medium due to the dominance of surface waves.
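    The role of surface waves can be seen from textbook geometric-spreading scalings (an illustration of the mechanism, not the paper's 2D finite-difference simulation): body waves spread as 1/r, surface waves as 1/sqrt(r), so far-field amplitudes dominated by surface waves decay far more slowly.

    ```python
    import math

    # Textbook geometric spreading: body waves ~ 1/r, surface waves
    # ~ 1/sqrt(r). If surface waves dominate far-field amplitudes for
    # moderate events but are weak for small shallow events, the small
    # events' peak motions fall off faster with distance, consistent
    # with the observations summarized in the abstract.
    def body_wave_amp(r_km, a0=1.0, r0=1.0):
        return a0 * (r0 / r_km)

    def surface_wave_amp(r_km, a0=1.0, r0=1.0):
        return a0 * math.sqrt(r0 / r_km)

    for r in (10.0, 100.0):
        print(r, body_wave_amp(r), surface_wave_amp(r))
    # At 100 km the body-wave term has decayed 100x but the surface-wave
    # term only 10x: a factor-of-10 difference in apparent attenuation.
    ```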

  11. Determinants of Wealth Fluctuation: Changes in Hard-To-Measure Economic Variables in a Panel Study

    PubMed Central

    Pfeffer, Fabian T.; Griffin, Jamie

    2017-01-01

    Measuring fluctuation in families’ economic conditions is the raison d’être of household panel studies. Accordingly, a particularly challenging critique is that extreme fluctuation in measured economic characteristics might indicate compounding measurement error rather than actual changes in families’ economic wellbeing. In this article, we address this claim by moving beyond the assumption that particularly large fluctuation in economic conditions might be too large to be realistic. Instead, we examine predictors of large fluctuation, capturing sources related to actual socio-economic changes as well as potential sources of measurement error. Using the Panel Study of Income Dynamics, we study between-wave changes in a dimension of economic wellbeing that is especially hard to measure, namely, net worth as an indicator of total family wealth. Our results demonstrate that even very large between-wave changes in net worth can be attributed to actual socio-economic and demographic processes. We do, however, also identify a potential source of measurement error that contributes to large wealth fluctuation, namely, the treatment of incomplete information, presenting a pervasive challenge for any longitudinal survey that includes questions on economic assets. Our results point to ways for improving wealth variables both in the data collection process (e.g., by measuring active savings) and in data processing (e.g., by improving imputation algorithms). PMID:28316752

  12. NON-POINT SOURCE POLLUTION

    EPA Science Inventory

    Non-point source pollution is a diffuse source that is difficult to measure and is highly variable due to different rain patterns and other climatic conditions. In many areas, however, non-point source pollution is the greatest source of water quality degradation. Presently, stat...

  13. Gibbon travel paths are goal oriented.

    PubMed

    Asensio, Norberto; Brockelman, Warren Y; Malaivijitnond, Suchinda; Reichard, Ulrich H

    2011-05-01

    Remembering locations of food resources is critical for animal survival. Gibbons are territorial primates which regularly travel through small and stable home ranges in search of preferred, limited and patchily distributed resources (primarily ripe fruit). They are predicted to profit from an ability to memorize the spatial characteristics of their home range and may increase their foraging efficiency by using a 'cognitive map' either with Euclidean or with topological properties. We collected ranging and feeding data from 11 gibbon groups (Hylobates lar) to test their navigation skills and to better understand gibbons' 'spatial intelligence'. We calculated the locations at which significant travel direction changes occurred using the change-point direction test and found that these locations primarily coincided with preferred fruit sources. Within the limits of biologically realistic visibility distances observed, gibbon travel paths were more efficient in detecting known preferred food sources than a heuristic travel model based on straight travel paths in random directions. Because consecutive travel change-points were far from the gibbons' sight, planned movement between preferred food sources was the most parsimonious explanation for the observed travel patterns. Gibbon travel appears to connect preferred food sources as expected under the assumption of a good mental representation of the most relevant sources in a large-scale space.
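    As a simplified stand-in for the change-point analysis described above (the study uses the statistical change-point direction test; this sketch merely flags path vertices where the heading turns by more than a threshold, on hypothetical coordinates):

    ```python
    import math

    # Flag vertices of a travel path where the direction of travel changes
    # by more than a threshold angle. Coordinates are hypothetical (x, y)
    # positions in metres.
    def heading(p, q):
        return math.atan2(q[1] - p[1], q[0] - p[0])

    def change_points(path, thresh_deg=60.0):
        """Indices of vertices where travel direction turns > thresh_deg."""
        out = []
        for i in range(1, len(path) - 1):
            turn = heading(path[i], path[i + 1]) - heading(path[i - 1], path[i])
            turn = (turn + math.pi) % (2 * math.pi) - math.pi  # wrap to [-pi, pi]
            if abs(math.degrees(turn)) > thresh_deg:
                out.append(i)
        return out

    # Straight travel, then a sharp turn at the third vertex (e.g., at a
    # preferred fruit source):
    path = [(0, 0), (100, 0), (200, 0), (200, 100), (200, 200)]
    print(change_points(path))   # -> [2]
    ```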

  14. Multiscale Spatial Modeling of Human Exposure from Local Sources to Global Intake.

    PubMed

    Wannaz, Cedric; Fantke, Peter; Jolliet, Olivier

    2018-01-16

    Exposure studies, used in human health risk and impact assessments of chemicals, are largely performed locally or regionally. It is usually not known how global impacts resulting from exposure to point source emissions compare to local impacts. To address this problem, we introduce Pangea, an innovative multiscale, spatial multimedia fate and exposure assessment model. We study local to global population exposure associated with emissions from 126 point sources matching locations of waste-to-energy plants across France. Results for three chemicals with distinct physicochemical properties are expressed as the evolution of the population intake fraction through inhalation and ingestion as a function of the distance from sources. For substances with atmospheric half-lives longer than a week, less than 20% of the global population intake through inhalation (median of 126 emission scenarios) can occur within a 100 km radius from the source. This suggests that, by neglecting distant low-level exposure, local assessments might account for only a fraction of global cumulative intake. We also study ∼10 000 emission locations covering France more densely to determine, per chemical and exposure route, which locations minimize global intake. Maps of global intake fractions associated with each emission location show clear patterns associated with population and agricultural production densities.
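    The "less than 20% within 100 km" finding can be illustrated with a toy cumulative-intake calculation; the exponential kernel and length scale below are assumptions for illustration, not Pangea outputs.

    ```python
    import math

    # Toy model: population intake per km of distance from a point source,
    # decaying exponentially with an assumed length scale. For a long
    # atmospheric half-life (slow decay with distance), most of the
    # cumulative intake accrues far beyond a 100 km radius.
    def intake_density(r_km, decay_km):
        return math.exp(-r_km / decay_km)

    def fraction_within(radius_km, decay_km, r_max_km=5000.0, dr=1.0):
        """Fraction of total intake accrued within radius_km of the source."""
        grid = [i * dr for i in range(int(r_max_km / dr))]
        total = sum(intake_density(r, decay_km) for r in grid)
        near = sum(intake_density(r, decay_km) for r in grid if r <= radius_km)
        return near / total

    # Assumed 700 km decay length: only ~13% of intake lies within 100 km.
    print(round(fraction_within(100.0, decay_km=700.0), 2))
    ```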

  15. Strategies for satellite-based monitoring of CO2 from distributed area and point sources

    NASA Astrophysics Data System (ADS)

    Schwandner, Florian M.; Miller, Charles E.; Duren, Riley M.; Natraj, Vijay; Eldering, Annmarie; Gunson, Michael R.; Crisp, David

    2014-05-01

    Atmospheric CO2 budgets are controlled by the strengths, as well as the spatial and temporal variabilities of CO2 sources and sinks. Natural CO2 sources and sinks are dominated by the vast areas of the oceans and the terrestrial biosphere. In contrast, anthropogenic and geogenic CO2 sources are dominated by distributed area and point sources, which may constitute as much as 70% of anthropogenic (e.g., Duren & Miller, 2012), and over 80% of geogenic emissions (Burton et al., 2013). Comprehensive assessments of CO2 budgets necessitate robust and highly accurate satellite remote sensing strategies that address the competing and often conflicting requirements for sampling over disparate space and time scales. Spatial variability: The spatial distribution of anthropogenic sources is dominated by patterns of production, storage, transport and use. In contrast, geogenic variability is almost entirely controlled by endogenic geological processes, except where surface gas permeability is modulated by soil moisture. Satellite remote sensing solutions will thus have to vary greatly in spatial coverage and resolution to address distributed area sources and point sources alike. Temporal variability: While biogenic sources are dominated by diurnal and seasonal patterns, anthropogenic sources fluctuate over a greater variety of time scales from diurnal, weekly and seasonal cycles, driven by both economic and climatic factors. Geogenic sources typically vary in time scales of days to months (geogenic sources sensu stricto are not fossil fuels but volcanoes, hydrothermal and metamorphic sources). Current ground-based monitoring networks for anthropogenic and geogenic sources record data on minute- to weekly temporal scales. Satellite remote sensing solutions would have to capture temporal variability through revisit frequency or point-and-stare strategies. Space-based remote sensing offers the potential of global coverage by a single sensor. 
However, no single combination of orbit and sensor provides the full range of temporal sampling needed to characterize distributed area and point source emissions. For instance, point source emission patterns will vary with source strength, wind speed and direction. Because wind speed, direction and other environmental factors change rapidly, short term variabilities should be sampled. For detailed target selection and pointing verification, important lessons have already been learned and strategies devised during JAXA's GOSAT mission (Schwandner et al, 2013). The fact that competing spatial and temporal requirements drive satellite remote sensing sampling strategies dictates a systematic, multi-factor consideration of potential solutions. Factors to consider include vista, revisit frequency, integration times, spatial resolution, and spatial coverage. No single satellite-based remote sensing solution can address this problem for all scales. It is therefore of paramount importance for the international community to develop and maintain a constellation of atmospheric CO2 monitoring satellites that complement each other in their temporal and spatial observation capabilities: Polar sun-synchronous orbits (fixed local solar time, no diurnal information) with agile pointing allow global sampling of known distributed area and point sources like megacities, power plants and volcanoes with daily to weekly temporal revisits and moderate to high spatial resolution. Extensive targeting of distributed area and point sources comes at the expense of reduced mapping or spatial coverage, and the important contextual information that comes with large-scale contiguous spatial sampling. Polar sun-synchronous orbits with push-broom swath-mapping but limited pointing agility may allow mapping of individual source plumes and their spatial variability, but will depend on fortuitous environmental conditions during the observing period. 
These solutions typically have longer times between revisits, limiting their ability to resolve temporal variations. Geostationary and non-sun-synchronous low-Earth orbits (precessing local solar time, diurnal information possible) with agile pointing have the potential to provide comprehensive mapping of distributed area sources such as megacities with longer stare times and multiple revisits per day, at the expense of global access and spatial coverage. An ad hoc CO2 remote sensing constellation is emerging. NASA's OCO-2 satellite (launch July 2014) joins JAXA's GOSAT satellite in orbit. These will be followed by GOSAT-2 and NASA's OCO-3 on the International Space Station as early as 2017. Additional polar orbiting satellites (e.g., CarbonSat, under consideration at ESA) and geostationary platforms may also become available. However, the individual assets have been designed with independent science goals and requirements, and limited consideration of coordinated observing strategies. Every effort must be made to maximize the science return from this constellation. We discuss the opportunities to exploit the complementary spatial and temporal coverage provided by these assets as well as the crucial gaps in the capabilities of this constellation. References Burton, M.R., Sawyer, G.M., and Granieri, D. (2013). Deep carbon emissions from volcanoes. Rev. Mineral. Geochem. 75: 323-354. Duren, R.M., Miller, C.E. (2012). Measuring the carbon emissions of megacities. Nature Climate Change 2, 560-562. Schwandner, F.M., Oda, T., Duren, R., Carn, S.A., Maksyutov, S., Crisp, D., Miller, C.E. (2013). Scientific Opportunities from Target-Mode Capabilities of GOSAT-2. NASA Jet Propulsion Laboratory, California Institute of Technology, Pasadena CA, White Paper, 6p., March 2013.

  16. SfM with MRFs: discrete-continuous optimization for large-scale structure from motion.

    PubMed

    Crandall, David J; Owens, Andrew; Snavely, Noah; Huttenlocher, Daniel P

    2013-12-01

    Recent work in structure from motion (SfM) has built 3D models from large collections of images downloaded from the Internet. Many approaches to this problem use incremental algorithms that solve progressively larger bundle adjustment problems. These incremental techniques scale poorly as the image collection grows, and can suffer from drift or local minima. We present an alternative framework for SfM based on finding a coarse initial solution using hybrid discrete-continuous optimization and then improving that solution using bundle adjustment. The initial optimization step uses a discrete Markov random field (MRF) formulation, coupled with a continuous Levenberg-Marquardt refinement. The formulation naturally incorporates various sources of information about both the cameras and points, including noisy geotags and vanishing point (VP) estimates. We test our method on several large-scale photo collections, including one with measured camera positions, and show that it produces models that are similar to or better than those produced by incremental bundle adjustment, but more robustly and in a fraction of the time.
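    The hybrid strategy described above can be sketched on a toy 1-D analogue (not the paper's actual SfM formulation): recover camera positions from noisy pairwise offsets by first doing exact brute-force inference over a coarse label grid (standing in for the discrete MRF step), then refining continuously (standing in for Levenberg-Marquardt / bundle adjustment).

    ```python
    import itertools

    # Toy discrete-continuous optimization for three "camera" positions on
    # a line, given noisy pairwise offsets x[j] - x[i].
    offsets = {(0, 1): 1.9, (1, 2): 3.2}      # measured offsets, with noise
    labels = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]   # coarse candidate positions

    def energy(x):
        return sum((x[j] - x[i] - d) ** 2 for (i, j), d in offsets.items())

    # Stage 1: exhaustive search over the label grid, anchoring x[0] = 0
    # (a brute-force stand-in for MRF inference on this tiny chain).
    best = min(itertools.product([0.0], labels, labels), key=energy)
    print(best)                                # coarse initial solution

    # Stage 2: gradient descent on the continuous least-squares energy,
    # starting from the coarse solution.
    x = list(best)
    for _ in range(200):
        grad = [0.0, 0.0, 0.0]
        for (i, j), d in offsets.items():
            r = x[j] - x[i] - d
            grad[j] += 2.0 * r
            grad[i] -= 2.0 * r
        for k in (1, 2):                       # camera 0 stays anchored
            x[k] -= 0.1 * grad[k]
    print([round(v, 3) for v in x])            # -> [0.0, 1.9, 5.1]
    ```

    The design point mirrors the paper's: the discrete stage cheaply lands in the right basin, after which local continuous refinement converges without drifting to a poor local minimum.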

  17. Use of electronic healthcare records in large-scale simple randomized trials at the point of care for the documentation of value-based medicine.

    PubMed

    van Staa, T-P; Klungel, O; Smeeth, L

    2014-06-01

    A solid foundation of evidence of the effects of an intervention is a prerequisite of evidence-based medicine. The best source of such evidence is considered to be randomized trials, which are able to avoid confounding. However, they may not always estimate effectiveness in clinical practice. Databases that collate anonymized electronic health records (EHRs) from different clinical centres have been widely used for many years in observational studies. Randomized point-of-care trials have been initiated recently to recruit and follow patients using the data from EHR databases. In this review, we describe how EHR databases can be used for conducting large-scale simple trials and discuss the advantages and disadvantages of their use. © 2014 The Association for the Publication of the Journal of Internal Medicine.

  18. Research on starlight hardware-in-the-loop simulator

    NASA Astrophysics Data System (ADS)

    Zhang, Ying; Gao, Yang; Qu, Huiyang; Liu, Dongfang; Du, Huijie; Lei, Jie

    2016-10-01

    Starlight navigation is considered to be one of the most important methods for spacecraft navigation. A starlight simulation system is a high-precision system with a large field of view, designed to test starlight navigation sensor performance on the ground. A complete hardware-in-the-loop simulation of the system has been built. The starlight simulator is made up of a light source, a light source controller, a light filter, an LCD, a collimator, and a control computer. The LCD is the key display component of the system and is installed at the focal point of the collimator. Because the LCD cannot emit light itself, the light source and light source power controller are specially designed to supply the brightness demanded by the LCD. The light filter is designed to provide the dark background that is also needed in the simulation.
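    Placing the LCD at the collimator's focal point means each displayed pixel is collimated into a beam whose angular size is roughly pixel_pitch / focal_length, which sets the simulator's angular resolution. A worked example with assumed numbers (not specifications from the study):

    ```python
    import math

    # Angular size of one LCD pixel after collimation: theta = p / f
    # (small-angle approximation). The pitch and focal length below are
    # assumed values for illustration only.
    pixel_pitch_m = 8.0e-6     # assumed LCD pixel pitch, 8 um
    focal_len_m = 0.55         # assumed collimator focal length, 550 mm

    theta_rad = pixel_pitch_m / focal_len_m
    theta_arcsec = math.degrees(theta_rad) * 3600.0
    print(round(theta_arcsec, 1))   # ~3.0 arcsec per pixel
    ```

    A longer focal length or finer pixel pitch would improve the simulated star-position resolution, at the cost of field of view or display brightness.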

  19. Anthropogenic Methane Emissions in California's San Joaquin Valley: Characterizing Large Point Source Emitters

    NASA Astrophysics Data System (ADS)

    Hopkins, F. M.; Duren, R. M.; Miller, C. E.; Aubrey, A. D.; Falk, M.; Holland, L.; Hook, S. J.; Hulley, G. C.; Johnson, W. R.; Kuai, L.; Kuwayama, T.; Lin, J. C.; Thorpe, A. K.; Worden, J. R.; Lauvaux, T.; Jeong, S.; Fischer, M. L.

    2015-12-01

    Methane is an important atmospheric pollutant that contributes to global warming and tropospheric ozone production. Methane mitigation could reduce near-term climate change and improve air quality, but is hindered by a lack of knowledge of anthropogenic methane sources. Recent work has shown that methane emissions are not evenly distributed in space, or across emission sources, suggesting that a large fraction of anthropogenic methane comes from a few "super-emitters." We studied the distribution of super-emitters in California's southern San Joaquin Valley, where elevated levels of atmospheric CH4 have also been observed from space. Here, we define super-emitters as methane plumes that could be reliably detected (i.e., plume observed more than once in the same location) under varying wind conditions by airborne thermal infrared remote sensing. The detection limit for this technique was determined to be 4.5 kg CH4 h-1 by a controlled release experiment, corresponding to column methane enhancement at the point of emission greater than 20% above local background levels. We surveyed a major oil production field, and an area with a high concentration of large dairies, using a variety of airborne and ground-based measurements. Repeated airborne surveys (n=4) with the Hyperspectral Thermal Emission Spectrometer revealed 28 persistent methane plumes emanating from oil field infrastructure, including tanks, wells, and processing facilities. The likelihood that a given source type was a super-emitter varied from roughly 1/3 for processing facilities to 1/3000 for oil wells. Eleven persistent plumes were detected in the dairy area, and all were associated with wet manure management. The majority (11/14) of manure lagoons in the study area were super-emitters. Compared with a California methane emissions inventory for the surveyed areas, we estimate that super-emitters comprise a minimum of 9% of inventoried dairy emissions, and 13% of inventoried oil emissions in this region.

  20. Improving the sensitivity of gamma-ray telescopes to dark matter annihilation in dwarf spheroidal galaxies

    DOE PAGES

    Carlson, Eric; Hooper, Dan; Linden, Tim

    2015-03-01

    The Fermi-LAT Collaboration has studied the gamma-ray emission from a stacked population of dwarf spheroidal galaxies and used this information to set constraints on the dark matter annihilation cross section. Interestingly, their analysis uncovered an excess with a test statistic (TS) of 8.7. If interpreted naively, this constitutes a 2.95σ local excess (p-value = 0.003) relative to the expectations of their background model. In order to further test this interpretation, the Fermi-LAT team studied a large number of blank sky locations and found TS > 8.7 excesses to be more common than predicted by their background model, decreasing the significance of their dwarf excess to 2.2σ (p-value = 0.027). We argue that these TS > 8.7 blank sky locations are largely the result of unresolved blazars, radio galaxies, and star-forming galaxies, and show that multiwavelength information can be used to reduce the degree to which such sources contaminate the otherwise blank sky. In particular, we show that masking regions of the sky that lie within 1° of sources contained in the BZCAT or CRATES catalogs reduces the fraction of blank sky locations with TS > 8.7 by more than a factor of 2. Taking such multiwavelength information into account can enable experiments such as Fermi to better characterize their backgrounds and increase their sensitivity to dark matter in dwarf galaxies, the most important of which remain largely uncontaminated by unresolved point sources. We also note that for the range of dark matter masses and annihilation cross sections currently being tested by studies of dwarf spheroidal galaxies, simulations predict that Fermi should be able to detect a significant number of dark matter subhalos. These subhalos constitute a population of subthreshold gamma-ray point sources and represent an irreducible background for searches for dark matter annihilation in dwarf galaxies.

  1. Eruptive Source Parameters from Near-Source Gravity Waves Induced by Large Vulcanian eruptions

    NASA Astrophysics Data System (ADS)

    Barfucci, Giulia; Ripepe, Maurizio; De Angelis, Silvio; Lacanna, Giorgio; Marchetti, Emanuele

    2016-04-01

    The sudden ejection of hot material from a volcanic vent perturbs the atmosphere, generating a broad spectrum of pressure oscillations from acoustic infrasound (<10 Hz) to gravity waves (<0.03 Hz). However, observations of gravity waves excited by volcanic eruptions are still rare, mostly limited to large sub-plinian eruptions and frequently made at large distances from the source (>100 km). Atmospheric gravity waves are induced by perturbations of the hydrostatic equilibrium of the atmosphere and propagate within a medium with internal density stratification. They are initiated by mechanisms that displace the atmosphere, such as the injection of a volcanic ash plume during an eruption. We use gravity waves to infer eruptive source parameters, such as mass eruption rate (MER) and duration of the eruption, which may be used as inputs in volcanic ash transport and dispersion models. We present the analysis of near-field observations (<7 km) of atmospheric gravity waves, with frequencies of 0.97 and 1.15 mHz, recorded by a pressure sensor network during two explosions in July and December 2008 at Soufrière Hills Volcano, Montserrat. We show that gravity waves at Soufrière Hills Volcano originate above the volcanic dome and propagate with apparent horizontal velocities of 8-10 m/s. Assuming a single mass injection point source model, we constrain the source location at ~3.5 km a.s.l., above the vent, a gas thrust duration of <140 s, and MERs of 2.6 and 5.4 x 10^7 kg/s for the two eruptive events. The source duration and MER derived by modeling gravity waves are fully compatible with other independent estimates from field observations. Our work strongly supports the use of gravity waves to model eruption source parameters; it can improve our ability to monitor volcanic eruptions at large distances and may have future application in assessing the relative magnitude of volcanic explosions.
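    A quick consistency check on the reported frequencies, using standard-atmosphere numbers (assumptions below, not values from the study): internal gravity waves propagate only below the Brunt-Väisälä (buoyancy) frequency, and the observed 0.97-1.15 mHz oscillations fall well under a typical tropospheric buoyancy cutoff.

    ```python
    import math

    # Brunt-Vaisala frequency N^2 = (g/T) * (dT/dz + g/cp), with assumed
    # standard-atmosphere values. Gravity waves require f < N / (2*pi).
    g = 9.81          # m/s^2
    cp = 1004.0       # J/(kg K), dry air
    T = 250.0         # K, assumed mean tropospheric temperature
    dTdz = -6.5e-3    # K/m, standard lapse rate

    N = math.sqrt((g / T) * (dTdz + g / cp))   # rad/s
    f_buoyancy_mhz = 1000.0 * N / (2.0 * math.pi)
    print(round(f_buoyancy_mhz, 2))            # buoyancy cutoff, ~1.8 mHz

    for f_obs in (0.97, 1.15):                 # observed frequencies, mHz
        assert f_obs < f_buoyancy_mhz          # consistent with gravity waves
    ```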

  2. Matching radio catalogues with realistic geometry: application to SWIRE and ATLAS

    NASA Astrophysics Data System (ADS)

    Fan, Dongwei; Budavári, Tamás; Norris, Ray P.; Hopkins, Andrew M.

    2015-08-01

    Cross-matching catalogues at different wavelengths is a difficult problem in astronomy, especially when the objects are not point-like. At radio wavelengths, an object can have several components corresponding, for example, to a core and lobes. Because not all radio detections correspond to visible or infrared sources, matching these catalogues can be challenging. Traditionally, this is done by eye to ensure quality, which does not scale to the large data volumes expected from the next generation of radio telescopes. We present a novel automated procedure, using Bayesian hypothesis testing, to achieve reliable associations by explicitly modelling a particular class of radio-source morphology. The new algorithm not only assesses the likelihood of an association between data at two different wavelengths, but also tries to assess whether different radio sources are physically associated, are double-lobed radio galaxies, or are just distinct nearby objects. Application to the Spitzer Wide-Area Infrared Extragalactic and Australia Telescope Large Area Survey CDF-S catalogues shows that this method performs well without human intervention.
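    The positional-coincidence core of such Bayesian matching can be sketched with the standard two-catalogue Bayes factor for circular Gaussian astrometric errors (the full method additionally models radio-source morphology such as core/lobe geometry, which is not sketched here; the separations and error values below are hypothetical):

    ```python
    import math

    ARCSEC = math.radians(1.0 / 3600.0)   # one arcsecond in radians

    # Standard flat-sky Bayes factor for a positional association:
    #   B = 2/(s1^2 + s2^2) * exp(-psi^2 / (2*(s1^2 + s2^2))),
    # with separation psi and astrometric errors s1, s2 in radians.
    def bayes_factor(psi_arcsec, sigma1_arcsec, sigma2_arcsec):
        s2 = (sigma1_arcsec ** 2 + sigma2_arcsec ** 2) * ARCSEC ** 2
        psi = psi_arcsec * ARCSEC
        return (2.0 / s2) * math.exp(-psi ** 2 / (2.0 * s2))

    # B >> 1 favours a true association; B << 1 favours chance alignment.
    close = bayes_factor(0.5, 1.0, 1.0)   # 0.5" apart, 1" errors each
    far = bayes_factor(30.0, 1.0, 1.0)    # 30" apart, same errors
    print(close > 1.0, far < 1.0)
    ```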

  3. Large Subduction Earthquake Simulations using Finite Source Modeling and the Offshore-Onshore Ambient Seismic Field

    NASA Astrophysics Data System (ADS)

    Viens, L.; Miyake, H.; Koketsu, K.

    2016-12-01

Large subduction earthquakes have the potential to generate strong long-period ground motions. The ambient seismic field, also called seismic noise, contains information about the elastic response of the Earth between two seismic stations that can be retrieved using seismic interferometry. The DONET1 network, which is composed of 20 offshore stations, has been deployed atop the Nankai subduction zone, Japan, to continuously monitor the seismotectonic activity in this highly seismically active region. The surrounding onshore area is covered by hundreds of seismic stations, operated by the National Research Institute for Earth Science and Disaster Prevention (NIED) and the Japan Meteorological Agency (JMA), with a spacing of 15-20 km. We retrieve offshore-onshore Green's functions from the ambient seismic field using the deconvolution technique and use them to simulate the long-period ground motions of moderate subduction earthquakes that occurred at shallow depth. We extend the point-source method, which is appropriate for moderate events, to finite-source modeling to simulate the long-period ground motions of large Mw 7 class earthquake scenarios. The source models are constructed using scaling relations between moderate and large earthquakes to discretize the fault plane of the large hypothetical events into subfaults. Offshore-onshore Green's functions are spatially interpolated over the fault plane to obtain one Green's function for each subfault. The interpolated Green's functions are finally summed up considering different rupture velocities. Results show that this technique can provide additional information about earthquake ground motions that can be used with existing physics-based simulations to improve seismic hazard assessment.
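The final summation step can be sketched as follows; this is an illustrative simplification in which the Green's functions are assumed already interpolated to each subfault, and the scalar weights and delay handling are our assumptions, not details from the abstract:

```python
import numpy as np

def simulate_large_event(gf, subfault_weights, delays_s, dt):
    """Sum interpolated Green's functions over subfaults with rupture
    delays.  gf: (n_subfaults, n_samples) Green's functions, one per
    subfault; subfault_weights: relative moment of each subfault;
    delays_s: rupture-time delay of each subfault in seconds (set by
    the assumed rupture velocity); dt: sampling interval in seconds.
    Returns the synthetic trace for the large hypothetical event."""
    n_sub, n_samp = gf.shape
    shifts = np.round(np.asarray(delays_s) / dt).astype(int)
    out = np.zeros(n_samp + shifts.max())
    for w, s, trace in zip(subfault_weights, shifts, gf):
        out[s:s + n_samp] += w * trace   # delayed, weighted contribution
    return out
```

Varying `delays_s` for the same fault discretization corresponds to the different rupture velocities considered in the study.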

  4. Reliability and longitudinal change of detrital-zircon age spectra in the Snake River system, Idaho and Wyoming: An example of reproducing the bumpy barcode

    NASA Astrophysics Data System (ADS)

    Link, Paul Karl; Fanning, C. Mark; Beranek, Luke P.

    2005-12-01

Detrital-zircon age spectra effectively define provenance in Holocene and Neogene fluvial sands from the Snake River system of the northern Rockies, U.S.A. SHRIMP U-Pb dates have been measured for forty-six samples (about 2700 zircon grains) of fluvial and aeolian sediment. The detrital-zircon age distributions are repeatable and demonstrate predictable longitudinal variation. By lumping multiple samples to attain populations of several hundred grains, we recognize distinctive, provenance-defining zircon-age distributions, or "barcodes," for fluvial sedimentary systems of several scales within the upper and middle Snake River system. Our detrital-zircon studies effectively define the geochronology of the northern Rocky Mountains. The composite detrital-zircon grain distribution of the middle Snake River consists of major populations of Neogene, Eocene, and Cretaceous magmatic grains plus intermediate and small populations of multiply recycled Grenville (~950 to 1300 Ma) grains and Yavapai-Mazatzal province grains (~1600 to 1800 Ma) recycled through the upper Belt Supergroup and Cretaceous sandstones. A wide range of older Paleoproterozoic and Archean grains is also present. The best-case scenario for using detrital-zircon populations to isolate provenance is a point-source pluton of known age that is found in only one location or drainage. We find three such zircon age populations in fluvial sediments downstream from the point-source plutons: Ordovician in the southern Beaverhead Mountains, Jurassic in northern Nevada, and Oligocene in the Albion Mountains core complex of southern Idaho. Large detrital-zircon age populations derived from regionally well-defined magmatic or recycled sedimentary sources also serve to delimit the provenance of Neogene fluvial systems.
In the Snake River system, defining populations include those derived from the Cretaceous Atlanta lobe of the Idaho batholith (80 to 100 Ma), the Eocene Challis Volcanic Group and associated plutons (~45 to 52 Ma), and the Neogene rhyolitic Yellowstone-Snake River Plain volcanics (~0 to 17 Ma). For first-order drainage basins containing these zircon-rich source terranes, or containing a point-source pluton, a 60-grain random sample is sufficient to define the dominant provenance. The most difficult age distributions to analyze are those that contain multiple small zircon age populations and no defining large populations. Examples include streams draining the Proterozoic and Paleozoic Cordilleran miogeocline in eastern Idaho and Pleistocene loess on the Snake River Plain. For such systems, large sample bases of hundreds of grains, plus the use of statistical methods, may be necessary to distinguish detrital-zircon age spectra.

  5. Does the finite size of the proto-neutron star preclude supernova neutrino flavor scintillation due to turbulence?

    DOE PAGES

    Kneller, James P.; Mauney, Alex W.

    2013-08-23

Here, the transition probabilities describing the evolution of a neutrino with a given energy along some ray through a turbulent supernova profile are random variates unique to each ray. If the proto-neutron-star source of the neutrinos were a point, one might expect the evolution of the turbulence to cause the flavor composition of the neutrinos to vary in time, i.e., the flavor would scintillate. But in reality the proto-neutron star is not a point source: it has a size of order ~10 km, so the neutrinos emitted from different points at the source will each have seen different turbulence. The finite source size will reduce the correlation of the flavor transition probabilities along different trajectories and reduce the magnitude of the flavor scintillation. To determine whether the finite size of the proto-neutron star will preclude flavor scintillation, we calculate the correlation of the neutrino flavor transition probabilities through turbulent supernova profiles as a function of the separation δx between the emission points. The correlation depends upon the power spectrum used for the turbulence, and we consider two cases: a power spectrum that is isotropic, and the more realistic case of a power spectrum that is anisotropic on large scales and isotropic on small scales. Although the result depends on a number of uncalibrated parameters, we show that the supernova neutrino source is not of sufficient size to significantly blur flavor scintillation in all mixing channels when using an isotropic spectrum; this same result holds when using an anisotropic spectrum, except when we greatly reduce the similarity of the turbulence along parallel trajectories separated by ~10 km or less.

  6. Developing a Near Real-time System for Earthquake Slip Distribution Inversion

    NASA Astrophysics Data System (ADS)

    Zhao, Li; Hsieh, Ming-Che; Luo, Yan; Ji, Chen

    2016-04-01

Advances in observational and computational seismology in the past two decades have enabled completely automatic and real-time determination of the focal mechanisms of earthquake point sources. However, seismic radiation from moderate and large earthquakes often exhibits a strong finite-source directivity effect, which is critically important for accurate ground-motion estimation and earthquake damage assessment. Therefore, an effective procedure to determine earthquake rupture processes in near real-time is in high demand for hazard mitigation and risk assessment purposes. In this study, we develop an efficient waveform inversion approach for solving for finite-fault models in 3D structures. Full slip-distribution inversions are carried out based on the fault planes identified in the point-source solutions. To ensure efficiency in calculating 3D synthetics during slip-distribution inversions, a database of strain Green's tensors (SGT) is established for a 3D structural model with realistic surface topography. The SGT database enables rapid calculation of accurate synthetic seismograms for waveform inversion on a regular desktop or even a laptop PC. We demonstrate our source inversion approach using two moderate earthquakes (Mw ~6.0) in Taiwan and in mainland China. Our results show that the 3D velocity model provides better waveform fits, with more spatially concentrated slip distributions. Our source inversion technique based on the SGT database is effective for semi-automatic, near real-time determination of finite-source solutions for seismic hazard mitigation purposes.

  7. Sources and transport of phosphorus to rivers in California and adjacent states, U.S., as determined by SPARROW modeling

    USGS Publications Warehouse

    Domagalski, Joseph L.; Saleh, Dina

    2015-01-01

The SPARROW (SPAtially Referenced Regression On Watershed attributes) model was used to simulate annual phosphorus loads and concentrations in unmonitored stream reaches in California, U.S., and portions of Nevada and Oregon. The model was calibrated using de-trended streamflow and phosphorus concentration data at 80 locations. The model explained 91% of the variability in loads and 51% of the variability in yields for a base year of 2002. Point sources, geological background, and cultivated land were significant sources. The variables used to explain delivery of phosphorus from land to water were precipitation and soil clay content. Aquatic loss of phosphorus was significant in streams of all sizes, with the greatest decay predicted in small and intermediate-sized streams. Geological sources, including volcanic rocks and shales, were the principal control on concentrations and loads in many regions. Some localized formations, such as the Monterey shale of southern California, are important sources of phosphorus and may contribute to elevated stream concentrations. Many of the larger point-source facilities are located in downstream areas near the ocean and do not affect inland streams except at a few locations. Large areas of cultivated land increase phosphorus loads, but in some cases do not raise them above geological background, because local hydrology limits the transport of phosphorus from land to streams.

  8. Unveiling the Gamma-Ray Source Count Distribution Below the Fermi Detection Limit with Photon Statistics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zechlin, Hannes-S.; Cuoco, Alessandro; Donato, Fiorenza

The source-count distribution of gamma-ray sources as a function of flux, dN/dS, is one of the main quantities characterizing gamma-ray source populations. In this paper, we employ statistical properties of the Fermi Large Area Telescope (LAT) photon counts map to measure the composition of the extragalactic gamma-ray sky at high latitudes (|b| ≥ 30°) between 1 and 10 GeV. We present a new method, generalizing the use of standard pixel-count statistics, to decompose the total observed gamma-ray emission into (a) point-source contributions, (b) the Galactic foreground contribution, and (c) a truly diffuse isotropic background contribution. Using the 6 yr Fermi-LAT data set (P7REP), we show that the dN/dS distribution in the regime of so-far-undetected point sources can be consistently described with a power law with an index between 1.9 and 2.0. We measure dN/dS down to an integral flux of ~2 × 10^-11 cm^-2 s^-1, improving beyond the 3FGL catalog detection limit by about one order of magnitude. The overall dN/dS distribution is consistent with a broken power law, with a break at 2.1 (+1.0/-1.3) × 10^-8 cm^-2 s^-1. The power-law index n1 = 3.1 (+0.7/-0.5) for bright sources above the break hardens to n2 = 1.97 ± 0.03 for fainter sources below the break. A possible second break of the dN/dS distribution is constrained to lie at fluxes below 6.4 × 10^-11 cm^-2 s^-1 at the 95% confidence level. Finally, the high-latitude gamma-ray sky between 1 and 10 GeV is shown to be composed of ~25% point sources, ~69.3% diffuse Galactic foreground emission, and ~6% isotropic diffuse background.
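For reference, a broken power law with the quoted best-fit parameters can be written down directly; the normalisation A below is arbitrary, since the abstract quotes the indices and break flux but not the absolute normalisation:

```python
import numpy as np

def dnds(S, S_b=2.1e-8, n1=3.1, n2=1.97, A=1.0):
    """Broken power-law source-count distribution dN/dS (arbitrary
    normalisation A): index n1 above the break flux S_b, n2 below,
    continuous at S_b.  Fluxes in cm^-2 s^-1, as in the abstract."""
    S = np.asarray(S, dtype=float)
    return np.where(S >= S_b,
                    A * (S / S_b) ** (-n1),
                    A * (S / S_b) ** (-n2))
```

Because n2 < 2, the contribution to the total flux from sources below the break converges toward low fluxes, which is what makes the ~25% point-source fraction quoted above a finite, measurable quantity.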

  9. Unveiling the Gamma-Ray Source Count Distribution Below the Fermi Detection Limit with Photon Statistics

    DOE PAGES

    Zechlin, Hannes-S.; Cuoco, Alessandro; Donato, Fiorenza; ...

    2016-07-26

The source-count distribution of gamma-ray sources as a function of flux, dN/dS, is one of the main quantities characterizing gamma-ray source populations. In this paper, we employ statistical properties of the Fermi Large Area Telescope (LAT) photon counts map to measure the composition of the extragalactic gamma-ray sky at high latitudes (|b| ≥ 30°) between 1 and 10 GeV. We present a new method, generalizing the use of standard pixel-count statistics, to decompose the total observed gamma-ray emission into (a) point-source contributions, (b) the Galactic foreground contribution, and (c) a truly diffuse isotropic background contribution. Using the 6 yr Fermi-LAT data set (P7REP), we show that the dN/dS distribution in the regime of so-far-undetected point sources can be consistently described with a power law with an index between 1.9 and 2.0. We measure dN/dS down to an integral flux of ~2 × 10^-11 cm^-2 s^-1, improving beyond the 3FGL catalog detection limit by about one order of magnitude. The overall dN/dS distribution is consistent with a broken power law, with a break at 2.1 (+1.0/-1.3) × 10^-8 cm^-2 s^-1. The power-law index n1 = 3.1 (+0.7/-0.5) for bright sources above the break hardens to n2 = 1.97 ± 0.03 for fainter sources below the break. A possible second break of the dN/dS distribution is constrained to lie at fluxes below 6.4 × 10^-11 cm^-2 s^-1 at the 95% confidence level. Finally, the high-latitude gamma-ray sky between 1 and 10 GeV is shown to be composed of ~25% point sources, ~69.3% diffuse Galactic foreground emission, and ~6% isotropic diffuse background.

  10. Isotopic Tracers for Delineating Non-Point Source Pollutants in Surface Water

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Davisson, M L

    2001-03-01

This study tested whether isotope measurements of surface water and of dissolved constituents in surface water can be used as tracers of non-point-source pollution. Oxygen-18 was used as a water tracer, while carbon-14, carbon-13, and deuterium were tested as tracers of DOC. Carbon-14 and carbon-13 were also used as tracers of dissolved inorganic carbon, and chlorine-36 and uranium isotopes were tested as tracers of other dissolved salts. In addition, large databases of water-quality measurements were assembled for the Missouri River at St. Louis and the Sacramento-San Joaquin Delta in California to enhance the interpretive results of the isotope measurements. Much of the water-quality data has been under-interpreted and provides a valuable resource for investigative research, which this report exploits and integrates with the isotope measurements.

  11. Comparison of finite source and plane wave scattering from corrugated surfaces

    NASA Technical Reports Server (NTRS)

    Levine, D. M.

    1977-01-01

The choice of a plane wave to represent incident radiation in the analysis of scatter from corrugated surfaces was examined. The physical optics solution obtained for the scattered fields due to an incident plane wave was compared with the solution obtained when the incident radiation is produced by a source of finite size at a finite distance from the surface. The two solutions are equivalent if the observer is in the far field of the scatterer and the distance from observer to scatterer is large compared to the radius of curvature at the scatter points, a condition not easily satisfied with extended scatterers such as rough surfaces. In general, the two solutions have essential differences, such as in the location of the scatter points and the dependence of the scattered fields on the surface properties. The implications of these differences for the definition of a meaningful radar cross section were examined.

  12. Dune advance into a coastal forest, equatorial Brazil: A subsurface perspective

    NASA Astrophysics Data System (ADS)

    Buynevich, Ilya V.; Filho, Pedro Walfir M. Souza; Asp, Nils E.

    2010-06-01

A large active parabolic dune along the coast of Pará State, northern Brazil, was analyzed using aerial photography and imaged with high-resolution ground-penetrating radar (GPR) to map the subsurface facies architecture and point-source anomalies. Most high-amplitude (8-10 dB) subsurface anomalies correlate with partially buried mangrove trees along the leading edge (slipface) of the advancing dune. Profiles along a 200-m-long basal stoss side of the dune reveal 66 targets, most of which lie below the water table and are thus inaccessible by other methods. Signal amplitudes of point-source anomalies are substantially higher than those associated with reflections from continuous subsurface features (water table, sedimentary layers). When complemented with exposures and excavations, GPR provides the best means of rapid, continuous imaging of the geological record of complex interactions between vegetation and aeolian deposition.

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Seale, Jonathan P.; Meixner, Margaret; Sewiło, Marta

Observations from the HERschel Inventory of the Agents of Galaxy Evolution (HERITAGE) have been used to identify dusty populations of sources in the Large and Small Magellanic Clouds (LMC and SMC). We conducted the study using the HERITAGE catalogs of point sources available from the Herschel Science Center from both the Photodetector Array Camera and Spectrometer (PACS; 100 and 160 μm) and Spectral and Photometric Imaging Receiver (SPIRE; 250, 350, and 500 μm) cameras. These catalogs are matched to each other to create a Herschel band-merged catalog and then further matched to archival Spitzer IRAC and MIPS catalogs from the Spitzer Surveying the Agents of Galaxy Evolution (SAGE) and SAGE-SMC surveys to create single mid- to far-infrared (far-IR) point-source catalogs that span the wavelength range from 3.6 to 500 μm. There are 35,322 unique sources in the LMC and 7503 in the SMC. To be bright in the far-IR, a source must be very dusty, and so the sources in the HERITAGE catalogs represent the dustiest populations of sources. The brightest HERITAGE sources are dominated by young stellar objects (YSOs), and the dimmest by background galaxies. We identify the sources most likely to be background galaxies by first considering their morphology (distant galaxies are point-like at the resolution of Herschel) and then comparing the flux distribution to that of the Herschel Astrophysical Terahertz Large Area Survey (ATLAS) of galaxies. We find a total of 9745 background galaxy candidates in the LMC HERITAGE images and 5111 in the SMC images, in agreement with the number predicted by extrapolating from the ATLAS flux distribution. The majority of the Magellanic Cloud-residing sources are either very young, embedded forming stars or dusty clumps of the interstellar medium. Using the presence of 24 μm emission as a tracer of star formation, we identify 3518 YSO candidates in the LMC and 663 in the SMC.
There are far fewer far-IR-bright YSOs in the SMC than in the LMC, due to both the SMC's smaller size and its lower dust content. The YSO candidate lists may be contaminated at low flux levels by background galaxies, and so we differentiate between sources with a high ("probable") and moderate ("possible") likelihood of being a YSO. There are 2493/425 probable YSO candidates in the LMC/SMC. Approximately 73% of the Herschel YSO candidates are newly identified in the LMC, and 35% in the SMC. We further identify a small population of dusty objects in the late stages of stellar evolution, including extreme and post-asymptotic giant branch stars, planetary nebulae, and supernova remnants. These populations are identified by matching the HERITAGE catalogs to lists of previously identified objects in the literature. Approximately half of the LMC sources and one quarter of the SMC sources are too faint to obtain accurate far-IR photometry and are unclassified.

  14. Efficiency study of a big volume well type NaI(Tl) detector by point and voluminous sources and Monte-Carlo simulation.

    PubMed

    Hansman, Jan; Mrdja, Dusan; Slivka, Jaroslav; Krmar, Miodrag; Bikit, Istvan

    2015-05-01

The activity of environmental samples is usually measured by high-resolution HPGe gamma spectrometers. In this work, a set-up with a 9 in. × 9 in. NaI well detector of 3 in. thickness and a 3 in. × 3 in. plug detector in a 15-cm-thick lead shielding is considered as an alternative (Hansman, 2014). In spite of its much poorer resolution, it requires shorter measurement times and may possibly give better detection limits. In order to determine the U-238, Th-232, and K-40 content of samples with this NaI(Tl) detector, the corresponding photopeak efficiencies must be known. These efficiencies can be found for a given source matrix and geometry by Geant4 simulation. We found discrepancies between simulated and experimental efficiencies of 5-50%, mainly attributable to effects of light collection within the detector volume, which were not taken into account by the simulations. The influence of random coincidence summing on detection efficiency was negligible for radionuclide activities in the range 130-4000 Bq. This paper also describes how the detection efficiency depends on the position of the radioactive point source. To avoid large dead times, relatively weak Mn-54, Co-60 and Na-22 point sources of a few kBq were used. Results for single gamma lines and also for coincidence-summing gamma lines are presented. Copyright © 2015 Elsevier Ltd. All rights reserved.
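As a rough illustration of why efficiency depends on point-source position, the purely geometric part of the efficiency for an on-axis source facing a circular detector follows from the subtended solid angle. This simple formula ignores the well geometry, attenuation, and intrinsic peak efficiency, so it is only a qualitative check, not the Geant4 result:

```python
import math

def geometric_efficiency(d_cm, r_cm):
    """Fraction of the full 4*pi solid angle subtended by a circular
    detector face of radius r_cm at a point source on the detector
    axis at distance d_cm.  Approaches 0.5 (a 2*pi hemisphere) as the
    source approaches the detector face."""
    return 0.5 * (1.0 - d_cm / math.sqrt(d_cm**2 + r_cm**2))
```

For a well-type detector the source is nearly surrounded by scintillator, so the geometric factor approaches 1 rather than 0.5, which is exactly why such detectors are attractive for low-activity samples.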

  15. An improved DPSM technique for modelling ultrasonic fields in cracked solids

    NASA Astrophysics Data System (ADS)

    Banerjee, Sourav; Kundu, Tribikram; Placko, Dominique

    2007-04-01

In recent years the Distributed Point Source Method (DPSM) has been used for modelling various ultrasonic, electrostatic and electromagnetic field problems. In conventional DPSM, several point sources are placed near the transducer face, interfaces and anomaly boundaries. The ultrasonic or electromagnetic field at any point is computed by superimposing the contributions of the strategically placed layers of point sources. The conventional DPSM modelling technique is modified in this paper so that the contributions of the point sources in the shadow region can be removed from the calculations. For this purpose, the conventional point sources that radiate in all directions are replaced by Controlled Space Radiation (CSR) sources. CSR sources can mitigate the shadow-region problem to some extent; complete removal of the problem is achieved by introducing artificial interfaces. Numerically synthesized fields obtained by the conventional DPSM technique, which gives no special consideration to the point sources in the shadow region, are compared with those from the proposed modified technique, which nullifies their contributions. One application of this research is the improved modelling of real-time ultrasonic non-destructive evaluation experiments.
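The superposition step at the heart of DPSM can be sketched for the acoustic case: the field at each observation point is a weighted sum of spherical waves from all point sources. The function below is a minimal illustration only; it assumes the source strengths have already been solved from the boundary conditions, which is the other half of the method:

```python
import numpy as np

def dpsm_field(src_pts, src_strengths, obs_pts, k):
    """Harmonic field at obs_pts as a superposition of spherical waves
    exp(i k r)/r radiated by the distributed point sources (the core
    DPSM synthesis step).  src_pts: (n_src, 3), obs_pts: (n_obs, 3),
    k: wavenumber.  Returns complex field values, one per observer."""
    src = np.asarray(src_pts)[:, None, :]      # (n_src, 1, 3)
    obs = np.asarray(obs_pts)[None, :, :]      # (1, n_obs, 3)
    r = np.linalg.norm(obs - src, axis=-1)     # pairwise distances
    g = np.exp(1j * k * r) / r                 # spherical Green's function
    return np.asarray(src_strengths) @ g       # weighted superposition
```

The CSR modification described in the abstract would replace the omnidirectional kernel `g` with one whose radiation is restricted in angle, so shadow-region sources stop contributing.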

  16. On the assessment of spatial resolution of PET systems with iterative image reconstruction

    NASA Astrophysics Data System (ADS)

    Gong, Kuang; Cherry, Simon R.; Qi, Jinyi

    2016-03-01

Spatial resolution is an important metric for performance characterization of PET systems. Measuring spatial resolution is straightforward with a linear reconstruction algorithm, such as filtered backprojection, and can be performed by reconstructing a point-source scan and calculating the full width at half maximum (FWHM) along the principal directions. With the widespread adoption of iterative reconstruction methods, it is desirable to quantify the spatial resolution using an iterative reconstruction algorithm. However, the task can be difficult because the reconstruction algorithms are nonlinear and the non-negativity constraint can artificially enhance the apparent spatial resolution if a point-source image is reconstructed without any background. Thus, it has been recommended that a background be added to the point-source data before reconstruction for resolution measurement. However, there has been no detailed study of the effect of the point-source contrast on the measured spatial resolution. Here we use point-source scans from a preclinical PET scanner to investigate the relationship between measured spatial resolution and point-source contrast. We also evaluate whether the reconstruction of an isolated point source is predictive of the ability of the system to resolve two adjacent point sources. Our results indicate that when the point-source contrast is below a certain threshold, the measured FWHM remains stable. Once the contrast is above the threshold, the measured FWHM monotonically decreases with increasing point-source contrast. In addition, the measured FWHM also monotonically decreases with iteration number for maximum-likelihood estimation. Therefore, when measuring system resolution with an iterative reconstruction algorithm, we recommend using a low-contrast point source and a fixed number of iterations.
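The FWHM measurement discussed above is typically made on a 1D profile through the reconstructed point source, locating the half-maximum crossings by interpolation. A minimal sketch (linear interpolation between samples is a common convention, not a detail taken from this paper):

```python
import numpy as np

def fwhm(x, profile):
    """FWHM of a single-peaked 1D profile, found by linearly
    interpolating the positions where the profile crosses half of its
    maximum on each side of the peak."""
    profile = np.asarray(profile, dtype=float)
    half = profile.max() / 2.0
    above = np.where(profile >= half)[0]       # samples above half-max
    i0, i1 = above[0], above[-1]
    # interpolate the left and right half-maximum crossings
    xl = np.interp(half, [profile[i0 - 1], profile[i0]], [x[i0 - 1], x[i0]])
    xr = np.interp(half, [profile[i1 + 1], profile[i1]], [x[i1 + 1], x[i1]])
    return xr - xl
```

Note that this estimator itself is linear; the contrast dependence reported in the paper comes from the nonlinearity of the iterative reconstruction that produces `profile`, not from the FWHM calculation.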

  17. Study on beam geometry and image reconstruction algorithm in fast neutron computerized tomography at NECTAR facility

    NASA Astrophysics Data System (ADS)

    Guo, J.; Bücherl, T.; Zou, Y.; Guo, Z.

    2011-09-01

Investigations of the fast neutron beam geometry for the NECTAR facility are presented. The results of MCNP simulations and experimental measurements of the beam distributions at NECTAR are compared. Boltzmann functions are used to describe the beam profile in the detection plane, assuming the area source to be composed of a large number of single neutron point sources. An iterative algebraic reconstruction algorithm is developed, realized, and verified with both simulated and measured projection data. The feasibility of improved reconstruction in fast-neutron computerized tomography at the NECTAR facility is demonstrated.
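A Boltzmann (logistic sigmoid) function of the kind used to describe such a beam profile across the field edge can be written as follows; the parameter names are generic choices, not taken from the paper:

```python
import numpy as np

def boltzmann_edge(x, a1, a2, x0, dx):
    """Boltzmann (sigmoid) description of a beam-intensity profile
    across the field edge: plateau a1 inside the beam, a2 outside,
    half-height at x0, with the edge width governed by dx."""
    return a2 + (a1 - a2) / (1.0 + np.exp((np.asarray(x) - x0) / dx))
```

Fitting a1, a2, x0, and dx to a measured edge profile (e.g. with a least-squares routine) gives the smooth analytic beam description that the reconstruction then assumes.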

  18. Solar dynamic power for Earth orbital and lunar applications

    NASA Technical Reports Server (NTRS)

    Calogeras, James E.; Dustin, Miles O.; Secunde, Richard R.

    1991-01-01

Development of solar dynamic (SD) technologies for space over the past 25 years by the NASA Lewis Research Center brought SD power to the point where it was selected, during the design phase of the Space Station Freedom Program, as the power source for evolutionary growth. More recent studies showed that large cost savings are possible in establishing manufacturing processes at a lunar base if SD is considered as a power source. Technology efforts over the past 5 years have made possible lighter, more durable SD components for these applications. A review of these efforts and their respective benefits is presented.

  19. Investigating the generation of Love waves in secondary microseisms using 3D numerical simulations

    NASA Astrophysics Data System (ADS)

    Wenk, Stefan; Hadziioannou, Celine; Pelties, Christian; Igel, Heiner

    2014-05-01

Longuet-Higgins (1950) proposed that secondary microseismic noise can be attributed to oceanic disturbances in which surface gravity wave interference causes non-linear, second-order pressure perturbations at the ocean bottom. As a first approximation, this source mechanism can be considered a force acting normal to the ocean bottom. In an isotropic, layered, elastic Earth model with plane interfaces, vertical forces generate P-SV motions in the vertical plane of source and receiver; in turn, only Rayleigh waves are excited at the free surface. However, several authors report significant Love wave contributions in the secondary microseismic frequency band of real data measurements. The reason is still insufficiently analysed, and several hypotheses are under debate: - The source mechanism has the strongest influence on the excitation of shear motions, whereas the source direction dominates the effect of Love wave generation in the case of point-force sources. Darbyshire and Okeke (1969) proposed the topographic coupling effect of pressure loads acting on a sloping sea floor to generate the shear tractions required for Love wave excitation. - Rayleigh waves can be converted into Love waves by scattering. Therefore, geometric scattering at topographic features or internal scattering by heterogeneous material distributions can cause Love wave generation. - Oceanic disturbances act on large regions of the ocean bottom, and extended sources have to be considered. In combination with topographic coupling and internal scattering, the extent of the source region and the timing of an extended source should affect Love wave excitation. We try to elaborate the contributions of different source mechanisms and scattering effects to Love-to-Rayleigh wave energy ratios by 3D numerical simulations. In particular, we estimate the amount of Love wave energy generated by point and extended sources acting on the free surface.
The simulated point forces are varied in their incidence angle, whereas the extended sources are varied in their spatial extent, magnitude and timing. Further, the effects of variations in the correlation length and perturbation magnitude of a random free-surface topography, as well as of an internal random material distribution, are studied.

  20. unWISE: Unblurred Coadds of the WISE Imaging

    NASA Astrophysics Data System (ADS)

    Lang, Dustin

    2014-05-01

    The Wide-field Infrared Survey Explorer (WISE) satellite observed the full sky in four mid-infrared bands in the 2.8-28 μm range. The primary mission was completed in 2010. The WISE team has done a superb job of producing a series of high-quality, well-documented, complete data releases in a timely manner. However, the "Atlas Image" coadds that are part of the recent AllWISE and previous data releases were intentionally blurred. Convolving the images by the point-spread function while coadding results in "matched-filtered" images that are close to optimal for detecting isolated point sources. But these matched-filtered images are sub-optimal or inappropriate for other purposes. For example, we are photometering the WISE images at the locations of sources detected in the Sloan Digital Sky Survey through forward modeling, and this blurring decreases the available signal-to-noise by effectively broadening the point-spread function. This paper presents a new set of coadds of the WISE images that have not been blurred. These images retain the intrinsic resolution of the data and are appropriate for photometry preserving the available signal-to-noise. Users should be cautioned, however, that the W3- and W4-band coadds contain artifacts around large, bright structures (large galaxies, dusty nebulae, etc.); eliminating these artifacts is the subject of ongoing work. These new coadds, and the code used to produce them, are publicly available at http://unwise.me.

  1. Confidence range estimate of extended source imagery acquisition algorithms via computer simulations. [in optical communication systems

    NASA Technical Reports Server (NTRS)

Chen, Chien-C.; Hui, Elliot; Okamoto, Garret

    1992-01-01

Spatial acquisition using the sunlit Earth as a beacon source provides several advantages over active-beacon-based systems for deep-space optical communication. However, since the angular extent of the Earth image is large compared to the laser beam divergence, the acquisition subsystem must be capable of resolving the image to derive the proper pointing orientation. The algorithms used must be capable of deducing the receiver location given the blurring introduced by the imaging optics and the large fluctuation of the Earth albedo. Furthermore, because of the complexity of modelling the Earth and the tracking algorithms, an accurate estimate of algorithm accuracy can only be made via simulation using realistic Earth images. An image simulator was constructed for this purpose, and the results of the simulation runs are reported.

  2. A three-dimensional point process model for the spatial distribution of disease occurrence in relation to an exposure source.

    PubMed

    Grell, Kathrine; Diggle, Peter J; Frederiksen, Kirsten; Schüz, Joachim; Cardis, Elisabeth; Andersen, Per K

    2015-10-15

    We study methods for how to include the spatial distribution of tumours when investigating the relation between brain tumours and the exposure from radio frequency electromagnetic fields caused by mobile phone use. Our suggested point process model is adapted from studies investigating spatial aggregation of a disease around a source of potential hazard in environmental epidemiology, where now the source is the preferred ear of each phone user. In this context, the spatial distribution is a distribution over a sample of patients rather than over multiple disease cases within one geographical area. We show how the distance relation between tumour and phone can be modelled nonparametrically and, with various parametric functions, how covariates can be included in the model and how to test for the effect of distance. To illustrate the models, we apply them to a subset of the data from the Interphone Study, a large multinational case-control study on the association between brain tumours and mobile phone use. Copyright © 2015 John Wiley & Sons, Ltd.
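Distance-decay models of the kind adapted here typically express relative risk as a baseline multiplied by an excess that falls off with distance from the source. A hypothetical sketch of one such parametric form (the Gaussian decay and the parameter names are illustrative, not the paper's fitted model):

```python
import math

def relative_risk(d, alpha=1.0, beta=10.0):
    """Relative disease risk at distance d from the exposure source:
    baseline 1 plus an excess of size alpha that decays as a Gaussian
    with scale beta. Illustrative form only."""
    return 1.0 + alpha * math.exp(-((d / beta) ** 2))
```

Covariates and nonparametric distance effects, as in the paper, would replace the fixed functional form with estimated components.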

  3. Generation Mechanisms of UV and X-ray Emissions During SL9 Impact

    NASA Technical Reports Server (NTRS)

    Waite, J. Hunter, Jr.

    1997-01-01

    The purpose of this grant was to study the ultraviolet and X-ray emissions associated with the impact of comet Shoemaker-Levy 9 with Jupiter. The University of Michigan task was primarily focused on theoretical calculations. The NAGW-4788 subtask was to be largely devoted to determining the constraints placed by the X-ray observations on the physical mechanisms responsible for the generation of the X-rays. The author summarizes below the ROSAT observations and suggests a physical mechanism that can plausibly account for the observed emissions. It is hoped that the full set of activities can be completed at a later date. Further analysis of the ROSAT data acquired at the time of the impact was necessary to define the observational constraints on the magnetospheric-ionospheric processes involved in the excitation of the X-ray emissions associated with the fragment impacts. This analysis centered around improvements in the pointing accuracy and improvements in the timing information. Additional pointing information was made possible by the identification of the optical counterparts to the X-ray sources in the ROSAT field-of-view. Due to the large number of worldwide observers of the impacts, a serendipitous visible plate image from an observer in Venezuela provided a very accurate location of the position of the X-ray source, virtually eliminating pointing errors in the data. Once refined, the pointing indicated that the two observed X-ray brightenings that were highly correlated in time with the K and P2 events were brightenings of the X-ray aurora (as identified in images prior to the impact). Appendix A, "ROSAT observations of X-ray emissions from Jupiter during the impact of comet Shoemaker-Levy 9," is also included.

  4. An Improved Statistical Point-source Foreground Model for the Epoch of Reionization

    NASA Astrophysics Data System (ADS)

    Murray, S. G.; Trott, C. M.; Jordan, C. H.

    2017-08-01

    We present a sophisticated statistical point-source foreground model for low-frequency radio Epoch of Reionization (EoR) experiments using the 21 cm neutral hydrogen emission line. Motivated by our understanding of the low-frequency radio sky, we enhance the realism of two model components compared with existing models: the source count distribution as a function of flux density, and the spatial distribution of sources (source clustering), extending current formalisms for the foreground covariance of 2D power-spectral modes in 21 cm EoR experiments. The former we generalize to an arbitrarily broken power law, and the latter to an arbitrary isotropically correlated field. This paper presents expressions for the modified covariance under these extensions, and shows that for a more realistic source spatial distribution, extra covariance arises in the EoR window that was previously unaccounted for. Failure to include this contribution can bias the final power spectrum and underestimate uncertainties, potentially leading to a false detection of signal. The extent of this effect is uncertain, owing to ignorance of physical model parameters, but we show that it depends on the relative abundance of faint sources, so our extension will become more important for future deep surveys. Finally, we show that under some parameter choices, ignoring source clustering can lead to false detections on large scales, due to both the induced bias and an artificial reduction in the estimated measurement uncertainty.
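The "arbitrarily broken power law" generalization of the source counts can be illustrated with a single-break case: one slope below the break flux, another above, continuous at the break. The break flux and indices below are placeholder values, not fitted parameters from the paper:

```python
import numpy as np

def dnds_broken(s, s_break=1.0, k=1.0, alpha_low=1.5, alpha_high=2.5):
    """Differential source counts dN/dS for a single-break power law:
    slope -alpha_low below the break flux s_break, -alpha_high above it,
    continuous at s = s_break (illustrative parameter values)."""
    s = np.asarray(s, dtype=float)
    below = k * (s / s_break) ** (-alpha_low)
    above = k * (s / s_break) ** (-alpha_high)
    return np.where(s < s_break, below, above)
```

An arbitrary number of breaks follows the same pattern, with one power-law segment per flux interval and continuity enforced at each break.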

  5. Multi-Wavelength Study of W40 HII Region

    NASA Astrophysics Data System (ADS)

    Shenoy, Sachindev S.; Shuping, R.; Vacca, W. D.

    2013-01-01

    W40 is an HII region (Sh2-64) within the Serpens molecular cloud in the Aquila rift region. Recent near-infrared spectroscopic observations of the brightest members of the central cluster of W40 reveal that the region is powered by at least three early B-type stars and one late O-type star. Near- and mid-infrared spectroscopy and photometry, combined with SED modeling of these sources, suggest that the distance to the cluster is between 455 and 535 pc, with about 10 mag of visual extinction. Velocity and extinction measurements of all the nearby regions, i.e., Serpens Main, the Aquila rift, and MWC 297, suggest that the entire system (including the W40 extended emission) is associated with the extinction wall at 260 pc. Here we present some preliminary results of a multi-wavelength study of the central cluster and the extended emission of W40. We used Spitzer IRAC data to measure accurate photometry of all the point sources within 4.32 pc of W40 via PRF fitting. This will provide us with a complete census of YSOs in the W40 region. The Spitzer data are combined with publicly available data in the 2MASS, WISE and Herschel archives and used to model YSOs in the region. The SEDs and near-IR colors of all the point sources should allow us to determine the age of the central cluster of W40. The results from this work will put W40 in a proper stellar evolutionary context. After subtracting the point sources from the IRAC images, we are able to study the extended emission free from point-source contamination. We chose a few morphologically interesting regions in W40 and use the data to model the dust emission. The results from this effort will allow us to study the correlation between dust properties and the large-scale physical properties of W40.

  6. Identification of water quality degradation hotspots in developing countries by applying large scale water quality modelling

    NASA Astrophysics Data System (ADS)

    Malsy, Marcus; Reder, Klara; Flörke, Martina

    2014-05-01

    Decreasing water quality is one of the main global issues, posing risks to food security, the economy, and public health, and is consequently crucial for ensuring environmental sustainability. During the last decades access to clean drinking water has increased, but 2.5 billion people still do not have access to basic sanitation, especially in Africa and parts of Asia. In this context not only the connection to a sewage system is of high importance, but also treatment, as an increasing connection rate will lead to higher loadings and therefore higher pressure on water resources. Furthermore, poor people in developing countries use local surface waters for daily activities, e.g. bathing and washing. Water utilization and water sewerage are thus inseparably connected. In this study, large scale water quality modelling is used to point out hotspots of water pollution and to gain insight into potential environmental impacts, in particular in regions with a low observation density and data gaps in measured water quality parameters. We applied the global water quality model WorldQual to calculate biological oxygen demand (BOD) loadings from point and diffuse sources, as well as in-stream concentrations. The regional focus of this study is on developing countries, i.e. in Africa, Asia, and South America, as they are most affected by water pollution. Model runs were conducted for the year 2010 to draw a picture of the recent status of surface water quality and to identify hotspots and main causes of pollution. First results show that hotspots mainly occur in highly agglomerated regions where population density is high. Large urban areas are the initial loading hotspots, and pollution prevention and control become increasingly important as point sources are subject to connection rates and treatment levels. Furthermore, river discharge plays a crucial role due to its dilution potential, especially in terms of seasonal variability. The highly varying shares of BOD sources across regions and sectors demand an integrated approach to assess the main causes of water quality degradation.
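The loading-and-dilution logic described above, point and diffuse BOD loads summed and diluted by river discharge, can be sketched as a simple mass balance (decay, routing, and retention are omitted, and the function name and units are illustrative, not WorldQual's actual interface):

```python
def instream_bod(point_load_g_per_s, diffuse_load_g_per_s, discharge_m3_per_s):
    """In-stream BOD concentration in mg/L from summed point and diffuse
    loadings (g/s) diluted by river discharge (m3/s); 1 g/m3 == 1 mg/L.
    In-stream decay and routing are neglected in this sketch."""
    if discharge_m3_per_s <= 0:
        raise ValueError("discharge must be positive")
    return (point_load_g_per_s + diffuse_load_g_per_s) / discharge_m3_per_s
```

Even this toy balance shows why seasonal discharge variability matters: the same loading yields a much higher concentration at low flow than at high flow.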

  7. On the power output of some idealized source configurations with one or more characteristic dimensions

    NASA Technical Reports Server (NTRS)

    Levine, H.

    1982-01-01

    The calculation of power output from a (finite) linear array of equidistant point sources is investigated with allowance for a relative phase shift and particular focus on the circumstances of small/large individual source separation. A key role is played by the estimates found for a two-parameter definite integral that involves the Fejér kernel function K_N, where N denotes a (positive) integer; these results also permit a quantitative accounting of energy partition between the principal and secondary lobes of the array pattern. Continuously distributed sources along a finite line segment or an open-ended circular cylindrical shell are considered, and estimates for the relatively lower output in the latter configuration are made explicit when the shell radius is small compared to the wavelength. A systematic reduction of diverse integrals which characterize the energy output from specific line and strip sources is investigated.
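The Fejér kernel referred to above has the standard closed form K_N(θ) = (1/N)(sin(Nθ/2)/sin(θ/2))², with limiting value N at θ = 0 (mod 2π). A direct evaluation of the kernel itself (the paper's specific two-parameter integral is not reproduced here):

```python
import math

def fejer_kernel(theta, n):
    """Fejer kernel K_N(theta) = (1/N) * (sin(N*theta/2) / sin(theta/2))**2,
    with the limiting value N at theta = 0 (mod 2*pi)."""
    s = math.sin(theta / 2.0)
    if abs(s) < 1e-12:
        return float(n)  # remove the 0/0 singularity by its limit
    return (math.sin(n * theta / 2.0) / s) ** 2 / n
```

The kernel is nonnegative everywhere, which is the property that makes it useful for bounding and partitioning lobe energies.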

  8. Stochastic sensitivity analysis of nitrogen pollution to climate change in a river basin with complex pollution sources.

    PubMed

    Yang, Xiaoying; Tan, Lit; He, Ruimin; Fu, Guangtao; Ye, Jinyin; Liu, Qun; Wang, Guoqing

    2017-12-01

    It is increasingly recognized that climate change could impose both direct and indirect impacts on the quality of the water environment. Previous studies have mostly concentrated on evaluating the impacts of climate change on non-point source pollution in agricultural watersheds. Few studies have assessed the impacts of climate change on the water quality of river basins with complex point and non-point pollution sources. In view of this gap, this paper aims to establish a framework for stochastic assessment of the sensitivity of water quality to future climate change in a river basin with complex pollution sources. A sub-daily soil and water assessment tool (SWAT) model was developed to simulate the discharge, transport, and transformation of nitrogen from multiple point and non-point pollution sources in the upper Huai River basin of China. A weather generator was used to produce 50 years of synthetic daily weather data series for all 25 combinations of precipitation change (-10, 0, 10, 20, and 30%) and temperature change (increases of 0, 1, 2, 3, and 4 °C) scenarios. The generated daily rainfall series was disaggregated to the hourly scale and then used to drive the sub-daily SWAT model to simulate the nitrogen cycle under different climate change scenarios. Our results for the study region indicate that (1) both total nitrogen (TN) loads and concentrations are insensitive to temperature change; (2) TN loads are highly sensitive to precipitation change, while TN concentrations are moderately sensitive; (3) the impacts of climate change on TN concentrations are more spatiotemporally variable than the impacts on TN loads; and (4) the wide distributions of TN loads and TN concentrations under each individual climate change scenario illustrate the important role of climatic variability in affecting water quality conditions. In summary, the large variability in SWAT simulation results within and between climate change scenarios highlights the uncertainty of the impacts of climate change and the need to incorporate extreme conditions in managing the water environment and developing climate change adaptation and mitigation strategies.
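The 25 scenario combinations can be enumerated directly, and the perturbation of a synthetic series follows the standard delta-change method: scale precipitation by a fixed percentage, shift temperature by a fixed offset. A sketch (function and variable names are illustrative, not from the paper):

```python
import itertools

PRECIP_CHANGES = [-10, 0, 10, 20, 30]  # percent
TEMP_CHANGES = [0, 1, 2, 3, 4]         # degrees C

# All 5 x 5 = 25 (precipitation, temperature) scenario combinations.
SCENARIOS = list(itertools.product(PRECIP_CHANGES, TEMP_CHANGES))

def perturb(daily_precip, daily_temp, dp_percent, dt_degrees):
    """Delta-change method: scale each day's precipitation by a fixed
    percentage and shift each day's temperature by a fixed offset."""
    precip = [p * (1 + dp_percent / 100.0) for p in daily_precip]
    temp = [t + dt_degrees for t in daily_temp]
    return precip, temp
```

Each perturbed series would then be disaggregated to hourly values and fed to the sub-daily hydrological model, as the abstract describes.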

  9. An alternative screening model for the estimation of outdoor air concentration at large contaminated sites

    NASA Astrophysics Data System (ADS)

    Verginelli, Iason; Nocentini, Massimo; Baciocchi, Renato

    2017-09-01

    Simplified analytical solutions of fate and transport models are often used to carry out risk assessment on contaminated sites, to evaluate the long-term air quality in relation to volatile organic compounds in either soil or groundwater. Among the different assumptions employed to develop these solutions, in this work we focus on those used in the ASTM-RBCA "box model" for the evaluation of contaminant dispersion in the atmosphere. In this simple model, it is assumed that the contaminant volatilized from the subsurface is dispersed in the atmosphere within a mixing height equal to two meters, i.e. the height of the breathing zone. In certain cases, this simplification can lead to an overestimation of the outdoor air concentration at the point of exposure. In this paper we first discuss the maximum source lengths (in the wind direction) for which the application of the "box model" can be considered acceptable. Specifically, by comparing the results of the "box model" with the SCREEN3 model of the U.S. EPA, we found that under very stable atmospheric conditions (class F) the ASTM-RBCA approach provides acceptable results for source lengths up to 200 m, while for very unstable atmospheric conditions (classes A and B) an overestimation of the concentration at the point of exposure is already observed for source lengths of only 10 m. In the latter case, the overestimation of the "box model" can be of more than one order of magnitude for source lengths above 500 m. To overcome this limitation, we introduce a simple analytical solution that can be used to calculate the concentration at the point of exposure for large contaminated sites. The method consists of introducing an equivalent mixing-zone height that accounts for the dispersion of the contaminants along the source length while keeping the simple "box model" approach implemented in most risk assessment tools based on the ASTM-RBCA standard (e.g. the RBCA toolkit). Based on our testing, the developed model replicates very well the results of the more sophisticated SCREEN3 dispersion model, with deviations always below 10%. The key advantage of this approach is that it can be very easily incorporated in current risk assessment screening tools based on the ASTM standards while ensuring a more accurate evaluation of the concentration at the point of exposure.
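The baseline "box model" screening calculation has a simple closed form: the volatilization flux times the source length in the wind direction, diluted by the wind sweeping through a box of fixed mixing height (2 m in the ASTM-RBCA assumption). A sketch of that baseline (variable names and units are illustrative; the paper's improvement replaces the fixed height with an equivalent mixing height that grows with source length):

```python
def box_model_concentration(flux_mg_m2_s, source_length_m,
                            wind_speed_m_s, mixing_height_m=2.0):
    """Steady-state outdoor air concentration (mg/m3) above a volatilizing
    source, in the spirit of the ASTM-RBCA box model: mass emitted over the
    source length is mixed into a box of fixed height swept by the wind."""
    return (flux_mg_m2_s * source_length_m
            / (wind_speed_m_s * mixing_height_m))
```

The linear growth of concentration with source length in this formula is exactly why a fixed 2 m mixing height overestimates exposure for long sources, where vertical dispersion actually dilutes the plume.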

  10. The Third Fermi LAT Catalog of High-Energy Gamma-ray Sources

    NASA Astrophysics Data System (ADS)

    Thompson, David J.; Ballet, J.; Burnett, T.; Fermi Large Area Telescope Collaboration

    2014-01-01

    The Fermi Gamma-ray Space Telescope Large Area Telescope (LAT) has been gathering science data since August 2008, surveying the full sky every three hours. The second source catalog (2FGL, Nolan et al 2012, ApJS 199, 31) was based on 2 years of data. We are preparing a third source catalog (3FGL) based on 4 years of reprocessed data. The reprocessing introduced a more accurate description of the instrument, which resulted in a narrower point spread function. Both the localization and the detection threshold for hard-spectrum sources have been improved. The new catalog also relies on a refined model of Galactic diffuse emission, particularly important for low-latitude soft-spectrum sources. The process for associating LAT sources with those at other wavelengths has also improved, thanks to dedicated multiwavelength follow-up, new surveys and better ways to extract sources likely to be gamma-ray counterparts. We describe the construction of this new catalog, its characteristics, and its remaining limitations.

  11. The Third Fermi-LAT Catalog of High-Energy Gamma-ray Sources

    NASA Astrophysics Data System (ADS)

    Burnett, Toby

    2014-03-01

    The Fermi Gamma-ray Space Telescope Large Area Telescope (LAT) has been gathering science data since August 2008, surveying the full sky every three hours. The second source catalog (2FGL, Nolan et al. 2012, ApJS 199, 31) was based on 2 years of data. We are preparing a third source catalog (3FGL) based on 4 years of reprocessed data. The reprocessing introduced a more accurate description of the instrument, which resulted in a narrower point spread function. Both the localization and the detection threshold for hard-spectrum sources have been improved. The new catalog also relies on a refined model of Galactic diffuse emission, particularly important for low-latitude soft-spectrum sources. The process for associating LAT sources with those at other wavelengths has also improved, thanks to dedicated multiwavelength follow-up, new surveys and better ways to extract sources likely to be gamma-ray counterparts. We describe the construction of this new catalog, its characteristics, and its remaining limitations.

  12. Potential of chicken by-products as sources of useful biological resources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lasekan, Adeseye; Abu Bakar, Fatimah, E-mail: fatim@putra.upm.edu.my; Halal Products Research Institute, Universiti Putra Malaysia, 43400 UPM Serdang, Selangor

    By-products from different animal sources are currently being utilised for beneficial purposes. Chicken processing plants all over the world generate large amounts of solid by-products in the form of heads, legs, bones, viscera and feathers. These wastes are often processed into livestock feed, fertilizers and pet foods, or totally discarded. Inappropriate disposal of these wastes causes environmental pollution, diseases and loss of useful biological resources like protein, enzymes and lipids. Utilisation methods that make use of these biological components for producing value-added products, rather than the direct use of the actual waste material, might be another viable option for dealing with these wastes. This line of thought has consequently led to research on these wastes as sources of protein hydrolysates, enzymes and polyunsaturated fatty acids. Due to the multiple applications of protein hydrolysates in various branches of science and industry, and the large body of literature reporting the conversion of animal wastes to hydrolysates, a large section of this review is devoted to this subject. Thus, this review reports the known functional and bioactive properties of hydrolysates derived from chicken by-products as well as their utilisation as a source of peptone in microbiological media. Methods of producing these hydrolysates, including their microbiological safety, are discussed. Based on the few references available in the literature, the potential of some chicken by-products as sources of proteases and polyunsaturated fatty acids is pointed out, along with some other future applications.

  13. Merging LIDAR digital terrain model with direct observed elevation points for urban flood numerical simulation

    NASA Astrophysics Data System (ADS)

    Arrighi, Chiara; Campo, Lorenzo

    2017-04-01

    In recent years, concern about the economic losses and loss of life due to urban floods has grown hand in hand with the numerical skill in simulating such events. The large amount of computational power needed to address the problem (simulating a flood in complex terrain such as a medium-large city) is only one of the issues. Others include the general lack of exhaustive observations during the event (exact extent, dynamics, water levels reached in different parts of the involved area), needed for calibration and validation of the model, the need to consider sewer effects, and the availability of a correct and precise description of the geometry of the problem. In large cities, topographic surveys are generally available only at a limited number of points, whereas a complete hydraulic simulation needs a detailed description of the terrain over the whole computational domain. LIDAR surveys can achieve this goal, providing a comprehensive description of the terrain, although they often lack precision. In this work an optimal merging of these two sources of geometric information, measured elevation points and the LIDAR survey, is proposed, taking into account the error variance of both. The procedure is applied to a flood-prone city over an area of approximately 35 km², starting from a LIDAR DTM with a spatial resolution of 1 m and 13,000 measured points. The spatial pattern of the error (LIDAR vs. points) is analysed, and the merging method is tested with a series of jackknife procedures that take into account different densities of the available points. A discussion of the results is provided.
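At a single location, combining two elevation estimates while "taking into account the error variance of both" is the classic minimum-variance (inverse-variance-weighted) combination. A sketch under that assumption (the paper's full procedure also models the spatial pattern of the LIDAR error, which this pointwise version ignores):

```python
def merge_elevations(z_lidar, var_lidar, z_survey, var_survey):
    """Minimum-variance linear combination of a LIDAR elevation and a
    surveyed elevation with known error variances. Returns the merged
    elevation and its (reduced) error variance."""
    w_lidar = var_survey / (var_lidar + var_survey)
    z = w_lidar * z_lidar + (1.0 - w_lidar) * z_survey
    var = var_lidar * var_survey / (var_lidar + var_survey)
    return z, var
```

The merged variance is always smaller than either input variance, which is the formal sense in which the merge is "optimal".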

  14. Apportioning riverine DIN load to export coefficients of land uses in an urbanized watershed.

    PubMed

    Shih, Yu-Ting; Lee, Tsung-Yu; Huang, Jr-Chuan; Kao, Shuh-Ji; Chang

    2016-08-01

    The apportionment of riverine dissolved inorganic nitrogen (DIN) load to individual land uses on a watershed scale demands accurate DIN load estimation and the differentiation of point and non-point sources, but both are rarely quantitatively determined in small montane watersheds. We introduce the Danshui River watershed of Taiwan, a mountainous urbanized watershed, to determine the export coefficients via a reverse Monte Carlo approach from the riverine DIN load. The results show that the dynamics of N fluctuation determine the load estimation method and sampling frequency. On a monthly sampling frequency basis, the average load estimate of the methods (GM, FW, and LI) outperformed that of any individual method. Export coefficient analysis showed that the forest DIN yield of 521.5 kg-N km⁻² yr⁻¹ was ~2.7-fold higher than the global riverine DIN yield (mainly from temperate large rivers with various land use compositions). Such a high yield is attributable to high rainfall and atmospheric N deposition. The export coefficient of agriculture was disproportionately larger than that of forest, suggesting that a small replacement of forest by agriculture could lead to a considerable change in DIN load. The differentiation between point and non-point sources showed that untreated wastewater (a non-point source), accounting for ~93% of the total human-associated wastewater, resulted in a high export coefficient for urban land. The inclusion of the treated and untreated wastewater completes the N budget of wastewater. The export coefficient approach serves well to assess the riverine DIN load and to improve the understanding of the N cascade. Copyright © 2016 Elsevier B.V. All rights reserved.
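A "reverse Monte Carlo" apportionment in this spirit samples candidate export coefficients from prior ranges and keeps only the sets that reproduce the observed riverine load. A toy sketch (the land-use classes, prior ranges, and acceptance tolerance are hypothetical, not the paper's values):

```python
import random

def reverse_monte_carlo(areas_km2, observed_load, coeff_ranges,
                        n_samples=50000, rel_tol=0.05, seed=0):
    """Sample export coefficients (load per unit area) uniformly within
    prior ranges; accept a coefficient set if its predicted total load
    (sum of coefficient * area) matches the observed load within rel_tol."""
    rng = random.Random(seed)
    accepted = []
    for _ in range(n_samples):
        coeffs = [rng.uniform(lo, hi) for lo, hi in coeff_ranges]
        predicted = sum(c * a for c, a in zip(coeffs, areas_km2))
        if abs(predicted - observed_load) <= rel_tol * observed_load:
            accepted.append(coeffs)
    return accepted
```

The spread of the accepted sets then quantifies how tightly the observed load constrains each land use's coefficient.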

  15. Volume 2 - Point Sources

    EPA Pesticide Factsheets

    Point source emission reference materials from the Emissions Inventory Improvement Program (EIIP). Provides point source guidance on planning, emissions estimation, data collection, inventory documentation and reporting, and quality assurance/quality control.

  16. Using CSLD Method to Calculate COD Pollution Load of Wei River Watershed above Huaxian Section, China

    NASA Astrophysics Data System (ADS)

    Zhu, Lei; Song, JinXi; Liu, WanQing

    2017-12-01

    Huaxian Section is the last hydrological and water quality monitoring section of the Weihe River Watershed. The Weihe River Watershed above Huaxian Section is therefore taken as the research objective in this paper, and COD is chosen as the water quality parameter. According to the discharge characteristics of point-source and non-point-source pollution, a new method to estimate pollution loads, the characteristic section load (CSLD) method, is proposed, and the point-source and non-point-source pollution loads of the Weihe River Watershed above Huaxian Section are calculated for the rainy, normal and dry seasons of the year 2007. The results show that the monthly point-source pollution loads are discharged stably, while the monthly non-point-source pollution loads change greatly, and the non-point-source share of the total COD pollution load decreases, in turn, in the normal, rainy and wet periods.

  17. Calculating NH3-N pollution load of wei river watershed above Huaxian section using CSLD method

    NASA Astrophysics Data System (ADS)

    Zhu, Lei; Song, JinXi; Liu, WanQing

    2018-02-01

    Huaxian Section is the last hydrological and water quality monitoring section of the Weihe River Watershed, so it is taken as the research objective in this paper, with NH3-N chosen as the water quality parameter. According to the discharge characteristics of point-source and non-point-source pollution, a new method to estimate pollution loads, the characteristic section load (CSLD) method, is proposed, and the point-source and non-point-source pollution loads of the Weihe River Watershed above Huaxian Section are calculated for the rainy, normal and dry seasons of the year 2007. The results show that the monthly point-source pollution loads are discharged stably, while the monthly non-point-source pollution loads change greatly. The non-point-source share of the total NH3-N pollution load decreases, in turn, in the normal, rainy and wet periods.

  18. Methane bubbling from northern lakes: present and future contributions to the global methane budget.

    PubMed

    Walter, Katey M; Smith, Laurence C; Chapin, F Stuart

    2007-07-15

    Large uncertainties in the budget of atmospheric methane (CH4) limit the accuracy of climate change projections. Here we describe and quantify an important source of CH4 -- point-source ebullition (bubbling) from northern lakes -- that has not been incorporated in previous regional or global methane budgets. Employing a method recently introduced to measure ebullition more accurately by taking into account its spatial patchiness in lakes, we estimate point-source ebullition for 16 lakes in Alaska and Siberia that represent several common northern lake types: glacial, alluvial floodplain, peatland and thermokarst (thaw) lakes. Extrapolation of measured fluxes from these 16 sites to all lakes north of 45 degrees N, using circumpolar databases of lake and permafrost distributions, suggests that northern lakes are a globally significant source of atmospheric CH4, emitting approximately 24.2 ± 10.5 Tg CH4 yr⁻¹. Thermokarst lakes have particularly high emissions because they release CH4 produced from organic matter previously sequestered in permafrost. A carbon mass balance calculation of CH4 release from thermokarst lakes on the Siberian yedoma ice complex suggests that these lakes alone would emit as much as approximately 49,000 Tg CH4 if this ice complex were to thaw completely. Using a space-for-time substitution based on the current lake distributions in permafrost-dominated and permafrost-free terrains, we estimate that lake emissions would be reduced by approximately 12% in a more probable transitional permafrost scenario and by approximately 53% in a 'permafrost-free' Northern Hemisphere. A long-term decline in CH4 ebullition from lakes due to lake area loss and permafrost thaw would occur only after the large release of CH4 associated with thermokarst lake development in the zone of continuous permafrost.
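The scenario arithmetic above is easy to reproduce: applying the stated percentage reductions to the present-day central estimate gives the transitional and permafrost-free emission levels (central value only; the ±10.5 Tg uncertainty is ignored in this sketch):

```python
PRESENT_EMISSIONS_TG = 24.2  # Tg CH4 per year, northern-lake ebullition

def reduced_emissions(present, reduction_fraction):
    """Emissions remaining after a fractional reduction (e.g. 0.12 for ~12%)."""
    return present * (1.0 - reduction_fraction)

transitional = reduced_emissions(PRESENT_EMISSIONS_TG, 0.12)     # ~21.3 Tg/yr
permafrost_free = reduced_emissions(PRESENT_EMISSIONS_TG, 0.53)  # ~11.4 Tg/yr
```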

  19. On the nature of the deeply embedded protostar OMC-2 FIR 4

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Furlan, E.; Megeath, S. T.; Fischer, W. J.

    We use mid-infrared to submillimeter data from the Spitzer, Herschel, and Atacama Pathfinder Experiment telescopes to study the bright submillimeter source OMC-2 FIR 4. We find a point source at 8, 24, and 70 μm, and a compact, but extended source at 160, 350, and 870 μm. The peak of the emission from 8 to 70 μm, attributed to the protostar associated with FIR 4, is displaced relative to the peak of the extended emission; the latter represents the large molecular core the protostar is embedded within. We determine that the protostar has a bolometric luminosity of 37 L☉, although including more extended emission surrounding the point source raises this value to 86 L☉. Radiative transfer models of the protostellar system fit the observed spectral energy distribution well and yield a total luminosity of most likely less than 100 L☉. Our models suggest that the bolometric luminosity of the protostar could be as low as 12-14 L☉, while the luminosity of the colder (∼20 K) extended core could be around 100 L☉, with a mass of about 27 M☉. Our derived luminosities for the protostar OMC-2 FIR 4 are in direct contradiction with previous claims of a total luminosity of 1000 L☉. Furthermore, we find evidence from far-infrared molecular spectra and 3.6 cm emission that FIR 4 drives an outflow. The final stellar mass the protostar will ultimately achieve is uncertain due to its association with the large reservoir of mass found in the cold core.

  20. SOURCES AND TRANSFORMATIONS OF NITROGEN, CARBON, AND PHOSPHORUS IN THE POTOMAC RIVER ESTUARY

    NASA Astrophysics Data System (ADS)

    Pennino, M. J.; Kaushal, S.

    2009-12-01

    Global transport of nitrogen (N), carbon (C), and phosphorus (P) in river ecosystems has been dramatically altered due to urbanization. We examined the capacity of a major tributary of the Chesapeake Bay, the Potomac River, to transform carbon, nitrogen, and phosphorus inputs from the world’s largest advanced wastewater treatment facility (Washington D.C. Water and Sewer Authority). Surface water and effluent samples were collected along longitudinal transects of the Potomac River seasonally and compared to long-term interannual records of carbon, nitrogen, and phosphorus. Water samples from seasonal longitudinal transects were analyzed for dissolved organic and inorganic nitrogen and phosphorus, total organic carbon, and particulate carbon, nitrogen, and phosphorus. The source and quality of organic matter was characterized using fluorescence spectroscopy, excitation emission matrices (EEMs), and PARAFAC modeling. Sources of nitrate were tracked using stable isotopes of nitrogen and oxygen. Along the river network stoichiometric ratios of C, N, and P were determined across sites and related to changes in flow conditions. Land use data and historical water chemistry data were also compared to assess the relative importance of non-point sources from land-use change versus point-sources of carbon, nitrogen, and phosphorus. Preliminary data from EEMs suggested that more humic-like organic matter was important above the wastewater treatment plant, but more protein-like organic matter was present below the treatment plant. Levels of nitrate and ammonia showed increases within the vicinity of the wastewater treatment outfall, but decreased rapidly downstream, potentially indicating nutrient uptake and/or denitrification. Phosphate levels decreased gradually along the river with a small increase near the wastewater treatment plant and a larger increase and decrease further downstream near the high salinity zone. 
Total organic carbon levels show a small decrease downstream. Ecological stoichiometric ratios along the river indicate increases in C/N ratios downstream, but no corresponding trend with C/P ratios. The N/P ratios increased directly below the treatment plant and then decreased gradually downstream. The C/N/P ratios remained level until the last two sampling stations within 20 miles of the Chesapeake Bay, where there is a large increase. Despite large inputs, there may be large variations in sources and ecological stoichiometry along rivers and estuaries, and knowledge of these transformations will be important in predicting changes in the amounts, forms, and stoichiometry of nutrient loads to coastal waters.

  1. Deriving the Contribution of Blazars to the Fermi-LAT Extragalactic γ-ray Background at E > 10 GeV with Efficiency Corrections and Photon Statistics

    NASA Astrophysics Data System (ADS)

    Di Mauro, M.; Manconi, S.; Zechlin, H.-S.; Ajello, M.; Charles, E.; Donato, F.

    2018-04-01

The Fermi Large Area Telescope (LAT) Collaboration has recently released the Third Catalog of Hard Fermi-LAT Sources (3FHL), which contains 1556 sources detected above 10 GeV with seven years of Pass 8 data. Building upon the 3FHL results, we investigate the flux distribution of sources at high Galactic latitudes (|b| > 20°), which are mostly blazars. We use two complementary techniques: (1) a source-detection efficiency correction method and (2) an analysis of pixel photon count statistics with the one-point probability distribution function (1pPDF). With the first method, using realistic Monte Carlo simulations of the γ-ray sky, we calculate the efficiency of the LAT to detect point sources. This enables us to find the intrinsic source-count distribution at photon fluxes down to 7.5 × 10^-12 ph cm^-2 s^-1. With this method, we detect a flux break at (3.5 ± 0.4) × 10^-11 ph cm^-2 s^-1 with a significance of at least 5.4σ. The power-law indexes of the source-count distribution above and below the break are 2.09 ± 0.04 and 1.07 ± 0.27, respectively. This result is confirmed with the 1pPDF method, which has a sensitivity reach of ∼10^-11 ph cm^-2 s^-1. Integrating the derived source-count distribution above the sensitivity of our analysis, we find that (42 ± 8)% of the extragalactic γ-ray background originates from blazars.
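
The broken power law reported above can be explored numerically. A minimal NumPy sketch (the normalization is arbitrary here, since the paper's fitted normalization is not quoted in the abstract) shows why slopes straddling 2 concentrate the contributed flux near the break, which is what lets a survey resolving sources down to ~10^-11 account for a large share of the background:

```python
import numpy as np

# Broken power-law differential source counts dN/dS from the abstract:
# slope 1.07 below the break, 2.09 above, break at 3.5e-11 ph cm^-2 s^-1.
S_B, G_LO, G_HI = 3.5e-11, 1.07, 2.09

def dn_ds(s, norm=1.0):
    """Differential counts, continuous at the break; `norm` is arbitrary."""
    s = np.asarray(s, dtype=float)
    return norm * np.where(s < S_B, (s / S_B) ** -G_LO, (s / S_B) ** -G_HI)

# S^2 dN/dS is the flux contributed per logarithmic flux interval; for
# 1 < g_lo < 2 < g_hi it peaks at the break flux.
s = np.logspace(-12.5, -9.0, 400)
flux_per_dex = s**2 * dn_ds(s)
s_peak = s[np.argmax(flux_per_dex)]
```

With these slopes the integrand rises as s^0.93 below the break and falls as s^-0.09 above it, so `s_peak` lands at the grid point nearest the break.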

  2. Embolic Strokes of Undetermined Source in the Athens Stroke Registry: An Outcome Analysis.

    PubMed

    Ntaios, George; Papavasileiou, Vasileios; Milionis, Haralampos; Makaritsis, Konstantinos; Vemmou, Anastasia; Koroboki, Eleni; Manios, Efstathios; Spengos, Konstantinos; Michel, Patrik; Vemmos, Konstantinos

    2015-08-01

Information about outcomes in Embolic Stroke of Undetermined Source (ESUS) patients is unavailable. This study provides a detailed analysis of outcomes of a large ESUS population. The data set was derived from the Athens Stroke Registry. ESUS was defined according to the Cryptogenic Stroke/ESUS International Working Group criteria. End points were mortality, stroke recurrence, functional outcome, and a composite cardiovascular end point comprising recurrent stroke, myocardial infarction, aortic aneurysm rupture, systemic embolism, or sudden cardiac death. We performed Kaplan-Meier analyses to estimate cumulative probabilities of outcomes by stroke type and Cox regression to investigate whether stroke type was an outcome predictor. 2731 patients were followed up for a mean of 30.5±24.1 months. There were 73 (26.5%) deaths, 60 (21.8%) recurrences, and 78 (28.4%) composite cardiovascular end points in the 275 ESUS patients. The cumulative probability of survival in ESUS was 65.6% (95% confidence intervals [CI], 58.9%-72.2%), significantly higher compared with cardioembolic stroke (38.8%, 95% CI, 34.9%-42.7%). The cumulative probability of stroke recurrence in ESUS was 29.0% (95% CI, 22.3%-35.7%), similar to cardioembolic strokes (26.8%, 95% CI, 22.1%-31.5%), but significantly higher compared with all types of noncardioembolic stroke. One hundred seventy-two (62.5%) ESUS patients had a favorable functional outcome compared with 280 (32.2%) in cardioembolic and 303 (60.9%) in large-artery atherosclerotic strokes. ESUS patients had a similar risk of the composite cardiovascular end point as all other stroke types, with the exception of lacunar strokes, which had a significantly lower risk (adjusted hazard ratio, 0.70 [95% CI, 0.52-0.94]). Long-term mortality risk in ESUS is lower compared with cardioembolic strokes, despite similar rates of recurrence and composite cardiovascular end points. Recurrent stroke risk is higher in ESUS than in noncardioembolic strokes. 
© 2015 American Heart Association, Inc.

  3. New klystron technology

    NASA Astrophysics Data System (ADS)

    Faillon, G.

    1985-10-01

It is pointed out that klystrons representing high-power RF sources are mainly used in applications related to radars and scientific instrumentation. High peak power pulsed klystrons are discussed. It is found that a large number of linacs are powered by S-band klystrons (2.856 or 2.9985 GHz) with pulse durations of a few microseconds. Special precautions are being taken to ensure that the breakdown voltage will not be reached, and very thin titanium coatings are employed to protect the ceramic against discharges. Attention is given to very large pulse width tubes, CW tubes, and limits of the power-frequency domain.

  4. A Comparative Study of Point Cloud Data Collection and Processing

    NASA Astrophysics Data System (ADS)

    Pippin, J. E.; Matheney, M.; Gentle, J. N., Jr.; Pierce, S. A.; Fuentes-Pineda, G.

    2016-12-01

Over the past decade, there has been dramatic growth in the acquisition of publicly funded high-resolution topographic data for scientific, environmental, engineering and planning purposes. These data sets are valuable for applications of interest across a large and varied user community. However, because of the large volumes of data produced by high-resolution mapping technologies and the expense of aerial data collection, it is often difficult to collect and distribute these datasets. Furthermore, the data can be technically challenging to process, requiring software and computing resources not readily available to many users. This study presents a comparison of advanced computing hardware and software that is used to collect and process point cloud datasets, such as LIDAR scans. Activities included implementation and testing of open source libraries and applications for point cloud data processing, such as Meshlab, Blender, PDAL, and PCL. Additionally, a suite of commercial-scale applications, Skanect and Cloudcompare, were applied to raw datasets. Handheld hardware solutions, a Structure Scanner and Xbox 360 Kinect V1, were tested for their ability to scan at three field locations. The resultant data projects successfully scanned and processed subsurface karst features ranging from small stalactites to large rooms, as well as a surface waterfall feature. Outcomes support the feasibility of rapid sensing in 3D at field scales.

  5. Effect of transverse vibrations of fissile nuclei on the angular and spin distributions of low-energy fission fragments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bunakov, V. E.; Kadmensky, S. G., E-mail: kadmensky@phys.vsu.ru; Lyubashevsky, D. E.

    2016-05-15

It is shown that A. Bohr’s classic theory of angular distributions of fragments originating from low-energy fission should be supplemented with quantum corrections based on the involvement of a superposition of a very large number of angular momenta L{sub m} in the description of the relative motion of fragments flying apart along the straight line coincident with the symmetry axis. It is revealed that quantum zero-point wriggling-type vibrations of the fissile system in the vicinity of its scission point are a source of these angular momenta and of high fragment spins observed experimentally.

  6. Static telescope aberration measurement using lucky imaging techniques

    NASA Astrophysics Data System (ADS)

    López-Marrero, Marcos; Rodríguez-Ramos, Luis Fernando; Marichal-Hernández, José Gil; Rodríguez-Ramos, José Manuel

    2012-07-01

A procedure has been developed to compute static aberrations once the telescope PSF has been measured with the lucky imaging technique, using a nearby star close to the object of interest as the point source to probe the optical system. This PSF is iteratively turned into a phase map at the pupil using the Gerchberg-Saxton algorithm and then converted to the appropriate actuation information for a deformable mirror having a low actuator number but large stroke capability. The main advantage of this procedure is related to its capability of correcting static aberration at the specific pointing direction and without the need of a wavefront sensor.
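
The Gerchberg-Saxton step described above alternates between the pupil and focal planes, imposing the known aperture amplitude in one and the measured PSF amplitude in the other. A minimal NumPy sketch with illustrative geometry and a synthetic tilt aberration rather than real telescope data:

```python
import numpy as np

def gerchberg_saxton(psf_intensity, pupil_mask, n_iter=100, seed=0):
    """Toy Gerchberg-Saxton loop: find a pupil phase consistent with a
    measured PSF intensity and a known aperture (amplitude) mask.
    Sampling and geometry here are illustrative, not the paper's setup."""
    rng = np.random.default_rng(seed)
    focal_amp = np.sqrt(psf_intensity)
    phase = rng.uniform(-np.pi, np.pi, pupil_mask.shape)
    for _ in range(n_iter):
        pupil = pupil_mask * np.exp(1j * phase)           # impose aperture amplitude
        field = np.fft.fft2(pupil)                        # propagate to focal plane
        field = focal_amp * np.exp(1j * np.angle(field))  # impose measured PSF amplitude
        phase = np.angle(np.fft.ifft2(field))             # back to pupil, keep phase
    return phase

# Self-consistency check: build a PSF from a known phase, then recover one.
n = 64
y, x = np.mgrid[-n//2:n//2, -n//2:n//2]
mask = (x**2 + y**2 < (n//4)**2).astype(float)
true_phase = 0.5 * x * mask / n                           # a small tilt aberration
psf = np.abs(np.fft.fft2(mask * np.exp(1j * true_phase)))**2
est = gerchberg_saxton(psf, mask)
```

The recovered phase is only defined up to the usual ambiguities (piston, conjugation), so the natural check is that the PSF rebuilt from `est` reproduces the measured one.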

  7. Tropical Convection's Roles in Tropical Tropopause Cirrus

    NASA Technical Reports Server (NTRS)

    Boehm, Matthew T.; Starr, David OC.; Verlinde, Johannes; Lee, Sukyoung

    2002-01-01

    The results presented here show that tropical convection plays a role in each of the three primary processes involved in the in situ formation of tropopause cirrus. First, tropical convection transports moisture from the surface into the upper troposphere. Second, tropical convection excites Rossby waves that transport zonal momentum toward the ITCZ, thereby generating rising motion near the equator. This rising motion helps transport moisture from where it is detrained from convection to the cold-point tropopause. Finally, tropical convection excites vertically propagating tropical waves (e.g. Kelvin waves) that provide one source of large-scale cooling near the cold-point tropopause, leading to tropopause cirrus formation.

  8. High spatial resolution detection of low-energy electrons using an event-counting method, application to point projection microscopy

    NASA Astrophysics Data System (ADS)

    Salançon, Evelyne; Degiovanni, Alain; Lapena, Laurent; Morin, Roger

    2018-04-01

    An event-counting method using a two-microchannel plate stack in a low-energy electron point projection microscope is implemented. 15 μm detector spatial resolution, i.e., the distance between first-neighbor microchannels, is demonstrated. This leads to a 7 times better microscope resolution. Compared to previous work with neutrons [Tremsin et al., Nucl. Instrum. Methods Phys. Res., Sect. A 592, 374 (2008)], the large number of detection events achieved with electrons shows that the local response of the detector is mainly governed by the angle between the hexagonal structures of the two microchannel plates. Using this method in point projection microscopy offers the prospect of working with a greater source-object distance (350 nm instead of 50 nm), advancing toward atomic resolution.

  9. Economics of electricity

    NASA Astrophysics Data System (ADS)

    Erdmann, G.

    2015-08-01

The following text is an introduction to the economic theory of electricity supply and demand. The basic approach of economics has to reflect the physical peculiarities of electric power that is based on the directed movement of electrons from the minus pole to the plus pole of a voltage source. The regular grid supply of electricity is characterized by a largely constant frequency and voltage. Thus, from a physical point of view electricity is a homogeneous product. But from an economic point of view, electricity is not homogeneous. Wholesale electricity prices show significant fluctuations over time and between regions, because this product is not storable (in relevant quantities) and there may be bottlenecks in the transmission and distribution grids. The associated non-homogeneity is the starting point of the economic analysis of electricity markets.

  10. LEAP: Looking beyond pixels with continuous-space EstimAtion of Point sources

    NASA Astrophysics Data System (ADS)

    Pan, Hanjie; Simeoni, Matthieu; Hurley, Paul; Blu, Thierry; Vetterli, Martin

    2017-12-01

    Context. Two main classes of imaging algorithms have emerged in radio interferometry: the CLEAN algorithm and its multiple variants, and compressed-sensing inspired methods. They are both discrete in nature, and estimate source locations and intensities on a regular grid. For the traditional CLEAN-based imaging pipeline, the resolution power of the tool is limited by the width of the synthesized beam, which is inversely proportional to the largest baseline. The finite rate of innovation (FRI) framework is a robust method to find the locations of point-sources in a continuum without grid imposition. The continuous formulation makes the FRI recovery performance only dependent on the number of measurements and the number of sources in the sky. FRI can theoretically find sources below the perceived tool resolution. To date, FRI had never been tested in the extreme conditions inherent to radio astronomy: weak signal / high noise, huge data sets, large numbers of sources. Aims: The aims were (i) to adapt FRI to radio astronomy, (ii) verify it can recover sources in radio astronomy conditions with more accurate positioning than CLEAN, and possibly resolve some sources that would otherwise be missed, (iii) show that sources can be found using less data than would otherwise be required to find them, and (iv) show that FRI does not lead to an augmented rate of false positives. Methods: We implemented a continuous domain sparse reconstruction algorithm in Python. The angular resolution performance of the new algorithm was assessed under simulation, and with visibility measurements from the LOFAR telescope. Existing catalogs were used to confirm the existence of sources. Results: We adapted the FRI framework to radio interferometry, and showed that it is possible to determine accurate off-grid point-source locations and their corresponding intensities. 
In addition, FRI-based sparse reconstruction required less integration time and smaller baselines to reach a comparable reconstruction quality compared to a conventional method. The achieved angular resolution is higher than the perceived instrument resolution, and very close sources can be reliably distinguished. The proposed approach has cubic complexity in the total number (typically around a few thousand) of uniform Fourier data of the sky image estimated from the reconstruction. It is also demonstrated that the method is robust to the presence of extended-sources, and that false-positives can be addressed by choosing an adequate model order to match the noise level.
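
The core of FRI recovery for a stream of point sources is the annihilating filter (Prony's method): K sources can be located exactly from 2K+1 uniform Fourier samples in the noiseless case. A textbook NumPy sketch of that building block, not the LEAP pipeline itself (which adds robustness to noise and realistic radio-interferometric visibilities):

```python
import numpy as np

def fri_locate(X, K):
    """Annihilating-filter recovery of K Dirac locations in [0, 1) from
    uniform Fourier samples X[m] = sum_k a_k exp(-2j*pi*m*t_k), m = 0..M-1.
    Noiseless textbook version."""
    M = len(X)
    # Toeplitz system: sum_l h[l] X[m-l] = 0 for m = K..M-1.
    A = np.array([[X[m - l] for l in range(K + 1)] for m in range(K, M)])
    # The filter h spans the null space of A; take the smallest singular vector.
    _, _, Vh = np.linalg.svd(A)
    h = Vh[-1].conj()
    # Source locations are the angles of the roots of the annihilating filter.
    roots = np.roots(h)
    return np.sort(np.mod(-np.angle(roots) / (2 * np.pi), 1.0))

# Two off-grid sources; 2K + 1 = 5 Fourier samples suffice for K = 2.
t_true = np.array([0.21, 0.64])
a = np.array([1.0, 0.7])
m = np.arange(5)
X = (a * np.exp(-2j * np.pi * np.outer(m, t_true))).sum(axis=1)
t_est = fri_locate(X, K=2)
```

Because the locations are roots of a polynomial rather than grid cells, the recovery is not limited by any pixel size, which is the sense in which FRI can resolve below the perceived tool resolution.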

  11. Vision in the dimmest habitats on earth.

    PubMed

    Warrant, Eric

    2004-10-01

    A very large proportion of the world's animal species are active in dim light, either under the cover of night or in the depths of the sea. The worlds they see can be dim and extended, with light reaching the eyes from all directions at once, or they can be composed of bright point sources, like the multitudes of stars seen in a clear night sky or the rare sparks of bioluminescence that are visible in the deep sea. The eye designs of nocturnal and deep-sea animals have evolved in response to these two very different types of habitats, being optimised for maximum sensitivity to extended scenes, or to point sources, or to both. After describing the many visual adaptations that have evolved across the animal kingdom for maximising sensitivity to extended and point-source scenes, I then use case studies from the recent literature to show how these adaptations have endowed nocturnal animals with excellent vision. Nocturnal animals can see colour and negotiate dimly illuminated obstacles during flight. They can also navigate using learned terrestrial landmarks, the constellations of stars or the dim pattern of polarised light formed around the moon. The conclusion from these studies is clear: nocturnal habitats are just as rich in visual details as diurnal habitats are, and nocturnal animals have evolved visual systems capable of exploiting them. The same is certainly true of deep-sea animals, as future research will no doubt reveal.

  12. Cosmic shear measurements with Dark Energy Survey Science Verification data

    DOE PAGES

    Becker, M. R.

    2016-07-06

Here, we present measurements of weak gravitational lensing cosmic shear two-point statistics using Dark Energy Survey Science Verification data. We demonstrate that our results are robust to the choice of shear measurement pipeline, either ngmix or im3shape, and robust to the choice of two-point statistic, including both real and Fourier-space statistics. Our results pass a suite of null tests including tests for B-mode contamination and direct tests for any dependence of the two-point functions on a set of 16 observing conditions and galaxy properties, such as seeing, airmass, galaxy color, galaxy magnitude, etc. We use a large suite of simulations to compute the covariance matrix of the cosmic shear measurements and assign statistical significance to our null tests. We find that our covariance matrix is consistent with the halo model prediction, indicating that it has the appropriate level of halo sample variance. We also compare the same jackknife procedure applied to the data and the simulations in order to search for additional sources of noise not captured by the simulations. We find no statistically significant extra sources of noise in the data. The overall detection significance with tomography for our highest source density catalog is 9.7σ. Cosmological constraints from the measurements in this work are presented in a companion paper.
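
The jackknife procedure mentioned above can be illustrated with a generic delete-one estimator over spatial patches. The sketch below is deliberately simplified (the DES analysis jackknifes two-point functions over sky patches; here the per-patch statistic is just a mean, for which the jackknife covariance reduces to the sample covariance divided by N):

```python
import numpy as np

def jackknife_covariance(patch_estimates):
    """Delete-one jackknife covariance for a d-dimensional statistic measured
    in N spatial patches.  `patch_estimates` has shape (N, d); each row is the
    contribution of one patch."""
    est = np.asarray(patch_estimates, dtype=float)
    n = est.shape[0]
    total = est.sum(axis=0)
    # Leave-one-out estimates: the statistic recomputed with each patch removed.
    loo = (total - est) / (n - 1)
    diff = loo - loo.mean(axis=0)
    # Standard jackknife normalization (n - 1)/n.
    return (n - 1) / n * diff.T @ diff

rng = np.random.default_rng(1)
samples = rng.normal(size=(200, 3))   # 200 patches, 3-bin statistic
cov = jackknife_covariance(samples)
```

Comparing this jackknife covariance between data and simulations, as the abstract describes, is a way to detect noise sources the simulations do not model.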

  13. New Global Bathymetry and Topography Model Grids

    NASA Astrophysics Data System (ADS)

    Smith, W. H.; Sandwell, D. T.; Marks, K. M.

    2008-12-01

    A new version of the "Smith and Sandwell" global marine topography model is available in two formats. A one-arc-minute Mercator projected grid covering latitudes to +/- 80.738 degrees is available in the "img" file format. Also available is a 30-arc-second version in latitude and longitude coordinates from pole to pole, supplied as tiles covering the same areas as the SRTM30 land topography data set. The new effort follows the Smith and Sandwell recipe, using publicly available and quality controlled single- and multi-beam echo soundings where possible and filling the gaps in the oceans with estimates derived from marine gravity anomalies observed by satellite altimetry. The altimeter data have been reprocessed to reduce the noise level and improve the spatial resolution [see Sandwell and Smith, this meeting]. The echo soundings database has grown enormously with new infusions of data from the U.S. Naval Oceanographic Office (NAVO), the National Geospatial-intelligence Agency (NGA), hydrographic offices around the world volunteering through the International Hydrographic Organization (IHO), and many other agencies and academic sources worldwide. These new data contributions have filled many holes: 50% of ocean grid points are within 8 km of a sounding point, 75% are within 24 km, and 90% are within 57 km. However, in the remote ocean basins some gaps still remain: 5% of the ocean grid points are more than 85 km from the nearest sounding control, and 1% are more than 173 km away. Both versions of the grid include a companion grid of source file numbers, so that control points may be mapped and traced to sources. We have compared the new model to multi-beam data not used in the compilation and find that 50% of differences are less than 25 m, 95% of differences are less than 130 m, but a few large differences remain in areas of poor sounding control and large-amplitude gravity anomalies. Land values in the solution are taken from SRTM30v2, GTOPO30 and ICESAT data. 
GEBCO has agreed to adopt this model and begin updating it in 2009. Ongoing tasks include building an uncertainty model and including information from the latest IBCAO map of the Arctic Ocean.

  14. A Chandra ACIS Study of 30 Doradus. II. X-Ray Point Sources in the Massive Star Cluster R136 and Beyond

    NASA Astrophysics Data System (ADS)

    Townsley, Leisa K.; Broos, Patrick S.; Feigelson, Eric D.; Garmire, Gordon P.; Getman, Konstantin V.

    2006-04-01

We have studied the X-ray point-source population of the 30 Doradus (30 Dor) star-forming complex in the Large Magellanic Cloud using high spatial resolution X-ray images and spatially resolved spectra obtained with the Advanced CCD Imaging Spectrometer (ACIS) on board the Chandra X-Ray Observatory. Here we describe the X-ray sources in a 17'×17' field centered on R136, the massive star cluster at the center of the main 30 Dor nebula. We detect 20 of the 32 Wolf-Rayet stars in the ACIS field. The cluster R136 is resolved at the subarcsecond level into almost 100 X-ray sources, including many typical O3-O5 stars, as well as a few bright X-ray sources previously reported. Over 2 orders of magnitude of scatter in LX is seen among R136 O stars, suggesting that X-ray emission in the most massive stars depends critically on the details of wind properties and the binarity of each system, rather than reflecting the widely reported characteristic value LX/Lbol ≈ 10^-7. Such a canonical ratio may exist for single massive stars in R136, but our data are too shallow to confirm this relationship. Through this and future X-ray studies of 30 Dor, the complete life cycle of a massive stellar cluster can be revealed.

  15. A New Global Anthropogenic SO2 Emission Inventory for the Last Decade: A Mosaic of Satellite-derived and Bottom-up Emissions

    NASA Astrophysics Data System (ADS)

    Liu, F.; Joiner, J.; Choi, S.; Krotkov, N. A.; Li, C.; Fioletov, V. E.; McLinden, C. A.

    2017-12-01

    Sulfur dioxide (SO2) measurements from the Ozone Monitoring Instrument (OMI) satellite sensor have been used to detect emissions from large point sources using an innovative estimation technique. Emissions from about 500 sources have been quantified individually based on OMI observations, accounting for about a half of total reported anthropogenic SO2 emissions. We developed a new emission inventory, OMI-HTAP, by combining these OMI-based emission estimates and the conventional bottom-up inventory. OMI-HTAP includes OMI-based estimates for over 400 point sources and is gap-filled with the emission grid map of the latest available global bottom-up emission inventory (HTAP v2.2) for the rest of sources. We have evaluated the OMI-HTAP inventory by performing simulations with the Goddard Earth Observing System version 5 (GEOS-5) model. The GEOS-5 simulated SO2 concentrations driven by both the HTAP and the OMI-HTAP inventory were compared against in-situ and satellite measurements. Results show that the OMI-HTAP inventory improves the model agreement with observations, in particular over the US, India and the Middle East. Additionally, simulations with the OMI-HTAP inventory capture the major trends of anthropogenic SO2 emissions over the world and highlight the influence of missing sources in the bottom-up inventory.

  16. LAVA: Large scale Automated Vulnerability Addition

    DTIC Science & Technology

    2016-05-23

memory copy, e.g., are reasonable attack points. If the goal is to inject divide-by-zero, then arithmetic operations involving division will be...ways. First, it introduces deterministic record and replay, which can be used for iterated and expensive analyses that cannot be performed online... memory. Since our approach records the correspondence between source lines and program basic block execution, it would be just as easy to figure out

  17. Simulation of Spiral Waves and Point Sources in Atrial Fibrillation with Application to Rotor Localization

    PubMed Central

    Ganesan, Prasanth; Shillieto, Kristina E.; Ghoraani, Behnaz

    2018-01-01

Cardiac simulations play an important role in studies involving understanding and investigating the mechanisms of cardiac arrhythmias. Today, studies of arrhythmogenesis and maintenance are largely being performed by creating simulations of a particular arrhythmia with high accuracy comparable to the results of clinical experiments. Atrial fibrillation (AF), the most common arrhythmia in the United States and many other parts of the world, is one of the major fields where simulation and modeling are widely used. AF simulations not only assist in understanding its mechanisms but also help to develop, evaluate and improve the computer algorithms used in electrophysiology (EP) systems for ablation therapies. In this paper, we begin with a brief overview of some common techniques used to simulate two major AF mechanisms – spiral waves (or rotors) and point (or focal) sources. We particularly focus on 2D simulations using Nygren et al.’s mathematical model of the human atrial cell. Then, we elucidate an application of the developed AF simulation to an algorithm designed for localizing AF rotors to improve current AF ablation therapies. Our simulation methods and results, along with the other discussions presented in this paper, are aimed at providing engineers and professionals with a working knowledge of application-specific simulations of spirals and foci. PMID:29629398
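
Re-entrant (spiral) activity of the kind discussed here can be initiated in any generic excitable medium. The sketch below uses the two-variable FitzHugh-Nagumo model with illustrative parameters rather than the Nygren et al. atrial cell model the paper uses, together with the classic cross-field initiation protocol (a plane wave broken by a refractory region, which can curl into a rotor):

```python
import numpy as np

def step(v, w, a=0.1, b=0.5, eps=0.02, D=1.0, dt=0.05):
    """One explicit Euler step of 2D FitzHugh-Nagumo with periodic boundaries.
    Parameters are illustrative, not fitted to atrial tissue."""
    lap = (np.roll(v, 1, 0) + np.roll(v, -1, 0) +
           np.roll(v, 1, 1) + np.roll(v, -1, 1) - 4 * v)   # 5-point Laplacian
    dv = D * lap + v * (1 - v) * (v - a) - w               # fast (voltage-like)
    dw = eps * (b * v - w)                                 # slow (recovery)
    return v + dt * dv, w + dt * dw

# Cross-field initiation: a plane-wave stimulus meets a refractory half-plane.
n = 100
v = np.zeros((n, n))
w = np.zeros((n, n))
v[:, :5] = 1.0          # plane-wave stimulus on the left edge
w[:n // 2, :] = 0.3     # refractory region that breaks the wavefront
for _ in range(500):
    v, w = step(v, w)
```

The time step obeys the explicit diffusion stability bound dt < dx²/(4D) on the unit grid, so the run stays bounded; visualizing `v` over time shows the broken front curling.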

  18. A High-Emissivity Blackbody with Large Aperture for Radiometric Calibration at Low-Temperature

    NASA Astrophysics Data System (ADS)

    Ko, Hsin-Yi; Wen, Bor-Jiunn; Tsa, Shu-Fei; Li, Guo-Wei

    2009-02-01

A newly designed high-emissivity cylindrical blackbody source with a large diameter aperture (54 mm), an internal triangular-grooved surface, and concentric grooves on the bottom surface was immersed in a temperature-controlled, stirred-liquid bath. The stirred-liquid bath can be stabilized to better than 0.05 °C at temperatures between 30 °C and 70 °C, with traceability to the ITS-90 through a platinum resistance thermometer (PRT) calibrated at the fixed points of indium, gallium, and the water triple point. The temperature uniformity of the blackbody from the bottom to the front of the cavity is better than 0.05 % of the operating temperature (in °C). The heat loss of the cavity is less than 0.03 % of the operating temperature as determined with a radiation thermometer by removing an insulating lid without the gas purge operating. Optical ray tracing with a Monte Carlo method (STEEP 3) indicated that the effective emissivity of this blackbody cavity is very close to unity. The size-of-source effect (SSE) of the radiation thermometer and the effective emissivity of the blackbody were considered in evaluating the uncertainty of the blackbody. The blackbody uncertainty budget and performance are described in this paper.
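
Why multiple reflections drive the effective emissivity toward unity can be illustrated with a far cruder model than the STEEP 3 ray tracing used in the paper: treat each wall interaction as absorption with probability ε, and each reflection as escaping through the aperture with an assumed probability f. Under those assumptions the closed form ε_eff = ε / (ε + f(1 − ε)) serves as a check on the Monte Carlo; both ε and f below are made up for illustration.

```python
import numpy as np

def mc_effective_emissivity(eps_wall, f_escape, n_rays=200_000, seed=2):
    """Crude Monte Carlo cavity model: a ray is absorbed with probability
    eps_wall at each wall interaction, otherwise it reflects and then escapes
    through the aperture with probability f_escape.  The fraction absorbed
    estimates the effective emissivity.  A toy stand-in for full ray tracing."""
    rng = np.random.default_rng(seed)
    absorbed = 0
    for _ in range(n_rays):
        while True:
            if rng.random() < eps_wall:   # absorbed at the wall
                absorbed += 1
                break
            if rng.random() < f_escape:   # reflected ray leaves the aperture
                break
    return absorbed / n_rays

eps_wall, f_escape = 0.90, 0.05
analytic = eps_wall / (eps_wall + f_escape * (1 - eps_wall))
estimate = mc_effective_emissivity(eps_wall, f_escape)
```

Even with a wall emissivity of 0.90, a small escape probability pushes the effective emissivity above 0.99, which is the qualitative behavior behind grooved, deep-cavity designs.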

  19. Dependence of Adaptive Cross-correlation Algorithm Performance on the Extended Scene Image Quality

    NASA Technical Reports Server (NTRS)

    Sidick, Erkin

    2008-01-01

Recently, we reported an adaptive cross-correlation (ACC) algorithm to estimate with high accuracy the shift as large as several pixels between two extended-scene sub-images captured by a Shack-Hartmann wavefront sensor. It determines the positions of all extended-scene image cells relative to a reference cell in the same frame using an FFT-based iterative image-shifting algorithm. It works with both point-source spot images as well as extended scene images. We have demonstrated previously based on some measured images that the ACC algorithm can determine image shifts with as high an accuracy as 0.01 pixel for shifts as large as 3 pixels, and yield similar results for both point source spot images and extended scene images. The shift estimate accuracy of the ACC algorithm depends on illumination level, background, and scene content in addition to the amount of the shift between two image cells. In this paper we investigate how the performance of the ACC algorithm depends on the quality and the frequency content of extended scene images captured by a Shack-Hartmann camera. We also compare the performance of the ACC algorithm with those of several other approaches, and introduce a failsafe criterion for the ACC algorithm-based extended scene Shack-Hartmann sensors.
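
The FFT-based shift estimation at the heart of ACC builds on locating the peak of the circular cross-correlation between two image cells. A minimal integer-pixel sketch on synthetic data (the published algorithm iterates and interpolates this idea to reach ~0.01-pixel accuracy; nothing here is the ACC implementation itself):

```python
import numpy as np

def correlation_shift(ref, img):
    """Integer-pixel translation of `img` relative to `ref`, from the peak of
    the FFT-based circular cross-correlation.  The basic building block
    behind FFT shift estimators; sub-pixel methods refine this peak."""
    corr = np.abs(np.fft.ifft2(np.fft.fft2(img) * np.conj(np.fft.fft2(ref))))
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    ny, nx = corr.shape
    if dy > ny // 2:        # wrap to signed shifts in (-n/2, n/2]
        dy -= ny
    if dx > nx // 2:
        dx -= nx
    return int(dy), int(dx)

rng = np.random.default_rng(0)
scene = rng.standard_normal((64, 64))            # stand-in extended-scene cell
shifted = np.roll(scene, (3, -2), axis=(0, 1))   # known translation
est = correlation_shift(scene, shifted)
```

Because the correlation is computed circularly, real scenes need windowing or cell overlap; that, along with scene content and illumination, is exactly where the accuracy dependence studied in the paper comes from.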

  20. Agriculture is a major source of NOx pollution in California

    PubMed Central

    Almaraz, Maya; Bai, Edith; Wang, Chao; Trousdell, Justin; Conley, Stephen; Faloona, Ian; Houlton, Benjamin Z.

    2018-01-01

    Nitrogen oxides (NOx = NO + NO2) are a primary component of air pollution—a leading cause of premature death in humans and biodiversity declines worldwide. Although regulatory policies in California have successfully limited transportation sources of NOx pollution, several of the United States’ worst–air quality districts remain in rural regions of the state. Site-based findings suggest that NOx emissions from California’s agricultural soils could contribute to air quality issues; however, a statewide estimate is hitherto lacking. We show that agricultural soils are a dominant source of NOx pollution in California, with especially high soil NOx emissions from the state’s Central Valley region. We base our conclusion on two independent approaches: (i) a bottom-up spatial model of soil NOx emissions and (ii) top-down airborne observations of atmospheric NOx concentrations over the San Joaquin Valley. These approaches point to a large, overlooked NOx source from cropland soil, which is estimated to increase the NOx budget by 20 to 51%. These estimates are consistent with previous studies of point-scale measurements of NOx emissions from the soil. Our results highlight opportunities to limit NOx emissions from agriculture by investing in management practices that will bring co-benefits to the economy, ecosystems, and human health in rural areas of California. PMID:29399630

  1. Analysis of Sources of Large Positioning Errors in Deterministic Fingerprinting

    PubMed Central

    2017-01-01

    Wi-Fi fingerprinting is widely used for indoor positioning and indoor navigation due to the ubiquity of wireless networks, high proliferation of Wi-Fi-enabled mobile devices, and its reasonable positioning accuracy. The assumption is that the position can be estimated based on the received signal strength intensity from multiple wireless access points at a given point. The positioning accuracy, within a few meters, enables the use of Wi-Fi fingerprinting in many different applications. However, it has been detected that the positioning error might be very large in a few cases, which might prevent its use in applications with high accuracy positioning requirements. Hybrid methods are the new trend in indoor positioning since they benefit from multiple diverse technologies (Wi-Fi, Bluetooth, and Inertial Sensors, among many others) and, therefore, they can provide a more robust positioning accuracy. In order to have an optimal combination of technologies, it is crucial to identify when large errors occur and prevent the use of extremely bad positioning estimations in hybrid algorithms. This paper investigates why large positioning errors occur in Wi-Fi fingerprinting and how to detect them by using the received signal strength intensities. PMID:29186921
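
Deterministic fingerprinting of the kind analyzed here is often a k-nearest-neighbor match in received-signal-strength space. A toy sketch with a made-up four-point radio map (none of these values come from the paper's testbed); the distance to the best match is one simple indicator of the kind that can flag the large-error cases the paper investigates:

```python
import numpy as np

def knn_position(rss_query, rss_db, positions, k=3):
    """Estimate a position as the centroid of the k reference fingerprints
    whose RSS vectors are closest in Euclidean distance.  Returns the
    estimate and the matched distances (a crude reliability signal)."""
    d = np.linalg.norm(rss_db - rss_query, axis=1)
    nearest = np.argsort(d)[:k]
    return positions[nearest].mean(axis=0), d[nearest]

# Toy radio map: 4 reference points, RSS (dBm) from 3 access points.
rss_db = np.array([[-40., -70., -80.],
                   [-70., -40., -80.],
                   [-80., -70., -40.],
                   [-60., -60., -60.]])
positions = np.array([[0., 0.], [10., 0.], [10., 10.], [5., 5.]])

query = np.array([-42., -68., -79.])        # measured near the first point
est, dists = knn_position(query, rss_db, positions, k=1)
```

When the query's nearest-fingerprint distance is anomalously large, the estimate is suspect; a hybrid positioning system can then down-weight the Wi-Fi estimate in favor of other sensors.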

  2. Predictions for Swift Follow-up Observations of Advanced LIGO/Virgo Gravitational Wave Sources

    NASA Astrophysics Data System (ADS)

    Racusin, Judith; Evans, Phil; Connaughton, Valerie

    2015-04-01

    The likely detection of gravitational waves associated with the inspiral of neutron star binaries by the upcoming advanced LIGO/Virgo observatories will be complemented by searches for electromagnetic counterparts over large areas of the sky by Swift and other observatories. As short gamma-ray bursts (GRB) are the most likely electromagnetic counterpart candidates to these sources, we can make predictions based upon the last decade of GRB observations by Swift and Fermi. Swift is uniquely capable of accurately localizing new transients rapidly over large areas of the sky in single and tiled pointings, enabling ground-based follow-up. We describe simulations of the detectability of short GRB afterglows by Swift given existing and hypothetical tiling schemes with realistic observing conditions and delays, which guide the optimal observing strategy and improvements provided by coincident detection with observatories such as Fermi-GBM.

  3. Interferometry with flexible point source array for measuring complex freeform surface and its design algorithm

    NASA Astrophysics Data System (ADS)

    Li, Jia; Shen, Hua; Zhu, Rihong; Gao, Jinming; Sun, Yue; Wang, Jinsong; Li, Bo

    2018-06-01

    Measurement precision remains the primary factor restricting the manufacture and application of aspheric and freeform surfaces. One effective means of measuring such surfaces involves using reference or probe beams with angle modulation, as in the tilted-wave interferometer (TWI). It is necessary to improve measurement efficiency by obtaining the optimum point-source array for each test piece before TWI measurement. To form a point-source array based on the gradients of the different surfaces under test, we established a mathematical model describing the relationship between the point-source array and the test surface. However, the optimal point sources are irregularly distributed. In order to achieve a flexible point-source array matched to the gradient of the test surface, a novel interference setup using a fiber array is proposed, in which every point source can be switched on and off independently. Simulations and actual measurement examples of two different surfaces are given in this paper to verify the mathematical model. Finally, we performed an experiment testing an off-axis ellipsoidal surface that proved the validity of the proposed interference system.

  4. Nutrient pollution of coastal rivers, bays, and seas

    USGS Publications Warehouse

    Howarth, Robert; Anderson, Donald; Cloern, James; Elfring, Chris; Hopkinson, Charles; Lapointe, Brian; Malone, Tom; Marcus, Nancy; McGlathery, Karen; Sharpley, Andrew; Walker, Dan

    2000-01-01

    Over the past 40 years, antipollution laws have greatly reduced discharges of toxic substances into our coastal waters. This effort, however, has focused largely on point-source pollution of industrial and municipal effluent. No comparable effort has been made to restrict the input of nitrogen (N) from municipal effluent, nor to control the flows of N and phosphorus (P) that enter waterways from dispersed or nonpoint sources such as agricultural and urban runoff or as airborne pollutants. As a result, inputs of nonpoint pollutants, particularly N, have increased dramatically. Nonpoint pollution from N and P now represents the largest pollution problem facing the vital coastal waters of the United States.

  5. Oxidative potential and inflammatory impacts of source apportioned ambient air pollution in Beijing.

    PubMed

    Liu, Qingyang; Baumgartner, Jill; Zhang, Yuanxun; Liu, Yanju; Sun, Yongjun; Zhang, Meigen

    2014-11-04

    Air pollution exposure is associated with a range of adverse health impacts. Knowledge of the chemical components and sources of air pollution most responsible for these health effects could lead to an improved understanding of the mechanisms of such effects and more targeted risk reduction strategies. We measured daily ambient fine particulate matter (<2.5 μm in aerodynamic diameter; PM2.5) for 2 months in peri-urban and central Beijing, and assessed the contribution of its chemical components to the oxidative potential of ambient air pollution using the dithiothreitol (DTT) assay. The composition data were applied to a multivariate source apportionment model to determine the PM contributions of six sources or factors: a zinc factor, an aluminum factor, a lead point factor, a secondary source (e.g., SO4(2-), NO3(-)), an iron source, and a soil dust source. Finally, we assessed the relationship between reactive oxygen species (ROS) activity-related PM sources and inflammatory responses in human bronchial epithelial cells. In peri-urban Beijing, the soil dust source accounted for the largest fraction (47%) of measured ROS variability. In central Beijing, a secondary source explained the greatest fraction (29%) of measured ROS variability. The ROS activities of PM collected in central Beijing were exponentially associated with inflammatory responses in epithelial cells (R2=0.65-0.89). We also observed a high correlation between three ROS-related PM sources (a lead point factor, a zinc factor, and a secondary source) and expression of an inflammatory marker (r=0.45-0.80). Our results suggest large differences in the contribution of different PM sources to ROS variability at the central versus peri-urban study sites in Beijing and that secondary sources may play an important role in PM2.5-related oxidative potential and inflammatory health impacts.

  6. Changing Regulations of COD Pollution Load of Weihe River Watershed above TongGuan Section, China

    NASA Astrophysics Data System (ADS)

    Zhu, Lei; Liu, WanQing

    2018-02-01

    TongGuan Section of the Weihe River Watershed is a provincial boundary section between Shaanxi Province and Henan Province, China. The Weihe River watershed above TongGuan Section is taken as the research object in this paper, and COD is chosen as the water quality parameter. According to the discharge characteristics of point-source and non-point-source pollution, a characteristic section load (CSLD) method is proposed, and the point and non-point source pollution loads of the watershed above TongGuan Section are calculated for the rainy, normal, and dry seasons of 2013. The results show that the monthly point-source pollution loads discharge stably, while the monthly non-point-source pollution loads change greatly, and the proportion of non-point-source load in the total COD pollution load decreases from the rainy to the wet to the normal period.

  7. Hydrogen and oxygen isotopic compositions of waters from fumaroles at Kilauea summit, Hawaii

    USGS Publications Warehouse

    Hinkley, T.K.; Quick, J.E.; Gregory, R.T.; Gerlach, T.M.

    1995-01-01

    Condensate samples were collected in 1992 from a high-temperature (300 °C) fumarole on the floor of the Halemaumau Pit Crater at Kilauea. The emergence about two years earlier of such a hot fumarole was unprecedented at such a central location at Kilauea. The condensates have hydrogen and oxygen isotopic compositions which indicate that the waters emitted by the fumarole are composed largely of meteoric water, that any magmatic water component must be minor, and that the precipitation that was the original source to the fumarole fell on a recharge area on the slopes of Mauna Loa Volcano to the west. However, the fumarole has no tritium, indicating that it taps a source of water that has been isolated from atmospheric water for at least 40 years. It is noteworthy, considering the unstable tectonic environment and abundant local rainfall of the Kilauea and Mauna Loa regions, that waters which are sources to the hot fumarole remain uncontaminated from atmospheric sources over such long times and long transport distances. As for the common, boiling point fumaroles of the Kilauea summit region, their 18O, D and tritium concentrations indicate that they are dominated by recycling of present day meteoric water. Though the waters of both hot and boiling point fumaroles have dominantly meteoric sources, they seem to be from separate hydrological regimes. Large concentrations of halogens and sulfur species in the condensates, together with the location at the center of the Kilauea summit region and the high temperature, initially suggested that much of the total mass of the emissions of the hot fumarole, including the H2O, might have come directly from a magma body. The results of the present study indicate that it is unreliable to infer a magmatic origin of volcanic waters based solely on halogen or sulfur contents, or other aspects of chemical composition of total condensates. © 1995 Springer-Verlag.

  8. Scanning properties of large dual-shaped offset and symmetric reflector antennas

    NASA Astrophysics Data System (ADS)

    Galindo-Israel, Victor; Veruttipong, Watt; Norrod, Roger D.; Imbriale, William A.

    1992-04-01

    Several characteristics of dual-offset shaped reflectors (DOSR) and symmetric shaped reflectors are examined. Among these is the amelioration of the added cost of manufacturing a shaped reflector antenna, particularly the doubly curved DOSR surface, when adjustable panels, which may be necessary to correct gravity and wind distortions, are also used to improve gain by shaping. The scanning properties of shaped reflectors, both offset and circularly symmetric, are examined and compared to conic-section scanning characteristics. Scanning of the pencil beam is obtained by lateral and axial translation of a single point-source feed. The feed is kept pointed toward the center of the subreflector. The effects of power spillover and aperture phase error as a function of beam scanning are examined for several different types of large reflector designs, including DOSR, circularly symmetric large-f/D, and smaller-f/D dual-reflector antenna systems. It is graphically illustrated that the Abbe sine condition for improving scanning of an optical system cannot, inherently, be satisfied in a dual-shaped reflector system shaped for high gain and low feed spillover.

  9. Scaling relations for large Martian valleys

    NASA Astrophysics Data System (ADS)

    Som, Sanjoy M.; Montgomery, David R.; Greenberg, Harvey M.

    2009-02-01

    The dendritic morphology of Martian valley networks, particularly in the Noachian highlands, has long been argued to imply a warmer, wetter early Martian climate, but the character and extent of this period remains controversial. We analyzed scaling relations for the 10 large valley systems incised in terrain of various ages, resolvable using the Mars Orbiter Laser Altimeter (MOLA) and the Thermal Emission Imaging System (THEMIS). Four of the valleys originate in point sources with negligible contributions from tributaries, three are very poorly dissected with a few large tributaries separated by long uninterrupted trunks, and three exhibit the dendritic, branching morphology typical of terrestrial channel networks. We generated width-area and slope-area relationships for each because these relations are identified as either theoretically predicted or robust terrestrial empiricisms for graded precipitation-fed, perennial channels. We also generated distance-area relationships (Hack's law) because they similarly represent robust characteristics of terrestrial channels (whether perennial or ephemeral). We find that the studied Martian valleys, even the dendritic ones, do not satisfy those empiricisms. On Mars, the width-area scaling exponent b ranges from -0.7 to 4.7, in contrast with values of 0.3-0.6 typical of terrestrial channels; the slope-area scaling exponent θ ranges from -25.6 to 5.5, whereas values of 0.3-0.5 are typical on Earth; and the length-area (Hack's) exponent n ranges from 0.47 to 19.2, while values of 0.5-0.6 are found on Earth. None of the valleys analyzed satisfy all three relations typical of terrestrial perennial channels. As such, our analysis supports the hypotheses that ephemeral and/or immature channel morphologies provide the closest terrestrial analogs to the dendritic networks on Mars, and point-source discharges provide terrestrial analogs best suited to describe the other large Martian valleys.
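
    Scaling exponents of this kind are conventionally estimated by ordinary least squares in log-log space. A minimal sketch (hypothetical data, not the authors' pipeline), fitting the Hack's-law exponent n in L = c·A^n:

```python
import numpy as np

def scaling_exponent(x, y):
    """Fit y = c * x**b by least squares in log-log space; return (b, c)."""
    b, log_c = np.polyfit(np.log10(x), np.log10(y), 1)
    return b, 10.0 ** log_c

# Hypothetical drainage areas A (km^2) and channel lengths L (km), generated
# to follow Hack's law L = 1.4 * A**0.6 (typical terrestrial values).
A = np.array([10.0, 100.0, 1000.0, 10000.0])
L = 1.4 * A ** 0.6

n, c = scaling_exponent(A, L)
print(round(n, 3), round(c, 3))  # recovers the exponent ~0.6, coefficient ~1.4
```

    The same fit applied to valley width or slope versus contributing area yields the b and θ exponents quoted above.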

  10. Source counting in MEG neuroimaging

    NASA Astrophysics Data System (ADS)

    Lei, Tianhu; Dell, John; Magee, Ralphy; Roberts, Timothy P. L.

    2009-02-01

    Magnetoencephalography (MEG) is a multi-channel, functional imaging technique. It measures the magnetic field produced by the primary electric currents inside the brain via a sensor array composed of a large number of superconducting quantum interference devices. The measurements are then used to estimate the locations, strengths, and orientations of these electric currents. This magnetic source imaging technique encompasses a great variety of signal processing and modeling techniques, including the inverse problem, MUltiple SIgnal Classification (MUSIC), beamforming (BF), and independent component analysis (ICA) methods. A key problem with the inverse-problem, MUSIC, and ICA methods is that the number of sources must be known a priori. Although the BF method scans the source space on a point-by-point basis, the selection of peaks as sources is ultimately made by subjective thresholding; in practice, expert data analysts often select results based on physiological plausibility. This paper presents an eigenstructure approach to source number detection in MEG neuroimaging. By sorting the eigenvalues of the estimated covariance matrix of the acquired MEG data, the measured data space is partitioned into signal and noise subspaces. The partition is implemented using information-theoretic criteria, and the order of the signal subspace gives an estimate of the number of sources. The approach does not rely on any model or hypothesis and is therefore an entirely data-led operation. It possesses a clear physical interpretation and an efficient computation procedure. The theoretical derivation of this method and results obtained using real MEG data are included to demonstrate their agreement and the promise of the proposed approach.
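
    One standard information-theoretic rule of this kind is the Wax-Kailath MDL criterion (named here as a representative choice; the abstract does not specify which criterion the paper uses). A minimal sketch on a hypothetical eigenvalue spectrum:

```python
import numpy as np

def mdl_source_count(eigvals, n_samples):
    """Estimate the number of sources from covariance eigenvalues using the
    Wax-Kailath MDL criterion: the k minimizing a fit term (how unequal the
    p-k smallest 'noise' eigenvalues are) plus a complexity penalty."""
    lam = np.sort(np.asarray(eigvals, dtype=float))[::-1]
    p = lam.size
    scores = []
    for k in range(p):
        tail = lam[k:]                          # candidate noise eigenvalues
        geo = np.exp(np.mean(np.log(tail)))     # geometric mean
        ari = np.mean(tail)                     # arithmetic mean
        scores.append(-n_samples * (p - k) * np.log(geo / ari)
                      + 0.5 * k * (2 * p - k) * np.log(n_samples))
    return int(np.argmin(scores))

# Hypothetical 8-channel spectrum: three strong "signal" eigenvalues above a
# nearly flat noise floor, estimated from 5000 time samples.
lam = [9.0, 4.0, 1.5, 0.011, 0.0105, 0.010, 0.0095, 0.009]
print(mdl_source_count(lam, 5000))  # detects 3 sources
```

    The order of the signal subspace, returned here, is the source-count estimate described in the abstract.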

  11. The isotropic radio background revisited

    NASA Astrophysics Data System (ADS)

    Fornengo, Nicolao; Lineros, Roberto A.; Regis, Marco; Taoso, Marco

    2014-04-01

    We present an extensive analysis on the determination of the isotropic radio background. We consider six different radio maps, ranging from 22 MHz to 2.3 GHz and covering a large fraction of the sky. The large scale emission is modeled as a linear combination of an isotropic component plus the Galactic synchrotron radiation and thermal bremsstrahlung. Point-like and extended sources are either masked or accounted for by means of a template. We find a robust estimate of the isotropic radio background, with limited scatter among different Galactic models. The level of the isotropic background lies significantly above the contribution obtained by integrating the number counts of observed extragalactic sources. Since the isotropic component dominates at high latitudes, thus making the profile of the total emission flat, a Galactic origin for such excess appears unlikely. We conclude that, unless a systematic offset is present in the maps, and provided that our current understanding of the Galactic synchrotron emission is reasonable, extragalactic sources well below the current experimental threshold seem to account for the majority of the brightness of the extragalactic radio sky.
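
    The linear-combination modeling described above can be sketched as a per-pixel least-squares fit. This is an illustrative toy (synthetic templates and coefficients, not the paper's Galactic models or masking procedure):

```python
import numpy as np

def fit_isotropic(sky, sync, brem):
    """Model each map pixel as a * synchrotron + b * bremsstrahlung + isotropic
    offset, and solve for (a, b, isotropic) by linear least squares."""
    A = np.column_stack([sync, brem, np.ones_like(sky)])
    coeffs, *_ = np.linalg.lstsq(A, sky, rcond=None)
    return coeffs

# Hypothetical templates and a map built from them with a 2.5 K isotropic
# offset plus small measurement noise.
rng = np.random.default_rng(1)
sync = rng.uniform(1.0, 10.0, 500)
brem = rng.uniform(0.0, 3.0, 500)
sky = 1.8 * sync + 0.7 * brem + 2.5 + 0.01 * rng.standard_normal(500)

a, b, iso = fit_isotropic(sky, sync, brem)
print(round(a, 2), round(b, 2), round(iso, 2))  # recovers ~1.8, ~0.7, ~2.5
```

    The fitted constant term plays the role of the isotropic background; in the paper this fit is repeated across maps and Galactic models to assess the scatter.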

  12. Supersonic propulsion simulation by incorporating component models in the large perturbation inlet (LAPIN) computer code

    NASA Technical Reports Server (NTRS)

    Cole, Gary L.; Richard, Jacques C.

    1991-01-01

    An approach to simulating the internal flows of supersonic propulsion systems is presented. The approach is based on a fairly simple modification of the Large Perturbation Inlet (LAPIN) computer code. LAPIN uses a quasi-one-dimensional, inviscid, unsteady formulation of the continuity, momentum, and energy equations. The equations are solved using a shock-capturing, finite difference algorithm. The original code, developed for simulating supersonic inlets, includes engineering models of unstart/restart, bleed, bypass, and variable duct geometry by means of source terms in the equations. The source terms also provide a mechanism for incorporating propulsion system components, such as compressor stages, combustors, and turbine stages, along with the inlet. This requires each component to be distributed axially over a number of grid points. Because of the distributed nature of such components, this representation should be more accurate than a lumped-parameter model. Components can be modeled by performance map(s), which in turn are used to compute the source terms. The general approach is described, and the simulation of a compressor/fan stage is then discussed to show the approach in detail.

  13. Quantification of Greenhouse Gas Emission Rates from strong Point Sources by Space-borne IPDA Lidar Measurements: Results from a Sensitivity Analysis Study

    NASA Astrophysics Data System (ADS)

    Ehret, G.; Kiemle, C.; Rapp, M.

    2017-12-01

    The practical implementation of the Paris Agreement (COP21) would vastly profit from an independent, reliable, and global measurement system for greenhouse gas emissions, in particular of CO2, to complement and cross-check national efforts. Most fossil-fuel CO2 emissions emanate from large sources such as cities and power plants. These emissions increase the local CO2 abundance in the atmosphere by 1-10 parts per million (ppm), a signal significantly larger than the variability from natural sources and sinks over the local source domain. Despite these large signals, they are only sparsely sampled by the ground-based network, which calls for satellite measurements. However, none of the existing and forthcoming passive satellite instruments, operating in the NIR spectral domain, can measure CO2 emissions at night, in low-sunlight conditions, or in high-latitude regions in winter. The resulting sparse coverage of passive spectrometers is a serious limitation, particularly for the Northern Hemisphere, since these regions exhibit substantial emissions during the winter as well as at other times of the year. In contrast, CO2 measurements by an Integrated Path Differential Absorption (IPDA) Lidar are largely immune to these limitations, and initial results from airborne applications look promising. In this study, we discuss the implications for a space-borne IPDA Lidar system. A Gaussian plume model is used to simulate the CO2 distribution downstream of large power plants. The space-borne measurements are simulated by applying a simple forward model based on a Gaussian error distribution. Besides the sampling frequency, the sampling geometry (e.g., measurement distance to the emitting source) and the measurement error itself strongly impact the flux-inversion performance. We discuss the results by incorporating Gaussian plume and mass-budget approaches to quantify the emission rates.
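
    The Gaussian plume model referred to above has a standard closed form. A minimal sketch with ground reflection; the source strength, wind speed, and dispersion widths below are illustrative assumptions, not values from the study:

```python
import math

def plume_concentration(q, u, y, z, h, sy, sz):
    """Gaussian plume concentration (kg/m^3) at crosswind offset y and height z
    for a point source of strength q (kg/s), wind speed u (m/s), effective
    stack height h (m), and dispersion widths sy, sz (m); the second vertical
    term accounts for reflection of the plume at the ground."""
    lateral = math.exp(-y ** 2 / (2.0 * sy ** 2))
    vertical = (math.exp(-(z - h) ** 2 / (2.0 * sz ** 2))
                + math.exp(-(z + h) ** 2 / (2.0 * sz ** 2)))
    return q / (2.0 * math.pi * u * sy * sz) * lateral * vertical

# Illustrative power-plant numbers (assumed): 500 kg/s of CO2, 5 m/s wind,
# 100 m stack; in practice sy and sz grow with downwind distance according
# to the atmospheric stability class.
conc = plume_concentration(q=500.0, u=5.0, y=0.0, z=0.0, h=100.0,
                           sy=160.0, sz=80.0)
print(f"{conc * 1e6:.0f} mg/m^3 at the surface plume centerline")
```

    Integrating such modeled enhancements along simulated lidar ground tracks, with Gaussian measurement noise added, is the essence of the forward model described in the abstract.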

  14. APT: Aperture Photometry Tool

    NASA Astrophysics Data System (ADS)

    Laher, Russ

    2012-08-01

    Aperture Photometry Tool (APT) is software for astronomers and students interested in manually exploring the photometric qualities of astronomical images. It has a graphical user interface (GUI) which allows the image data associated with aperture photometry calculations for point and extended sources to be visualized and, therefore, more effectively analyzed. Mouse-clicking on a source in the displayed image draws a circular or elliptical aperture and sky annulus around the source and computes the source intensity and its uncertainty, along with several commonly used measures of the local sky background and its variability. The results are displayed and can be optionally saved to an aperture-photometry-table file and plotted on graphs in various ways using functions available in the software. APT is geared toward processing sources in a small number of images and is not suitable for bulk processing a large number of images, unlike other aperture photometry packages (e.g., SExtractor). However, APT does have a convenient source-list tool that enables calculations for a large number of detections in a given image. The source-list tool can be run either in automatic mode to generate an aperture photometry table quickly or in manual mode to permit inspection and adjustment of the calculation for each individual detection. APT displays a variety of useful graphs, including image histogram, aperture slices, source scatter plot, sky scatter plot, sky histogram, radial profile, curve of growth, and aperture-photometry-table scatter plots and histograms. APT has functions for customizing calculations, including outlier rejection, pixel “picking” and “zapping,” and a selection of source and sky models. The radial-profile-interpolation source model, accessed via the radial-profile-plot panel, allows recovery of source intensity from pixels with missing data and can be especially beneficial in crowded fields.
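
    The aperture-and-sky-annulus calculation at the core of this kind of tool can be sketched in a few lines (a simplified illustration, not APT's actual implementation):

```python
import numpy as np

def aperture_photometry(image, x0, y0, r_ap, r_in, r_out):
    """Sum source counts in a circular aperture of radius r_ap centered at
    (x0, y0), after subtracting the median sky level estimated in the
    annulus r_in <= r <= r_out."""
    yy, xx = np.indices(image.shape)
    r = np.hypot(xx - x0, yy - y0)
    aperture = r <= r_ap
    annulus = (r >= r_in) & (r <= r_out)
    sky = np.median(image[annulus])             # local sky background per pixel
    flux = image[aperture].sum() - sky * aperture.sum()
    return flux, sky

# Hypothetical image: flat sky of 10 counts with a 50-count point source.
img = np.full((64, 64), 10.0)
img[32, 32] += 50.0
flux, sky = aperture_photometry(img, 32, 32, r_ap=5, r_in=8, r_out=12)
print(flux, sky)  # → 50.0 10.0
```

    Real tools add refinements on top of this, such as fractional-pixel apertures, outlier rejection in the annulus, and uncertainty estimates.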

  15. Understanding Slat Noise Sources

    NASA Technical Reports Server (NTRS)

    Khorrami, Medhi R.

    2003-01-01

    Model-scale aeroacoustic tests of large civil transports point to the leading-edge slat as a dominant high-lift noise source in the low- to mid-frequencies during aircraft approach and landing. Using generic multi-element high-lift models, complementary experimental and numerical tests were carefully planned and executed at NASA in order to isolate slat noise sources and the underlying noise generation mechanisms. In this paper, a brief overview of the supporting computational effort undertaken at NASA Langley Research Center is provided. Both tonal and broadband aspects of slat noise are discussed. Recent gains in predicting a slat's far-field acoustic noise, current shortcomings of numerical simulations, and other remaining open issues are presented. Finally, an example of the ever-expanding role of computational simulations in noise reduction studies is also given.

  16. ProFound: Source Extraction and Application to Modern Survey Data

    NASA Astrophysics Data System (ADS)

    Robotham, A. S. G.

    2018-04-01

    ProFound detects sources in noisy images, generates segmentation maps identifying the pixels belonging to each source, and measures statistics like flux, size, and ellipticity. These inputs are key requirements of ProFit (ascl:1612.004), our galaxy profiling package; the two packages, used in unison, semi-automatically profile large samples of galaxies. The key novel feature introduced in ProFound is that all photometry is executed on dilated segmentation maps that fully contain the identifiable flux, rather than using more traditional circular or ellipse-based photometry. Also, to be less sensitive to pathological segmentation issues, de-blending is performed across saddle points in flux. ProFound offers good initial parameter estimation for ProFit, as well as segmentation maps that follow the sometimes complex geometry of resolved sources whilst capturing nearly all of the flux. A number of bulge-disc decomposition projects are already making use of the ProFound and ProFit pipeline.

  17. Methane - quick fix or tough target? New methods to reduce emissions.

    NASA Astrophysics Data System (ADS)

    Nisbet, E. G.; Lowry, D.; Fisher, R. E.; Brownlow, R.

    2016-12-01

    Methane is a cost-effective target for greenhouse gas reduction efforts. The UK's MOYA project is designed to improve understanding of the global methane budget and to point to new methods of reducing future emissions. Since 2007, methane has been increasing rapidly: in 2014 and 2015, growth was at rates last seen in the 1980s. Unlike 20th-century growth, which was primarily driven by fossil fuel emissions in northern industrial nations, isotopic evidence implies that present growth is driven by tropical biogenic sources such as wetlands and agriculture. Discovering why methane is rising is important. Schaefer et al. (Science, 2016) pointed out the potential clash between methane reduction efforts and the food needs of a rising, better-fed (physically larger) human population. Our own work suggests tropical wetlands are major drivers of growth, responding to weather changes since 2007, but there is no acceptable way to reduce wetland emissions. Just as sea-ice decline indicates Arctic warming, methane may be the most obvious tracker of climate change in the wet tropics. Technical advances in instrumentation can do much to help cut urban and industrial methane emissions. Mobile systems can be mounted on vehicles, while drone sampling can provide a 3D view to locate sources. Urban land planning often means that large but different point sources are clustered (e.g., a landfill or sewage plant near an incinerator; gas wells next to cattle). High-precision grab-sample isotopic characterisation, using Keeling plots, can separate source signals to identify specific emitters, even where they are closely juxtaposed. Our mobile campaigns in the UK, Kuwait, Hong Kong, and E. Australia show the importance of major single sources, such as abandoned old wells, pipe leaks, or unregulated landfills.
If such point sources can be individually identified, even when clustered, they will allow effective reduction efforts to occur: these can be profitable and/or improve industrial safety, for example in the case of gas leaks. Fossil fuels, landfills, waste, and biomass burning emit about 200 Tg/yr, or 35-40% of global methane emissions. Using inexpensive 3D mobile surveys coupled with high-precision isotopic measurement, it should be possible to cut emissions sharply, substantially reducing the methane burden even if tropical biogenic sources increase.
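
    The Keeling-plot analysis mentioned above regresses the measured isotopic ratio against inverse concentration; the intercept gives the isotopic signature of the excess-methane source. A minimal sketch with hypothetical landfill grab samples:

```python
import numpy as np

def keeling_intercept(conc_ppb, delta_permil):
    """Keeling plot: regress delta-13C against 1/concentration; the intercept
    is the isotopic signature of the source of the excess methane."""
    slope, intercept = np.polyfit(1.0 / np.asarray(conc_ppb),
                                  np.asarray(delta_permil), 1)
    return intercept

# Hypothetical samples downwind of a landfill: background air (~1900 ppb at
# -47.2 permil) progressively enriched by a biogenic source at -58 permil.
bg_c, bg_d, src_d = 1900.0, -47.2, -58.0
excess = np.array([0.0, 200.0, 500.0, 1000.0, 2000.0])
conc = bg_c + excess
delta = (bg_c * bg_d + excess * src_d) / conc  # two-component mass balance
print(round(keeling_intercept(conc, delta), 1))  # recovers the -58 permil source
```

    Because biogenic, thermogenic, and pyrogenic methane carry distinct delta-13C signatures, intercepts like this are what allow closely juxtaposed sources to be told apart.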

  18. CHANDRA ACIS SURVEY OF X-RAY POINT SOURCES IN NEARBY GALAXIES. II. X-RAY LUMINOSITY FUNCTIONS AND ULTRALUMINOUS X-RAY SOURCES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Song; Qiu, Yanli; Liu, Jifeng

    Based on the recently completed Chandra/ACIS survey of X-ray point sources in nearby galaxies, we study the X-ray luminosity functions (XLFs) for X-ray point sources in different types of galaxies and the statistical properties of ultraluminous X-ray sources (ULXs). Uniform procedures are developed to compute the detection threshold, to estimate the foreground/background contamination, and to calculate the XLFs for individual galaxies and groups of galaxies, resulting in an XLF library of 343 galaxies of different types. With the large number of surveyed galaxies, we have studied the XLFs and ULX properties across different host galaxy types, and confirm with good statistics that the XLF slope flattens from lenticulars (α ∼ 1.50 ± 0.07) to ellipticals (∼1.21 ± 0.02), to spirals (∼0.80 ± 0.02), to peculiars (∼0.55 ± 0.30), and to irregulars (∼0.26 ± 0.10). The XLF break dividing the neutron star and black hole binaries is also confirmed, albeit at quite different break luminosities for different types of galaxies. A radial dependency is found for ellipticals, with a flatter XLF slope for sources located between D_25 and 2D_25, suggesting the XLF slopes in the outer regions of early-type galaxies are dominated by low-mass X-ray binaries in globular clusters. This study shows that the ULX rate in early-type galaxies is 0.24 ± 0.05 ULXs per surveyed galaxy, at a 5σ confidence level. The XLF for ULXs in late-type galaxies extends smoothly until it drops abruptly around 4 × 10^40 erg s^-1; this break may suggest a mild boundary between the stellar black hole population, possibly including 30 M_⊙ black holes with super-Eddington radiation, and intermediate-mass black holes.

  19. Plasmonic micropillars for precision cell force measurement across a large field-of-view

    NASA Astrophysics Data System (ADS)

    Xiao, Fan; Wen, Ximiao; Tan, Xing Haw Marvin; Chiou, Pei-Yu

    2018-01-01

    A plasmonic micropillar platform with self-organized gold nanospheres is reported for precision cell traction force measurement across a large field-of-view (FOV). Gold nanospheres were implanted into the tips of polymer micropillars by annealing gold microdisks with nanosecond laser pulses. Each gold nanosphere is physically anchored at the center of a pillar tip and serves as a strong, point-source-like light-scattering center for its micropillar. This allows a micropillar to be clearly observed and precisely tracked even under a low-magnification objective lens, enabling concurrent, precise measurement across a large FOV. A spatial resolution of 30 nm for the pillar deflection measurement has been accomplished on this platform with a 20× objective lens.

  20. Semi-Tomographic Gamma Scanning Technique for Non-Destructive Assay of Radioactive Waste Drums

    NASA Astrophysics Data System (ADS)

    Gu, Weiguo; Rao, Kaiyuan; Wang, Dezhong; Xiong, Jiemei

    2016-12-01

    Segmented gamma scanning (SGS) and tomographic gamma scanning (TGS) are two traditional detection techniques for low- and intermediate-level radioactive waste drums. This paper proposes a detection method named semi-tomographic gamma scanning (STGS) to avoid the poor detection accuracy of SGS and to shorten the detection time of TGS. The method and its algorithm synthesize the principles of SGS and TGS: each segment is divided into annular voxels, and tomography is used in the radiation reconstruction. The accuracy of STGS is verified by experiments and simulations simultaneously for 208-liter standard waste drums containing three types of nuclides. Cases of single or multiple point sources and uniform or nonuniform materials are employed for comparison. The results show that STGS exhibits a large improvement in detection performance; compared with SGS, the reconstruction error and statistical bias are reduced by one quarter to one third or less for most cases.

  1. Eta Carinae: Viewed from Multiple Vantage Points

    NASA Technical Reports Server (NTRS)

    Gull, Theodore

    2007-01-01

    The central source of Eta Carinae and its ejecta is a massive binary system buried within a massive interacting wind structure that envelops the two stars. The hot, less massive companion blows a small cavity in the very massive primary wind and ionizes a portion of that wind just beyond the wind-wind boundary. We gain insight into this complex structure by examining the spatially resolved Space Telescope Imaging Spectrograph (STIS) spectra of the central source (0.1") together with the wind structure, which extends out to nearly an arcsecond (2300 AU), the wind-blown boundaries, and the ejecta of the Little Homunculus. Moreover, the spatially resolved Very Large Telescope/UltraViolet Echelle Spectrograph (VLT/UVES) stellar spectrum (one arcsecond) and spatially sampled spectra across the foreground lobe of the Homunculus provide vantage points from different angles relative to the line of sight. Examples of wind line profiles of Fe II and the highly excited [Fe III], [Ne III], [Ar III], and [S III] lines, plus other lines, will be presented.

  2. Uncertainty of exploitation estimates made from tag returns

    USGS Publications Warehouse

    Miranda, L.E.; Brock, R.E.; Dorr, B.S.

    2002-01-01

    Over 6,000 crappies Pomoxis spp. were tagged in five water bodies to estimate exploitation rates by anglers. Exploitation rates were computed as the percentage of tags returned after adjustment for three sources of uncertainty: postrelease mortality due to the tagging process, tag loss, and the reporting rate of tagged fish. Confidence intervals around exploitation rates were estimated by resampling from the probability distributions of tagging mortality, tag loss, and reporting rate. Estimates of exploitation rates ranged from 17% to 54% among the five study systems. Uncertainty around estimates of tagging mortality, tag loss, and reporting resulted in 90% confidence intervals around the median exploitation rate as narrow as 15 percentage points and as broad as 46 percentage points. The greatest source of estimation error was uncertainty about tag reporting. Because the large investments required by tagging and reward operations produce imprecise estimates of the exploitation rate, it may be worth considering other approaches to estimating it or simply circumventing the exploitation question altogether.
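
    The resampling adjustment described above can be sketched as follows; the distributions chosen for tagging mortality, tag loss, and reporting rate are hypothetical placeholders, not the study's estimates:

```python
import numpy as np

def exploitation_ci(tags_out, tags_returned, n_boot=100_000, seed=42):
    """Adjust a raw tag-return rate for tagging mortality, tag loss, and
    reporting rate, propagating uncertainty by resampling each adjustment
    from an assumed probability distribution."""
    rng = np.random.default_rng(seed)
    raw = tags_returned / tags_out
    # Hypothetical uncertainty distributions for the three adjustments:
    mortality = rng.beta(10, 90, n_boot)   # ~10% of fish die from tagging
    tag_loss  = rng.beta(5, 95, n_boot)    # ~5% of fish shed their tags
    reporting = rng.beta(60, 40, n_boot)   # ~60% of recovered tags reported
    # Exploitation = returns corrected for fish unavailable or unreported.
    u = raw / ((1.0 - mortality) * (1.0 - tag_loss) * reporting)
    return np.median(u), np.percentile(u, 5), np.percentile(u, 95)

med, lo, hi = exploitation_ci(tags_out=1200, tags_returned=180)
print(f"median {med:.2f}, 90% CI [{lo:.2f}, {hi:.2f}]")
```

    The width of the resulting interval is dominated by the reporting-rate distribution, mirroring the study's finding that tag reporting was the greatest source of estimation error.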

  3. Grating-assisted demodulation of interferometric optical sensors.

    PubMed

    Yu, Bing; Wang, Anbo

    2003-12-01

    Accurate and dynamic control of the operating point of an interferometric optical sensor to produce the highest sensitivity is crucial in the demodulation of interferometric optical sensors to compensate for manufacturing errors and environmental perturbations. A grating-assisted operating-point tuning system has been designed that uses a diffraction grating and feedback control, functions as a tunable-bandpass optical filter, and can be used as an effective demodulation subsystem in sensor systems based on optical interferometers that use broadband light sources. This demodulation method has no signal-detection bandwidth limit, a high tuning speed, a large tunable range, increased interference fringe contrast, and the potential for absolute optical-path-difference measurement. The achieved 40-nm tuning range, which is limited by the available source spectrum width, 400-nm/s tuning speed, and a step resolution of 0.4 nm, is sufficient for most practical measurements. A significant improvement in signal-to-noise ratio in a fiber Fabry-Perot acoustic-wave sensor system proved that the expected fringe contrast and sensitivity increase.

  4. Quantitative evaluation of software packages for single-molecule localization microscopy.

    PubMed

    Sage, Daniel; Kirshner, Hagai; Pengo, Thomas; Stuurman, Nico; Min, Junhong; Manley, Suliana; Unser, Michael

    2015-08-01

    The quality of super-resolution images obtained by single-molecule localization microscopy (SMLM) depends largely on the software used to detect and accurately localize point sources. In this work, we focus on the computational aspects of super-resolution microscopy and present a comprehensive evaluation of localization software packages. Our philosophy is to evaluate each package as a whole, thus maintaining the integrity of the software. We prepared synthetic data that represent three-dimensional structures modeled after biological components, taking excitation parameters, noise sources, point-spread functions and pixelation into account. We then asked developers to run their software on our data; most responded favorably, allowing us to present a broad picture of the methods available. We evaluated their results using quantitative and user-interpretable criteria: detection rate, accuracy, quality of image reconstruction, resolution, software usability and computational resources. These metrics reflect the various tradeoffs of SMLM software packages and help users to choose the software that fits their needs.
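    Two of the quantitative criteria named above, detection rate and localization accuracy, are commonly scored by matching detected molecules to ground-truth positions within a tolerance radius and reporting a Jaccard index plus the RMSE of matched pairs. A simplified greedy-matching sketch (the matching strategy and tolerance are illustrative assumptions, not the benchmark's exact protocol):

```python
import numpy as np

def match_and_score(truth, found, tol=50.0):
    """Greedy nearest-neighbour matching of localizations (units: nm).

    Returns the Jaccard index and the RMSE of matched pairs, two
    user-interpretable criteria for scoring SMLM software.
    """
    truth = [np.asarray(t, dtype=float) for t in truth]
    found = [np.asarray(f, dtype=float) for f in found]
    matched_d2, used = [], set()
    for t in truth:
        best, best_d2 = None, tol * tol
        for j, f in enumerate(found):
            if j in used:
                continue
            d2 = float(np.sum((t - f) ** 2))
            if d2 < best_d2:
                best, best_d2 = j, d2
        if best is not None:
            used.add(best)
            matched_d2.append(best_d2)
    tp = len(matched_d2)
    fn = len(truth) - tp          # missed molecules
    fp = len(found) - tp          # spurious detections
    jaccard = tp / (tp + fn + fp) if (tp + fn + fp) else 0.0
    rmse = float(np.sqrt(np.mean(matched_d2))) if matched_d2 else float("nan")
    return jaccard, rmse

truth = [(100, 100), (200, 250), (400, 120)]
found = [(105, 98), (203, 255), (900, 900)]   # one miss, one false positive
jac, rmse = match_and_score(truth, found)
```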

  5. Ford Motor Company NDE facility shielding design.

    PubMed

    Metzger, Robert L; Van Riper, Kenneth A; Jones, Martin H

    2005-01-01

    Ford Motor Company proposed the construction of a large non-destructive evaluation laboratory for radiography of automotive power train components. The authors were commissioned to design the shielding and to survey the completed facility for compliance with radiation dose limits for occupationally and non-occupationally exposed personnel. The two X-ray sources are Varian Linatron 3000 accelerators operating at 9-11 MV. One performs computed tomography of automotive transmissions, while the other does real-time radiography of operating engines and transmissions. The shield thicknesses for the primary barrier and all secondary barriers were determined by point-kernel techniques. Point-kernel techniques did not work well for skyshine calculations or for locations where multiple sources (e.g. tube-head leakage and various scatter fields) affected doses. Shielding for these areas was determined using transport calculations. A number of MCNP [Briesmeister, J. F. MCNP: A general Monte Carlo N-particle transport code, version 4B. Los Alamos National Laboratory Manual (1997)] calculations focused on skyshine estimates and the office areas. Measurements on the operational facility confirmed the shielding calculations.
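    The point-kernel technique mentioned above treats the source as an isotropic point and attenuates it exponentially along the ray through the shield, with a buildup factor for scattered photons: flux = S·B·exp(-μt)/(4πr²). A minimal sketch with purely illustrative numbers (not the facility's actual source terms or materials):

```python
import math

def point_kernel_flux(source_strength, mu, thickness, distance, buildup=1.0):
    """Point-kernel estimate of the flux from an isotropic point source
    behind a slab shield (consistent units assumed).

    source_strength : photons/s emitted isotropically
    mu              : linear attenuation coefficient of the shield (1/cm)
    thickness       : slab thickness along the ray (cm)
    distance        : source-to-dose-point distance (cm)
    buildup         : buildup factor accounting for in-shield scatter
    """
    return (source_strength * buildup * math.exp(-mu * thickness)
            / (4 * math.pi * distance ** 2))

# Illustrative comparison: unshielded vs. a 20 cm slab with mu = 0.5/cm
unshielded = point_kernel_flux(1e10, 0.5, 0.0, 300.0)
shielded   = point_kernel_flux(1e10, 0.5, 20.0, 300.0)
attenuation = shielded / unshielded   # = exp(-mu * t)
```

    As the abstract notes, this single-ray picture breaks down for skyshine and multi-source scatter geometries, which is why those regions were handled with full transport (MCNP) calculations instead.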

  6. Modeling UV Radiation Feedback from Massive Stars. I. Implementation of Adaptive Ray-tracing Method and Tests

    NASA Astrophysics Data System (ADS)

    Kim, Jeong-Gyu; Kim, Woong-Tae; Ostriker, Eve C.; Skinner, M. Aaron

    2017-12-01

    We present an implementation of an adaptive ray-tracing (ART) module in the Athena hydrodynamics code that accurately and efficiently handles the radiative transfer involving multiple point sources on a three-dimensional Cartesian grid. We adopt a recently proposed parallel algorithm that uses nonblocking, asynchronous MPI communications to accelerate transport of rays across the computational domain. We validate our implementation through several standard test problems, including the propagation of radiation in vacuum and the expansions of various types of H II regions. Additionally, scaling tests show that the cost of a full ray trace per source remains comparable to that of the hydrodynamics update on up to ∼10³ processors. To demonstrate application of our ART implementation, we perform a simulation of star cluster formation in a marginally bound, turbulent cloud, finding that its star formation efficiency is 12% when both radiation pressure forces and photoionization by UV radiation are treated. We directly compare the radiation forces computed from the ART scheme with those from the M1 closure relation. Although the ART and M1 schemes yield similar results on large scales, the latter is unable to resolve the radiation field accurately near individual point sources.
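    In adaptive ray tracing of the kind described above, rays are injected on a coarse HEALPix sphere and split as they travel outward, so that each grid cell is always crossed by some minimum number of rays. A sketch of that refinement criterion (the splitting threshold and the level-selection loop are a simplified illustration of the general idea, not the Athena module's exact logic):

```python
import math

def required_healpix_level(r, dx, rays_per_cell=4.0):
    """Smallest HEALPix level l such that, at distance r from the source,
    a grid cell of size dx is crossed by at least `rays_per_cell` rays.

    At level l there are 12 * 4**l rays, each carrying solid angle
    4*pi / (12 * 4**l); a cell at distance r subtends roughly (dx/r)**2.
    """
    level = 0
    while 12 * 4 ** level * (dx / r) ** 2 / (4 * math.pi) < rays_per_cell:
        level += 1
    return level

# Cells far from the source require more refined rays than nearby cells
near = required_healpix_level(r=2.0, dx=1.0)
far  = required_healpix_level(r=50.0, dx=1.0)
```

    Splitting on demand keeps the angular sampling roughly uniform per cell, which is what keeps the per-source cost comparable to a hydrodynamics update.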

  7. Simulation and Spectrum Extraction in the Spectroscopic Channel of the SNAP Experiment

    NASA Astrophysics Data System (ADS)

    Tilquin, Andre; Bonissent, A.; Gerdes, D.; Ealet, A.; Prieto, E.; Macaire, C.; Aumenier, M. H.

    2007-05-01

    Pixel-level simulation software is described. It is composed of two modules. The first module applies Fourier optics at each active element of the system to construct the PSF at a large variety of wavelengths and spatial locations of the point source. The input is provided by the engineering design program (Zemax), which describes the optical path and the distortions. The PSF properties are compressed and interpolated using shapelet decomposition and neural-network techniques. A second module is used for production jobs. It uses the output of the first module to reconstruct the relevant PSF and integrate it over the detector pixels. Extended and polychromatic sources are approximated by a combination of monochromatic point sources. For the spectrum extraction, we use a fast simulator based on a multidimensional linear interpolation of the pixel response, tabulated on a grid of values of wavelength, position on the sky, and slice number. The prediction of the fast simulator is compared to the observed pixel content, and a chi-square minimization in which the parameters are the bin contents is used to build the extracted spectrum. The visible and infrared arms are combined in the same chi-square, providing a single spectrum.
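    When the forward model is linear in the spectral bin contents (pixels = response matrix × spectrum), the chi-square minimization described above reduces to ordinary least squares. A toy sketch of that extraction step, with an entirely hypothetical response matrix standing in for the tabulated pixel response:

```python
import numpy as np

rng = np.random.default_rng(0)

n_bins, n_pix = 20, 200
# Hypothetical pixel response matrix: each spectral bin illuminates a
# small, shifted group of pixels (stand-in for the tabulated PSF response)
R = np.zeros((n_pix, n_bins))
for b in range(n_bins):
    center = 5 + 10 * b
    for p in range(max(0, center - 3), min(n_pix, center + 4)):
        R[p, b] = np.exp(-0.5 * ((p - center) / 1.5) ** 2)

true_spectrum = 1.0 + 0.5 * np.sin(np.linspace(0, 3, n_bins))
pixels = R @ true_spectrum + rng.normal(0, 0.01, n_pix)  # noisy observation

# Chi-square minimization with the bin contents as free parameters is,
# for a linear model with uniform noise, ordinary least squares
extracted, *_ = np.linalg.lstsq(R, pixels, rcond=None)
max_err = float(np.max(np.abs(extracted - true_spectrum)))
```

    Combining the visible and infrared arms in one chi-square simply means stacking their pixel vectors and response matrices before solving.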

  8. Imaging Young Stellar Objects with VLTi/PIONIER

    NASA Astrophysics Data System (ADS)

    Kluska, J.; Malbet, F.; Berger, J.-P.; Benisty, M.; Lazareff, B.; Le Bouquin, J.-B.; Baron, F.; Dominik, C.; Isella, A.; Juhasz, A.; Kraus, S.; Lachaume, R.; Ménard, F.; Millan-Gabet, R.; Monnier, J.; Pinte, C.; Soulez, F.; Tallon, M.; Thi, W.-F.; Thiébaut, É.; Zins, G.

    2014-04-01

    Optical interferometric imaging is designed to reveal complex astronomical sources without a prior model. Among these complex objects are young stars and their environments, which typically comprise a point-like source surrounded by circumstellar material of unknown morphology. To image them, we have developed a numerical method that completely removes the stellar point source and reconstructs the rest of the image, using the differences in spectral behavior between the star and its circumstellar material. We aim to reveal the first astronomical units of these objects, where many physical phenomena could interplay: dust sublimation causing a puffed-up inner rim, a dusty halo, a dusty wind, or an inner gaseous component. To investigate these regions more deeply, we carried out the first Large Program survey of HAeBe stars with two main goals: statistics on the geometry of these objects at the scale of the first astronomical unit, and imaging of their very close environments. The images reveal the environment free of contamination by the star and allow us to derive the best fit for the flux ratio and the spectral slope. We present the first images from this survey and the application of the imaging method to other astronomical objects.

  9. Deriving the Contribution of Blazars to the Fermi-LAT Extragalactic γ-ray Background at E > 10 GeV with Efficiency Corrections and Photon Statistics

    DOE PAGES

    Di Mauro, M.; Manconi, S.; Zechlin, H. -S.; ...

    2018-03-29

    Here, the Fermi Large Area Telescope (LAT) Collaboration has recently released the Third Catalog of Hard Fermi-LAT Sources (3FHL), which contains 1556 sources detected above 10 GeV with seven years of Pass 8 data. Building upon the 3FHL results, we investigate the flux distribution of sources at high Galactic latitudes (|b| > 20°), which are mostly blazars. We use two complementary techniques: (1) a source-detection efficiency correction method and (2) an analysis of pixel photon count statistics with the one-point probability distribution function (1pPDF). With the first method, using realistic Monte Carlo simulations of the γ-ray sky, we calculate the efficiency of the LAT to detect point sources. This enables us to find the intrinsic source-count distribution at photon fluxes down to 7.5 × 10⁻¹² ph cm⁻² s⁻¹. With this method, we detect a flux break at (3.5 ± 0.4) × 10⁻¹¹ ph cm⁻² s⁻¹ with a significance of at least 5.4σ. The power-law indices of the source-count distribution above and below the break are 2.09 ± 0.04 and 1.07 ± 0.27, respectively. This result is confirmed with the 1pPDF method, which has a sensitivity reach of ~10⁻¹¹ ph cm⁻² s⁻¹. Integrating the derived source-count distribution above the sensitivity of our analysis, we find that (42 ± 8)% of the extragalactic γ-ray background originates from blazars.
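    The final step quoted above, integrating S·dN/dS over the resolved flux range, can be sketched numerically for the broken power law with the quoted indices. Because the sub-break index (1.07) is below 2, the integral converges toward faint fluxes, so extending the lower bound changes the total only mildly. The normalization and upper cutoff below are arbitrary placeholders:

```python
import math

def resolved_flux(s_min, s_max, s_break, n_above, n_below, norm=1.0, n_steps=100_000):
    """Integrate S * dN/dS over [s_min, s_max] for a broken power law
    dN/dS = norm * (S/s_break)**(-n), with n = n_above above the break and
    n = n_below below it, using a midpoint rule on a logarithmic grid."""
    log_lo, log_hi = math.log10(s_min), math.log10(s_max)
    step = (log_hi - log_lo) / n_steps
    total = 0.0
    for i in range(n_steps):
        s = 10 ** (log_lo + (i + 0.5) * step)
        ds = s * math.log(10) * step          # dS for a log-spaced bin
        n = n_above if s >= s_break else n_below
        total += s * norm * (s / s_break) ** (-n) * ds
    return total

# Break flux and indices as quoted in the abstract; normalization arbitrary
s_b = 3.5e-11
f1 = resolved_flux(7.5e-12, 1e-8, s_b, 2.09, 1.07)   # analysis sensitivity
f2 = resolved_flux(7.5e-13, 1e-8, s_b, 2.09, 1.07)   # 10x deeper lower bound
```

    The small difference between `f1` and `f2` illustrates why the resolved fraction of the background is dominated by sources near the break rather than by the faint tail.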

  11. 2FHL: The Second Catalog of Hard Fermi-LAT Sources

    DOE PAGES

    Ackermann, M.; Ajello, M.; Atwood, W. B.; ...

    2016-01-14

    We present a catalog of sources detected above 50 GeV by the Fermi Large Area Telescope (LAT) in 80 months of data. The newly delivered Pass 8 event-level analysis allows the detection and characterization of sources in the 50 GeV–2 TeV energy range. In this energy band, Fermi-LAT has detected 360 sources, which constitute the second catalog of hard Fermi-LAT sources (2FHL). The improved angular resolution enables the precise localization of point sources (~1.7′ radius at 68% C.L.) and the detection and characterization of spatially extended sources. We find that 86% of the sources can be associated with counterparts at other wavelengths, of which the majority (75%) are active galactic nuclei and the rest (11%) are Galactic sources. Only 25% of the 2FHL sources have been previously detected by Cherenkov telescopes, implying that the 2FHL provides a reservoir of candidates to be followed up at very high energies. This work closes the energy gap between the observations performed at GeV energies by Fermi-LAT in orbit and the observations performed at higher energies by Cherenkov telescopes from the ground.

  13. Automated Analysis of Renewable Energy Datasets ('EE/RE Data Mining')

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bush, Brian; Elmore, Ryan; Getman, Dan

    This poster illustrates methods to substantially improve the understanding of renewable energy datasets, and the depth and efficiency of their analysis, through the application of statistical learning methods ('data mining') to the intelligent processing of these often large and messy information sources. The six examples apply methods for anomaly detection, data cleansing, and pattern mining to time-series data (measurements from metering points in buildings) and spatiotemporal data (renewable energy resource datasets).

  14. Battlespace Awareness: Heterogeneous Sensor Maps of Large Scale, Complex Environments

    DTIC Science & Technology

    2017-06-13

    reference frames enable a system designer to describe the position of any sensor or platform at any point of time. This section introduces the...analysis to evaluate the quality of reconstructions created by our algorithms. CloudCompare is an open-source tool designed for this purpose [65]. In...structure of the data. The data term seeks to keep the proposed solution (u) similar to the originally observed values ( f ). A systems designer must

  15. Gaining Ground in the Middle School Grades: Why Some Schools Do Better. A Large-Scale Study of Middle Grades Practices and Student Outcomes. The Informed Educator Series

    ERIC Educational Resources Information Center

    Educational Research Service, 2011

    2011-01-01

    This "Informed Educator" draws content from several reports as well as a PowerPoint presentation that describe findings from a study conducted by EdSource and its research partners from Stanford University and the American Institutes for Research. The project--Gaining Ground in the Middle Grades: Why Some Schools Do Better--focused on…

  16. Non-domestic phosphorus release in rivers during low-flow: Mechanisms and implications for sources identification

    NASA Astrophysics Data System (ADS)

    Dupas, Rémi; Tittel, Jörg; Jordan, Phil; Musolff, Andreas; Rode, Michael

    2018-05-01

    A common assumption in phosphorus (P) load apportionment studies is that P loads in rivers consist of flow-independent point-source emissions (mainly of domestic and industrial origin) and flow-dependent diffuse-source emissions (mainly of agricultural origin). Hence, rivers dominated by point sources will exhibit their highest P concentrations during low flow, when the dilution capacity is minimal, whereas rivers dominated by diffuse sources will exhibit their highest P concentrations during high flow, when land-to-river hydrological connectivity is maximal. Here, we show that Soluble Reactive P (SRP) concentrations in three forested catchments free of point sources exhibited seasonal maxima during the summer low-flow period, i.e. a pattern expected in point-source-dominated areas. A load apportionment model (LAM) is used to show how the point-source contribution may have been overestimated in previous studies because of a biogeochemical process mimicking a point-source signal. Almost twenty-two years (March 1995-September 2016) of monthly monitoring data for SRP, dissolved iron (Fe) and nitrate-N (NO3) were used to investigate the underlying mechanisms: SRP and Fe exhibited similar seasonal patterns, opposite to that of NO3. We hypothesise that reductive dissolution of Fe oxyhydroxides might be the cause of SRP release during the summer period, and that NO3 might act as a redox buffer controlling the seasonality of SRP release. We conclude that LAMs may overestimate the contribution of P point sources, especially during the summer low-flow period, when eutrophication risk is maximal.
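    The flow-signature logic behind load apportionment can be sketched in concentration form: a constant point-source load diluted by discharge, plus a diffuse term whose load grows faster than flow. The functional form below is one common LAM shape and all parameter values are hypothetical; it simply reproduces the two diagnostic patterns the abstract describes (and which a low-flow biogeochemical release can mimic):

```python
import numpy as np

def srp_concentration(q, point_load, diffuse_coeff, diffuse_exp):
    """Load apportionment model in concentration form: a constant
    point-source load diluted by flow (point_load / q) plus a diffuse
    term whose load scales super-linearly with flow (exponent > 1)."""
    return point_load / q + diffuse_coeff * q ** (diffuse_exp - 1)

q = np.linspace(0.1, 10.0, 100)   # discharge (m3/s), hypothetical range

# Point-dominated river: concentration peaks at the lowest flow
point_dominated = srp_concentration(q, point_load=2.0,
                                    diffuse_coeff=0.01, diffuse_exp=1.8)
# Diffuse-dominated river: concentration peaks at the highest flow
diffuse_dominated = srp_concentration(q, point_load=0.05,
                                      diffuse_coeff=0.5, diffuse_exp=1.8)
```

    A seasonal redox-driven SRP release during summer low flow raises concentration exactly where the `point_load / q` term dominates, which is how a LAM fit can misattribute it to point sources.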

  17. Structural concepts for large solar concentrators

    NASA Technical Reports Server (NTRS)

    Hedgepeth, John M.; Miller, Richard K.

    1987-01-01

    The Sunflower large solar concentrator, developed in the early 1970s, is a salient example of a high-efficiency concentrator. The newly emphasized needs for solar dynamic power on the Space Station and for large, lightweight thermal sources are outlined. Existing concepts for high-efficiency reflector surfaces are examined, with attention to the accuracy needed for concentration ratios of 1000 to 3000. Concepts using stiff reflector panels are deemed most likely to exhibit the long-term, consistent accuracy necessary for low-orbit operation, particularly at the higher concentration ratios. Quantitative results show the effects of surface errors for various concentration and focal-length-to-diameter ratios. Cost effectiveness is discussed. Principal sources of high cost include the need for variously dished panels for paraboloidal reflectors and the expense of ground testing and adjustment. A new configuration is presented that addresses both problems: a deployable Pactruss backup structure with identical panels installed on the structure after deployment in space. Analytical results show that, with reasonable pointing errors, this new concept is capable of concentration ratios greater than 2000.

  18. Microwave power - An energy transmission alternative for the year 2000

    NASA Technical Reports Server (NTRS)

    Nalos, E.; Sperber, R.

    1980-01-01

    Recent technological advances related to the feasibility of efficient RF-dc rectification make it likely that by the year 2000 the transmission of power through space will have become a practical reality. Proposals have been made to power helicopters, aircraft, balloons, and rockets remotely. Other proposals consider the transfer of power from point to point on earth via relay through space or a transmission of power from large power sources in space. Attention has also been given to possibilities regarding the transmission of power between various points in the solar system. An outline is provided of the microwave power transmission system envisaged for the solar power satellite, taking into account the transmitting antenna, the receiver on earth, aspects of beam formation and control, transmitter options, the receiving antenna design, and cost and efficiency considerations.
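    For the space-to-earth link sketched above, the fraction of transmitted power that a rectenna intercepts is often estimated with the Gaussian-beam relation η ≈ 1 − exp(−τ²), where τ = √(A_t·A_r)/(λD) (the Goubau parameter). A small illustration with SPS-like round numbers; the aperture sizes, frequency, and distance below are assumptions for the sketch, not figures from this article:

```python
import math

def beam_collection_efficiency(a_t, a_r, wavelength, distance):
    """Fraction of transmitted microwave power intercepted by the
    receiving aperture, in the Gaussian-beam approximation:
    eta = 1 - exp(-tau**2), tau = sqrt(A_t * A_r) / (wavelength * D)."""
    tau = math.sqrt(a_t * a_r) / (wavelength * distance)
    return 1.0 - math.exp(-tau ** 2)

# SPS-like illustration: 1 km transmitting antenna, 10 km rectenna,
# 2.45 GHz (12.24 cm wavelength), geostationary distance ~35,800 km
a_t = math.pi * 500.0 ** 2      # transmitting aperture area (m^2)
a_r = math.pi * 5000.0 ** 2     # rectenna area (m^2)
eta = beam_collection_efficiency(a_t, a_r, 0.1224, 3.58e7)
```

    The product form of τ shows the central design trade: shrinking the transmitting antenna must be paid for with a larger rectenna (or a shorter wavelength) to keep the collection efficiency high.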

  19. Calculation and analysis of the non-point source pollution in the upstream watershed of the Panjiakou Reservoir, People's Republic of China

    NASA Astrophysics Data System (ADS)

    Zhang, S.; Tang, L.

    2007-05-01

    Panjiakou Reservoir is an important drinking water resource in the Haihe River Basin, Hebei Province, People's Republic of China. The upstream watershed area is about 35,000 square kilometers. Recently, water pollution in the reservoir has become more serious owing to non-point as well as point-source pollution in the upstream watershed. To manage the reservoir and watershed effectively and to develop a plan to reduce pollutant loads, the loadings of non-point and point-source pollution and their distribution across the upstream watershed must be fully understood. The SWAT model is used to simulate the production and transport of non-point source pollutants in the upstream watershed of the Panjiakou Reservoir. The loadings of non-point source pollutants are calculated for different hydrologic years, and the spatial and temporal characteristics of non-point source pollution are studied. The stream network and the topographic characteristics of the stream network and sub-basins are derived from the DEM with ArcGIS software. The soil and land-use data are reclassified, and a soil physical-properties database file is created for the model. The SWAT model was calibrated with observed data from several hydrologic monitoring stations in the study area. The calibration results show that the model performs fairly well. The calibrated model was then used to calculate the loadings of non-point source pollutants for a wet year, a normal year and a dry year. The temporal and spatial distributions of flow, sediment and non-point source pollution were analyzed from the simulated results. The differences among hydrologic years are dramatic: the loading of non-point source pollution is relatively large in the wet year and small in the dry year, since non-point source pollutants are mainly transported by runoff. Within a year, the pollution loading is mainly produced in the flood season.
    Because SWAT is a distributed model, its output can be viewed as it varies across the basin, so the critical areas and reaches in the study area can be identified. According to the simulation results, different land uses yield different results, and fertilization in the rainy season has an important impact on non-point source pollution. The limitations of the SWAT model are also discussed, and measures for the control and prevention of non-point source pollution for the Panjiakou Reservoir are presented based on the analysis of the model results.

  20. Source Mechanism of the November 27, 1945 Tsunami in the Makran Subduction Zone

    NASA Astrophysics Data System (ADS)

    Heidarzadeh, M.; Satake, K.

    2011-12-01

    We study the source of the Makran tsunami of November 27, 1945 using newly available tide gauge data from this large tsunami. The Makran subduction zone in the northwestern Indian Ocean results from the northward subduction of the Arabian plate beneath the Eurasian plate at an approximate rate of 2 cm/year. Makran was the site of a large tsunamigenic earthquake in November 1945 (Mw 8.1), which caused widespread destruction and a death toll of about 4000 people along the coasts of the northwestern Indian Ocean. Although Makran has experienced at least several large tsunamigenic earthquakes over the past several hundred years, the 1945 event is the only instrumentally recorded tsunamigenic earthquake in the region, making it an important event for tsunami hazard assessment. However, the source of this tsunami was poorly studied in the past because no tide gauge data were available to verify the tsunami source. In this study, we use two tide gauge records of the November 27, 1945 tsunami, from Mumbai and Karachi at approximate distances of 1100 and 350 km from the epicenter, respectively, to constrain the tsunami source. Besides the two tide gauge records, recently published by Neetu et al. (2011, Natural Hazards), some reports of the arrival times and wave heights of the tsunami at different locations, both in the near field (e.g., Pasni and Ormara) and the far field (e.g., Seychelles), are available and will be used to further constrain the source. In addition, the source mechanism of the 27 November 1945 tsunami determined from seismic data will be used as the starting point for this study. Several reports indicate that a secondary source triggered by the main shock, e.g., landslides or splay faults, possibly contributed to the main plate-boundary rupture during this large interplate earthquake.
    For example, a runup height of up to 12 m was reported in Pasni, the coast nearest to the tsunami source, which is difficult to attribute to a plate-boundary event with a maximum slip of around 6 m. Therefore, the possible contribution of secondary tsunami sources will also be examined.

  1. Active control on high-order coherence and statistic characterization on random phase fluctuation of two classical point sources.

    PubMed

    Hong, Peilong; Li, Liming; Liu, Jianji; Zhang, Guoquan

    2016-03-29

    Young's double-slit, or two-beam, interference is of fundamental importance for understanding various interference effects, in which the stationary phase difference between two beams plays the key role in first-order coherence. Different from the case of first-order coherence, in high-order optical coherence the statistical behavior of the optical phase plays the key role. In this article, employing a fundamental interfering configuration with two classical point sources, we show that the high-order optical coherence between two classical point sources can be actively designed by controlling the statistical behavior of the relative phase difference between the two point sources. Synchronous-position Nth-order subwavelength interference with an effective wavelength of λ/M was demonstrated, in which λ is the wavelength of the point sources and M is an integer not larger than N. Interestingly, we found that the synchronous-position Nth-order interference fringe fingerprints the statistical trace of the random phase fluctuation of the two classical point sources; it therefore provides an effective way to characterize the statistical properties of phase fluctuations for incoherent light sources.
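    The core idea, engineered phase statistics producing subwavelength fringes in higher-order correlations while the first-order fringe washes out, can be illustrated with the simplest case M = 2. If the relative phase between the two sources is restricted to {0, π} with equal probability, ⟨I(x)⟩ is flat but ⟨I(x)²⟩ oscillates at twice the fringe frequency. A toy simulation of that textbook calculation (all parameters are illustrative, not the experiment's):

```python
import numpy as np

rng = np.random.default_rng(1)

k_delta = 2 * np.pi            # fringe wavenumber: one first-order period per unit x
x = np.linspace(0, 2, 200)     # two first-order fringe periods
n_shots = 5000

# Relative phase engineered to take only the values {0, pi} (M = 2 case)
dphi = rng.choice([0.0, np.pi], size=n_shots)

theta = k_delta * x[None, :] + dphi[:, None]
intensity = 2.0 + 2.0 * np.cos(theta)      # two equal classical point sources

mean_I = intensity.mean(axis=0)            # first order: fringe averages away
mean_I2 = (intensity ** 2).mean(axis=0)    # second order: lambda/2 fringe survives

# Fringe contrast of the first- vs second-order patterns
c1 = (mean_I.max() - mean_I.min()) / (mean_I.max() + mean_I.min())
c2 = (mean_I2.max() - mean_I2.min()) / (mean_I2.max() + mean_I2.min())
```

    Analytically, ⟨I²⟩ = 6 + 2cos(2θ) here, a fringe with half the first-order period and contrast 1/3, while ⟨I⟩ stays flat, which is the fingerprint of the engineered phase statistics.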

  2. Quantification of Greenhouse Gas Emission Rates from strong Point Sources by Airborne IPDA-Lidar Measurements: Methodology and Experimental Results

    NASA Astrophysics Data System (ADS)

    Ehret, G.; Amediek, A.; Wirth, M.; Fix, A.; Kiemle, C.; Quatrevalet, M.

    2016-12-01

    We report on a new method, and on its first demonstration, to quantify emission rates from strong greenhouse gas (GHG) point sources using airborne Integrated Path Differential Absorption (IPDA) lidar measurements. In order to build trust in the emission rates self-reported by countries, verification against independent monitoring systems is a prerequisite for checking the reported budgets. A significant fraction of the total anthropogenic emission of CO2 and CH4 originates from localized strong point sources such as large energy production sites or landfills. Neither is monitored with sufficient accuracy by the current observation system. There is debate over whether airborne remote sensing could fill this gap by inferring emission rates from budgeting or Gaussian plume inversion approaches, whereby measurements of the GHG column abundance beneath the aircraft constrain inverse models. In contrast to passive sensors, the use of an active instrument like CHARM-F for such emission verification measurements is new. CHARM-F is a new airborne IPDA lidar devised for the German research aircraft HALO for the simultaneous measurement of the column-integrated dry-air mixing ratios of CO2 and CH4, commonly denoted XCO2 and XCH4, respectively. It has been successfully tested in a series of flights over Central Europe to assess its performance under various reflectivity conditions and over strongly varying topography such as the Alps. The analysis of a methane plume measured in the crosswind direction of a coal mine ventilation shaft revealed an instantaneous emission rate of 9.9 ± 1.7 kt CH4 yr⁻¹. We discuss the methodology of our point-source estimation approach and give an outlook on the CoMet field experiment, scheduled for 2017, for the measurement of anthropogenic and natural GHG emissions by a combination of active and passive remote sensing instruments on research aircraft.
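    The mass-balance (cross-sectional flux) estimate underlying such plume measurements is Q = u·∫ΔX dy: wind speed times the crosswind integral of the column enhancement. A minimal sketch with a synthetic Gaussian plume; the plume width, peak enhancement, and wind speed are hypothetical numbers chosen only so the result lands near the ~10 kt/yr scale quoted in the abstract:

```python
import numpy as np

def emission_rate(y, excess_column, wind_speed):
    """Mass-balance estimate of a point-source emission rate from one
    crosswind transect: Q = u * integral of the column enhancement.
    Units: y in m, excess_column in kg/m^2, wind_speed in m/s -> kg/s."""
    # trapezoidal integration of the enhancement across the plume
    integral = float(np.sum((excess_column[1:] + excess_column[:-1]) / 2
                            * np.diff(y)))
    return wind_speed * integral

# Hypothetical Gaussian plume transect
y = np.linspace(-2000.0, 2000.0, 401)   # crosswind coordinate (m)
sigma, peak = 300.0, 8.4e-5             # plume width (m), peak excess (kg/m^2)
excess = peak * np.exp(-0.5 * (y / sigma) ** 2)

q_kg_s = emission_rate(y, excess, wind_speed=5.0)
q_kt_yr = q_kg_s * 3.15576e7 / 1e6      # kg/s -> kt/yr
```

    In practice the IPDA lidar delivers XCH4 along the flight track; converting the mixing-ratio enhancement to a mass column and combining it with an independent wind estimate are the main additional steps (and error sources).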

  3. Evolution of Extragalactic Radio Sources and Quasar/Galaxy Unification

    NASA Astrophysics Data System (ADS)

    Onah, C. I.; Ubachukwu, A. A.; Odo, F. C.; Onuchukwu, C. C.

    2018-04-01

    We use a large sample of radio sources to investigate the effects of evolution, luminosity selection and radio source orientation in explaining the apparent deviation of the observed angular size - redshift (θ - z) relation of extragalactic radio sources (EGRSs) from the standard model. We have fitted the observed θ - z data with standard cosmological models based on a flat universe (Ω0 = 1). The size evolution of EGRSs is described as luminosity-, temporal- and orientation-dependent, in the form D(P, z, Φ) ∝ P^(±q) (1 + z)^(−m) sin Φ, with q = 0.3, Φ = 59°, m = −0.26 for radio galaxies and q = −0.5, Φ = 33°, m = 3.1 for radio quasars, respectively. Critical values of luminosity, log P_crit = 26.33 W Hz⁻¹, and of linear size, log(D_c/kpc) = 2.51 (D_c ≈ 316.23 kpc), were also observed for the present sample of radio sources. All the results are consistent with the popular quasar/galaxy unification scheme.
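    The "standard model" baseline for such θ - z comparisons is the angular size of a rigid rod in a flat, matter-only (Ω0 = 1, Einstein-de Sitter) universe, where the angular diameter distance peaks at z = 1.25 so that more distant sources look larger again. A sketch of that baseline (the Hubble constant and rod size are assumed values for illustration):

```python
import math

C_KM_S = 299792.458   # speed of light (km/s)
H0 = 70.0             # Hubble constant (km/s/Mpc), assumed

def angular_size_eds(linear_size_kpc, z):
    """Angular size (arcsec) of a rigid rod in an Einstein-de Sitter
    (flat, matter-only, Omega_0 = 1) universe."""
    d_c = 2 * C_KM_S / H0 * (1 - 1 / math.sqrt(1 + z))   # comoving distance (Mpc)
    d_a = d_c / (1 + z)                                  # angular diameter distance
    theta_rad = (linear_size_kpc / 1000.0) / d_a
    return theta_rad * 180 / math.pi * 3600

# In EdS, theta(z) has a minimum at z = 1.25
zs = [0.5, 1.25, 3.0]
thetas = [angular_size_eds(100.0, z) for z in zs]
```

    Deviations of the observed θ - z data from this curve are what the luminosity-, epoch- and orientation-dependent size evolution D(P, z, Φ) is invoked to explain.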

  4. Mercury accumulation in snow on the Idaho National Engineering and Environmental Laboratory and surrounding region, southeast Idaho, USA

    USGS Publications Warehouse

    Susong, D.D.; Abbott, M.L.; Krabbenhoft, D.P.

    2003-01-01

    Snow was sampled and analyzed for total mercury (THg) on the Idaho National Engineering and Environmental Laboratory (INEEL) and surrounding region prior to the start-up of a large (9-11 g/h) gaseous mercury emission source. The objective was to determine the effects of the source on local and regional atmospheric deposition of mercury. Snow samples collected from 48 points on a polar grid near the source had THg concentrations that ranged from 4.71 to 27.26 ng/L; snow collected from regional background sites had THg concentrations that ranged from 0.89 to 16.61 ng/L. Grid samples had higher concentrations than the regional background sites, which was unexpected because the source was not operating yet. Emission of Hg from soils is a possible source of Hg in snow on the INEEL. Evidence from Hg profiles in snow and from unfiltered/filtered split samples supports this hypothesis. Ongoing work on the INEEL is investigating Hg fluxes from soils and snow.

  5. Stemflow: A literature review and the challenges ahead

    NASA Astrophysics Data System (ADS)

    José, Návar

    2013-04-01

    Stemflow is the rainfall portion that flows down to the ground via trunks or stems. It is a localized point source input of precipitation and solutes at the stem base, creating islands of soil moisture and fertility. It accounts on average for less than 5% of the gross rainfall but maximum figures can reach 3.5%, 11.3%, and 19% in tropical, temperate and semi-arid plant communities, respectively. However, recent research has shown these statistics could be twice as large in overstocked semi-arid, subtropical and temperate forest stands. Tree and shrub species funnel different stemflow depths and canopy features; diameter at breast height, top height, canopy area and volume, branch number and position; bark smoothness, etc. are the most frequent independent variables employed to explain the large intrinsic variation. The funneling ratio evaluates the hydro-pedological importance; calculated by the division of stemflow volume by the stem base area and by the rainfall depth. Statistics quite often show funneling ratios >> 1. Assessments of the stemflow infiltration area quite frequently show the islands of soil moisture are at least twice as large as the soil depth wetted by rainfall in the open and calculations are in agreement with several visual observations. Empirical evaluations quite often also show the potential contribution of stemflow to groundwater recharge and streamflow generation. However, assessments of the infiltration area and depth quite frequently deviate from visual observations conducted by dying pathways, showing roots are the most frequent sources of stemflow transport within soils. Should this be the case for most trees, then the number of roots and their position within the soil profile would help to better forecast the stemflow (rootflow) infiltration depth and the potential triggering of other hydrological processes. 
Current mathematical approaches challenge future research on stemflow and rootflow to better understand the hydro-eco-pedological importance of point source inputs of plant communities.
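The funneling ratio described in the abstract reduces to a one-line calculation. A minimal sketch in Python, using consistent SI units so the ratio comes out dimensionless (function and variable names are illustrative, not from the paper):

```python
def funneling_ratio(stemflow_volume_m3, basal_area_m2, rainfall_depth_m):
    """Funneling ratio: stemflow volume divided by the product of stem
    basal area and rainfall depth. Values >> 1 indicate that the trunk
    concentrates far more water at its base than open rainfall would
    deliver to the same area."""
    return stemflow_volume_m3 / (basal_area_m2 * rainfall_depth_m)
```

For example, a stem with 0.05 m² basal area receiving 20 mm (0.02 m) of rain and yielding 10 L (0.01 m³) of stemflow has a funneling ratio of 10.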

  6. UNCOVERING THE NUCLEUS CANDIDATE FOR NGC 253

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Günthardt, G. I.; Camperi, J. A.; Agüero, M. P.

    2015-11-15

NGC 253 is the nearest spiral galaxy with a nuclear starburst, making it the best candidate for studying the relationship between starburst and active galactic nucleus activity. However, this central region is veiled by large amounts of dust, and it has been so far unclear which is the true dynamical nucleus, to the point that there is no strong evidence that the galaxy harbors a supermassive black hole co-evolving with the starburst as was supposed earlier. Near-infrared (NIR) spectroscopy, especially NIR emission line analysis, could be advantageous in shedding light on the true nucleus identity. Using Flamingos-2 at Gemini South, we have taken deep K-band spectra along the major axis of the central structure and through the brightest infrared source. In this work, we present evidence showing that the brightest NIR and mid-infrared source in the central region, already known as radio source TH7 and so far considered just a large stellar supercluster, in fact presents various symptoms of a genuine galactic nucleus. Therefore, it should be considered a valid nucleus candidate. Mentioning some distinctive aspects, it is the most massive compact infrared object in the central region, located 2.″0 from the symmetry center of the galactic bar, as measured in the K-band emission. Moreover, our data indicate that this object is surrounded by a large circumnuclear stellar disk and it is also located at the rotation center of the large molecular gas disk of NGC 253. Furthermore, a kinematic residual appears in the H₂ rotation curve with a sinusoidal shape consistent with an outflow centered in the candidate nucleus position. The maximum outflow velocity is located about 14 pc from TH7, which is consistent with the radius of a shell detected around the nucleus candidate, observed at 18.3 μm (Qa) and 12.8 μm ([Ne ii]) with T-ReCS. 
Also, the Brγ emission line profile shows a pronounced blueshift, and this emission line also has the highest equivalent width at this position. All this evidence points to TH7 as the best candidate for the galactic nucleus of NGC 253.

  7. The New LOTIS Test Facility

    NASA Technical Reports Server (NTRS)

    Bell, R. M.; Cuzner, G.; Eugeni, C.; Hutchison, S. B.; Merrick, A. J.; Robins, G. C.; Bailey, S. H.; Ceurden, B.; Hagen, J.; Kenagy, K.; hide

    2008-01-01

The Large Optical Test and Integration Site (LOTIS) at the Lockheed Martin Space Systems Company in Sunnyvale, CA is designed for the verification and testing of optical systems. The facility consists of an 88-foot temperature-stabilized vacuum chamber that also functions as a class 10k vertical-flow cleanroom. Many problems were encountered in the design and construction phases, as industry capability to build large chambers is very limited; through many delays and extra engineering efforts, the final product is very good. With 11 Thermal Conditioning Units and precision RTDs, temperature is uniform and stable within 1 °F, providing an ideal environment for precision optical testing. Within this chamber, atop an advanced micro-g vibration-isolation bench, are the 6.5 meter diameter LOTIS Collimator and Scene Generator, along with LOTIS alignment and support equipment. The optical payloads are also placed on the vibration bench in the chamber for testing. This optical system is designed to operate in both air and vacuum, providing test imagery in an adaptable suite of visible/near-infrared (VNIR) and midwave-infrared (MWIR) point sources, and combined-bandwidth visible-through-MWIR point sources, for testing of large-aperture optical payloads. The heart of the system is the LOTIS Collimator, a 6.5 m f/15 telescope, which projects scenes with wavefront errors <85 nm rms out to a 0.75 mrad field of view (FOV). Using field lenses, performance can be extended to a maximum field of view of 3.2 mrad. The LOTIS Collimator incorporates an extensive integrated wavefront sensing and control system to verify the performance of the system.

  8. Large-scale variability of wind erosion mass flux rates at Owens Lake 1. Vertical profiles of horizontal mass fluxes of wind-eroded particles with diameter greater than 50 μm

    USGS Publications Warehouse

    Gillette, Dale A.; Fryrear, D.W.; Xiao, Jing Bing; Stockton, Paul; Ono, Duane; Helm, Paula J.; Gill, Thomas E; Ley, Trevor

    1997-01-01

A field experiment at Owens (dry) Lake, California, tested whether and how the relative profiles of airborne horizontal mass fluxes for >50-μm wind-eroded particles changed with friction velocity. The horizontal mass flux at almost all measured heights increased proportionally to the cube of friction velocity above an apparent threshold friction velocity for all sediment tested and increased with height, except at one coarse-sand site where the relative horizontal mass flux profile did not change with friction velocity. Size distributions for long-time-averaged horizontal mass flux samples showed a saltation layer from the surface to a height between 30 and 50 cm, above which suspended particles dominate. Measurements from a large dust source area on a line parallel to the wind showed that even though the saltation flux reached equilibrium ∼650 m downwind of the starting point of erosion, weakly suspended particles were still being input into the atmosphere 1567 m downwind of the starting point; thus the saltating fraction of the total mass flux decreased after 650 m. The scale length difference and the 70/30 ratio of suspended mass flux to saltation mass flux at the farthest downwind sampling site confirm that suspended particles are very important for mass budgets in large source areas and that saltation mass flux can be a variable fraction of total horizontal mass flux for soils with a substantial fraction of <100-μm particles.
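The cubic dependence on friction velocity reported above can be sketched as a simple parameterization. The abstract does not give the exact functional form or the coefficient, which is soil-dependent; the version below, with flux proportional to the excess of u*³ over the threshold value, is one plausible reading (all names and the default coefficient are illustrative):

```python
def horizontal_mass_flux(u_star, u_star_threshold, coeff=1.0):
    """Horizontal mass flux as a function of friction velocity u* (m/s).
    Below the apparent threshold friction velocity there is no erosion;
    above it, the flux grows with the cube of the friction velocity."""
    if u_star <= u_star_threshold:
        return 0.0
    return coeff * (u_star**3 - u_star_threshold**3)
```

Doubling the friction velocity well above threshold thus increases the flux roughly eightfold, which is why small changes in wind speed dominate dust emission budgets.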

  9. Exploring the origin of a large cavity in Abell 1795 using deep Chandra observations

    NASA Astrophysics Data System (ADS)

    Walker, S. A.; Fabian, A. C.; Kosec, P.

    2014-12-01

We examine deep stacked Chandra observations of the galaxy cluster Abell 1795 (over 700 ks) to study in depth a large (34 kpc radius) cavity in the X-ray emission. Curiously, despite the large energy required to form this cavity (4PV = 4 × 10⁶⁰ erg), there is no obvious counterpart to the cavity on the opposite side of the cluster, which would be expected if it has formed due to jets from the central active galactic nucleus (AGN) inflating bubbles. There is also no radio emission associated with the cavity, and no metal enhancement or filaments between it and the brightest cluster galaxy, which are normally found for bubbles inflated by AGN which have risen from the core. One possibility is that this is an old ghost cavity, and that gas sloshing has dominated the distribution of metals around the core. Projection effects, particularly the long X-ray bright filament to the south-east, may prevent us from seeing the companion bubble on the opposite side of the cluster core. We calculate that such a companion bubble would easily have been able to uplift the gas in the southern filament from the core. Interestingly, it has recently been found that inside the cavity is a highly variable X-ray point source coincident with a small dwarf galaxy. Given the remarkable spatial correlation of this point source and the X-ray cavity, we explore the possibility that an outburst from this dwarf galaxy in the past could have led to the formation of the cavity, but find this to be an unlikely scenario.

  10. 2FHL- The Second Catalog of Hard Fermi-LAT Sources

    NASA Technical Reports Server (NTRS)

    Ackermann, M.; Ajello, M.; Atwood, W. B.; Baldini, L.; Ballet, J.; Barbiellini, G.; Bastieri, D.; Gonzalez, J. Becerra; Bellazzini, R.; Bissaldi, E.; hide

    2016-01-01

We present a catalog of sources detected above 50 GeV by the Fermi Large Area Telescope (LAT) in 80 months of data. The newly delivered Pass 8 event-level analysis allows the detection and characterization of sources in the 50 GeV-2 TeV energy range. In this energy band, Fermi-LAT has detected 360 sources, which constitute the second catalog of hard Fermi-LAT sources (2FHL). The improved angular resolution enables the precise localization of point sources (1.7′ radius at 68% C.L.) and the detection and characterization of spatially extended sources. We find that 86% of the sources can be associated with counterparts at other wavelengths, of which the majority (75%) are active galactic nuclei and the rest (11%) are Galactic sources. Only 25% of the 2FHL sources have been previously detected by Cherenkov telescopes, implying that the 2FHL provides a reservoir of candidates to be followed up at very high energies. This work closes the energy gap between the observations performed at GeV energies by Fermi-LAT on orbit and the observations performed at higher energies by Cherenkov telescopes from the ground.

  11. Monte Carlo studies of medium-size telescope designs for the Cherenkov Telescope Array

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wood, M. D.; Jogler, T.; Dumm, J.

In this paper, we present studies for optimizing the next generation of ground-based imaging atmospheric Cherenkov telescopes (IACTs). Results focus on mid-sized telescopes (MSTs) for CTA, detecting very high energy gamma rays in the energy range from a few hundred GeV to a few tens of TeV. We describe a novel, flexible detector Monte Carlo package, FAST (FAst Simulation for imaging air Cherenkov Telescopes), that we use to simulate different array and telescope designs. The simulation is somewhat simplified to allow for efficient exploration over a large telescope design parameter space. We investigate a wide range of telescope performance parameters, including optical resolution, camera pixel size, and light collection area. In order to ensure a comparison of the arrays at their maximum sensitivity, we analyze the simulations with the most sensitive techniques used in the field, such as maximum likelihood template reconstruction and boosted decision trees for background rejection. Choosing telescope design parameters representative of the proposed Davies–Cotton (DC) and Schwarzschild–Couder (SC) MST designs, we compare the performance of the arrays by examining the gamma-ray angular resolution and differential point-source sensitivity. We further investigate the array performance under a wide range of conditions, determining the impact of the number of telescopes, telescope separation, night sky background, and geomagnetic field. We find a 30–40% improvement in the gamma-ray angular resolution at all energies when comparing arrays with an equal number of SC and DC telescopes, significantly enhancing point-source sensitivity in the MST energy range. Finally, we attribute the increase in point-source sensitivity to the improved optical point-spread function and smaller pixel size of the SC telescope design.

  12. Monte Carlo studies of medium-size telescope designs for the Cherenkov Telescope Array

    DOE PAGES

    Wood, M. D.; Jogler, T.; Dumm, J.; ...

    2015-06-07

In this paper, we present studies for optimizing the next generation of ground-based imaging atmospheric Cherenkov telescopes (IACTs). Results focus on mid-sized telescopes (MSTs) for CTA, detecting very high energy gamma rays in the energy range from a few hundred GeV to a few tens of TeV. We describe a novel, flexible detector Monte Carlo package, FAST (FAst Simulation for imaging air Cherenkov Telescopes), that we use to simulate different array and telescope designs. The simulation is somewhat simplified to allow for efficient exploration over a large telescope design parameter space. We investigate a wide range of telescope performance parameters, including optical resolution, camera pixel size, and light collection area. In order to ensure a comparison of the arrays at their maximum sensitivity, we analyze the simulations with the most sensitive techniques used in the field, such as maximum likelihood template reconstruction and boosted decision trees for background rejection. Choosing telescope design parameters representative of the proposed Davies–Cotton (DC) and Schwarzschild–Couder (SC) MST designs, we compare the performance of the arrays by examining the gamma-ray angular resolution and differential point-source sensitivity. We further investigate the array performance under a wide range of conditions, determining the impact of the number of telescopes, telescope separation, night sky background, and geomagnetic field. We find a 30–40% improvement in the gamma-ray angular resolution at all energies when comparing arrays with an equal number of SC and DC telescopes, significantly enhancing point-source sensitivity in the MST energy range. Finally, we attribute the increase in point-source sensitivity to the improved optical point-spread function and smaller pixel size of the SC telescope design.

  13. Study of Heavy Metals in a Wetland Area Adjacent to a Waste Disposal Site Near Resolute Bay, Canadian High Arctic

    NASA Astrophysics Data System (ADS)

    Lund, K. E.; Young, K. L.

    2004-05-01

Heavy metal contamination in High Arctic systems is of growing concern. Studies have been conducted measuring long-range and large point source pollutants, but little research has been done on small point sources such as municipal waste disposal sites. Many Arctic communities are coastal, and local people consume marine wildlife in which concentrations of heavy metals can accumulate. Waste disposal sites are often located in very close proximity to the coastline, and leaching of these metals could contaminate food sources on a local scale. Cadmium and lead are the metals focussed on by this study, as the Northern Contaminants Program recognizes them as metals of concern. During the summer of 2003 a study was conducted near Resolute, Nunavut, Canada, to determine the extent of cadmium and lead leaching from a local dumpsite to an adjacent wetland. The ultimate fate of these contaminants is approximately 1 km downslope in the ocean. Transects covering an area of 0.3 km2 were established downslope from the point of disposal, and water and soil samples were collected and analyzed for cadmium and lead. Only trace amounts of cadmium and lead were found in the water samples. In the soil samples, low uniform concentrations of cadmium were found that were slightly above background levels, except adjacent to the point of waste input where higher concentrations were found. Lead soil concentrations were higher than cadmium and varied spatially with soil material and moisture. Overall, excessive amounts of cadmium and lead contamination do not appear to be entering the marine ecosystem. However, soil material and moisture should be considered when establishing waste disposal sites in the far north.

  14. The Small Area Health Statistics Unit: a national facility for investigating health around point sources of environmental pollution in the United Kingdom.

    PubMed Central

    Elliott, P; Westlake, A J; Hills, M; Kleinschmidt, I; Rodrigues, L; McGale, P; Marshall, K; Rose, G

    1992-01-01

    STUDY OBJECTIVE--The Small Area Health Statistics Unit (SAHSU) was established at the London School of Hygiene and Tropical Medicine in response to a recommendation of the enquiry into the increased incidence of childhood leukaemia near Sellafield, the nuclear reprocessing plant in West Cumbria. The aim of this paper was to describe the Unit's methods for the investigation of health around point sources of environmental pollution in the United Kingdom. DESIGN--Routine data currently including deaths and cancer registrations are held in a large national database which uses a post code based retrieval system to locate cases geographically and link them to the underlying census enumeration districts, and hence to their populations at risk. Main outcome measures were comparison of observed/expected ratios (based on national rates) within bands delineated by concentric circles around point sources of environmental pollution located anywhere in Britain. MAIN RESULTS--The system is illustrated by a study of mortality from mesothelioma and asbestosis near the Plymouth naval dockyards during 1981-87. Within a 3 km radius of the docks the mortality rate for mesothelioma was higher than the national rate by a factor of 8.4, and that for asbestosis was higher by a factor of 13.6. CONCLUSIONS--SAHSU is a new national facility which is rapidly able to provide rates of mortality and cancer incidence for arbitrary circles drawn around any point in Britain. The example around Plymouth of mesothelioma and asbestosis demonstrates the ability of the system to detect an unusual excess of disease in a small locality, although in this case the findings are likely to be related to occupational rather than environmental exposure. PMID:1431704
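The band-wise observed/expected comparison described in the abstract can be sketched in a few lines. Everything below is illustrative, not the SAHSU implementation (the real system works from postcoded records linked to census enumeration districts); distances from cases and population centroids to the point source are assumed to be precomputed:

```python
def oe_ratios(case_distances_km, district_pop_distances, band_edges_km, national_rate):
    """Observed/expected ratios in concentric distance bands around a
    putative point source. Expected counts come from the population at
    risk in each band multiplied by the national rate."""
    n = len(band_edges_km) - 1

    def band_of(d):
        for i in range(n):
            if band_edges_km[i] <= d < band_edges_km[i + 1]:
                return i
        return None  # outside the outermost circle

    observed = [0] * n
    expected = [0.0] * n
    for d in case_distances_km:
        i = band_of(d)
        if i is not None:
            observed[i] += 1
    for d, population in district_pop_distances:
        i = band_of(d)
        if i is not None:
            expected[i] += population * national_rate
    return [o / e if e > 0 else float("nan") for o, e in zip(observed, expected)]
```

A ratio well above 1 in the innermost band (8.4 for mesothelioma within 3 km in the Plymouth example) is the kind of signal the system is designed to surface.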

  15. Calculating the n-point correlation function with general and efficient python code

    NASA Astrophysics Data System (ADS)

    Genier, Fred; Bellis, Matthew

    2018-01-01

There are multiple approaches to understanding the evolution of large-scale structure in our universe and with it the role of baryonic matter, dark matter, and dark energy at different points in history. One approach is to calculate the n-point correlation function estimator for galaxy distributions, sometimes choosing a particular type of galaxy, such as luminous red galaxies. The standard way to calculate these estimators is with pair counts (for the 2-point correlation function) and with triplet counts (for the 3-point correlation function). These are O(n²) and O(n³) problems, respectively, and with the number of galaxies that will be characterized in future surveys, having efficient and general code will be of increasing importance. Here we show a proof-of-principle approach to the 2-point correlation function that relies on pre-calculating galaxy locations in coarse “voxels”, thereby reducing the total number of necessary calculations. The code is written in python, making it easily accessible and extensible, and is open-sourced to the community. Basic results and performance tests using SDSS/BOSS data will be shown, and we discuss the application of this approach to the 3-point correlation function.
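The voxel trick described above can be sketched as follows: points are first binned into coarse cells, and distances are then computed only between points in cells close enough to possibly contain a pair within the maximum separation. This is an illustrative proof of principle, not the authors' code; a full correlation-function estimator would apply the same pair counting to data and random catalogs (e.g., in a Landy-Szalay estimator).

```python
import numpy as np
from collections import defaultdict

def pair_counts(points, r_max, n_bins, voxel_size):
    """Histogram of pair separations below r_max, using coarse voxels so
    that only points in neighboring cells are ever compared."""
    grid = defaultdict(list)
    for p in points:
        grid[tuple((p // voxel_size).astype(int))].append(p)

    counts = np.zeros(n_bins, dtype=int)
    reach = int(np.ceil(r_max / voxel_size))
    offsets = [(dx, dy, dz)
               for dx in range(-reach, reach + 1)
               for dy in range(-reach, reach + 1)
               for dz in range(-reach, reach + 1)]

    def add_pair(d):
        if d < r_max:
            counts[int(d / r_max * n_bins)] += 1

    for key, cell in grid.items():
        for off in offsets:
            nkey = (key[0] + off[0], key[1] + off[1], key[2] + off[2])
            if nkey not in grid or nkey < key:
                continue  # visit each cell pair exactly once (key <= nkey)
            if nkey == key:
                # pairs within the same voxel
                for i in range(len(cell)):
                    for j in range(i + 1, len(cell)):
                        add_pair(np.linalg.norm(cell[i] - cell[j]))
            else:
                # cross pairs between neighboring voxels
                for p in cell:
                    for q in grid[nkey]:
                        add_pair(np.linalg.norm(p - q))
    return counts
```

With voxel_size comparable to r_max, the cost scales with the number of occupied neighboring-cell pairs rather than with all n(n-1)/2 point pairs.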

  16. Ghost imaging with bucket detection and point detection

    NASA Astrophysics Data System (ADS)

    Zhang, De-Jian; Yin, Rao; Wang, Tong-Biao; Liao, Qing-Hua; Li, Hong-Guo; Liao, Qinghong; Liu, Jiang-Tao

    2018-04-01

We experimentally investigate ghost imaging with bucket detection and point detection, in which three types of illuminating sources are applied: (a) a pseudo-thermal light source; (b) an amplitude-modulated true thermal light source; (c) an amplitude-modulated laser source. Experimental results show that the quality of ghost images reconstructed with true thermal light or a laser beam is insensitive to the use of a bucket or point detector; however, the quality of ghost images reconstructed with pseudo-thermal light in the bucket-detector case is better than that in the point-detector case. Our theoretical analysis shows that this is due to the first-order transverse coherence of the illuminating source.

  17. Estimation of sulphur dioxide emission rate from a power plant based on the remote sensing measurement with an imaging-DOAS instrument

    NASA Astrophysics Data System (ADS)

    Chong, Jihyo; Kim, Young J.; Baek, Jongho; Lee, Hanlim

    2016-10-01

Major anthropogenic sources of sulphur dioxide in the troposphere include point sources such as power plants and combustion-derived industrial sources. Spatially resolved remote sensing of atmospheric trace gases is desirable for better estimation and validation of emission from those sources. It has been reported that the Imaging Differential Optical Absorption Spectroscopy (I-DOAS) technique can provide spatially resolved two-dimensional distribution measurements of atmospheric trace gases. This study presents the results of I-DOAS observations of SO2 from a large power plant. The stack plume from the Taean coal-fired power plant was remotely sensed with an I-DOAS instrument. The slant column density (SCD) of SO2 was derived by data analysis of the absorption spectra of the scattered sunlight measured by an I-DOAS over the power plant stacks. The two-dimensional distribution of SO2 SCD was obtained over the viewing window of the I-DOAS instrument. The measured SCDs were converted to mixing ratios in order to estimate the rate of SO2 emission from each stack. The maximum mixing ratio of SO2 was measured to be 28.1 ppm with a SCD value of 4.15×10¹⁷ molecules/cm². Based on the exit velocity of the plume from the stack, the emission rate of SO2 was estimated to be 22.54 g/s. Remote sensing of SO2 with an I-DOAS instrument can be very useful for independent estimation and validation of the emission rates from major point sources as well as area sources.
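The conversion from measured columns to an emission rate amounts to integrating the SO2 column across a transect perpendicular to the plume axis and multiplying by the transport (exit) velocity. A hedged sketch with illustrative names and a simplified geometry (the paper's actual retrieval converts SCDs to mixing ratios first, which is not reproduced here):

```python
AVOGADRO = 6.022e23       # molecules per mole
MOLAR_MASS_SO2 = 64.066   # grams per mole

def emission_rate_g_per_s(scds_molec_per_cm2, pixel_width_cm, exit_velocity_cm_s):
    """Plume-transect flux estimate: summing (column density x pixel width)
    across the plume gives molecules per cm along the transport direction;
    multiplying by the transport velocity and converting to grams
    yields an emission rate in g/s."""
    molecules_per_cm = sum(scd * pixel_width_cm for scd in scds_molec_per_cm2)
    molecules_per_s = molecules_per_cm * exit_velocity_cm_s
    return molecules_per_s * MOLAR_MASS_SO2 / AVOGADRO
```

The dominant uncertainties in practice are the assumed transport velocity and the air-mass factor used to turn slant columns into vertical columns.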

  18. Conceptual design of a stray light facility for Earth observation satellites

    NASA Astrophysics Data System (ADS)

    Stockman, Y.; Hellin, M. L.; Marcotte, S.; Mazy, E.; Versluys, J.; François, M.; Taccola, M.; Zuccaro Marchi, A.

    2017-11-01

With the advent of TMA and FMA (Three- or Four-Mirror Anastigmat) telescope designs in Earth observation systems, stray light is a major contributor to the degradation of image quality. Numerous sources of stray light can be identified and theoretically evaluated. Nevertheless, in order to build a stray light model of the instrument, the Point Spread Function(s) of the instrument, i.e., the flux response of the instrument to the flux received at the instrument entrance from an infinitely distant point source, needs to be determined. This paper presents a conceptual design of a facility placed in a vacuum chamber to eliminate undesired light scattered by air particles. The specification of the clean room class or vacuum will depend on the required rejection to be measured. Once the vacuum chamber is closed, the stray light level from the external environment can be considered negligible. Inside the chamber, a dedicated baffle design is required to eliminate undesired light generated by the setup itself, e.g., light retro-reflected away from the instrument under test. This implies blackened shrouds all around the specimen. The proposed illumination system is a 400 mm off-axis parabolic mirror with a focal length of 2 m. The off-axis design suppresses the problem of stray light that can be generated by an internal obstruction. A dedicated block source is evaluated in order to avoid any stray light coming from the structure around the source pinhole. Dedicated attention is required in the selection of the source to achieve the required large measurement dynamic range.

  19. Heterogeneity of direct aftershock productivity of the main shock rupture

    NASA Astrophysics Data System (ADS)

    Guo, Yicun; Zhuang, Jiancang; Hirata, Naoshi; Zhou, Shiyong

    2017-07-01

    The epidemic type aftershock sequence (ETAS) model is widely used to describe and analyze the clustering behavior of seismicity. Instead of regarding large earthquakes as point sources, the finite-source ETAS model treats them as ruptures that extend in space. Each earthquake rupture consists of many patches, and each patch triggers its own aftershocks isotropically. We design an iterative algorithm to invert the unobserved fault geometry based on the stochastic reconstruction method. This model is applied to analyze the Japan Meteorological Agency (JMA) catalog during 1964-2014. We take six great earthquakes with magnitudes >7.5 after 1980 as finite sources and reconstruct the aftershock productivity patterns on each rupture surface. Comparing results from the point-source ETAS model, we find the following: (1) the finite-source model improves the data fitting; (2) direct aftershock productivity is heterogeneous on the rupture plane; (3) the triggering abilities of M5.4+ events are enhanced; (4) the background rate is higher in the off-fault region and lower in the on-fault region for the Tohoku earthquake, while high probabilities of direct aftershocks distribute all over the source region in the modified model; (5) the triggering abilities of five main shocks become 2-6 times higher after taking the rupture geometries into consideration; and (6) the trends of the cumulative background rate are similar in both models, indicating the same levels of detection ability for seismicity anomalies. Moreover, correlations between aftershock productivity and slip distributions imply that aftershocks within rupture faults are adjustments to coseismic stress changes due to slip heterogeneity.
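The point-source ETAS model referenced above has a standard temporal conditional intensity; a minimal sketch follows (parameter names follow common usage in the ETAS literature, not necessarily the paper's notation, and the finite-source extension that distributes triggering over rupture patches is not shown):

```python
import math

def etas_intensity(t, past_events, mu, K, alpha, c, p, m0):
    """Temporal ETAS conditional intensity:
    lambda(t) = mu + sum over {t_i < t} of
                K * exp(alpha * (m_i - m0)) * (t - t_i + c)**(-p),
    where mu is the background rate and each past event (t_i, m_i)
    contributes an Omori-Utsu aftershock term scaled by its magnitude."""
    rate = mu
    for t_i, m_i in past_events:
        if t_i < t:
            rate += K * math.exp(alpha * (m_i - m0)) * (t - t_i + c) ** (-p)
    return rate
```

The finite-source variant studied in the paper replaces each point epicenter by many patches on the reconstructed rupture surface, each triggering its own aftershocks isotropically.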

  20. Initial conditions for critical Higgs inflation

    NASA Astrophysics Data System (ADS)

    Salvio, Alberto

    2018-05-01

It has been pointed out that a large non-minimal coupling ξ between the Higgs and the Ricci scalar can source higher-derivative operators, which may change the predictions of Higgs inflation. A variant, called critical Higgs inflation, employs the near-criticality of the top mass to introduce an inflection point in the potential and drastically lower the value of ξ. We here study whether critical Higgs inflation can occur even if the pre-inflationary initial conditions do not satisfy the slow-roll behavior (retaining translation and rotation symmetries). A positive answer is found: inflation turns out to be an attractor and therefore no fine-tuning of the initial conditions is necessary. A very large initial Higgs time-derivative (as compared to the potential energy density) is compensated by a moderate increase in the initial field value. These conclusions are reached by solving the exact Higgs equation without using the slow-roll approximation. This also allows us to consistently treat the inflection point, where the standard slow-roll approximation breaks down. Here we make use of an approach that is independent of the UV completion of gravity, by taking initial conditions that always involve sub-Planckian energies.

  1. STATISTICS OF GAMMA-RAY POINT SOURCES BELOW THE FERMI DETECTION LIMIT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Malyshev, Dmitry; Hogg, David W., E-mail: dm137@nyu.edu

    2011-09-10

An analytic relation between the statistics of photons in pixels and the number counts of multi-photon point sources is used to constrain the distribution of gamma-ray point sources below the Fermi detection limit at energies above 1 GeV and at latitudes below and above 30 deg. The derived source-count distribution is consistent with the distribution found by the Fermi Collaboration based on the first Fermi point-source catalog. In particular, we find that the contribution of resolved and unresolved active galactic nuclei (AGNs) to the total gamma-ray flux is below 20%-25%. In the best-fit model, the AGN-like point-source fraction is 17% ± 2%. Using the fact that the Galactic emission varies across the sky while the extragalactic diffuse emission is isotropic, we put a lower limit of 51% on Galactic diffuse emission and an upper limit of 32% on the contribution from extragalactic weak sources, such as star-forming galaxies. Possible systematic uncertainties are discussed.

  2. MODELING PHOTOCHEMISTRY AND AEROSOL FORMATION IN POINT SOURCE PLUMES WITH THE CMAQ PLUME-IN-GRID

    EPA Science Inventory

    Emissions of nitrogen oxides and sulfur oxides from the tall stacks of major point sources are important precursors of a variety of photochemical oxidants and secondary aerosol species. Plumes released from point sources exhibit rather limited dimensions and their growth is gradu...

  3. X-ray Point Source Populations in Spiral and Elliptical Galaxies

    NASA Astrophysics Data System (ADS)

    Colbert, E.; Heckman, T.; Weaver, K.; Ptak, A.; Strickland, D.

    2001-12-01

In the era of the Einstein and ASCA satellites, it was known that the total hard X-ray luminosity from non-AGN galaxies was fairly well correlated with the total blue luminosity. However, the origin of this hard component was not well understood. Some possibilities that were considered included X-ray binaries, extended upscattered far-infrared light via the inverse-Compton process, extended hot 10⁷ K gas (especially in elliptical galaxies), or even an active nucleus. Now, for the first time, we know from Chandra images that a significant amount of the total hard X-ray emission comes from individual X-ray point sources. We present here spatial and spectral analyses of Chandra data for X-ray point sources in a sample of ~40 galaxies, including both spiral galaxies (starbursts and non-starbursts) and elliptical galaxies. We shall discuss the relationship between the X-ray point source population and the properties of the host galaxies. We show that the slopes of the point-source X-ray luminosity functions are different for different host galaxy types and discuss possible reasons why. We also present detailed X-ray spectral analyses of several of the most luminous X-ray point sources (i.e., IXOs, a.k.a. ULXs), and discuss various scenarios for the origin of the X-ray point sources.

  4. Preliminary report on the Black Thunder, Wyoming CTBT R and D experiment quicklook report: LLNL input from regional stations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Harben, P.E.; Glenn, L.A.

This report presents a preliminary summary of the data recorded at three regional seismic stations from surface blasting at the Black Thunder Coal Mine in northeast Wyoming. The regional stations are part of a larger effort that includes many more seismic stations in the immediate vicinity of the mine. The overall purpose of this effort is to characterize the source function and propagation characteristics of large typical surface mine blasts. A detailed study of source and propagation features of conventional surface blasts is a prerequisite to attempts at discriminating this type of blasting activity from other sources of seismic events. The Black Thunder Seismic experiment is a joint verification effort to determine seismic source and path effects that result from very large, but routine, ripple-fired surface mining blasts. Studies of the data collected will be for the purpose of understanding how the near-field and regional seismic waveforms from these surface mining blasts are similar to, and different from, point-shot explosions and explosions at greater depth. The Black Hills Station is a Designated Seismic Station that was constructed for temporary occupancy by Former Soviet Union seismic verification scientists in accordance with the Threshold Test Ban Treaty protocol.

  5. A stress ecology framework for comprehensive risk assessment of diffuse pollution.

    PubMed

    van Straalen, Nico M; van Gestel, Cornelis A M

    2008-12-01

    Environmental pollution is traditionally classified as either localized or diffuse. Local pollution comes from a point source that emits a well-defined cocktail of chemicals, distributed in the environment in the form of a gradient around the source. Diffuse pollution comes from many sources, small and large, that cause an erratic distribution of chemicals, interacting with those from other sources into a complex mixture of low to moderate concentrations over a large area. There is no good method for ecological risk assessment of such types of pollution. We argue that effects of diffuse contamination in the field must be analysed in the wider framework of stress ecology. A multivariate approach can be applied to filter effects of contaminants from the many interacting factors at the ecosystem level. Four case studies are discussed (1) functional and structural properties of terrestrial model ecosystems, (2) physiological profiles of microbial communities, (3) detritivores in reedfield litter, and (4) benthic invertebrates in canal sediment. In each of these cases the data were analysed by multivariate statistics and associations between ecological variables and the levels of contamination were established. We argue that the stress ecology framework is an appropriate assessment instrument for discriminating effects of pollution from other anthropogenic disturbances and naturally varying factors.

  6. A search for energy-dependence of the Kes 73/1E 1841-045 morphology in GeV

    NASA Astrophysics Data System (ADS)

    Yeung, P. K. H.

    2017-10-01

    While the Kes 73/1E 1841-045 system has been confirmed as an extended GeV source, whether its morphology depends on photon energy deserves further investigation. Using data collected by the Fermi Large Area Telescope (LAT), we look into the extensions of this source in three energy bands individually: 0.3-1 GeV, 1-3 GeV and 3-200 GeV. We find that the 0.3-1 GeV morphology is point-like and is quite different from those in the other two bands, although we cannot robustly reject a unified morphology for the whole LAT band.

  7. Determination of efficiency of an aged HPGe detector for gaseous sources by self absorption correction and point source methods

    NASA Astrophysics Data System (ADS)

    Sarangapani, R.; Jose, M. T.; Srinivasan, T. K.; Venkatraman, B.

    2017-07-01

    Methods for the determination of the efficiency of an aged high purity germanium (HPGe) detector for gaseous sources are presented in this paper. X-ray radiography of the detector has been performed to get detector dimensions for computational purposes. The dead layer thickness of the HPGe detector has been ascertained from experiments and Monte Carlo computations. Experimental work with standard point and liquid sources in several cylindrical geometries has been undertaken for obtaining energy-dependent efficiency. Monte Carlo simulations have been performed for computing efficiencies for point, liquid and gaseous sources. Self-absorption correction factors have been obtained using mathematical equations for volume sources and MCNP simulations. Self-absorption correction and point source methods have been used to estimate the efficiency for gaseous sources. The efficiencies determined from the present work have been used to estimate the activity of a cover gas sample of a fast reactor.

  8. The Infrared Properties of Sources Matched in the WISE All-Sky and Herschel ATLAS Surveys

    NASA Technical Reports Server (NTRS)

    Bond, Nicholas A.; Benford, Dominic J.; Gardner, Jonathan P.; Amblard, Alexandre; Fleuren, Simone; Blain, Andrew W.; Dunne, Loretta; Smith, Daniel J. B.; Maddox, Steve J.; Hoyos, Carlos; et al.

    2012-01-01

    We describe the infrared properties of sources detected over approx 36 sq deg of sky in the GAMA 15-hr equatorial field, using data from both the Herschel Astrophysical Terahertz Large-Area Survey (H-ATLAS) and the Wide-field Infrared Survey Explorer (WISE). With 5sigma point-source depths of 34 and 0.048 mJy at 250 micron and 3.4 micron, respectively, we are able to identify 50.6% of the H-ATLAS sources in the WISE survey, corresponding to a surface density of approx 630 deg(exp -2). Approximately two-thirds of these sources have measured spectroscopic or optical/near-IR photometric redshifts of z < 1. For sources with spectroscopic redshifts at z < 0.3, we find a linear correlation between the infrared luminosity at 3.4 micron and that at 250 micron, with +/- 50% scatter over approx 1.5 orders of magnitude in luminosity, approx 10(exp 9) - 10(exp 10.5) solar luminosities. By contrast, the matched sources without previously measured redshifts (r approx > 20.5) have 250-350 micron flux density ratios that suggest either high-redshift galaxies (z approx > 1.5) or optically faint low-redshift galaxies with unusually low temperatures (T approx < 20 K). Their small 3.4-250 micron flux ratios favor a high-redshift galaxy population, as only the most actively star-forming galaxies at low redshift (e.g., Arp 220) exhibit comparable flux density ratios. Furthermore, we find a relatively large AGN fraction (approx 30%) in a 12 micron flux-limited subsample of H-ATLAS sources, also consistent with there being a significant population of high-redshift sources in the no-redshift sample.

  9. The Infrared Properties of Sources Matched in the WISE All-Sky and Herschel Atlas Surveys

    NASA Technical Reports Server (NTRS)

    Bond, Nicholas A.; Benford, Dominic J.; Gardner, Jonathan P.; Eisenhardt, Peter; Amblard, Alexandre; Temi, Pasquale; Fleuren, Simone; Blain, Andrew W.; Dunne, Loretta; Smith, Daniel J.; et al.

    2012-01-01

    We describe the infrared properties of sources detected over approx. 36 deg2 of sky in the GAMA 15-hr equatorial field, using data from both the Herschel Astrophysical Terahertz Large-Area Survey (H-ATLAS) and the Wide-field Infrared Survey Explorer (WISE). With 5(sigma) point-source depths of 34 and 0.048 mJy at 250 microns and 3.4 microns, respectively, we are able to identify 50.6% of the H-ATLAS sources in the WISE survey, corresponding to a surface density of approx. 630 deg-2. Approximately two-thirds of these sources have measured spectroscopic or optical/near-IR photometric redshifts of z < 1. For sources with spectroscopic redshifts at z < 0.3, we find a linear correlation between the infrared luminosity at 3.4 microns and that at 250 microns, with +/-50% scatter over approx. 1.5 orders of magnitude in luminosity, approx. 10(exp 9) - 10(exp 10.5) solar luminosities. By contrast, the matched sources without previously measured redshifts (r > or approx. 20.5) have 250-350 microns flux density ratios that suggest either high-redshift galaxies (z > or approx. 1.5) or optically faint low-redshift galaxies with unusually low temperatures (T < or approx. 20 K). Their small 3.4-250 microns flux ratios favor a high-redshift galaxy population, as only the most actively star-forming galaxies at low redshift (e.g., Arp 220) exhibit comparable flux density ratios. Furthermore, we find a relatively large AGN fraction (approx. 30%) in a 12 microns flux-limited subsample of H-ATLAS sources, also consistent with there being a significant population of high-redshift sources in the no-redshift sample.

  10. Discrimination between diffuse and point sources of arsenic at Zimapán, Hidalgo state, Mexico.

    PubMed

    Sracek, Ondra; Armienta, María Aurora; Rodríguez, Ramiro; Villaseñor, Guadalupe

    2010-01-01

    There are two principal sources of arsenic in Zimapán. Point sources are linked to mining and smelting activities and especially to mine tailings. Diffuse sources are not well defined and are linked to regional flow systems in carbonate rocks. Both source types are caused by the oxidation of arsenic-rich sulfidic mineralization. Point sources are characterized by a Ca-SO(4)-HCO(3) ground water type and relatively enriched values of deltaD, delta(18)O, and delta(34)S(SO(4)). Diffuse sources are characterized by a Ca-Na-HCO(3) type of ground water and more depleted values of deltaD, delta(18)O, and delta(34)S(SO(4)). Values of deltaD and delta(18)O indicate a similar altitude of recharge for both arsenic sources and a stronger impact of evaporation for point sources in mine tailings. There are also different values of delta(34)S(SO(4)) for the two sources, presumably due to different types of mineralization or isotopic zonality in deposits. In Principal Component Analysis (PCA), the principal component 1 (PC1), which describes the impact of sulfide oxidation and neutralization by the dissolution of carbonates, has higher values in samples from point sources. In spite of similar concentrations of As in ground water affected by diffuse sources and point sources (mean values 0.21 mg L(-1) and 0.31 mg L(-1), respectively, in the years from 2003 to 2008), the diffuse sources have more impact on the health of the population in Zimapán. This is caused by the extraction of ground water from wells tapping the regional flow system. In contrast, wells located in the proximity of mine tailings are generally not used for water supply.

  11. A BRIGHT SUBMILLIMETER SOURCE IN THE BULLET CLUSTER (1E0657-56) FIELD DETECTED WITH BLAST

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rex, Marie; Devlin, Mark J.; Dicker, Simon R.

    2009-09-20

    We present the 250, 350, and 500 {mu}m detection of bright submillimeter emission in the direction of the Bullet Cluster measured by the Balloon-borne Large-Aperture Submillimeter Telescope (BLAST). The 500 {mu}m centroid is coincident with an AzTEC 1.1 mm point-source detection at a position close to the peak lensing magnification produced by the cluster. However, the 250 {mu}m and 350 {mu}m centroids are elongated and shifted toward the south with a differential shift between bands that cannot be explained by pointing uncertainties. We therefore conclude that the BLAST detection is likely contaminated by emission from foreground galaxies associated with the Bullet Cluster. The submillimeter redshift estimate based on 250-1100 {mu}m photometry at the position of the AzTEC source is z{sub phot} = 2.9{sup +0.6}{sub -0.3}, consistent with the infrared color redshift estimation of the most likely Infrared Array Camera counterpart. These flux densities indicate an apparent far-infrared (FIR) luminosity of L{sub FIR} = 2 x 10{sup 13} L {sub sun}. When the amplification due to the gravitational lensing of the cluster is removed, the intrinsic FIR luminosity of the source is found to be L{sub FIR} <= 10{sup 12} L{sub sun}, consistent with typical luminous infrared galaxies.

  12. Appraisal of an Array TEM Method in Detecting a Mined-Out Area Beneath a Conductive Layer

    NASA Astrophysics Data System (ADS)

    Li, Hai; Xue, Guo-qiang; Zhou, Nan-nan; Chen, Wei-ying

    2015-10-01

    The transient electromagnetic (TEM) method has been used extensively for the detection of mined-out areas in China over the past few years. In cases where the mined-out area is overlain by a conductive layer, detecting the target layer is difficult with a traditional loop-source TEM method. To detect the target layer in this condition, this paper presents a newly developed array TEM method that uses a grounded wire source. The underground current density distribution and the responses of the grounded-wire-source TEM configuration are modeled to demonstrate that the target layer is detectable in this condition. A 1D OCCAM inversion routine is applied to the synthetic single-station data and to common-midpoint gathers. The result reveals that the electric-source TEM method is capable of recovering a resistive target layer beneath the conductive overburden. By contrast, a conductive target layer cannot be recovered unless the distance between the target layer and the conductive overburden is large. Compared with the inversion of the single-station data, the inversion of common-midpoint gathers better recovers the resistivity of the target layer. Finally, a case study illustrates that the array TEM method was successfully applied to recover a water-filled mined-out area beneath a conductive overburden.

  13. Agriculture is a major source of NOx pollution in California.

    PubMed

    Almaraz, Maya; Bai, Edith; Wang, Chao; Trousdell, Justin; Conley, Stephen; Faloona, Ian; Houlton, Benjamin Z

    2018-01-01

    Nitrogen oxides (NOx = NO + NO2) are a primary component of air pollution, a leading cause of premature death in humans and biodiversity declines worldwide. Although regulatory policies in California have successfully limited transportation sources of NOx pollution, several of the United States' worst air-quality districts remain in rural regions of the state. Site-based findings suggest that NOx emissions from California's agricultural soils could contribute to air quality issues; however, a statewide estimate is hitherto lacking. We show that agricultural soils are a dominant source of NOx pollution in California, with especially high soil NOx emissions from the state's Central Valley region. We base our conclusion on two independent approaches: (i) a bottom-up spatial model of soil NOx emissions and (ii) top-down airborne observations of atmospheric NOx concentrations over the San Joaquin Valley. These approaches point to a large, overlooked NOx source from cropland soil, which is estimated to increase the NOx budget by 20 to 51%. These estimates are consistent with previous studies of point-scale measurements of NOx emissions from the soil. Our results highlight opportunities to limit NOx emissions from agriculture by investing in management practices that will bring co-benefits to the economy, ecosystems, and human health in rural areas of California.

  14. Interplanetary Scintillation studies with the Murchison Wide-field Array III: Comparison of source counts and densities for radio sources and their sub-arcsecond components at 162 MHz

    NASA Astrophysics Data System (ADS)

    Chhetri, R.; Ekers, R. D.; Morgan, J.; Macquart, J.-P.; Franzen, T. M. O.

    2018-06-01

    We use Murchison Widefield Array observations of interplanetary scintillation (IPS) to determine the source counts of point (<0.3 arcsecond extent) sources and of all sources with some subarcsecond structure, at 162 MHz. We have developed the methodology to derive these counts directly from the IPS observables, while taking into account changes in sensitivity across the survey area. The counts of sources with compact structure follow the behaviour of the dominant source population above ˜3 Jy, but below this they show Euclidean behaviour. We compare our counts to those predicted by simulations and find good agreement for our counts of sources with compact structure, but significant disagreement for point source counts. Using low radio frequency SEDs from the GLEAM survey, we classify point sources as compact steep-spectrum (CSS), flat spectrum, or peaked. If we consider the CSS sources to be the more evolved counterparts of the peaked sources, the two categories combined comprise approximately 80% of the point source population. We calculate densities of potential calibrators brighter than 0.4 Jy at low frequencies and find 0.2 sources per square degree for point sources, rising to 0.7 sources per square degree if sources with more complex arcsecond structure are included. We extrapolate to estimate 4.6 sources per square degree at 0.04 Jy. We find that a peaked spectrum is an excellent predictor of compactness at low frequencies, increasing the number of good calibrators by a factor of three compared to the usual flat-spectrum criterion.

  15. Mercury Sources and Fate in the Gulf of Maine

    PubMed Central

    Sunderland, Elsie M.; Amirbahman, Aria; Burgess, Neil M.; Dalziel, John; Harding, Gareth; Jones, Stephen H.; Kamai, Elizabeth; Karagas, Margaret R.; Shi, Xun; Chen, Celia Y.

    2012-01-01

    Most human exposure to mercury (Hg) in the United States is from consuming marine fish and shellfish. The Gulf of Maine is a complex marine ecosystem comprised of twelve physioregions, including the Bay of Fundy, coastal shelf areas and deeper basins that contain highly productive fishing grounds. Here we review available data on spatial and temporal Hg trends to better understand the drivers of human and biological exposures. Atmospheric Hg deposition from U.S. and Canadian sources has declined since the mid-1990s in concert with emissions reductions but deposition from global sources has increased. Oceanographic circulation is the dominant source of total Hg inputs to the entire Gulf of Maine region (59%), followed by atmospheric deposition (28%), wastewater/industrial sources (8%), and rivers (5%). Resuspension of sediments increases MeHg inputs to overlying waters raising concerns about benthic trawling activities in shelf regions. In the near coastal areas, elevated sediment and mussel Hg levels are co-located in urban embayments and near large historical point sources. Temporal patterns in sentinel species (mussels and birds) have in some cases declined in response to localized point source mercury reductions but overall Hg trends do not show consistent declines. For example, levels of Hg have either declined or remained stable in eggs from four seabird species collected in the Bay of Fundy since 1972. Quantitatively linking Hg exposures from fish harvested from the Gulf of Maine to human health risks is challenging at this time because no data are available on the geographic origin of seafood consumed by coastal residents. In addition, there is virtually no information on Hg levels in commercial species for offshore regions of the Gulf of Maine where some of the most productive fisheries are located. Both of these data gaps should be priorities for future research. PMID:22572623

  16. Source apportionment of nitrogen and phosphorus from non-point source pollution in Nansi Lake Basin, China.

    PubMed

    Zhang, Bao-Lei; Cui, Bo-Hao; Zhang, Shu-Min; Wu, Quan-Yuan; Yao, Lei

    2018-05-03

    Nitrogen (N) and phosphorus (P) from non-point source (NPS) pollution in the Nansi Lake Basin greatly influence the water quality of Nansi Lake, which is the determinant factor for the success of the East Route of the South-North Water Transfer Project in China. This research improved the Johnes export coefficient model (ECM) by developing a method to determine the export coefficients of different land use types based on hydrological and water quality data. Taking NPS total nitrogen (TN) and total phosphorus (TP) as the study objects, this study estimated the contributions of different pollution sources and analyzed their spatial distributions based on the improved ECM. The results underlined that the method for obtaining output coefficients of land use types from hydrology and water quality data is feasible and accurate, and is suitable for the study of NPS pollution in large-scale basins. The average output structure of NPS TN from land use, rural breeding, and rural life is 33.6, 25.9, and 40.5%, and that of NPS TP is 31.6, 43.7, and 24.7%, respectively. In particular, dry land was the main land-use source for both NPS TN and TP pollution, contributing 81.3 and 81.8%, respectively. The counties of Zaozhuang, Tengzhou, Caoxian, Yuncheng, and Shanxian had higher contribution rates, and the counties of Dingtao, Juancheng, and Caoxian had higher load intensities for both NPS TN and TP pollution. The results of this study allowed for an improvement in the understanding of pollution source contributions and enabled researchers and planners to focus on the most important sources and regions of NPS pollution.
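
    The load calculation in a Johnes-style ECM can be sketched as follows; this is a minimal illustration, and the land-use categories, areas, and export coefficients below are hypothetical placeholders, not the values derived in the study.

```python
# Minimal sketch of a Johnes-style export coefficient model (ECM) for
# non-point source (NPS) load estimation. The land-use categories and
# export coefficients below are hypothetical, not the study's values.

def nps_load(areas_ha, coeff_kg_per_ha_yr):
    """Total NPS load (kg/yr): L = sum_i E_i * A_i over land-use types."""
    return sum(coeff_kg_per_ha_yr[lu] * areas_ha[lu] for lu in areas_ha)

# Hypothetical sub-basin: areas (ha) and TN export coefficients (kg/ha/yr)
areas = {"dry_land": 12000.0, "paddy": 3000.0, "forest": 5000.0}
tn_coeff = {"dry_land": 2.9, "paddy": 1.2, "forest": 0.24}

total_tn_kg = nps_load(areas, tn_coeff)  # dry land dominates the total
```

    In the improved ECM described above, the coefficients E_i would be fitted so that the summed loads reproduce observed hydrological and water-quality data, rather than being taken from literature tables.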

  17. A prototype of the procedure of strong ground motion prediction for intraslab earthquake based on characterized source model

    NASA Astrophysics Data System (ADS)

    Iwata, T.; Asano, K.; Sekiguchi, H.

    2011-12-01

    We propose a prototype of the procedure to construct source models for strong motion prediction during intraslab earthquakes based on the characterized source model (Irikura and Miyake, 2011). The key is the characterized source model, which is based on the empirical scaling relationships for intraslab earthquakes and involves the correspondence between the SMGA (strong motion generation area; Miyake et al., 2003) and the asperity (large slip area). Iwata and Asano (2011) obtained the following empirical relationships of the rupture area (S) and the total asperity area (Sa) to the seismic moment (Mo), assuming a 2/3-power dependence of S and Sa on Mo: S (km**2) = 6.57×10**(-11)×Mo**(2/3) (Mo in N m) (1); Sa (km**2) = 1.04×10**(-11)×Mo**(2/3) (2). Iwata and Asano (2011) also pointed out that the position and size of the SMGA approximately correspond to the asperity area for several intraslab events. Based on these empirical relationships, we give a procedure for constructing source models of intraslab earthquakes for strong motion prediction. [1] Give the seismic moment, Mo. [2] Obtain the total rupture area and the total asperity area from the empirical scaling relationships between S, Sa, and Mo given by Iwata and Asano (2011). [3] Assume a square rupture area and square asperities. [4] Assume the source mechanism to be the same as that of small events in the source region. [5] Prepare plural scenarios covering a variety of asperity numbers and rupture starting points. We apply this procedure by simulating strong ground motions for several observed events to confirm the methodology.
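
    Steps [1]-[2] of the procedure can be sketched numerically; this is a minimal illustration using the two scaling relations quoted in the abstract, with a hypothetical Mw 7.0 event as input.

```python
# Sketch of steps [1]-[2]: from seismic moment Mo to total rupture area S
# and total asperity area Sa via the Iwata & Asano (2011) relations
# quoted above (Mo in N m, areas in km^2).

def rupture_and_asperity_area(mo_nm):
    """Return (S, Sa) in km^2 for a given seismic moment in N m."""
    s = 6.57e-11 * mo_nm ** (2.0 / 3.0)
    sa = 1.04e-11 * mo_nm ** (2.0 / 3.0)
    return s, sa

# Hypothetical Mw 7.0 intraslab event: Mo = 10**(1.5*Mw + 9.1) N m
mo = 10.0 ** (1.5 * 7.0 + 9.1)
s_km2, sa_km2 = rupture_and_asperity_area(mo)
# By construction Sa/S = 1.04/6.57, i.e. asperities cover ~16% of the rupture
```

    The fixed Sa/S ratio is a direct consequence of both relations sharing the same 2/3-power dependence on Mo.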

  18. Low-mass X-ray binaries and gamma-ray bursts

    NASA Technical Reports Server (NTRS)

    Lasota, J. P.; Frank, J.; King, A. R.

    1992-01-01

    More than twenty years after their discovery, the nature of gamma-ray burst sources (GRBs) remains mysterious. The results from the BATSE experiment aboard the Compton Observatory show, however, that most of the sources of gamma-ray bursts cannot be distributed in the galactic disc. The possibility that a small fraction of the sites of gamma-ray bursts is of galactic disc origin cannot, however, be excluded. We point out that large numbers of neutron-star binaries with orbital periods of 10 hr and M dwarf companions of mass 0.2-0.3 solar mass are a natural result of the evolution of low-mass X-ray binaries (LMXBs). The numbers and physical properties of these systems suggest that some gamma-ray burst sources may be identified with this endpoint of LMXB evolution. We suggest an observational test of this hypothesis.

  19. Direct Measurement of Wave Kernels in Time-Distance Helioseismology

    NASA Technical Reports Server (NTRS)

    Duvall, T. L., Jr.

    2006-01-01

    Solar f-mode waves are surface-gravity waves which propagate horizontally in a thin layer near the photosphere with a dispersion relation approximately that of deep water waves. At the power maximum near 3 mHz, the wavelength of 5 Mm is large enough for various wave scattering properties to be observable. Gizon and Birch (2002, ApJ, 571, 966) have calculated kernels, in the Born approximation, for the sensitivity of wave travel times to local changes in damping rate and source strength. In this work, using isolated small magnetic features as approximate point-source scatterers, such a kernel has been measured. The observed kernel contains features similar to a theoretical damping kernel but not to a source kernel. A full understanding of the effect of small magnetic features on the waves will require more detailed modeling.

  20. Turbulent Statistics From Time-Resolved PIV Measurements of a Jet Using Empirical Mode Decomposition

    NASA Technical Reports Server (NTRS)

    Dahl, Milo D.

    2013-01-01

    Empirical mode decomposition is an adaptive signal processing method that when applied to a broadband signal, such as that generated by turbulence, acts as a set of band-pass filters. This process was applied to data from time-resolved, particle image velocimetry measurements of subsonic jets prior to computing the second-order, two-point, space-time correlations from which turbulent phase velocities and length and time scales could be determined. The application of this method to large sets of simultaneous time histories is new. In this initial study, the results are relevant to acoustic analogy source models for jet noise prediction. The high frequency portion of the results could provide the turbulent values for subgrid scale models for noise that is missed in large-eddy simulations. The results are also used to infer that the cross-correlations between different components of the decomposed signals at two points in space, neglected in this initial study, are important.

  1. Turbulent Statistics from Time-Resolved PIV Measurements of a Jet Using Empirical Mode Decomposition

    NASA Technical Reports Server (NTRS)

    Dahl, Milo D.

    2012-01-01

    Empirical mode decomposition is an adaptive signal processing method that when applied to a broadband signal, such as that generated by turbulence, acts as a set of band-pass filters. This process was applied to data from time-resolved, particle image velocimetry measurements of subsonic jets prior to computing the second-order, two-point, space-time correlations from which turbulent phase velocities and length and time scales could be determined. The application of this method to large sets of simultaneous time histories is new. In this initial study, the results are relevant to acoustic analogy source models for jet noise prediction. The high frequency portion of the results could provide the turbulent values for subgrid scale models for noise that is missed in large-eddy simulations. The results are also used to infer that the cross-correlations between different components of the decomposed signals at two points in space, neglected in this initial study, are important.

  2. Sources of Wind Variability at a Single Station in Complex Terrain During Tropical Cyclone Passage

    DTIC Science & Technology

    2013-12-01

    Mesoscale Prediction System; CPA Closest point of approach; ET Extratropical transition; FNMOC Fleet Numerical Meteorology and Oceanography Center...forecasts. However, the TC forecast tracks and warnings they issue necessarily focus on the large-scale structure of the storm, and are not...winds at one station. Also, this technique is a storm-centered forecast and even if the grid spacing is on the order of one kilometer, it is unlikely

  3. Active Optical Devices and Applications. Volume 228

    DTIC Science & Technology

    1980-04-01

    Research Center, Minneapolis, Minnesota 55413. Abstract: In this paper a control engineer's point of view of the Large Space Structure (LSS) problem is... [Figure 2 caption: A collage of images of X-ray sources obtained with the HEAO, including the Cassiopeia supernova remnant, galaxies in the Virgo cluster, quasar 3C273, and the Crab pulsar.] ...Telescope. Yet ST will not be able to study variable stars (primary distance indicators) to the Virgo cluster of galaxies and beyond. This cluster is

  4. The Joint Milli-Arcsecond Pathfinder Survey (JMAPS): Mission Overview and Attitude Sensing Applications

    DTIC Science & Technology

    2009-01-01

    employs a set of reference targets such as asteroids that are relatively numerous, more or less uniformly distributed around the Sun, and relatively...point source-like. Just such a population exists—90 km-class asteroids. There are about 100 of these objects with relatively well-known orbits...These are main belt objects that are approximately evenly distributed around the Sun. They are large enough to be quasi-spherical in nature, and as a

  5. Influence of rainfall data scarcity on non-point source pollution prediction: Implications for physically based models

    NASA Astrophysics Data System (ADS)

    Chen, Lei; Xu, Jiajia; Wang, Guobo; Liu, Hongbin; Zhai, Limei; Li, Shuang; Sun, Cheng; Shen, Zhenyao

    2018-07-01

    Hydrological and non-point source pollution (H/NPS) predictions in ungauged basins have become a key problem for watershed studies, especially for large-scale catchments. However, few studies have explored the comprehensive impacts of rainfall data scarcity on H/NPS predictions. This study focused on: 1) the effects of rainfall spatial scarcity (removing 11%-67% of stations based on their locations) on the H/NPS results; 2) the impacts of rainfall temporal scarcity (10%-60% data scarcity in the time series); and 3) the development of a new evaluation method that incorporates information entropy. A case study was undertaken using the Soil and Water Assessment Tool (SWAT) in a typical watershed in China. The results of this study highlighted the importance of critical-site rainfall stations, which often showed greater influence and cross-tributary impacts on the H/NPS simulations. Higher missing rates above a certain threshold, as well as missing data during wet periods, resulted in poorer simulation results. Compared to traditional indicators, information entropy could serve as a good substitute because it reflects the distribution of spatial variability and the development of temporal heterogeneity. This paper reports important implications for the application of distributed and semi-distributed hydrological models, as well as for the optimal design of rain gauges in large basins.
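
    As an illustration of an entropy-based evaluation indicator, the following is a minimal Shannon-entropy sketch over a binned series; the binning scheme and the toy series are illustrative assumptions, not the study's actual formulation.

```python
# Minimal sketch of an information-entropy indicator for a rainfall
# record, in the spirit of the evaluation method described above. The
# equal-width binning and the series below are illustrative assumptions.
import math
from collections import Counter

def shannon_entropy(values, bins):
    """Shannon entropy (bits) of `values` discretized into equal-width bins."""
    lo, hi = min(values), max(values)
    width = (hi - lo) / bins or 1.0  # guard against a constant record
    counts = Counter(min(int((v - lo) / width), bins - 1) for v in values)
    n = len(values)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# A record spread evenly over its range carries more information
# (higher entropy) than one dominated by a single bin:
flat = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0]
spiky = [0.0] * 7 + [7.0]
h_flat = shannon_entropy(flat, 4)
```

    A data-scarcity scenario that lowers the entropy of the remaining record would, under this indicator, be flagged as losing more spatial or temporal heterogeneity.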

  6. Large Dataset of Acute Oral Toxicity Data Created for Testing ...

    EPA Pesticide Factsheets

    Acute toxicity data is a common requirement for substance registration in the US. Currently only data derived from animal tests are accepted by regulatory agencies, and the standard in vivo tests use lethality as the endpoint. Non-animal alternatives such as in silico models are being developed due to animal welfare and resource considerations. We compiled a large dataset of oral rat LD50 values to assess the predictive performance of currently available in silico models. Our dataset combines LD50 values from five different sources: literature data provided by The Dow Chemical Company, REACH data from eChemPortal, HSDB (Hazardous Substances Data Bank), RTECS data from Leadscope, and the training set underpinning TEST (Toxicity Estimation Software Tool). Combined, these data sources yield 33848 chemical-LD50 pairs (data points), with 23475 unique data points covering 16439 compounds. The entire dataset was loaded into a chemical properties database. All of the compounds were registered in DSSTox and 59.5% have publicly available structures. Compounds without a structure in DSSTox are currently having their structures registered. The structural data will be used to evaluate the predictive performance and applicable chemical domains of three QSAR models (TIMES, PROTOX, and TEST). Future work will combine the dataset with information from ToxCast assays and, using random forest modeling, assess whether ToxCast assays are useful in predicting acute oral toxicity.
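
    The compilation step described above amounts to merging per-source lists of chemical-LD50 pairs and counting unique data points and compounds; a minimal sketch follows, with illustrative records that are not drawn from the actual sources.

```python
# Sketch of the dataset-compilation step: merge chemical-LD50 pairs
# from several sources, then count unique (chemical, LD50) data points
# and unique compounds. The records below are illustrative only.

def combine_sources(*sources):
    """Flatten per-source lists of (chemical_id, ld50_mg_per_kg) pairs."""
    return [pair for src in sources for pair in src]

# Hypothetical per-source records (identifiers and values are made up)
dow = [("CASRN-50-00-0", 100.0), ("CASRN-64-17-5", 7060.0)]
reach = [("CASRN-50-00-0", 100.0), ("CASRN-67-56-1", 5628.0)]
hsdb = [("CASRN-64-17-5", 10600.0)]

all_pairs = combine_sources(dow, reach, hsdb)
unique_pairs = set(all_pairs)                # deduplicated data points
compounds = {chem for chem, _ in all_pairs}  # unique compounds
```

    The same chemical can legitimately appear with different LD50 values from different sources, which is why the deduplication key is the (chemical, value) pair rather than the chemical alone.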

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aartsen, M. G.; Abraham, K.; Ackermann, M.

    Observation of a point source of astrophysical neutrinos would be a “smoking gun” signature of a cosmic-ray accelerator. While IceCube has recently discovered a diffuse flux of astrophysical neutrinos, no localized point source has been observed. Previous IceCube searches for point sources in the southern sky were restricted by either an energy threshold above a few hundred TeV or poor neutrino angular resolution. Here we present a search for southern sky point sources with greatly improved sensitivities to neutrinos with energies below 100 TeV. By selecting charged-current ν{sub μ} events interacting inside the detector, we reduce the atmospheric background while retaining efficiency for astrophysical neutrino-induced events reconstructed with sub-degree angular resolution. The new event sample covers three years of detector data and leads to a factor of 10 improvement in sensitivity to point sources emitting below 100 TeV in the southern sky. No statistically significant evidence of point sources was found, and upper limits are set on neutrino emission from individual sources. A posteriori analysis of the highest-energy (∼100 TeV) starting event in the sample found that this event alone represents a 2.8σ deviation from the hypothesis that the data consists only of atmospheric background.

  8. Microbial source tracking: a tool for identifying sources of microbial contamination in the food chain.

    PubMed

    Fu, Ling-Lin; Li, Jian-Rong

    2014-01-01

    The ability to trace fecal indicators and food-borne pathogens to their point of origin has major ramifications for the food industry, food regulatory agencies, and public health. Such information would enable food producers and processors to better understand sources of contamination and thereby take corrective actions to prevent transmission. Microbial source tracking (MST), which is currently focused largely on determining sources of fecal contamination in waterways, also provides the scientific community with tools for tracking both fecal bacteria and food-borne pathogen contamination in the food chain. Approaches to MST are commonly classified as library-dependent methods (LDMs) or library-independent methods (LIMs). These tools will have widespread applications, including use for regulatory compliance, pollution remediation, and risk assessment, and will reduce the incidence of illness associated with food and water. Our aim in this review is to highlight the use of molecular MST methods as applied to understanding the source and transmission of food-borne pathogens. Moreover, future directions of MST research are also discussed.

  9. The Third EGRET Catalog of High-Energy Gamma-Ray Sources

    NASA Technical Reports Server (NTRS)

    Hartman, R. C.; Bertsch, D. L.; Bloom, S. D.; Chen, A. W.; Deines-Jones, P.; Esposito, J. A.; Fichtel, C. E.; Friedlander, D. P.; Hunter, S. D.; McDonald, L. M.; hide

    1998-01-01

    The third catalog of high-energy gamma-ray sources detected by the EGRET telescope on the Compton Gamma Ray Observatory includes data from 1991 April 22 to 1995 October 3 (Cycles 1, 2, 3, and 4 of the mission). In addition to including more data than the second EGRET catalog and its supplement, this catalog uses completely reprocessed data (to correct a number of mostly minimal errors and problems). The 271 sources (E greater than 100 MeV) in the catalog include the single 1991 solar flare bright enough to be detected as a source, the Large Magellanic Cloud, five pulsars, one probable radio galaxy detection (Cen A), and 66 high-confidence identifications of blazars (BL Lac objects, flat-spectrum radio quasars, or unidentified flat-spectrum radio sources). In addition, 27 lower-confidence potential blazar identifications are noted. Finally, the catalog contains 170 sources not yet identified firmly with known objects, although potential identifications have been suggested for a number of those. A figure is presented that gives approximate upper limits for gamma-ray sources at any point in the sky, as well as information about sources listed in the second catalog and its supplement which do not appear in this catalog.

  10. DISCRIMINATION OF NATURAL AND NON-POINT SOURCE EFFECTS FROM ANTHROPOGENIC EFFECTS AS REFLECTED IN BENTHIC STATE IN THREE ESTUARIES IN NEW ENGLAND

    EPA Science Inventory

    In order to protect estuarine resources, managers must be able to discern the effects of natural conditions and non-point source effects, and separate them from multiple anthropogenic point source effects. Our approach was to evaluate benthic community assemblages, riverine nitro...

  11. 40 CFR 409.13 - Effluent limitations guidelines representing the degree of effluent reduction attainable by the...

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS SUGAR PROCESSING POINT SOURCE CATEGORY Beet Sugar Processing... a point source where the sugar beet processing capacity of the point source does not exceed 1090 kkg... results, in whole or in part, from barometric condensing operations and any other beet sugar processing...

  12. 40 CFR 409.13 - Effluent limitations guidelines representing the degree of effluent reduction attainable by the...

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS SUGAR PROCESSING POINT SOURCE CATEGORY Beet Sugar Processing... a point source where the sugar beet processing capacity of the point source does not exceed 1090 kkg... results, in whole or in part, from barometric condensing operations and any other beet sugar processing...

  13. 40 CFR 409.13 - Effluent limitations guidelines representing the degree of effluent reduction attainable by the...

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS SUGAR PROCESSING POINT SOURCE CATEGORY Beet Sugar Processing... a point source where the sugar beet processing capacity of the point source does not exceed 1090 kkg... results, in whole or in part, from barometric condensing operations and any other beet sugar processing...

  14. 40 CFR 409.13 - Effluent limitations guidelines representing the degree of effluent reduction attainable by the...

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS SUGAR PROCESSING POINT SOURCE CATEGORY Beet Sugar Processing... a point source where the sugar beet processing capacity of the point source does not exceed 1090 kkg... results, in whole or in part, from barometric condensing operations and any other beet sugar processing...

  15. 40 CFR 409.13 - Effluent limitations guidelines representing the degree of effluent reduction attainable by the...

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS SUGAR PROCESSING POINT SOURCE CATEGORY Beet Sugar Processing... a point source where the sugar beet processing capacity of the point source does not exceed 1090 kkg... results, in whole or in part, from barometric condensing operations and any other beet sugar processing...

  16. AKARI Infrared Camera Survey of the Large Magellanic Cloud

    NASA Astrophysics Data System (ADS)

    Shimonishi, Takashi; Kato, Daisuke; Ita, Yoshifusa; Onaka, Takashi

    2015-08-01

    The Large Magellanic Cloud (LMC) is one of the closest external galaxies to the Milky Way and has been playing a central role in various fields of modern astronomy and astrophysics. We conducted an unbiased near- to mid-infrared imaging and spectroscopic survey of the LMC with the infrared satellite AKARI. An area of about 10 square degrees of the LMC was observed in five imaging bands (centered at 3.2, 7, 11, 15, and 24 micron) and in a low-resolution slitless prism spectroscopy mode (2-5 micron, R~20) with the Infrared Camera on board AKARI. Based on the data obtained in the survey, we constructed photometric and spectroscopic catalogues of point sources in the LMC. The photometric catalogue includes about 650,000, 90,000, 49,000, 17,000, and 7,000 sources at 3.2, 7, 11, 15, and 24 micron, respectively (Ita et al. 2008, PASJ, 60, 435; Kato et al. 2012, AJ, 144, 179), while the spectroscopic catalogue includes 1,757 sources (Shimonishi et al. 2013, AJ, 145, 32). Both catalogues are publicly released and available through a website (AKARI Observers Page, http://www.ir.isas.ac.jp/AKARI/Observation/). The catalogues include various infrared sources such as young stellar objects, asymptotic giant branch stars, giants/supergiants, and many other cool or dust-enshrouded stars. A large number of near-infrared spectra, coupled with complementary broadband photometric data, allow us to investigate the infrared spectral features of sources by comparison with their spectral energy distributions. Combined use of the present AKARI LMC catalogues with other infrared catalogues such as SAGE and HERITAGE offers scientific potential applicable to various astronomical studies. In this presentation, we report the details of the AKARI photometric and spectroscopic catalogues of the LMC.

  17. Occurrence of Surface Water Contaminations: An Overview

    NASA Astrophysics Data System (ADS)

    Shahabudin, M. M.; Musa, S.

    2018-04-01

    Water is part of our life and is needed by all organisms. Over time, growing human demands have degraded water quality. Surface water is contaminated in various ways, from both point sources and non-point sources. Point sources can be traced to a distinct origin, such as a drain or factory outfall, whereas non-point sources deliver a mixed load of pollutants. This paper reviews the occurrence of surface water contamination and the effects observed around us. Pollutants of natural or anthropogenic origin, such as nutrients, pathogens, and chemical elements, contribute to contamination. Most effects of contaminated surface water fall on public health as well as on the environment.

  18. 2011 Radioactive Materials Usage Survey for Unmonitored Point Sources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sturgeon, Richard W.

    This report provides the results of the 2011 Radioactive Materials Usage Survey for Unmonitored Point Sources (RMUS), which was updated by the Environmental Protection (ENV) Division's Environmental Stewardship (ES) at Los Alamos National Laboratory (LANL). ES classifies LANL emission sources into one of four Tiers, based on the potential effective dose equivalent (PEDE) calculated for each point source. Detailed descriptions of these tiers are provided in Section 3. The usage survey is conducted annually; in odd-numbered years the survey addresses all monitored and unmonitored point sources and in even-numbered years it addresses all Tier III and various selected other sources. This graded approach was designed to ensure that the appropriate emphasis is placed on point sources that have higher potential emissions to the environment. For calendar year (CY) 2011, ES has divided the usage survey into two distinct reports, one covering the monitored point sources (to be completed later this year) and this report covering all unmonitored point sources. This usage survey includes the following release points: (1) all unmonitored sources identified in the 2010 usage survey, (2) any new release points identified through the new project review (NPR) process, and (3) other release points as designated by the Rad-NESHAP Team Leader. Data for all unmonitored point sources at LANL is stored in the survey files at ES. LANL uses this survey data to help demonstrate compliance with Clean Air Act radioactive air emissions regulations (40 CFR 61, Subpart H). The remainder of this introduction provides a brief description of the information contained in each section. Section 2 of this report describes the methods that were employed for gathering usage survey data and for calculating usage, emissions, and dose for these point sources. It also references the appropriate ES procedures for further information. 
Section 3 describes the RMUS and explains how the survey results are organized. The RMUS Interview Form with the attached RMUS Process Form(s) provides the radioactive materials survey data by technical area (TA) and building number. The survey data for each release point includes information such as: exhaust stack identification number, room number, radioactive material source type (i.e., potential source or future potential source of air emissions), radionuclide, usage (in curies) and usage basis, physical state (gas, liquid, particulate, solid, or custom), release fraction (from Appendix D to 40 CFR 61, Subpart H), and process descriptions. In addition, the interview form also calculates emissions (in curies), lists mrem/Ci factors, calculates PEDEs, and states the location of the critical receptor for that release point. [The critical receptor is the maximum exposed off-site member of the public, specific to each individual facility.] Each of these data fields is described in this section. The Tier classification of release points, which was first introduced with the 1999 usage survey, is also described in detail in this section. Section 4 includes a brief discussion of the dose estimate methodology, and includes a discussion of several release points of particular interest in the CY 2011 usage survey report. It also includes a table of the calculated PEDEs for each release point at its critical receptor. Section 5 describes ES's approach to Quality Assurance (QA) for the usage survey. Satisfactory completion of the survey requires that team members responsible for Rad-NESHAP (National Emissions Standard for Hazardous Air Pollutants) compliance accurately collect and process several types of information, including radioactive materials usage data, process information, and supporting information. 
They must also perform and document the QA reviews outlined in Section 5.2.6 (Process Verification and Peer Review) of ES-RN, 'Quality Assurance Project Plan for the Rad-NESHAP Compliance Project' to verify that all information is complete and correct.

  19. Finite-fault inversion of the Mw 5.9 2012 Emilia-Romagna earthquake (Northern Italy) using aftershocks as near-field Green's function approximations

    NASA Astrophysics Data System (ADS)

    Causse, Mathieu; Cultrera, Giovanna; Herrero, André; Courboulex, Françoise; Schiappapietra, Erika; Moreau, Ludovic

    2017-04-01

    On May 29, 2012, a Mw 5.9 earthquake occurred on a thrust fault system in the Emilia-Romagna region (Po Plain). This shock, as well as hundreds of aftershocks, was recorded by 10 strong-motion stations located less than 10 km from the rupture plane, with 4 stations located within the surface projection of the rupture. The Po Plain is a very large EW-trending syntectonic alluvial basin, delimited by the Alps to the north and the Apennines to the south. The Plio-Quaternary sedimentary sequence filling the Po Plain has an uneven thickness, ranging from several thousand meters to a few tens of meters. This setting produces, in particular, basin resonance below 1 Hz and strong surface waves, which makes it particularly difficult to model wave propagation and hence to obtain robust images of the rupture propagation. This study takes advantage of the large set of recorded aftershocks, considered as point sources, to model wave propagation. Because of the heterogeneous distribution of the aftershocks on the fault plane, an interpolation technique is proposed to compute an approximation of the Green's function between each fault point and each strong-motion station in the frequency range [0.2-1 Hz]. We then use a Bayesian inversion technique (Markov chain Monte Carlo algorithm) to obtain images of the rupture propagation from the strong-motion data. We retrieve the slip distribution by inverting the final slip value at a set of control points, which are allowed to move on the fault plane, and by interpolating the slip value between these points. We show that the use of 5 control points to describe the slip, coupled with the hypothesis of spatially constant rupture velocity and rise time (18 free source parameters in total), results in a good level of fit to the data. This indicates that despite their complexity, the strong-motion data can be properly modeled up to 1 Hz using a relatively simple rupture. 
The inversion results also reveal that the rupture propagated slowly, at a speed of about 45% of the shear wave velocity.
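    The control-point parameterization described above can be sketched in one dimension: slip is specified only at a few movable points and interpolated everywhere else on the fault. A minimal illustration (the study works on a 2D fault plane and inverts the control-point values with MCMC; the function name and the choice of linear interpolation are assumptions made here for clarity):

```python
import numpy as np

def slip_from_control_points(x_ctrl, slip_ctrl, x_query):
    """Interpolate final slip between movable control points (1D sketch).

    x_ctrl, slip_ctrl: positions and slip values at the control points,
    which the inversion is free to relocate, so they may arrive unsorted.
    x_query: position(s) on the fault where slip is evaluated.
    """
    x = np.asarray(x_ctrl, dtype=float)
    s = np.asarray(slip_ctrl, dtype=float)
    order = np.argsort(x)                 # np.interp requires increasing x
    return np.interp(x_query, x[order], s[order])
```

For example, with control points at 0 km (slip 0 m) and 10 km (slip 2 m), the interpolated slip at 5 km is 1 m; an MCMC sampler would perturb both the positions and the slip values and score each candidate against the waveform fit.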

  20. Source Process of the 2007 Niigata-ken Chuetsu-oki Earthquake Derived from Near-fault Strong Motion Data

    NASA Astrophysics Data System (ADS)

    Aoi, S.; Sekiguchi, H.; Morikawa, N.; Ozawa, T.; Kunugi, T.; Shirasaka, M.

    2007-12-01

    The 2007 Niigata-ken Chuetsu-oki earthquake occurred on July 16, 2007, at 10:13 JST. We performed a multi-time-window linear waveform inversion analysis (Hartzell and Heaton, 1983) to estimate the rupture process from the near-fault strong-motion data of 14 stations from K-NET, KiK-net, F-net, JMA, and Niigata prefecture. Because the fault plane of the mainshock has not yet been clearly determined from the aftershock distribution, we performed two waveform inversions: one for a north-west-dipping fault (Model A) and one for a south-east-dipping fault (Model B). Their strike, dip, and rake are set to those of the moment tensor solutions by F-net. A fault plane model of 30 km length by 24 km width is set to cover the aftershock distribution within 24 hours after the mainshock. Theoretical Green's functions were calculated by the discrete wavenumber method (Bouchon, 1981) and the R/T matrix method (Kennett, 1983), with a different stratified medium for each station based on the velocity structure, including information from the reflection survey and borehole logging data. Convolution of a moving dislocation was introduced to represent the rupture propagation within each subfault (Sekiguchi et al., 2002). The observed acceleration records were integrated into velocity, except for the F-net velocity data, and bandpass filtered between 0.1 and 1.0 Hz. We solved a least-squares equation for the slip amount of each time window on each subfault, minimizing the squared residual between observed and synthetic waveforms. Both models give a moment magnitude of 6.7. For Model A, we obtained large slip in the deeper part south-west of the rupture starting point, close to Kashiwazaki City. The second or third velocity pulses of the observed waveforms appear to be composed of slip from this asperity. For Model B, we obtained large slip in the shallower part south-west of the rupture starting point, also close to Kashiwazaki City. 
    In both models, we found small slip near the rupture starting point and the largest slip about ten kilometers south-west of it, with maximum slips of 2.3 and 2.5 m for Models A and B, respectively. The difference in residual between observed and synthetic waveforms for the two models is not significant, so it is difficult to conclude which fault plane is appropriate. The estimated large-slip regions in the inverted source models for Models A and B are located near the intersection of the two fault plane models, where the radiation patterns should be similar. This may be one of the reasons why judging the fault plane orientation is so difficult. Careful examination of not only strong-motion data but also geodetic data is needed to further explore the fault orientation and the source process of this earthquake.

  1. Measuring Spatial Variability of Vapor Flux to Characterize Vadose-zone VOC Sources: Flow-cell Experiments

    DOE PAGES

    Mainhagu, Jon; Morrison, C.; Truex, Michael J.; ...

    2014-08-05

    A method termed vapor-phase tomography has recently been proposed to characterize the distribution of volatile organic contaminant mass in vadose-zone source areas, and to measure associated three-dimensional distributions of local contaminant mass discharge. The method is based on measuring the spatial variability of vapor flux, and thus inherent to its effectiveness is the premise that the magnitudes and temporal variability of vapor concentrations measured at different monitoring points within the interrogated area will be a function of the geospatial positions of the points relative to the source location. A series of flow-cell experiments was conducted to evaluate this premise. A well-defined source zone was created by injection and extraction of a non-reactive gas (SF6). Spatial and temporal concentration distributions obtained from the tests were compared to simulations produced with a mathematical model describing advective and diffusive transport. Tests were conducted to characterize both areal and vertical components of the application. Decreases in concentration over time were observed for monitoring points located on the opposite side of the source zone from the local extraction point, whereas increases were observed for monitoring points located between the local extraction point and the source zone. The results illustrate that comparison of temporal concentration profiles obtained at various monitoring points gives a general indication of the source location with respect to the extraction and monitoring points.
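    The premise tested above — concentrations rise over time at monitoring points between the extraction point and the source, and fall on the far side — suggests a simple classification of a monitoring point from the sign of its concentration trend. This is a hedged sketch of that idea only, not the authors' transport model; the function name and the least-squares trend fit are assumptions:

```python
def classify_monitoring_point(times, concentrations):
    """Classify a monitoring point from the slope of its concentration
    time series. A positive least-squares slope suggests the point lies
    between the local extraction point and the source zone; a negative
    slope suggests it lies on the opposite side of the source."""
    n = len(times)
    t_mean = sum(times) / n
    c_mean = sum(concentrations) / n
    slope = (sum((t - t_mean) * (c - c_mean)
                 for t, c in zip(times, concentrations))
             / sum((t - t_mean) ** 2 for t in times))
    return ("between extraction point and source" if slope > 0
            else "opposite side of source")
```

In practice one would compare many such points jointly (and against the advective-diffusive model) rather than classify each in isolation.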

  2. A double-observer approach for estimating detection probability and abundance from point counts

    USGS Publications Warehouse

    Nichols, J.D.; Hines, J.E.; Sauer, J.R.; Fallon, F.W.; Fallon, J.E.; Heglund, P.J.

    2000-01-01

    Although point counts are frequently used in ornithological studies, basic assumptions about detection probabilities often are untested. We apply a double-observer approach developed to estimate detection probabilities for aerial surveys (Cook and Jacobson 1979) to avian point counts. At each point count, a designated 'primary' observer indicates to another ('secondary') observer all birds detected. The secondary observer records all detections of the primary observer as well as any birds not detected by the primary observer. Observers alternate primary and secondary roles during the course of the survey. The approach permits estimation of observer-specific detection probabilities and bird abundance. We developed a set of models that incorporate different assumptions about sources of variation (e.g. observer, bird species) in detection probability. Seventeen field trials were conducted, and models were fit to the resulting data using program SURVIV. Single-observer point counts generally miss varying proportions of the birds actually present, and observer and bird species were found to be relevant sources of variation in detection probabilities. Overall detection probabilities (probability of being detected by at least one of the two observers) estimated using the double-observer approach were very high (>0.95), yielding precise estimates of avian abundance. We consider problems with the approach and recommend possible solutions, including restriction of the approach to fixed-radius counts to reduce the effect of variation in the effective radius of detection among various observers and to provide a basis for using spatial sampling to estimate bird abundance on large areas of interest. We believe that most questions meriting the effort required to carry out point counts also merit serious attempts to estimate detection probabilities associated with the counts. The double-observer approach is a method that can be used for this purpose.
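    The estimation logic behind the double-observer approach can be illustrated with the simplest two-observer case. Note this sketch uses the independent-observer (Lincoln-Petersen / Cook-Jacobson style) estimator for clarity; the paper's dependent primary/secondary protocol fits multinomial models in program SURVIV, and the function name here is illustrative:

```python
def double_observer_estimates(n1, n2, n_both):
    """Detection probabilities and abundance from two observers' counts.

    n1, n2:  total birds detected by observer 1 and observer 2
    n_both:  birds detected by both observers
    """
    p1 = n_both / n2                  # est. P(observer 1 detects a bird)
    p2 = n_both / n1                  # est. P(observer 2 detects a bird)
    p_any = 1 - (1 - p1) * (1 - p2)   # detected by at least one observer
    n_hat = n1 * n2 / n_both          # Lincoln-Petersen abundance estimate
    return p1, p2, p_any, n_hat
```

With n1 = 40, n2 = 30, and 24 birds seen by both, the estimates are p1 = 0.8, p2 = 0.6, an overall detection probability of 0.92, and 50 birds present — illustrating how overall detection probabilities above 0.95 arise when both observers are individually effective.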

  3. Highly macroscopically degenerated single-point ground states as source of specific heat capacity anomalies in magnetic frustrated systems

    NASA Astrophysics Data System (ADS)

    Jurčišinová, E.; Jurčišin, M.

    2018-04-01

    Anomalies of the specific heat capacity are investigated in the framework of the exactly solvable antiferromagnetic spin-1/2 Ising model in an external magnetic field on the geometrically frustrated tetrahedron recursive lattice. It is shown that the Schottky-type anomaly in the behavior of the specific heat capacity is related to the existence of unique, highly macroscopically degenerate single-point ground states formed on the borders between neighboring plateau-like ground states. It is also shown that the very existence of these single-point ground states with large residual entropies predicts the appearance of another anomaly in the behavior of the specific heat capacity at low temperatures, namely a field-induced double-peak structure, which exists, and should be observed experimentally, along with the Schottky-type anomaly in various frustrated magnetic systems.
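    For orientation, the Schottky-type anomaly referred to above has, in the textbook two-level case, the form (this is the generic two-level expression with level splitting Δ, not the model's exact tetrahedron-recursive-lattice result):

```latex
C_{\mathrm{Schottky}}(T) \;=\; R \left(\frac{\Delta}{k_B T}\right)^{2}
\frac{e^{\Delta/k_B T}}{\left(1 + e^{\Delta/k_B T}\right)^{2}},
```

which produces a single broad peak near k_B T ≈ 0.42 Δ; the field-induced double-peak structure reported in the paper is an additional low-temperature feature beyond this generic form.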

  4. Criticality of the random field Ising model in and out of equilibrium: A nonperturbative functional renormalization group description

    NASA Astrophysics Data System (ADS)

    Balog, Ivan; Tarjus, Gilles; Tissier, Matthieu

    2018-03-01

    We show that, contrary to previous suggestions based on computer simulations or erroneous theoretical treatments, the critical points of the random-field Ising model out of equilibrium, when quasistatically changing the applied source at zero temperature, and in equilibrium are not in the same universality class below some critical dimension d_DR ≈ 5.1. We demonstrate this by implementing a nonperturbative functional renormalization group for the associated dynamical field theory. Above d_DR, the avalanches, which characterize the evolution of the system at zero temperature, become irrelevant at large distance, and hysteresis and equilibrium critical points are then controlled by the same fixed point. We explain how to use computer simulation and finite-size scaling to check the correspondence between in- and out-of-equilibrium criticality in a far less ambiguous way than done so far.

  5. Nonequilibrium phase coexistence and criticality near the second explosion limit of hydrogen combustion

    NASA Astrophysics Data System (ADS)

    Newcomb, Lucas B.; Alaghemandi, Mohammad; Green, Jason R.

    2017-07-01

    While hydrogen is a promising source of clean energy, the safety and optimization of hydrogen technologies rely on controlling ignition through explosion limits: pressure-temperature boundaries separating explosive behavior from comparatively slow burning. Here, we show that the emergent nonequilibrium chemistry of combustible mixtures can exhibit the quantitative features of a phase transition. With stochastic simulations of the chemical kinetics for a model mechanism of hydrogen combustion, we show that the boundaries marking explosive domains of kinetic behavior are nonequilibrium critical points. Near the pressure of the second explosion limit, these critical points terminate the transient coexistence of dynamical phases—one that autoignites and another that progresses slowly. Below the critical point temperature, the chemistry of these phases is indistinguishable. In the large system limit, the pseudo-critical temperature converges to the temperature of the second explosion limit derived from mass-action kinetics.
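    The stochastic simulations of chemical kinetics mentioned above are typically built on Gillespie's stochastic simulation algorithm. A minimal sketch for a toy radical chain with branching (X → 2X) and termination (X → ∅) — not the paper's hydrogen mechanism; the function name, rate constants, and population cap are all illustrative assumptions:

```python
import random

def gillespie_chain(x0, k_branch, k_term, t_max, seed=0, x_cap=100_000):
    """Gillespie SSA for a toy chain:  X -> 2X  (propensity k_branch*x)
    and  X -> 0  (propensity k_term*x).  Returns the radical count when
    t_max is reached, the population dies out, or x_cap is hit.
    Runaway growth when k_branch > k_term mimics autoignition."""
    rng = random.Random(seed)
    t, x = 0.0, x0
    while t < t_max and 0 < x < x_cap:
        a_branch = k_branch * x
        a_term = k_term * x
        a_total = a_branch + a_term
        t += rng.expovariate(a_total)          # exponential waiting time
        if rng.random() < a_branch / a_total:  # pick reaction by propensity
            x += 1
        else:
            x -= 1
    return x
```

Averaging many such trajectories near the branching/termination balance point is how the bimodal (slow-burning vs. autoigniting) dynamical phases described above appear in simulation.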

  6. Origin and Fate of Phosphorus In The Seine River Watershed

    NASA Astrophysics Data System (ADS)

    Némery, J.; Garnier, J.; Billen, G.; Meybeck, M.; Morel, C.

    In large human-impacted river systems such as the Seine basin, phosphorus originates both from diffuse sources (runoff from agricultural soils) and from point sources that are generally well localized and quantified (industrial and domestic sewage). On the basis of our biogeochemical model of the ecological functioning of the Seine river (RIVERSTRAHLER: Billen et al., 1994; Garnier et al., 1995), a reduction of eutrophication and better oxygenation of the larger stream orders could only be obtained by reducing P point sources by 80%. We consider here P sources, pathways, and budgets through a nested approach, from the Blaise sub-basin (600 km2, cattle breeding), to the Grand Morin (1200 km2, agricultural), the Marne (12,000 km2, agricultural/urbanized), and the whole Seine catchment (65,000 km2, 17 M inhabitants). Particulate P mobility is also studied by the 32P isotopic exchange method developed in agronomy (Fardeau, 1993; Morel, 1995). The progressive reduction of the polyphosphate content of washing powders and phosphorus retention in sewage treatment plants over the last ten years have led to a marked relative decrease of P point sources with respect to diffuse ones, particularly for the Paris megacity (10 M inhabitants). The major P inputs to the Marne basin are fertilizers (17,000 × 10^6 g P y^-1) versus 400 × 10^6 g P y^-1 for treated wastewaters. Of the riverine output (900 × 10^6 g P y^-1), one third is associated with suspended matter (TSS) and two thirds is dissolved PO4^3-. Most fertilizer P is therefore retained in soils and exported in food supply. First results on P mobility show an important proportion of potentially remobilized P from TSS used for phytoplankton development (stream orders 5 to 8) and from deposited sediment used by macrophytes (stream orders 2 to 5). These kinetics of P exchange will improve the P sub-model within the whole-basin ecological model.
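    The Marne basin figures above imply a simple mass balance: most fertilizer P never reaches the river. A back-of-envelope sketch using only the numbers quoted in the abstract (the function name and the assumption that retention = inputs − riverine output are illustrative simplifications):

```python
def marne_p_budget(fertilizer=17_000, wastewater=400, riverine_out=900):
    """Phosphorus budget for the Marne basin, units of 1e6 g P per year
    (values from the abstract). Returns P retained in soils / exported
    in food, and the particulate (1/3) vs dissolved PO4 (2/3) split of
    the riverine output."""
    retained = fertilizer + wastewater - riverine_out
    particulate = riverine_out / 3        # fraction bound to TSS
    dissolved = 2 * riverine_out / 3      # fraction as PO4(3-)
    return retained, particulate, dissolved
```

With the default values this gives 16,500 × 10^6 g P y^-1 retained against only 900 exported by the river, i.e. the riverine flux is roughly 5% of the inputs.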

  7. 44 GHZ CLASS I METHANOL (CH{sub 3}OH) MASER SURVEY IN THE GALACTIC CENTER

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McEwen, Bridget C.; Pihlström, Ylva M.; Sjouwerman, Loránt O.

    2016-12-01

    We report on a large 44 GHz (7_0–6_1 A^+) methanol (CH3OH) maser survey of the Galactic Center. The Karl G. Jansky Very Large Array was used to search for CH3OH maser emission covering a large fraction of the region around Sgr A. In 25 pointings, over 300 CH3OH maser sources (>10σ) were detected. The majority of the maser sources have a single-peak emission spectrum with line-of-sight velocities that range from about −13 to 72 km s^−1. Most maser sources were found to have velocities around 35−55 km s^−1, closely following velocities of neighboring interacting molecular clouds (MCs). The full width at half-maximum of each individual spectral feature is very narrow (∼0.85 km s^−1 on average). In the north, where Sgr A East is known to be interacting with the 50 km s^−1 MC, more than 100 44 GHz CH3OH masers were detected. In addition, three other distinct concentrations of masers were found, which appear to be located closer to the interior of the interacting MCs. It is possible that a subset of masers is associated with star formation, although conclusive evidence is lacking.

  8. Isotopes, Inventories and Seasonality: Unraveling Methane Source Distribution in the Complex Landscapes of the United Kingdom.

    NASA Astrophysics Data System (ADS)

    Lowry, D.; Fisher, R. E.; Zazzeri, G.; Lanoisellé, M.; France, J.; Allen, G.; Nisbet, E. G.

    2017-12-01

    Unlike the big open landscapes of many continents, with large area sources dominated by one particular methane emission type that can be isotopically characterized by flight measurements and sampling, the complex patchwork of urban, fossil, and agricultural methane sources across NW Europe requires detailed ground surveys for characterization (Zazzeri et al., 2017). Here we outline the findings from multiple seasonal urban and rural measurement campaigns in the United Kingdom. These surveys aim to: 1) assess source distribution and baseline in regions of planned fracking, and relate these to on-site continuous baseline climatology; 2) characterize spatial and seasonal differences in the isotopic signatures of the UNFCCC source categories; and 3) assess the spatial validity of the 1 x 1 km UK inventory for large continuous emitters, proposed point sources, and seasonal/ephemeral emissions. The UK inventory suggests that 90% of methane emissions come from 3 source categories: ruminants, landfill, and gas distribution. Bag sampling and GC-IRMS δ13C analysis shows that landfill gives a constant signature of -57 ±3 ‰ throughout the year. Fugitive gas emissions are consistent regionally, depending on the North Sea supply regions feeding the network (-41 ± 2 ‰ in N England, -37 ± 2 ‰ in SE England). Ruminant (mostly cattle) emissions are far more complex, as the animals spend winters in barns and summers in fields, but are essentially a mix of 2 end members, breath at -68 ±3 ‰ and manure at -51 ±3 ‰, resulting in broad summer field emission plumes of -64 ‰ and point winter barn emission plumes of -58 ‰. The inventory correctly locates emission hotspots from landfill, larger sewage treatment plants, and gas compressor stations, giving a broad overview of emission distribution for regional model validation. 
    Mobile surveys are adding an extra layer of detail which, combined with isotopic characterization, has identified the spatial distribution of gas pipe leaks, some persisting since 2013 (Zazzeri et al., 2015), and the seasonality and spatial variability of livestock emissions. Importantly, existing significant gas leaks close to proposed fracking sites have been characterized so that any emissions to atmosphere with a different isotopic signature will be detected. Zazzeri, G., Atm. Env. 110, 151-162 (2015); Zazzeri, G., Sci. Rep. 7, 4854 (2017).
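    The ruminant signatures quoted above follow from simple two end-member isotopic mixing: a plume's δ13C is a linear blend of breath (-68 ‰) and manure (-51 ‰). A small sketch inverting that relation (end-member values from the abstract; the function name and the assumption of strictly linear mixing are illustrative):

```python
def breath_fraction(delta_mix, delta_breath=-68.0, delta_manure=-51.0):
    """Fraction f of methane from breath in a two end-member d13C mix,
    solving  delta_mix = f*delta_breath + (1 - f)*delta_manure."""
    return (delta_mix - delta_manure) / (delta_breath - delta_manure)
```

For the summer field plumes at -64 ‰ this gives f ≈ 0.76 (about three quarters breath-derived), while the winter barn plumes at -58 ‰ give f ≈ 0.41, consistent with the larger manure contribution from housed cattle.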

  9. NO(x) Concentrations in the Upper Troposphere as a Result of Lightning

    NASA Technical Reports Server (NTRS)

    Penner, Joyce E.

    1998-01-01

    Upper tropospheric NO(x) controls, in part, the distribution of ozone in this greenhouse-sensitive region of the atmosphere. Many factors control NO(x) in this region, so it is difficult to assess uncertainties in anthropogenic perturbations to NO(x) from aircraft, for example, without understanding the role of the other major NO(x) sources in the upper troposphere. These include in situ sources (lightning, aircraft), convection from the surface (biomass burning, fossil fuels, soils), stratospheric intrusions, and photochemical recycling from HNO3. This work examines the separate contribution to upper tropospheric "primary" NO(x) from each source category and uses two different chemical transport models (CTMs) to represent a range of possible atmospheric transport. Because aircraft emissions are tied to particular pressure altitudes, it is important to understand whether those emissions are placed in the model stratosphere or troposphere and to assess whether the models can adequately differentiate stratospheric air from tropospheric air. We examine these issues by defining a point-by-point "tracer tropopause" in order to differentiate stratosphere from troposphere in terms of NO(x) perturbations. Both models predict similar zonal average peak enhancements of primary NO(x) due to aircraft (≈10-20 parts per trillion by volume (pptv) in both January and July); however, this peak lies primarily in a region of large stratospheric influence in one model and is centered near the level evaluated as the tracer tropopause in the second. Below the tracer tropopause, both models show negligible NO(x) derived directly from the stratospheric source. They also predict a typically low background of 1-20 pptv NO(x) when tropospheric HNO3 is constrained to 100 pptv. 
The two models calculate large differences in the total background NO(x) (defined as the source of NO(x) from lightning + stratosphere + surface + HNO3) when using identical loss frequencies for NO(x). This difference is primarily due to differing treatments of vertical transport. An improved diagnosis of this transport that is relevant to NO(x) requires either measurements of a surface-based tracer with a substantially shorter lifetime than Rn-222 or diagnosis and mapping of tracer correlations with different source signatures. Because of differences in transport by the two models we cannot constrain the source of NO(x) from lightning through comparison of average model concentrations with observations of NO(x).

  10. Point and Condensed Hα Sources in the Interior of M33

    NASA Astrophysics Data System (ADS)

    Moody, J. Ward; Hintz, Eric G.; Roming, Peter; Joner, Michael D.; Bucklein, Brian

    2017-01-01

A variety of interesting objects such as Wolf-Rayet stars, tight OB associations, planetary nebulae, x-ray binaries, etc. can be discovered as point or condensed sources in Hα surveys. How these objects distribute through a galaxy sheds light on the galaxy star formation rate and history, mass distribution, and dynamics. The nearby galaxy M33 is an excellent place to study the distribution of Hα-bright point sources in a flocculent spiral galaxy. We have reprocessed an archived WIYN continuum-subtracted Hα image of the inner 6.5' of M33 and, employing both eye and machine searches, have tabulated sources with a flux greater than 1 × 10^-15 erg cm^-2 s^-1. We have identified 152 unresolved point sources and 122 marginally resolved condensed sources, 38 of which have not been previously cataloged. We present a map of these sources and discuss their probable identifications.

  11. SUZAKU X-RAY IMAGING OF THE EXTENDED LOBE IN THE GIANT RADIO GALAXY NGC 6251 ASSOCIATED WITH THE FERMI-LAT SOURCE 2FGL J1629.4+8236

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Takeuchi, Y.; Kataoka, J.; Takahashi, Y.

    2012-04-10

We report the results of a Suzaku X-ray imaging study of NGC 6251, a nearby giant radio galaxy with intermediate FR I/II radio properties. Our pointing direction was centered on the γ-ray emission peak recently discovered with the Fermi Large Area Telescope (LAT) around the position of the northwest (NW) radio lobe 15 arcmin offset from the nucleus. After subtracting two 'off-source' pointings adjacent to the radio lobe and removing possible contaminants in the X-ray Imaging Spectrometer field of view, we found significant residual X-ray emission most likely diffuse in nature. The spectrum of the excess X-ray emission is well fitted by a power law with a photon index Γ = 1.90 ± 0.15 and a 0.5-8 keV flux of 4 × 10^-13 erg cm^-2 s^-1. We interpret this diffuse X-ray emission component as being due to inverse Compton upscattering of the cosmic microwave background photons by ultrarelativistic electrons within the lobe, with only a minor contribution from the beamed emission of the large-scale jet. Utilizing archival radio data for the source, we demonstrate by means of broadband spectral modeling that the γ-ray flux of the Fermi-LAT source 2FGL J1629.4+8236 may well be accounted for by the high-energy tail of the inverse Compton continuum of the lobe. Thus, this claimed association of γ-rays from the NW lobe of NGC 6251, together with the recent Fermi-LAT imaging of the extended lobes of Centaurus A, indicates that particles may be efficiently (re-)accelerated up to ultrarelativistic energies within extended radio lobes of nearby radio galaxies in general.

  12. Self-Similar Spin Images for Point Cloud Matching

    NASA Astrophysics Data System (ADS)

    Pulido, Daniel

The rapid growth of Light Detection And Ranging (Lidar) technologies that collect, process, and disseminate 3D point clouds has allowed for increasingly accurate spatial modeling and analysis of the real world. Lidar sensors can generate massive 3D point clouds of a collection area that provide highly detailed spatial and radiometric information. However, a Lidar collection can be expensive and time consuming. Simultaneously, the growth of crowdsourced Web 2.0 data (e.g., Flickr, OpenStreetMap) has provided researchers with a wealth of freely available data sources that cover a variety of geographic areas. Crowdsourced data can be of varying quality and density. In addition, since it is typically not collected as part of a dedicated experiment but rather volunteered, when and where the data is collected is arbitrary. The integration of these two sources of geoinformation can provide researchers the ability to generate products and derive intelligence that mitigate their respective disadvantages and combine their advantages. Therefore, this research will address the problem of fusing two point clouds from potentially different sources. Specifically, we will consider two problems: scale matching and feature matching. Scale matching consists of computing feature metrics of each point cloud and analyzing their distributions to determine scale differences. Feature matching consists of defining local descriptors that are invariant to common dataset distortions (e.g., rotation and translation). Additionally, after matching the point clouds they can be registered and processed further (e.g., change detection). The objective of this research is to develop novel methods to fuse and enhance two point clouds from potentially disparate sources (e.g., Lidar and crowdsourced Web 2.0 datasets). The scope of this research is to investigate both scale and feature matching between two point clouds. 
The specific focus of this research will be in developing a novel local descriptor based on the concept of self-similarity to aid in the scale and feature matching steps. An open problem in fusion is how best to extract features from two point clouds and then perform feature-based matching. The proposed approach for this matching step is the use of local self-similarity as an invariant measure to match features. In particular, the proposed approach is to combine the concept of local self-similarity with a well-known feature descriptor, Spin Images, and thereby define "Self-Similar Spin Images". This approach is then extended to the case of matching two point clouds in very different coordinate systems (e.g., a geo-referenced Lidar point cloud and stereo-image derived point cloud without geo-referencing). The use of Self-Similar Spin Images is again applied to address this problem by introducing a "Self-Similar Keyscale" that matches the spatial scales of two point clouds. Another open problem is how best to detect changes in content between two point clouds. A method is proposed to find changes between two point clouds by analyzing the order statistics of the nearest neighbors between the two clouds, and thereby define the "Nearest Neighbor Order Statistic" method. Note that the well-known Hausdorff distance is a special case, being just the maximum order statistic. Therefore, by studying the entire histogram of these nearest neighbors it is expected to yield a more robust method to detect points that are present in one cloud but not the other. This approach is applied at multiple resolutions. Therefore, changes detected at the coarsest level will yield large missing targets and at finer levels will yield smaller targets.
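The Nearest Neighbor Order Statistic idea can be illustrated directly: compute the sorted nearest-neighbor distances from one cloud to the other and read statistics off that histogram. This is a minimal sketch under our own assumptions (numpy arrays for the clouds, scipy's k-d tree for the queries); the function name and toy data are illustrative, not from the dissertation.

```python
import numpy as np
from scipy.spatial import cKDTree

def nn_order_statistics(cloud_a, cloud_b):
    """Sorted nearest-neighbor distances from every point of cloud_a to
    cloud_b. The largest entry is the directed Hausdorff distance; the
    full set of order statistics supports more robust change detection."""
    dists, _ = cKDTree(cloud_b).query(cloud_a, k=1)
    return np.sort(dists)

rng = np.random.default_rng(0)
base = rng.random((500, 3))                     # "before" cloud in [0, 1)^3
changed = np.vstack([base, [[5.0, 5.0, 5.0]]])  # "after" cloud plus one new point

stats = nn_order_statistics(changed, base)
hausdorff = stats[-1]             # maximum order statistic: dominated by the new point
median = stats[len(stats) // 2]   # robust mid-order statistic: unaffected by it
```

The single added point drives the Hausdorff distance (the maximum order statistic) far from zero, while lower order statistics such as the median remain unchanged, which is the robustness argument made above.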

  13. Assessment of uncertainties of an aircraft-based mass balance approach for quantifying urban greenhouse gas emissions

    NASA Astrophysics Data System (ADS)

    Cambaliza, M. O. L.; Shepson, P. B.; Caulton, D. R.; Stirm, B.; Samarov, D.; Gurney, K. R.; Turnbull, J.; Davis, K. J.; Possolo, A.; Karion, A.; Sweeney, C.; Moser, B.; Hendricks, A.; Lauvaux, T.; Mays, K.; Whetstone, J.; Huang, J.; Razlivanov, I.; Miles, N. L.; Richardson, S. J.

    2014-09-01

    Urban environments are the primary contributors to global anthropogenic carbon emissions. Because much of the growth in CO2 emissions will originate from cities, there is a need to develop, assess, and improve measurement and modeling strategies for quantifying and monitoring greenhouse gas emissions from large urban centers. In this study the uncertainties in an aircraft-based mass balance approach for quantifying carbon dioxide and methane emissions from an urban environment, focusing on Indianapolis, IN, USA, are described. The relatively level terrain of Indianapolis facilitated the application of mean wind fields in the mass balance approach. We investigate the uncertainties in our aircraft-based mass balance approach by (1) assessing the sensitivity of the measured flux to important measurement and analysis parameters including wind speed, background CO2 and CH4, boundary layer depth, and interpolation technique, and (2) determining the flux at two or more downwind distances from a point or area source (with relatively large source strengths such as solid waste facilities and a power generating station) in rapid succession, assuming that the emission flux is constant. When we quantify the precision in the approach by comparing the estimated emissions derived from measurements at two or more downwind distances from an area or point source, we find that the minimum and maximum repeatability were 12 and 52%, with an average of 31%. We suggest that improvements in the experimental design can be achieved by careful determination of the background concentration, monitoring the evolution of the boundary layer through the measurement period, and increasing the number of downwind horizontal transect measurements at multiple altitudes within the boundary layer.
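The core of the mass balance calculation described above can be sketched as a discrete integral over the downwind "screen" of measurements: background-subtracted concentration times the transect-perpendicular wind, summed over the sampled vertical plane. A hedged sketch only, assuming a uniform near-surface molar air density (about 41.6 mol m^-3) and a regular grid; the function name and numbers are illustrative, not the authors' implementation.

```python
import numpy as np

def mass_balance_flux(conc_ppm, background_ppm, wind_perp_ms, dx_m, dz_m,
                      air_density_mol_m3=41.6):
    """Emission rate (mol/s) from a downwind crosswind screen: sum of
    (enhancement over background) x wind component perpendicular to the
    flight track, over the sampled vertical plane."""
    enhancement = (np.asarray(conc_ppm) - background_ppm) * 1e-6  # mole fraction
    return np.sum(enhancement * air_density_mol_m3 * wind_perp_ms) * dx_m * dz_m

# Toy example: a uniform 2 ppm CO2 enhancement over a 500 m-tall, 2 km-wide
# screen sampled on a 10 x 20 grid, with a 5 m/s perpendicular wind
conc = np.full((10, 20), 402.0)
flux_mol_s = mass_balance_flux(conc, 400.0, 5.0, 100.0, 50.0)
```

The sensitivities discussed in the abstract map directly onto the arguments here: background choice shifts `enhancement`, boundary layer depth sets the vertical extent of the grid, and the wind field sets `wind_perp_ms`.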

  14. High-z X-ray Obscured Quasars in Galaxies with Extreme Mid-IR/Optical Colors

    NASA Astrophysics Data System (ADS)

    Piconcelli, E.; Lanzuisi, G.; Fiore, F.; Feruglio, C.; Vignali, C.; Salvato, M.; Grappioni, C.

    2009-05-01

Extreme Optical/Mid-IR color cuts have been used to uncover a population of dust-enshrouded, mid-IR luminous galaxies at high redshifts. Several lines of evidence point towards the presence of a heavily absorbed, possibly Compton-thick quasar at the heart of these systems. Nonetheless, the X-ray spectral properties of these intriguing sources still remain largely unexplored. Here we present an X-ray spectroscopic study of a large sample of 44 extreme dust-obscured galaxies (EDOGs) with F24 μm/FR>2000 and F24 μm>1.3 mJy selected from a 6 deg2 region in the SWIRE fields. The application of our selection criteria to a wide area survey has been capable of unveiling a population of X-ray luminous, absorbed z>1 quasars which is mostly missed in the traditional optical/X-ray surveys performed so far. Advances in the understanding of the X-ray properties of these recently-discovered sources by Simbol-X observations will be also discussed.

  15. Interferometry meets the third and fourth dimensions in galaxies

    NASA Astrophysics Data System (ADS)

    Trimble, Virginia

    2015-02-01

    Radio astronomy began with one array (Jansky's) and one paraboloid of revolution (Reber's) as collecting areas and has now reached the point where a large number of facilities are arrays of paraboloids, each of which would have looked enormous to Reber in 1932. In the process, interferometry has contributed to the counting of radio sources, establishing superluminal velocities in AGN jets, mapping of sources from the bipolar cow shape on up to full grey-scale and colored images, determining spectral energy distributions requiring non-thermal emission processes, and much else. The process has not been free of competition and controversy, at least partly because it is just a little difficult to understand how earth-rotation, aperture-synthesis interferometry works. Some very important results, for instance the mapping of HI in the Milky Way to reveal spiral arms, warping, and flaring, actually came from single moderate-sized paraboloids. The entry of China into the radio astronomy community has given large (40-110 meter) paraboloids a new lease on life.

  16. Studies of large amplitude Alfvén waves and wave-wave interactions in LAPD

    NASA Astrophysics Data System (ADS)

    Carter, T. A.; Brugman, B.; Auerbach, D. W.

    2006-10-01

    Electromagnetic turbulence is thought to play an important role in plasmas in astrophysical settings (e.g. the interstellar medium, accretion disks) and in the laboratory (e.g. transport in magnetic fusion devices). From a weak turbulence point of view, nonlinear interactions between shear Alfvén waves are fundamental to the turbulent energy cascade in magnetic turbulence. An overview of experiments on large amplitude shear Alfvén waves in the Large Plasma Device (LAPD) will be presented. Large amplitude Alfvén waves (δB/B ˜1%) are generated either using a resonant cavity or loop antennas. Properties of Alfvén waves generated by these sources will be discussed, along with evidence of heating, background density modification and electron acceleration by the waves. An overview of experiments on wave-wave interactions will be given along with a discussion of future directions.

  17. Detection and modeling of the acoustic perturbation produced by the launch of the Space Shuttle using the Global Positioning System

    NASA Astrophysics Data System (ADS)

    Bowling, T. J.; Calais, E.; Dautermann, T.

    2010-12-01

Rocket launches are known to produce infrasonic pressure waves that propagate into the ionosphere where coupling between electrons and neutral particles induces fluctuations in ionospheric electron density observable in GPS measurements. We have detected ionospheric perturbations following the launch of space shuttle Atlantis on 11 May 2009 using an array of continually operating GPS stations across the Southeastern coast of the United States and in the Caribbean. Detections are prominent to the south of the westward shuttle trajectory in the area of maximum coupling between the acoustic wave and Earth’s magnetic field, move at speeds consistent with the speed of sound, and show coherency between stations covering a large geographic range. We model the perturbation as an explosive source located at the point of closest approach between the shuttle path and each sub-ionospheric point. The neutral pressure wave is propagated using ray tracing, resultant changes in electron density are calculated at points of intersection between rays and the satellite-to-receiver line-of-sight, and synthetic integrated electron content values are derived. Arrival times of the observed and synthesized waveforms match closely, with discrepancies related to errors in the a priori sound speed model used for ray tracing. Current work includes the estimation of source location and energy.

  18. Design and expected performance of a novel hybrid detector for very-high-energy gamma-ray astrophysics

    NASA Astrophysics Data System (ADS)

    Assis, P.; Barres de Almeida, U.; Blanco, A.; Conceição, R.; D'Ettorre Piazzoli, B.; De Angelis, A.; Doro, M.; Fonte, P.; Lopes, L.; Matthiae, G.; Pimenta, M.; Shellard, R.; Tomé, B.

    2018-05-01

Current detectors for Very-High-Energy γ-ray astrophysics are either pointing instruments with a small field of view (Cherenkov telescopes), or large field-of-view instruments with relatively large energy thresholds (extensive air shower detectors). In this article, we propose a new hybrid extensive air shower detector sensitive in an energy region starting from about 100 GeV. The detector combines a small water-Cherenkov detector, able to provide a calorimetric measurement of shower particles at ground, with resistive plate chambers which contribute significantly to the accurate shower geometry reconstruction. A full simulation of this detector concept shows that it is able to reach better sensitivity than any previous gamma-ray wide field-of-view experiment in the sub-TeV energy region. It is expected to detect with a 5σ significance a source fainter than the Crab Nebula in one year at 100 GeV and, above 1 TeV, a source as faint as 10% of it. As such, this instrument is suited to detect transient phenomena, making it a very powerful tool to trigger observations of variable sources and to detect transients coupled to gravitational waves and gamma-ray bursts.

  19. X-ray Point Source Populations in Spiral and Elliptical Galaxies

    NASA Astrophysics Data System (ADS)

    Colbert, E.; Heckman, T.; Weaver, K.; Strickland, D.

    2002-01-01

The hard-X-ray luminosity of non-active galaxies has been known to be fairly well correlated with the total blue luminosity since the days of the Einstein satellite. However, the origin of this hard component was not well understood. Some possibilities that were considered included X-ray binaries, extended upscattered far-infrared light via the inverse-Compton process, extended hot 10^7 K gas (especially in elliptical galaxies), or even an active nucleus. Chandra images of normal, elliptical and starburst galaxies now show that a significant amount of the total hard X-ray emission comes from individual point sources. We present here spatial and spectral analyses of the point sources in a small sample of Chandra observations of starburst galaxies, and compare with Chandra point source analyses from comparison galaxies (elliptical, Seyfert and normal galaxies). We discuss possible relationships between the number and total hard luminosity of the X-ray point sources and various measures of the galaxy star formation rate, and discuss possible origins for the numerous compact sources that are observed.

  20. Vector image method for the derivation of elastostatic solutions for point sources in a plane layered medium. Part 1: Derivation and simple examples

    NASA Technical Reports Server (NTRS)

    Fares, Nabil; Li, Victor C.

    1986-01-01

    An image method algorithm is presented for the derivation of elastostatic solutions for point sources in bonded halfspaces assuming the infinite space point source is known. Specific cases were worked out and shown to coincide with well known solutions in the literature.

  1. 40 CFR 414.100 - Applicability; description of the subcategory of direct discharge point sources that do not use...

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... subcategory of direct discharge point sources that do not use end-of-pipe biological treatment. 414.100... AND STANDARDS ORGANIC CHEMICALS, PLASTICS, AND SYNTHETIC FIBERS Direct Discharge Point Sources That Do Not Use End-of-Pipe Biological Treatment § 414.100 Applicability; description of the subcategory of...

  2. Better Assessment Science Integrating Point and Non-point Sources (BASINS)

    EPA Pesticide Factsheets

    Better Assessment Science Integrating Point and Nonpoint Sources (BASINS) is a multipurpose environmental analysis system designed to help regional, state, and local agencies perform watershed- and water quality-based studies.

  3. Traveling wavefront solutions to nonlinear reaction-diffusion-convection equations

    NASA Astrophysics Data System (ADS)

    Indekeu, Joseph O.; Smets, Ruben

    2017-08-01

Physically motivated modified Fisher equations are studied in which nonlinear convection and nonlinear diffusion are allowed for in addition to the usual growth and spread of a population. It is pointed out that in a large variety of cases separable functions in the form of exponentially decaying sharp wavefronts solve the differential equation exactly provided a co-moving point source or sink is active at the wavefront. The velocity dispersion and front steepness may differ from those of some previously studied exact smooth traveling wave solutions. For an extension of the reaction-diffusion-convection equation, featuring a memory effect in the form of a maturity delay for growth and spread, smooth exact wavefront solutions are also obtained. The stability of the solutions is verified analytically and numerically.
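As a rough numerical companion to the front speeds discussed above, the sketch below integrates the classic (unmodified) Fisher equation with an explicit finite-difference scheme and estimates the selected front speed, whose asymptotic value is 2*sqrt(rD); the modified equations studied in the paper generalize this baseline. Grid and time-step values are illustrative choices of ours, not the paper's.

```python
import numpy as np

# Explicit finite-difference integration of the classic Fisher equation
# u_t = D u_xx + r u (1 - u); the asymptotic selected front speed is
# c = 2 * sqrt(r * D) = 2 for the parameters below.
r_growth, D = 1.0, 1.0
dx, dt, nx = 0.2, 0.01, 2000
u = np.zeros(nx)
u[:50] = 1.0                        # step initial condition

def front_pos(profile):
    """Position where the profile first drops below u = 0.5."""
    return dx * np.argmax(profile < 0.5)

p0, t_total = front_pos(u), 20.0
for _ in range(int(t_total / dt)):
    lap = (np.roll(u, 1) - 2.0 * u + np.roll(u, -1)) / dx**2
    lap[0] = lap[-1] = 0.0          # pin the boundaries (no wrap-around)
    u = u + dt * (D * lap + r_growth * u * (1.0 - u))

speed = (front_pos(u) - p0) / t_total
```

The measured average speed approaches 2 from below (the well-known slow logarithmic convergence of pulled fronts), so a value somewhat under 2 at this integration time is expected.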

  4. Device for modular input high-speed multi-channel digitizing of electrical data

    DOEpatents

    VanDeusen, Alan L.; Crist, Charles E.

    1995-09-26

    A multi-channel high-speed digitizer module converts a plurality of analog signals to digital signals (digitizing) and stores the signals in a memory device. The analog input channels are digitized simultaneously at high speed with a relatively large number of on-board memory data points per channel. The module provides an automated calibration based upon a single voltage reference source. Low signal noise at such a high density and sample rate is accomplished by ensuring the A/D converters are clocked at the same point in the noise cycle each time so that synchronous noise sampling occurs. This sampling process, in conjunction with an automated calibration, yields signal noise levels well below the noise level present on the analog reference voltages.

  5. Point Source All Sky

    NASA Image and Video Library

    2003-03-27

This panoramic view encompasses the entire sky as seen by the Two Micron All-Sky Survey. The measured brightnesses of half a billion stars (points) have been combined into colors representing three distinct wavelengths of infrared light: blue at 1.2 microns, green at 1.6 microns, and red at 2.2 microns. This image is centered on the core of our own Milky Way galaxy, toward the constellation of Sagittarius. The reddish stars seemingly hovering in the middle of the Milky Way's disc -- many of them never observed before -- trace the densest dust clouds in our galaxy. The two faint smudges seen in the lower right quadrant are our neighboring galaxies, the Small and Large Magellanic Clouds. http://photojournal.jpl.nasa.gov/catalog/PIA04250

  6. Magnitudes, nature, and effects of point and nonpoint discharges in the Chattahoochee River basin, Atlanta to West Point Dam, Georgia

    USGS Publications Warehouse

    Stamer, J.K.; Cherry, R.N.; Faye, R.E.; Kleckner, R.L.

    1978-01-01

On an average annual basis and during the storm period of March 12-15, 1976, nonpoint-source loads for most constituents were larger than point-source loads at the Whitesburg station, located on the Chattahoochee River about 40 miles downstream from Atlanta, GA. Most of the nonpoint-source constituent loads in the Atlanta to Whitesburg reach were from urban areas. Average annual point-source discharges accounted for about 50 percent of the dissolved nitrogen, total nitrogen, and total phosphorus loads and about 70 percent of the dissolved phosphorus loads at Whitesburg. During a low-flow period, June 1-2, 1977, five municipal point sources contributed 63 percent of the ultimate biochemical oxygen demand, and 97 percent of the ammonium nitrogen loads at the Franklin station, at the upstream end of West Point Lake. Dissolved-oxygen concentrations of 4.1 to 5.0 milligrams per liter occurred in a 22-mile reach of the river downstream from Atlanta due about equally to nitrogenous and carbonaceous oxygen demands. The heat load from two thermoelectric powerplants caused a decrease in dissolved-oxygen concentration of about 0.2 milligrams per liter. Phytoplankton concentrations in West Point Lake, about 70 miles downstream from Atlanta, could exceed three million cells per milliliter during extended low-flow periods in the summer with present point-source phosphorus loads. (Woodard-USGS)

  7. Unidentified point sources in the IRAS minisurvey

    NASA Technical Reports Server (NTRS)

    Houck, J. R.; Soifer, B. T.; Neugebauer, G.; Beichman, C. A.; Aumann, H. H.; Clegg, P. E.; Gillett, F. C.; Habing, H. J.; Hauser, M. G.; Low, F. J.

    1984-01-01

    Nine bright, point-like 60 micron sources have been selected from the sample of 8709 sources in the IRAS minisurvey. These sources have no counterparts in a variety of catalogs of nonstellar objects. Four objects have no visible counterparts, while five have faint stellar objects visible in the error ellipse. These sources do not resemble objects previously known to be bright infrared sources.

  8. A multi-model approach to monitor emissions of CO2 and CO from an urban-industrial complex

    NASA Astrophysics Data System (ADS)

    Super, Ingrid; Denier van der Gon, Hugo A. C.; van der Molen, Michiel K.; Sterk, Hendrika A. M.; Hensen, Arjan; Peters, Wouter

    2017-11-01

    Monitoring urban-industrial emissions is often challenging because observations are scarce and regional atmospheric transport models are too coarse to represent the high spatiotemporal variability in the resulting concentrations. In this paper we apply a new combination of an Eulerian model (Weather Research and Forecast, WRF, with chemistry) and a Gaussian plume model (Operational Priority Substances - OPS). The modelled mixing ratios are compared to observed CO2 and CO mole fractions at four sites along a transect from an urban-industrial complex (Rotterdam, the Netherlands) towards rural conditions for October-December 2014. Urban plumes are well-mixed at our semi-urban location, making this location suited for an integrated emission estimate over the whole study area. The signals at our urban measurement site (with average enhancements of 11 ppm CO2 and 40 ppb CO over the baseline) are highly variable due to the presence of distinct source areas dominated by road traffic/residential heating emissions or industrial activities. This causes different emission signatures that are translated into a large variability in observed ΔCO : ΔCO2 ratios, which can be used to identify dominant source types. We find that WRF-Chem is able to represent synoptic variability in CO2 and CO (e.g. the median CO2 mixing ratio is 9.7 ppm, observed, against 8.8 ppm, modelled), but it fails to reproduce the hourly variability of daytime urban plumes at the urban site (R2 up to 0.05). For the urban site, adding a plume model to the model framework is beneficial to adequately represent plume transport especially from stack emissions. The explained variance in hourly, daytime CO2 enhancements from point source emissions increases from 30 % with WRF-Chem to 52 % with WRF-Chem in combination with the most detailed OPS simulation. 
The simulated variability in ΔCO : ΔCO2 ratios decreases drastically from 1.5 to 0.6 ppb ppm-1, which agrees better with the observed standard deviation of 0.4 ppb ppm-1. This is partly due to improved wind fields (increase in R2 of 0.10) but also due to improved point source representation (increase in R2 of 0.05) and dilution (increase in R2 of 0.07). Based on our analysis we conclude that a plume model with detailed and accurate dispersion parameters adds substantially to top-down monitoring of greenhouse gas emissions in urban environments with large point source contributions within a ~10 km radius from the observation sites.
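The ΔCO : ΔCO2 ratio used above to fingerprint source types can be estimated as a regression slope through the origin of CO enhancements against CO2 enhancements. A minimal sketch with synthetic numbers; the function name and background values are illustrative assumptions, not the study's data or code.

```python
import numpy as np

def dco_dco2_ratio(co_ppb, co2_ppm, co_bg_ppb, co2_bg_ppm):
    """Least-squares slope through the origin of CO enhancement (ppb)
    versus CO2 enhancement (ppm): the observed dCO : dCO2 ratio."""
    d_co = np.asarray(co_ppb) - co_bg_ppb
    d_co2 = np.asarray(co2_ppm) - co2_bg_ppm
    return np.sum(d_co * d_co2) / np.sum(d_co2**2)

# Synthetic plume samples generated with a built-in ratio of 0.6 ppb/ppm
co2 = np.array([400.0, 405.0, 410.0, 420.0])
co = 100.0 + 0.6 * (co2 - 400.0)
ratio = dco_dco2_ratio(co, co2, 100.0, 400.0)
```

A forced-through-origin fit is one common choice here because, by construction, zero CO2 enhancement should correspond to zero CO enhancement over the same background air.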

  9. A Comparative Analysis of Vibrio cholerae Contamination in Point-of-Drinking and Source Water in a Low-Income Urban Community, Bangladesh.

    PubMed

    Ferdous, Jannatul; Sultana, Rebeca; Rashid, Ridwan B; Tasnimuzzaman, Md; Nordland, Andreas; Begum, Anowara; Jensen, Peter K M

    2018-01-01

Bangladesh is a cholera endemic country with a population at high risk of cholera. Toxigenic and non-toxigenic Vibrio cholerae (V. cholerae) can cause cholera and cholera-like diarrheal illness and outbreaks. Drinking water is one of the primary routes of cholera transmission in Bangladesh. The aim of this study was to conduct a comparative assessment of the presence of V. cholerae between point-of-drinking water and source water, and to investigate the variability of virulence profile using molecular methods of a densely populated low-income settlement of Dhaka, Bangladesh. Water samples were collected and tested for V. cholerae from "point-of-drinking" and "source" in 477 study households in routine visits at 6 week intervals over a period of 14 months. We studied the virulence profiles of V. cholerae positive water samples using 22 different virulence gene markers present in toxigenic O1/O139 and non-O1/O139 V. cholerae using polymerase chain reaction (PCR). A total of 1,463 water samples were collected, with 1,082 samples from point-of-drinking water in 388 households and 381 samples from 66 water sources. V. cholerae was detected in 10% of point-of-drinking water samples and in 9% of source water samples. Twenty-three percent of households and 38% of the sources were positive for V. cholerae in at least one visit. Samples collected from point-of-drinking and linked sources in a 7 day interval showed significantly higher odds (P < 0.05) of V. cholerae presence in point-of-drinking compared to source [OR = 17.24 (95% CI = 7.14-42.89)] water. Based on the 7 day interval data, 53% (17/32) of source water samples were negative for V. cholerae while linked point-of-drinking water samples were positive. There were significantly higher odds (P < 0.05) of the presence of V. cholerae O1 [OR = 9.13 (95% CI = 2.85-29.26)] and V. cholerae O139 [OR = 4.73 (95% CI = 1.19-18.79)] in source water samples than in point-of-drinking water samples. 
Contamination of water at the point-of-drinking is less likely to depend on the contamination at the water source. Hygiene education interventions and programs should therefore focus on water at the point-of-drinking, including repeated cleaning of drinking vessels, which is of paramount importance in preventing cholera.
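The odds ratios with confidence intervals reported above are of the standard 2x2-table form. A minimal sketch of that computation using the Wald interval on the log-odds scale; the counts below are made up for illustration, since the study's underlying tables are not reproduced here.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% confidence interval for a 2x2 table:
    a = group 1 positive, b = group 1 negative,
    c = group 2 positive, d = group 2 negative."""
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1.0/a + 1.0/b + 1.0/c + 1.0/d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, lo, hi

# Illustrative counts only (not the study's data)
or_, lo, hi = odds_ratio_ci(10, 90, 5, 95)
```

An interval whose lower bound exceeds 1 corresponds to the "significantly higher odds (P < 0.05)" phrasing used in the abstract.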

  10. Modeling deep brain stimulation: point source approximation versus realistic representation of the electrode

    NASA Astrophysics Data System (ADS)

    Zhang, Tianhe C.; Grill, Warren M.

    2010-12-01

    Deep brain stimulation (DBS) has emerged as an effective treatment for movement disorders; however, the fundamental mechanisms by which DBS works are not well understood. Computational models of DBS can provide insights into these fundamental mechanisms and typically require two steps: calculation of the electrical potentials generated by DBS and, subsequently, determination of the effects of the extracellular potentials on neurons. The objective of this study was to assess the validity of using a point source electrode to approximate the DBS electrode when calculating the thresholds and spatial distribution of activation of a surrounding population of model neurons in response to monopolar DBS. Extracellular potentials in a homogenous isotropic volume conductor were calculated using either a point current source or a geometrically accurate finite element model of the Medtronic DBS 3389 lead. These extracellular potentials were coupled to populations of model axons, and thresholds and spatial distributions were determined for different electrode geometries and axon orientations. Median threshold differences between DBS and point source electrodes for individual axons varied between -20.5% and 9.5% across all orientations, monopolar polarities and electrode geometries utilizing the DBS 3389 electrode. Differences in the percentage of axons activated at a given amplitude by the point source electrode and the DBS electrode were between -9.0% and 12.6% across all monopolar configurations tested. The differences in activation between the DBS and point source electrodes occurred primarily in regions close to conductor-insulator interfaces and around the insulating tip of the DBS electrode. The robustness of the point source approximation in modeling several special cases—tissue anisotropy, a long active electrode and bipolar stimulation—was also examined. 
Under the conditions considered, the point source was shown to be a valid approximation for predicting excitation of populations of neurons in response to DBS.
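The point-source half of the comparison is compact enough to sketch: the standard analytical potential in a homogeneous isotropic volume conductor is V = I/(4πσr), and its second difference along a model axon (the discrete activating function) predicts where excitation is strongest. This is a hedged illustration; the conductivity value, geometry, and names are assumptions of ours, not the study's parameters.

```python
import numpy as np

SIGMA = 0.2  # assumed tissue conductivity, S/m

def point_source_potential(current_a, r_m, sigma=SIGMA):
    """Extracellular potential of a monopolar point current source in a
    homogeneous isotropic volume conductor: V = I / (4*pi*sigma*r)."""
    return current_a / (4.0 * np.pi * sigma * r_m)

# Potentials along a straight axon passing 2 mm from the source, and the
# discrete activating function (second difference of V along the fiber)
x = np.linspace(-0.01, 0.01, 41)        # node positions along the axon, m
r = np.sqrt(x**2 + 0.002**2)            # node-to-source distances, m
v = point_source_potential(-1.0e-3, r)  # -1 mA cathodic stimulus
activating = np.diff(v, 2)              # peaks at the node nearest the source
```

For a cathodic stimulus the activating function is positive (depolarizing) directly under the source, which is why monopolar cathodic DBS preferentially excites fibers passing close to the electrode.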

  11. Characterization of the bending stiffness of large space structure joints

    NASA Technical Reports Server (NTRS)

    Wu, K. Chauncey

    1989-01-01

    A technique for estimating the bending stiffness of large space structure joints is developed and demonstrated for an erectable joint concept. Experimental load-deflection data from a three-point bending test were used as input to a closed-form expression for the joint bending stiffness derived from linear beam theory. Potential error sources in both the experimental and analytical procedures are identified and discussed. The bending stiffness of a mechanically preloaded erectable joint is studied at three applied moments and seven joint orientations. Using this technique, the joint bending stiffness was bounded between 6 and 17 percent of the bending stiffness of the graphite/epoxy strut member.
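    The abstract does not reproduce the closed-form expression itself, but the standard linear-beam-theory relation for a simply supported span under a central point load gives the flavor of such an estimate. The sketch below is illustrative only (the function name and units are assumptions, not the paper's notation): mid-span deflection is delta = P*L^3 / (48*EI), so EI follows directly from measured load and deflection.

```python
def bending_stiffness(load_n, span_m, deflection_m):
    """Estimate bending stiffness EI (N*m^2) from a three-point bending test,
    using the linear-beam-theory relation for a simply supported beam with a
    central point load:  deflection = P * L**3 / (48 * EI)."""
    return load_n * span_m ** 3 / (48.0 * deflection_m)

# Example: a 100 N central load over a 1 m span producing 1 mm of
# mid-span deflection implies EI of roughly 2083 N*m^2.
ei = bending_stiffness(100.0, 1.0, 0.001)
```

In practice the joint stiffness would be backed out from the combined joint-plus-strut response, which is where the identified error sources come in.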

  12. Viewpoints: A New Computer Program for Interactive Exploration of Large Multivariate Space Science and Astrophysics Data.

    NASA Astrophysics Data System (ADS)

    Levit, Creon; Gazis, P.

    2006-06-01

    The graphics processing units (GPUs) built into all professional desktop and laptop computers currently on the market are capable of transforming, filtering, and rendering hundreds of millions of points per second. We present a prototype open-source, cross-platform (Windows, Linux, Apple OS X) application which leverages some of the power latent in the GPU to enable smooth interactive exploration and analysis of large high-dimensional data using a variety of classical and recent techniques. The targeted application area is the interactive analysis of complex, multivariate space science and astrophysics data sets, with dimensionalities that may surpass 100 and sample sizes in the range of 10^6 to 10^8.

  13. Efficient estimation and large-scale evaluation of lateral chromatic aberration for digital image forensics

    NASA Astrophysics Data System (ADS)

    Gloe, Thomas; Borowka, Karsten; Winkler, Antje

    2010-01-01

    The analysis of lateral chromatic aberration forms another ingredient in the toolbox of a well-equipped image forensic investigator. Previous work proposed its application to forgery detection [1] and image source identification [2]. This paper takes a closer look at the current state-of-the-art method for analysing lateral chromatic aberration and presents a new approach to estimating lateral chromatic aberration in a runtime-efficient way. Employing a set of 11 different camera models comprising 43 devices, the characteristics of lateral chromatic aberration are investigated on a large scale. The reported results point to general difficulties that have to be considered in real-world investigations.

  14. An Automated Method for Navigation Assessment for Earth Survey Sensors Using Island Targets

    NASA Technical Reports Server (NTRS)

    Patt, F. S.; Woodward, R. H.; Gregg, W. W.

    1997-01-01

    An automated method has been developed for performing navigation assessment on satellite-based Earth sensor data. The method utilizes islands as targets which can be readily located in the sensor data and identified with reference locations. The essential elements are an algorithm for classifying the sensor data according to source, a reference catalogue of island locations, and a robust pattern-matching algorithm for island identification. The algorithms were developed and tested for the Sea-viewing Wide Field-of-view Sensor (SeaWiFS), an ocean colour sensor. This method will allow navigation error statistics to be automatically generated for large numbers of points, supporting analysis over large spatial and temporal ranges.

  15. Automated navigation assessment for earth survey sensors using island targets

    NASA Technical Reports Server (NTRS)

    Patt, Frederick S.; Woodward, Robert H.; Gregg, Watson W.

    1997-01-01

    An automated method has been developed for performing navigation assessment on satellite-based Earth sensor data. The method utilizes islands as targets which can be readily located in the sensor data and identified with reference locations. The essential elements are an algorithm for classifying the sensor data according to source, a reference catalog of island locations, and a robust pattern-matching algorithm for island identification. The algorithms were developed and tested for the Sea-viewing Wide Field-of-view Sensor (SeaWiFS), an ocean color sensor. This method will allow navigation error statistics to be automatically generated for large numbers of points, supporting analysis over large spatial and temporal ranges.

  16. Multiple Auto-Adapting Color Balancing for Large Number of Images

    NASA Astrophysics Data System (ADS)

    Zhou, X.

    2015-04-01

    This paper presents a powerful technique for color balancing between images. It works not only for small numbers of images but also for arbitrarily large collections. Multiple adaptive methods are used. To obtain a color-seamless mosaic dataset, local color is adjusted adaptively towards the target color. Local statistics of the source images are computed over a so-called adaptive dodging window, and the adaptive target colors are computed statistically according to multiple target models. A gamma function is derived from the adaptive target and the adaptive local source statistics, and is applied to the source images to obtain the color-balanced output images. Five target color surface models are proposed: color point (a single color), color grid, and 1st-, 2nd- and 3rd-order 2D polynomials. Least-squares fitting is used to obtain the polynomial target color surfaces. Target color surfaces are computed automatically from all source images or from an external target image. Special objects such as water and snow are filtered out by a percentage cut or a given mask. The performance is fast enough to support on-the-fly color balancing for very large numbers of images (potentially hundreds of thousands). The detailed algorithm and formulae are described, and rich examples are given, including large mosaic datasets (e.g., one containing 36,006 images). The results show that this technique can be used successfully on various imagery to obtain color-seamless mosaics. The algorithm has been used successfully in Esri ArcGIS.
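    The core of a gamma-based dodging step of this kind can be sketched in a few lines. This is a generic illustration under stated assumptions, not the paper's actual formulae: intensities are assumed normalized to (0, 1), and the gamma exponent is chosen so the local source mean maps exactly onto the adaptive target mean, since s**g = t when g = log(t)/log(s).

```python
import math

def dodging_gamma(source_mean, target_mean):
    """Exponent mapping the local source mean onto the target mean for
    intensities normalized to (0, 1):  source_mean ** gamma == target_mean."""
    return math.log(target_mean) / math.log(source_mean)

def balance_pixel(value, gamma):
    """Apply the derived gamma curve to one normalized pixel value."""
    return value ** gamma

# A locally dark patch (mean 0.3) pushed toward a brighter target (mean 0.5):
g = dodging_gamma(0.3, 0.5)
```

In the paper, the target mean itself varies spatially (color point, grid, or polynomial surface) and the source statistics come from the adaptive dodging window, so gamma is recomputed per local region rather than once per image.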

  17. Multiband super-resolution imaging of graded-index photonic crystal flat lens

    NASA Astrophysics Data System (ADS)

    Xie, Jianlan; Wang, Junzhong; Ge, Rui; Yan, Bei; Liu, Exian; Tan, Wei; Liu, Jianjun

    2018-05-01

    Multiband super-resolution imaging of a point source is achieved with a graded-index photonic crystal flat lens. From calculations of six bands of a common photonic crystal (CPC) constructed with scatterers of different refractive indices, it is found that super-resolution imaging of a point source can be realized through different physical mechanisms in three different bands. In the first band, the imaging is based on the far-field condition of the spherical wave, while in the second band it is based on a negative effective refractive index and exhibits higher imaging quality than that of the CPC. In the fifth band, the imaging is mainly based on negative refraction from anisotropic equi-frequency surfaces. This method of employing different physical mechanisms to achieve multiband super-resolution imaging of a point source is highly meaningful for the field of imaging.

  18. Long Term Temporal and Spectral Evolution of Point Sources in Nearby Elliptical Galaxies

    NASA Astrophysics Data System (ADS)

    Durmus, D.; Guver, T.; Hudaverdi, M.; Sert, H.; Balman, Solen

    2016-06-01

    We present the results of an archival study of all the point sources detected in the lines of sight of the elliptical galaxies NGC 4472, NGC 4552, NGC 4649, M32, Maffei 1, NGC 3379, IC 1101, M87, NGC 4477, NGC 4621, and NGC 5128, with both the Chandra and XMM-Newton observatories. Specifically, we studied the temporal and spectral evolution of these point sources over the course of the observations of the galaxies, mostly covering the 2000 - 2015 period. In this poster we present the first results of this study, which allows us to further constrain the X-ray source population in nearby elliptical galaxies and also better understand the nature of individual point sources.

  19. The XMM-SERVS survey: new XMM-Newton point-source catalog for the XMM-LSS field

    NASA Astrophysics Data System (ADS)

    Chen, C.-T. J.; Brandt, W. N.; Luo, B.; Ranalli, P.; Yang, G.; Alexander, D. M.; Bauer, F. E.; Kelson, D. D.; Lacy, M.; Nyland, K.; Tozzi, P.; Vito, F.; Cirasuolo, M.; Gilli, R.; Jarvis, M. J.; Lehmer, B. D.; Paolillo, M.; Schneider, D. P.; Shemmer, O.; Smail, I.; Sun, M.; Tanaka, M.; Vaccari, M.; Vignali, C.; Xue, Y. Q.; Banerji, M.; Chow, K. E.; Häußler, B.; Norris, R. P.; Silverman, J. D.; Trump, J. R.

    2018-04-01

    We present an X-ray point-source catalog from the XMM-Large Scale Structure survey region (XMM-LSS), one of the XMM-Spitzer Extragalactic Representative Volume Survey (XMM-SERVS) fields. We target the XMM-LSS region with 1.3 Ms of new XMM-Newton AO-15 observations, transforming the archival X-ray coverage in this region into a 5.3 deg2 contiguous field with uniform X-ray coverage totaling 2.7 Ms of flare-filtered exposure, with a 46 ks median PN exposure time. We provide an X-ray catalog of 5242 sources detected in the soft (0.5-2 keV), hard (2-10 keV), and/or full (0.5-10 keV) bands with a 1% expected spurious fraction determined from simulations. A total of 2381 new X-ray sources are detected compared to previous source catalogs in the same area. Our survey has flux limits of 1.7 × 10-15, 1.3 × 10-14, and 6.5 × 10-15 erg cm-2 s-1 over 90% of its area in the soft, hard, and full bands, respectively, which is comparable to those of the XMM-COSMOS survey. We identify multiwavelength counterpart candidates for 99.9% of the X-ray sources, of which 93% are considered reliable based on their matching likelihood ratios. These high-likelihood-ratio counterparts are further confirmed to be ≈97% reliable based on deep Chandra coverage over ≈5% of the XMM-LSS region. Results of multiwavelength identifications are also included in the source catalog, along with basic optical-to-infrared photometry and spectroscopic redshifts from publicly available surveys. We compute photometric redshifts for X-ray sources in 4.5 deg2 of our field where forced-aperture multi-band photometry is available; >70% of the X-ray sources in this subfield have either spectroscopic or high-quality photometric redshifts.

  20. The resolution of point sources of light as analyzed by quantum detection theory

    NASA Technical Reports Server (NTRS)

    Helstrom, C. W.

    1972-01-01

    The resolvability of point sources of incoherent light is analyzed by quantum detection theory in terms of two hypothesis-testing problems. In the first, the observer must decide whether there are two sources of equal radiant power at given locations, or whether there is only one source of twice the power located midway between them. In the second problem, either one, but not both, of two point sources is radiating, and the observer must decide which it is. The decisions are based on optimum processing of the electromagnetic field at the aperture of an optical instrument. In both problems the density operators of the field under the two hypotheses do not commute. The error probabilities, determined as functions of the separation of the points and the mean number of received photons, characterize the ultimate resolvability of the sources.

  1. A NEW METHOD FOR FINDING POINT SOURCES IN HIGH-ENERGY NEUTRINO DATA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fang, Ke; Miller, M. Coleman

    The IceCube collaboration has reported the first detection of high-energy astrophysical neutrinos, including ∼50 high-energy starting events, but no individual sources have been identified. It is therefore important to develop the most sensitive and efficient possible algorithms to identify the point sources of these neutrinos. The most popular current method works by exploring a dense grid of possible directions to individual sources, and identifying the single direction with the maximum probability of having produced multiple detected neutrinos. This method has numerous strengths, but it is computationally intensive and, because it focuses on the single best location for a point source, additional point sources are not included in the evidence. We propose a new maximum likelihood method that uses the angular separations between all pairs of neutrinos in the data. Unlike existing autocorrelation methods for this type of analysis, which also use angular separations between neutrino pairs, our method incorporates information about the point-spread function and can identify individual point sources. We find that if the angular resolution is a few degrees or better, then this approach reduces both false positive and false negative errors compared to the current method, and is also more computationally efficient up to, potentially, hundreds of thousands of detected neutrinos.
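    The basic quantity the proposed likelihood is built from, the angular separation between every pair of arrival directions, is straightforward to compute. The sketch below is a minimal illustration (function names and the (ra, dec) input convention are assumptions; the paper's actual likelihood and point-spread-function weighting are not reproduced here):

```python
import math
from itertools import combinations

def unit_vector(ra, dec):
    """Unit vector on the sphere for a direction given in radians."""
    return (math.cos(dec) * math.cos(ra),
            math.cos(dec) * math.sin(ra),
            math.sin(dec))

def pairwise_separations(directions):
    """Angular separation in radians for every pair of (ra, dec) arrival
    directions; for n events this yields n*(n-1)/2 separations."""
    seps = []
    for a, b in combinations(directions, 2):
        dot = sum(x * y for x, y in zip(unit_vector(*a), unit_vector(*b)))
        seps.append(math.acos(max(-1.0, min(1.0, dot))))  # clamp for safety
    return seps
```

A clustering of small separations, evaluated against the detector's point-spread function, is what would flag candidate point sources without scanning a dense grid of sky directions.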

  2. Effects of pointing compared with naming and observing during encoding on item and source memory in young and older adults.

    PubMed

    Ouwehand, Kim; van Gog, Tamara; Paas, Fred

    2016-10-01

    Research showed that source memory functioning declines with ageing. Evidence suggests that encoding visual stimuli with manual pointing in addition to visual observation can have a positive effect on spatial memory compared with visual observation only. The present study investigated whether pointing at picture locations during encoding would lead to better spatial source memory than naming (Experiment 1) and visual observation only (Experiment 2) in young and older adults. Experiment 3 investigated whether response modality during the test phase would influence spatial source memory performance. Experiments 1 and 2 supported the hypothesis that pointing during encoding led to better source memory for picture locations than naming or observation only. Young adults outperformed older adults on the source memory but not the item memory task in both Experiments 1 and 2. In Experiments 1 and 2, participants manually responded in the test phase. Experiment 3 showed that if participants had to verbally respond in the test phase, the positive effect of pointing compared with naming during encoding disappeared. The results suggest that pointing at picture locations during encoding can enhance spatial source memory in both young and older adults, but only if the response modality is congruent in the test phase.

  3. Emissions Estimation from Satellite Retrievals: a Review of Current Capability

    NASA Technical Reports Server (NTRS)

    Streets, David; Canty, Timothy; Carmichael, Gregory R.; deFoy, Benjamin; Dickerson, Russell R.; Duncan, Bryan N.; Edwards, David P.; Haynes, John A.; Henze, Daven K.; Houyoux, Marc R.; hide

    2013-01-01

    Since the mid-1990s a new generation of Earth-observing satellites has been able to detect tropospheric air pollution at increasingly high spatial and temporal resolution. Most primary emitted species can be measured by one or more of the instruments. This review article addresses the question of how well we can relate the satellite measurements to quantification of primary emissions and what advances are needed to improve the usability of the measurements by U.S. air quality managers. Built on a comprehensive literature review and comprising input by both satellite experts and emission inventory specialists, the review identifies several targets that seem promising: large point sources of NOx and SO2, species that are difficult to measure by other means (NH3 and CH4, for example), area sources that cannot easily be quantified by traditional bottom-up methods (such as unconventional oil and gas extraction, shipping, biomass burning, and biogenic sources), and the temporal variation of emissions (seasonal, diurnal, episodic). Techniques that enhance the usefulness of current retrievals (data assimilation, oversampling, multi-species retrievals, improved vertical profiles, etc.) are discussed. Finally, we point out the value of having new geostationary satellites like GEO-CAPE and TEMPO over North America that could provide measurements at high spatial (few km) and temporal (hourly) resolution.

  4. Atmospheric mercury emissions from mine wastes and surrounding geologically enriched terrains

    USGS Publications Warehouse

    Gustin, M.S.; Coolbaugh, M.F.; Engle, M.A.; Fitzgerald, B.C.; Keislar, R.E.; Lindberg, S.E.; Nacht, D.M.; Quashnick, J.; Rytuba, J.J.; Sladek, C.; Zhang, H.; Zehner, R.E.

    2003-01-01

    Waste rock and ore associated with Hg, precious and base metal mining, and their surrounding host rocks are typically enriched in mercury relative to natural background concentrations (<0.1 ??g Hg g-1). Mercury fluxes to the atmosphere from mineralized areas can range from background rates (0-15 ng m-2 h-1) to tens of thousands of ng m-2 h-1. Mercury enriched substrate constitutes a long-term source of mercury to the global atmospheric mercury pool. Mercury emissions from substrate are influenced by light, temperature, precipitation, and substrate mercury concentration, and occur during the day and night. Light-enhanced emissions are driven by two processes: desorption of elemental mercury accumulated at the soil:air interface, and photo reduction of mercury containing phases. To determine the need for and effectiveness of regulatory controls on short-lived anthropogenic point sources the contribution of mercury from geologic non-point sources to the atmospheric mercury pool needs to be quantified. The atmospheric mercury contribution from small areas of mining disturbance with relatively high mercury concentrations are, in general, less than that from surrounding large areas of low levels of mercury enrichment. In the arid to semi-arid west-ern United States volatilization is the primary means by which mercury is released from enriched sites.

  5. Emissions estimation from satellite retrievals: A review of current capability

    NASA Astrophysics Data System (ADS)

    Streets, David G.; Canty, Timothy; Carmichael, Gregory R.; de Foy, Benjamin; Dickerson, Russell R.; Duncan, Bryan N.; Edwards, David P.; Haynes, John A.; Henze, Daven K.; Houyoux, Marc R.; Jacob, Daniel J.; Krotkov, Nickolay A.; Lamsal, Lok N.; Liu, Yang; Lu, Zifeng; Martin, Randall V.; Pfister, Gabriele G.; Pinder, Robert W.; Salawitch, Ross J.; Wecht, Kevin J.

    2013-10-01

    Since the mid-1990s a new generation of Earth-observing satellites has been able to detect tropospheric air pollution at increasingly high spatial and temporal resolution. Most primary emitted species can be measured by one or more of the instruments. This review article addresses the question of how well we can relate the satellite measurements to quantification of primary emissions and what advances are needed to improve the usability of the measurements by U.S. air quality managers. Built on a comprehensive literature review and comprising input by both satellite experts and emission inventory specialists, the review identifies several targets that seem promising: large point sources of NOx and SO2, species that are difficult to measure by other means (NH3 and CH4, for example), area sources that cannot easily be quantified by traditional bottom-up methods (such as unconventional oil and gas extraction, shipping, biomass burning, and biogenic sources), and the temporal variation of emissions (seasonal, diurnal, episodic). Techniques that enhance the usefulness of current retrievals (data assimilation, oversampling, multi-species retrievals, improved vertical profiles, etc.) are discussed. Finally, we point out the value of having new geostationary satellites like GEO-CAPE and TEMPO over North America that could provide measurements at high spatial (few km) and temporal (hourly) resolution.

  6. Using the Mean Shift Algorithm to Make Post Hoc Improvements to the Accuracy of Eye Tracking Data Based on Probable Fixation Locations

    DTIC Science & Technology

    2010-08-01

    astigmatism and other sources, and stay constant from time to time (LC Technologies, 2000). Systematic errors can sometimes reach many degrees of visual angle...Taking the average of all disparities would mean treating each as equally important regardless of whether they are from correct or incorrect mappings. In...likely stop somewhere near the centroid because the large hM basically treats every point equally (or nearly equally if using the multivariate

  7. Large area soft x-ray collimator to facilitate x-ray optics testing

    NASA Technical Reports Server (NTRS)

    Espy, Samuel L.

    1994-01-01

    The first objective of this program is to design a nested conical foil x-ray optic which will collimate x-rays diverging from a point source. The collimator could then be employed in a small, inexpensive x-ray test stand which would be used to test various x-ray optics and detector systems. The second objective is to demonstrate the fabrication of the x-ray reflectors for this optic using lacquer-smoothing and zero-stress electroforming techniques.

  8. Composite annotations: requirements for mapping multiscale data and models to biomedical ontologies

    PubMed Central

    Cook, Daniel L.; Mejino, Jose L. V.; Neal, Maxwell L.; Gennari, John H.

    2009-01-01

    Current methods for annotating biomedical data resources rely on simple mappings between data elements and the contents of a variety of biomedical ontologies and controlled vocabularies. Here we point out that such simple mappings are inadequate for large-scale multiscale, multidomain integrative “virtual human” projects. For such integrative challenges, we describe a “composite annotation” schema that is simple yet sufficiently extensible for mapping the biomedical content of a variety of data sources and biosimulation models to available biomedical ontologies. PMID:19964601

  9. Reduction of intensity variations on the absorbers of ideal flux concentrators

    NASA Technical Reports Server (NTRS)

    Greenman, P.

    1980-01-01

    Large nonuniformities occur in the instantaneous distribution of flux on the absorber of an ideal light concentrator when it is illuminated by a point source such as the sun. These nonuniformities may be reduced by texturing the reflecting surface with small distortions. Such distortions will also be effective if used in the primary reflector of a two-stage concentrator. Data on a model compound parabolic concentrator are presented. The suitability of such concentrators for use by spacecraft is mentioned.

  10. The Chandra Source Catalog: Source Properties and Data Products

    NASA Astrophysics Data System (ADS)

    Rots, Arnold; Evans, Ian N.; Glotfelty, Kenny J.; Primini, Francis A.; Zografou, Panagoula; Anderson, Craig S.; Bonaventura, Nina R.; Chen, Judy C.; Davis, John E.; Doe, Stephen M.; Evans, Janet D.; Fabbiano, Giuseppina; Galle, Elizabeth C.; Gibbs, Danny G., II; Grier, John D.; Hain, Roger; Hall, Diane M.; Harbo, Peter N.; He, Xiang Qun (Helen); Houck, John C.; Karovska, Margarita; Kashyap, Vinay L.; Lauer, Jennifer; McCollough, Michael L.; McDowell, Jonathan C.; Miller, Joseph B.; Mitschang, Arik W.; Morgan, Douglas L.; Mossman, Amy E.; Nichols, Joy S.; Nowak, Michael A.; Plummer, David A.; Refsdal, Brian L.; Siemiginowska, Aneta L.; Sundheim, Beth A.; Tibbetts, Michael S.; van Stone, David W.; Winkelman, Sherry L.

    2009-09-01

    The Chandra Source Catalog (CSC) is breaking new ground in several areas. There are two aspects that are of particular interest to the users: its evolution and its contents. The CSC will be a living catalog that becomes richer, bigger, and better in time while still remembering its state at each point in time. This means that users will be able to take full advantage of new additions to the catalog, while retaining the ability to back-track and return to what was extracted in the past. The CSC sheds the limitations of flat-table catalogs. Its sources will be characterized by a large number of properties, as usual, but each source will also be associated with its own specific data products, allowing users to perform mini custom analysis on the sources. Source properties fall in the spatial (position, extent), photometric (fluxes, count rates), spectral (hardness ratios, standard spectral fits), and temporal (variability probabilities) domains, and are all accompanied by error estimates. Data products cover the same coordinate space and include event lists, images, spectra, and light curves. In addition, the catalog contains data products covering complete observations: event lists, background images, exposure maps, etc. This work is supported by NASA contract NAS8-03060 (CXC).

  11. Libration Point Navigation Concepts Supporting Exploration Vision

    NASA Technical Reports Server (NTRS)

    Carpenter, J. Russell; Folta, David C.; Moreau, Michael C.; Gramling, Cheryl J.

    2004-01-01

    Farquhar described several libration point navigation concepts that would appear to support NASA's current exploration vision. One concept is a Lunar Relay Satellite operating in the vicinity of Earth-Moon L2, providing Earth-to-lunar far-side and long-range surface-to-surface navigation and communications capability. Reference [1] lists several advantages of such a system in comparison to a lunar orbiting relay satellite constellation. Among these are one or two vs. many satellites for coverage, simplified acquisition and tracking due to very low relative motion, much longer contact times, and simpler antenna pointing. An obvious additional advantage of such a system is that uninterrupted links to Earth avoid performing critical maneuvers "in the blind." Another concept described is the use of Earth-Moon L1 for lunar orbit rendezvous, rather than low lunar orbit as was done for Apollo. This rendezvous technique would avoid the large plane changes and high fuel costs associated with high-latitude landing sites and long stay times. Earth-Moon L1 also offers unconstrained launch windows from the lunar surface. Farquhar claims this technique requires only slightly higher fuel cost than low lunar orbit rendezvous for short-stay equatorial landings. Farquhar also describes an Interplanetary Transportation System that would use libration points as terminals for an interplanetary shuttle. This approach would offer increased operational flexibility in terms of launch windows, rendezvous, aborts, etc. in comparison to elliptical orbit transfers. More recently, other works including Folta [3] and Howell [4] have shown that patching together unstable trajectories departing Earth-Moon libration points with stable trajectories approaching planetary libration points may also offer lower overall fuel costs than elliptical orbit transfers.
Another concept Farquhar described was a Deep Space Relay at Earth-Moon L4 and/or L5 that would serve as a high-data-rate optical navigation and communications relay satellite. The advantages in comparison to a geosynchronous relay are minimal Earth occultation, distance from large noise sources on Earth, easier pointing due to smaller relative velocity, and a large baseline for interferometry if both L4 and L5 are used.

  12. Wavelet transform analysis of the small-scale X-ray structure of the cluster Abell 1367

    NASA Technical Reports Server (NTRS)

    Grebeney, S. A.; Forman, W.; Jones, C.; Murray, S.

    1995-01-01

    We have developed a new technique based on a wavelet transform analysis to quantify the small-scale (less than a few arcminutes) X-ray structure of clusters of galaxies. We apply this technique to the ROSAT position sensitive proportional counter (PSPC) and Einstein high-resolution imager (HRI) images of the central region of the cluster Abell 1367 to detect sources embedded within the diffuse intracluster medium. In addition to detecting sources and determining their fluxes and positions, we show that the wavelet analysis allows a characterization of the sources' extents. In particular, the wavelet scale at which a given source achieves a maximum signal-to-noise ratio in the wavelet images provides an estimate of the angular extent of the source. Accounting for the widely varying point response of the ROSAT PSPC as a function of off-axis angle requires a quantitative measurement of the source size and a comparison to a calibration derived from the analysis of a Deep Survey image. We therefore assume that each source can be described as an isotropic two-dimensional Gaussian and use the wavelet amplitudes, at different scales, to determine the equivalent Gaussian full width at half-maximum (FWHM), and its uncertainty, for each source. In our analysis of the ROSAT PSPC image, we detect 31 X-ray sources above the diffuse cluster emission (within a radius of 24 arcmin), 16 of which are apparently associated with cluster galaxies and two with serendipitous background quasars. We find that the angular extents of 11 sources exceed the nominal width of the PSPC point-spread function. Four of these extended sources were previously detected by Bechtold et al. (1983) as 1 sec scale features using the Einstein HRI. The same wavelet analysis technique was applied to the Einstein HRI image. We detect 28 sources in the HRI image, of which nine are extended. Eight of the extended sources correspond to sources previously detected by Bechtold et al.
Overall, using both the PSPC and the HRI observations, we detect 16 extended features, of which nine have galaxies coincident with the X-ray-measured positions (within the positional error circles). These extended sources have luminosities in the range (3 - 30) x 10(exp 40) ergs/s and gas masses of approximately (1 - 30) x 10(exp 9) solar masses, if the X-rays are of thermal origin. We confirm the presence of extended features in A1367 first reported by Bechtold et al. (1983). The nature of these systems remains uncertain. The luminosities are large if the emission is attributed to single galaxies, and several of the extended features have no associated galaxy counterparts. The extended features may be associated with galaxy groups, as suggested by Canizares, Fabbiano, & Trinchieri (1987), although the number required is large.
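    The conversion between the fitted Gaussian width and the quoted FWHM is the standard one, and is worth making explicit since the abstract leans on it. A minimal sketch (the function names are illustrative, not from the paper):

```python
import math

# Standard conversion for a Gaussian profile:
# FWHM = 2 * sqrt(2 * ln 2) * sigma  (about 2.3548 * sigma)
FWHM_PER_SIGMA = 2.0 * math.sqrt(2.0 * math.log(2.0))

def fwhm_from_sigma(sigma):
    """Full width at half-maximum of a Gaussian with standard deviation sigma."""
    return FWHM_PER_SIGMA * sigma

def sigma_from_fwhm(fwhm):
    """Inverse conversion, e.g. for mapping a calibrated FWHM back to sigma."""
    return fwhm / FWHM_PER_SIGMA
```

In the analysis described above, the measured FWHM would additionally be compared against the off-axis PSPC point-spread-function calibration before a source is declared extended.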

  13. Incentive Analysis for Clean Water Act Reauthorization: Point Source/Nonpoint Source Trading for Nutrient Discharge Reductions (1992)

    EPA Pesticide Factsheets

    Paper focuses on trading schemes in which regulated point sources are allowed to avoid upgrading their pollution control technology to meet water quality-based effluent limits if they pay for equivalent (or greater) reductions in nonpoint source pollution.

  14. Microbial Source Module (MSM): Documenting the Science and Software for Discovery, Evaluation, and Integration

    EPA Science Inventory

    The Microbial Source Module (MSM) estimates microbial loading rates to land surfaces from non-point sources, and to streams from point sources for each subwatershed within a watershed. A subwatershed, the smallest modeling unit, represents the common basis for information consume...

  15. Crosscutting Airborne Remote Sensing Technologies for Oil and Gas and Earth Science Applications

    NASA Technical Reports Server (NTRS)

    Aubrey, A. D.; Frankenberg, C.; Green, R. O.; Eastwood, M. L.; Thompson, D. R.; Thorpe, A. K.

    2015-01-01

    Airborne imaging spectroscopy has evolved dramatically since the 1980s as a robust remote sensing technique used to generate two-dimensional maps of surface properties over large spatial areas. Traditional applications for passive airborne imaging spectroscopy include interrogation of surface composition, such as mapping of vegetation diversity and surface geological composition. Two recent applications are particularly relevant to the needs of both the oil and gas and government sectors: quantification of surficial hydrocarbon thickness in aquatic environments and mapping of atmospheric greenhouse gas components. These techniques provide valuable capabilities for mapping petroleum seepage as well as for detecting and quantifying fugitive emissions. New empirical data that provide insight into the source strength of anthropogenic methane will be reviewed, with particular emphasis on the evolving constraints enabled by new methane remote sensing techniques. Contemporary studies identify high-strength point sources as significant contributors to the national methane inventory and underscore the need for high-performance remote sensing technologies that provide quantitative leak detection. Imaging sensors that map spatial distributions of methane anomalies provide effective means to detect, localize, and quantify fugitive leaks. Airborne remote sensing instruments provide the unique combination of high spatial resolution (<1 m) and large coverage required to directly attribute methane emissions to individual emission sources; this capability cannot currently be achieved using spaceborne sensors. In this study, results from recent NASA remote sensing field experiments focused on point-source leak detection will be highlighted, including existing quantitative capabilities for oil and methane using state-of-the-art airborne remote sensing instruments.
While these capabilities are of interest to NASA for assessment of environmental impact and global climate change, industry similarly seeks to detect and localize leaks of both oil and methane across operating fields. In some cases, higher sensitivities desired for upstream and downstream applications can only be provided by new airborne remote sensing instruments tailored specifically for a given application. There exists a unique opportunity for alignment of efforts between commercial and government sectors to advance the next generation of instruments to provide more sensitive leak detection capabilities, including those for quantitative source strength determination.

  16. 40 CFR 414.90 - Applicability; description of the subcategory of direct discharge point sources that use end-of...

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... ORGANIC CHEMICALS, PLASTICS, AND SYNTHETIC FIBERS Direct Discharge Point Sources That Use End-of-Pipe... subcategory of direct discharge point sources that use end-of-pipe biological treatment. 414.90 Section 414.90... that use end-of-pipe biological treatment. The provisions of this subpart are applicable to the process...

  17. 40 CFR Table 4 to Part 455 - BAT and NSPS Effluent Limitations for Priority Pollutants for Direct Discharge Point Sources That...

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 29 2010-07-01 2010-07-01 false BAT and NSPS Effluent Limitations for Priority Pollutants for Direct Discharge Point Sources That use End-of-Pipe Biological Treatment 4 Table 4... Limitations for Priority Pollutants for Direct Discharge Point Sources That use End-of-Pipe Biological...

  18. Multi-rate, real time image compression for images dominated by point sources

    NASA Technical Reports Server (NTRS)

    Huber, A. Kris; Budge, Scott E.; Harris, Richard W.

    1993-01-01

    An image compression system recently developed for compression of digital images dominated by point sources is presented. Encoding consists of minimum-mean removal, vector quantization, adaptive threshold truncation, and modified Huffman encoding. Simulations are presented showing that the peaks corresponding to point sources can be transmitted losslessly for low signal-to-noise ratios (SNR) and high point source densities while maintaining a reduced output bit rate. Encoding and decoding hardware has been built and tested which processes 552,960 12-bit pixels per second at compression rates of 10:1 and 4:1. Simulation results are presented for the 10:1 case only.
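
The encoding chain above can be illustrated with a toy sketch: remove a per-block base level, then keep only residuals above a threshold losslessly, so that point-source peaks survive while low-level background is truncated. The block size, threshold value, and use of a simple minimum (rather than the paper's minimum-mean statistic, vector quantizer, and modified Huffman coder) are illustrative assumptions, not the authors' design.

```python
import numpy as np

def encode_block(block, threshold):
    """Base-level removal plus threshold truncation (illustrative only).

    Returns the block minimum and a sparse list of (index, residual)
    pairs for samples exceeding the threshold; these 'peaks' are kept
    losslessly, while sub-threshold residuals are discarded.
    """
    base = int(block.min())                  # base-level (minimum) removal
    residual = block.astype(int) - base
    peaks = [(i, int(v)) for i, v in enumerate(residual) if v > threshold]
    return base, peaks

def decode_block(base, peaks, length):
    """Rebuild the block from the base level and the lossless peaks."""
    out = np.full(length, base, dtype=int)
    for i, v in peaks:
        out[i] = base + v
    return out

# a block dominated by background plus two bright point sources
block = np.array([100, 102, 101, 900, 99, 100, 870, 101])
base, peaks = encode_block(block, threshold=50)
recon = decode_block(base, peaks, len(block))
```

At the stated throughput, a 10:1 compression of 552,960 12-bit pixels per second corresponds to an output rate of roughly 0.66 Mbit/s.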

  19. [Spatial heterogeneity and classified control of agricultural non-point source pollution in Huaihe River Basin].

    PubMed

    Zhou, Liang; Xu, Jian-Gang; Sun, Dong-Qi; Ni, Tian-Hua

    2013-02-01

    Agricultural non-point source pollution is a major contributor to river deterioration; identifying and concentrating control on key source areas is therefore among the most effective approaches to non-point source pollution control. This study adopts an inventory method to analyze four kinds of pollution sources and their emission intensities for chemical oxygen demand (COD), total nitrogen (TN), and total phosphorus (TP) in 173 counties (cities, districts) in the Huaihe River Basin. The four pollution sources are livestock breeding, rural life, farmland cultivation, and aquaculture. The paper mainly addresses identification of pollution-sensitive areas, key pollution sources, and their spatial distribution characteristics through cluster analysis, sensitivity evaluation, and spatial analysis. A geographic information system (GIS) and SPSS were used to carry out this study. The results show that the COD, TN and TP emissions from agricultural non-point sources in the Huaihe River Basin in 2009 were 206.74×10⁴ t, 66.49×10⁴ t and 8.74×10⁴ t, respectively; the emission intensities were 7.69, 2.47 and 0.32 t·hm⁻²; and the proportions of COD, TN and TP emissions were 73%, 24% and 3%. The major pollution sources of COD, TN and TP were livestock breeding and rural life. The sensitive areas and priority pollution control areas for non-point source pollution within the basin are sub-basins of the upper tributaries of the Huaihe River, such as the Shahe, Yinghe, Beiru, Jialu and Qingyi Rivers, and livestock breeding is the key pollution source in these priority control areas. Finally, the paper concludes that rural life has the highest pollution contribution rate, while such comprehensive, mixed-source pollution is the type hardest to control.

  20. MAD Adaptive Optics Imaging of High-luminosity Quasars: A Pilot Project

    NASA Astrophysics Data System (ADS)

    Liuzzo, E.; Falomo, R.; Paiano, S.; Treves, A.; Uslenghi, M.; Arcidiacono, C.; Baruffolo, A.; Diolaiti, E.; Farinato, J.; Lombini, M.; Moretti, A.; Ragazzoni, R.; Brast, R.; Donaldson, R.; Kolb, J.; Marchetti, E.; Tordo, S.

    2016-08-01

    We present near-IR images of five luminous quasars at z ˜ 2 and one at z ˜ 4 obtained with an experimental adaptive optics (AO) instrument at the European Southern Observatory Very Large Telescope. The observations are part of a program aimed at demonstrating the capabilities of multi-conjugated adaptive optics imaging combined with the use of natural guide stars for high spatial resolution studies on large telescopes. The observations were mostly obtained under poor seeing conditions, except in two cases. In spite of these nonoptimal conditions, the resulting images of point sources have cores of FWHM ˜ 0.2 arcsec. We are able to characterize the host galaxy properties for two sources and set stringent upper limits on the galaxy luminosity for the others. We also report on the expected capabilities for investigating the host galaxies of distant quasars with AO systems coupled with future Extremely Large Telescopes. Detailed simulations show that it will be possible to characterize compact (2-3 kpc) quasar host galaxies for quasi-stellar objects at z = 2 with nucleus K-magnitude spanning from 15 to 20 (corresponding to absolute magnitude -31 to -26) and host galaxies that are 4 mag fainter than their nuclei.

  1. Observing two dark accelerators around the Galactic Centre with Fermi Large Area Telescope

    NASA Astrophysics Data System (ADS)

    Hui, C. Y.; Yeung, P. K. H.; Ng, C. W.; Lin, L. C. C.; Tam, P. H. T.; Cheng, K. S.; Kong, A. K. H.; Chernyshov, D. O.; Dogiel, V. A.

    2016-04-01

    We report the results from a detailed γ-ray investigation of the field of two `dark accelerators', HESS J1745-303 and HESS J1741-302, with 6.9 yr of data obtained by the Fermi Large Area Telescope. For HESS J1745-303, we found that its MeV-GeV emission originates mainly from `Region A' of the TeV feature. Its γ-ray spectrum can be modelled with a single power law with a photon index of Γ ˜ 2.5 from a few hundred MeV to TeV energies. Moreover, an elongated feature, which extends from `Region A' towards the north-west for ˜1.3°, is discovered for the first time. The orientation of this feature is similar to that of a large-scale atomic/molecular gas distribution. For HESS J1741-302, our analysis does not yield any MeV-GeV counterpart for this unidentified TeV source. On the other hand, we have serendipitously detected a new point source, Fermi J1740.1-3013. Its spectrum is apparently curved, resembling that of a γ-ray pulsar, which suggests a possible association with PSR B1737-20 or PSR J1739-3023.

  2. Estimating population exposure to ambient polycyclic aromatic hydrocarbon in the United States - Part II: Source apportionment and cancer risk assessment.

    PubMed

    Zhang, Jie; Wang, Peng; Li, Jingyi; Mendola, Pauline; Sherman, Seth; Ying, Qi

    2016-12-01

    A revised Community Multiscale Air Quality (CMAQ) model was developed to simulate the emission, reactions, transport, deposition and gas-to-particle partitioning processes of 16 priority polycyclic aromatic hydrocarbons (PAHs), as described in Part I of this two-part series. The updated CMAQ model was applied in this study to quantify the contributions of different emission sources to the predicted PAH concentrations and excess cancer risk in the United States (US) in 2011. The cancer risk in the continental US due to inhalation exposure to outdoor naphthalene (NAPH) and seven larger carcinogenic PAHs (cPAHs) was predicted to be significant. The incremental lifetime cancer risk (ILCR) exceeds 1×10⁻⁵ in many urban and industrial areas. Exposure to PAHs was estimated to result in 5704 (608-10,800) excess lifetime cancer cases. Point sources not related to energy generation or oil and gas processes account for approximately 31% of the excess cancer cases, followed by non-road engines with an 18.6% contribution. Contributions of residential wood combustion (16.2%) are similar to those of transportation-related sources (mostly motor vehicles with small contributions from railway and marine vessels; 13.4%). The oil and gas industry emissions, although large contributors to high regional concentrations of cPAHs, are responsible for only 4.3% of the excess cancer cases, which is similar to the contributions of non-US sources (6.8%) and non-point sources (7.2%). The power generation units have the smallest impact on excess cancer risk, with contributions of approximately 2.3%. Copyright © 2016 Elsevier Ltd. All rights reserved.

  3. A method to analyze "source-sink" structure of non-point source pollution based on remote sensing technology.

    PubMed

    Jiang, Mengzhen; Chen, Haiying; Chen, Qinghui

    2013-11-01

    With the purpose of providing a scientific basis for environmental planning for non-point source pollution prevention and control, and of improving pollution regulation efficiency, this paper establishes a Grid Landscape Contrast Index based on the Location-weighted Landscape Contrast Index, following "source-sink" theory. The spatial distribution of non-point source pollution around the Jiulongjiang Estuary was then mapped using high-resolution remote sensing images. The results showed that in 2008 the "source" area for nitrogen and phosphorus in the Jiulongjiang Estuary was 534.42 km², and the "sink" area was 172.06 km². The "source" of non-point source pollution was distributed mainly over Xiamen island, most of Haicang, the east of Jiaomei and the river banks of Gangwei and Shima; the "sink" was distributed over the southwest of Xiamen island and the west of Shima. Generally speaking, the "source" intensity weakens with increasing distance from the sea boundary, while the "sink" intensity strengthens. Copyright © 2013 Elsevier Ltd. All rights reserved.
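
The "source-sink" idea behind such indices can be sketched as a log-ratio of distance-weighted "source" landscape contributions to "sink" contributions for each grid cell. The function below is a schematic illustration under that assumption only; it is not the authors' exact Grid Landscape Contrast Index, and all areas and distances are hypothetical.

```python
import math

def grid_landscape_contrast(units):
    """Schematic "source-sink" contrast for one grid cell.

    units: iterable of (area_km2, distance_km, kind), where kind is
    'source' (landscape exporting nutrients, e.g. cropland or built-up
    land) or 'sink' (landscape retaining nutrients, e.g. forest or
    wetland).  Nearer units are weighted more heavily; a positive index
    flags a cell dominated by pollution "sources".
    """
    source = sum(a / d for a, d, k in units if k == 'source')
    sink = sum(a / d for a, d, k in units if k == 'sink')
    return math.log(source / sink)

# hypothetical neighbourhood: two source patches, one sink patch
units = [(2.0, 1.0, 'source'), (0.5, 2.0, 'source'), (1.0, 1.0, 'sink')]
index = grid_landscape_contrast(units)
```

With these hypothetical numbers the distance-weighted source contribution (2.25) exceeds the sink contribution (1.0), so the index is positive and the cell would be flagged as a "source" cell.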

  4. A large point-source outbreak of Salmonella Typhimurium linked to chicken, pork and salad rolls from a Vietnamese bakery in Sydney.

    PubMed

    Norton, Sophie; Huhtinen, Essi; Conaty, Stephen; Hope, Kirsty; Campbell, Brett; Tegel, Marianne; Boyd, Rowena; Cullen, Beth

    2012-04-01

    In January 2011, Sydney South West Public Health Unit was notified of a large number of people presenting with gastroenteritis over two days at a local hospital emergency department (ED). Case-finding was conducted through hospital EDs and general practitioners, which resulted in the notification of 154 possible cases, from which 83 outbreak cases were identified. Fifty-eight cases were interviewed about demographics, symptom profile and food histories. Stool samples were collected and submitted for analysis. An inspection was conducted at a Vietnamese bakery and food samples were collected and submitted for analysis. Further case ascertainment occurred to ensure control measures were successful. Of the 58 interviewed cases, the symptom profile included diarrhoea (100%), fever (79.3%) and vomiting (89.7%). Salmonella Typhimurium multiple-locus variable-number tandem-repeat analysis (MLVA) type 3-10-8-9-523 was identified in 95.9% (47/49) of stool samples. Cases reported consuming chicken, pork or salad rolls from a single Vietnamese bakery. Environmental swabs detected widespread contamination with Salmonella at the premises. This was a large point-source outbreak associated with the consumption of Vietnamese-style pork, chicken and salad rolls. These foods have been responsible for significant outbreaks in the past. The typical ingredients of raw egg butter or mayonnaise and pate are often implicated, as are the food-handling practices in food outlets. This indicates the need for education in better food-handling practices, including the benefits of using safer products. Ongoing surveillance will monitor the success of new food regulations introduced in New South Wales during 2011 for improving food-handling practices and reducing foodborne illness.

  5. A large point-source outbreak of Salmonella Typhimurium linked to chicken, pork and salad rolls from a Vietnamese bakery in Sydney

    PubMed Central

    Huhtinen, Essi; Conaty, Stephen; Hope, Kirsty; Campbell, Brett; Tegel, Marianne; Boyd, Rowena; Cullen, Beth

    2012-01-01

    Introduction In January 2011, Sydney South West Public Health Unit was notified of a large number of people presenting with gastroenteritis over two days at a local hospital emergency department (ED). Methods Case-finding was conducted through hospital EDs and general practitioners, which resulted in the notification of 154 possible cases, from which 83 outbreak cases were identified. Fifty-eight cases were interviewed about demographics, symptom profile and food histories. Stool samples were collected and submitted for analysis. An inspection was conducted at a Vietnamese bakery and food samples were collected and submitted for analysis. Further case ascertainment occurred to ensure control measures were successful. Results Of the 58 interviewed cases, the symptom profile included diarrhoea (100%), fever (79.3%) and vomiting (89.7%). Salmonella Typhimurium multiple-locus variable-number tandem-repeat analysis (MLVA) type 3-10-8-9-523 was identified in 95.9% (47/49) of stool samples. Cases reported consuming chicken, pork or salad rolls from a single Vietnamese bakery. Environmental swabs detected widespread contamination with Salmonella at the premises. Discussion This was a large point-source outbreak associated with the consumption of Vietnamese-style pork, chicken and salad rolls. These foods have been responsible for significant outbreaks in the past. The typical ingredients of raw egg butter or mayonnaise and pate are often implicated, as are the food-handling practices in food outlets. This indicates the need for education in better food-handling practices, including the benefits of using safer products. Ongoing surveillance will monitor the success of new food regulations introduced in New South Wales during 2011 for improving food-handling practices and reducing foodborne illness. PMID:23908908

  6. Imaging a Fault Boundary System Using Controlled-Source Data Recorded on a Large-N Seismic Array

    NASA Astrophysics Data System (ADS)

    Paschall, O. C.; Chen, T.; Snelson, C. M.; Ralston, M. D.; Rowe, C. A.

    2016-12-01

    The Source Physics Experiment (SPE) is a series of chemical explosions conducted in southern Nevada with the objective of improving nuclear explosion monitoring. Five chemical explosions have been conducted thus far in granite, the most recent being SPE-5 on April 26, 2016. The SPE series will improve our understanding of seismic wave propagation (primarily S-waves) due to explosions, and allow better discrimination between explosions and background seismicity such as earthquakes. The Large-N portion of the project consists of 996 receiver stations; half of the stations were vertical-component geophones and the other half were three-component geophones. All receivers were deployed for 30 days and recorded the SPE-5 shot, earthquakes, noise, and an additional controlled source: a large weight drop, a 13,000 kg modified industrial pile driver. In this study, we undertake reflection processing of waveforms from the weight drop, as recorded by a line of sensors extracted from the Large-N array. The profile is 1.2 km in length with 25 m station spacing and 100 m shot point spacing. This profile crosses the Boundary Fault that separates the granite body from an alluvial basin, a strong acoustic impedance boundary that scatters seismic energy into S-waves and coda. The data were processed with traditional seismic reflection processing methods, including filtering, deconvolution, and stacking. The stack will be used to extract the locations of the splays of the Boundary Fault and provide geologic constraints to the modeling and simulation teams within the SPE project.

  7. An improved export coefficient model to estimate non-point source phosphorus pollution risks under complex precipitation and terrain conditions.

    PubMed

    Cheng, Xian; Chen, Liding; Sun, Ranhao; Jing, Yongcai

    2018-05-15

    To control non-point source (NPS) pollution, it is important to estimate NPS pollution exports and identify sources of pollution. Precipitation and terrain have large impacts on the export and transport of NPS pollutants. We established an improved export coefficient model (IECM) to estimate the amount of agricultural and rural NPS total phosphorus (TP) exported from the Luanhe River Basin (LRB) in northern China. The TP concentrations of rivers from 35 selected catchments in the LRB were used to test the model's explanation capacity and accuracy. The simulation results showed that, in 2013, the average TP export was 57.20 t at the catchment scale. The mean TP export intensity in the LRB was 289.40 kg/km², which was much higher than those of other basins in China. In the LRB topographic regions, the TP export intensity was the highest in the south Yanshan Mountains and was followed by the plain area, the north Yanshan Mountains, and the Bashang Plateau. Among the three pollution categories, the contribution ratios to TP export were, from high to low, the rural population (59.44%), livestock husbandry (22.24%), and land-use types (18.32%). Among all ten pollution sources, the contribution ratios from the rural population (59.44%), pigs (14.40%), and arable land (10.52%) ranked as the top three sources. This study provides information that decision makers and planners can use to develop sustainable measures for the prevention and control of NPS pollution in semi-arid regions.
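
A generic export coefficient model of the kind described takes the form L = α·β·Σᵢ Eᵢ·Aᵢ, with per-source export coefficients Eᵢ scaled by precipitation and terrain correction factors α and β. The sketch below illustrates only that structure; the coefficients and source extents are hypothetical, not the values fitted for the Luanhe River Basin.

```python
def tp_export(sources, alpha, beta):
    """Schematic export coefficient model: L = alpha * beta * sum_i E_i * A_i.

    E_i is the export coefficient of source i (kg TP per unit per year),
    A_i its extent (persons, head, or hectares), and alpha and beta are
    dimensionless precipitation and terrain correction factors.
    """
    return alpha * beta * sum(e * a for e, a in sources)

# hypothetical catchment: rural population, pigs, and arable land
sources = [
    (0.6, 50_000),   # 0.6 kg TP per person per year, 50,000 residents
    (1.1, 8_000),    # 1.1 kg TP per pig per year, 8,000 pigs
    (0.9, 12_000),   # 0.9 kg TP per hectare per year, 12,000 ha arable land
]
load_kg = tp_export(sources, alpha=1.2, beta=0.8)  # wet year, gentle terrain
```

With these made-up inputs the unscaled load is 49,600 kg, and the correction factors (1.2 × 0.8 = 0.96) reduce it slightly, illustrating how precipitation and terrain modulate the raw coefficient sum.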

  8. Bright and durable field emission source derived from refractory Taylor cones

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hirsch, Gregory

    A method of producing field emitters having improved brightness and durability, relying on the creation of a liquid Taylor cone from electrically conductive materials having high melting points. The method calls for melting the end of a wire substrate with a focused laser beam while imposing a high positive potential on the material. The resulting molten Taylor cone is subsequently rapidly quenched by cessation of the laser power. Rapid quenching is facilitated in large part by radiative cooling, resulting in structures having characteristics closely matching those of the original liquid Taylor cone. Frozen Taylor cones thus obtained yield desirable tip end forms for field emission sources in electron beam applications. Regeneration of the frozen Taylor cones in situ is readily accomplished by repeating the initial formation procedures. The high-temperature liquid Taylor cones can also be employed as bright ion sources with chemical elements previously considered impractical to implement.

  9. Effective pollutant emission heights for atmospheric transport modelling based on real-world information.

    PubMed

    Pregger, Thomas; Friedrich, Rainer

    2009-02-01

    Emission data needed as input for the operation of atmospheric models should not only be spatially and temporally resolved; another important feature is the effective emission height, which significantly influences modelled concentration values. Unfortunately this information, which is especially relevant for large point sources, is usually not available, and simple assumptions are often used in atmospheric models. As a contribution to improving knowledge of emission heights, this paper provides typical default values for the driving parameters stack height and flue gas temperature, velocity and flow rate for different industrial sources. The results were derived from an analysis of probably the most comprehensive database of real-world stack information existing in Europe, based on German industrial data. A bottom-up calculation of effective emission heights applying equations used for Gaussian dispersion models shows significant differences depending on source and air pollutant, and compared to approaches currently used for atmospheric transport modelling.
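
A bottom-up calculation of this kind combines the physical stack height with a plume-rise term driven by the flue gas parameters. The sketch below uses the widely known Briggs buoyant plume-rise parameterization as an illustration; it is not necessarily the exact set of Gaussian-model equations the authors applied, and the stack parameters are hypothetical.

```python
G = 9.81  # gravitational acceleration, m/s^2

def buoyancy_flux(v_s, d_s, t_s, t_a):
    """Briggs buoyancy flux F (m^4/s^3) from exit velocity v_s (m/s),
    stack diameter d_s (m), flue gas temperature t_s and ambient
    temperature t_a (both in kelvin)."""
    return G * v_s * d_s ** 2 * (t_s - t_a) / (4.0 * t_s)

def effective_height(h_stack, v_s, d_s, t_s, t_a, u, x=1000.0):
    """Effective emission height = physical stack height + Briggs plume
    rise at downwind distance x (m), for wind speed u (m/s)."""
    f = buoyancy_flux(v_s, d_s, t_s, t_a)
    delta_h = 1.6 * f ** (1.0 / 3.0) * x ** (2.0 / 3.0) / u
    return h_stack + delta_h

# hypothetical large point source: 150 m stack, 400 K flue gas exiting at
# 15 m/s through a 5 m diameter stack, 288 K ambient air, 5 m/s wind
h_eff = effective_height(150.0, 15.0, 5.0, 400.0, 288.0, 5.0)
```

For these inputs the buoyant rise adds roughly 200 m to the physical stack height, which illustrates why effective emission heights for large point sources can differ so strongly from the simple stack-height assumptions often used in transport models.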

  10. Open Source Drug Discovery: Highly Potent Antimalarial Compounds Derived from the Tres Cantos Arylpyrroles

    PubMed Central

    2016-01-01

    The development of new antimalarial compounds remains a pivotal part of the strategy for malaria elimination. Recent large-scale phenotypic screens have provided a wealth of potential starting points for hit-to-lead campaigns. One such public set is explored, employing an open source research mechanism in which all data and ideas were shared in real time, anyone was able to participate, and patents were not sought. One chemical subseries was found to exhibit oral activity but contained a labile ester that could not be replaced without loss of activity, and the original hit exhibited remarkable sensitivity to minor structural change. A second subseries displayed high potency, including activity within gametocyte and liver stage assays, but at the cost of low solubility. As an open source research project, unexplored avenues are clearly identified and may be explored further by the community; new findings may be cumulatively added to the present work. PMID:27800551

  11. Sources of anthropogenic platinum-group elements (PGE): Automotive catalysts versus PGE-processing industries.

    PubMed

    Zereini, F; Dirksen, F; Skerstupp, B; Urban, H

    1998-01-01

    Soil samples from the area of Hanau (Hessen, Germany) were analyzed for anthropogenic platinum-group elements (PGE). The results confirm the existence of two different sources of anthropogenic PGE: (1) automotive catalysts and (2) PGE-processing plants. The two sources emit qualitatively and quantitatively different PGE spectra and PGE interelemental ratios (especially the Pt/Rh ratio). Elevated PGE values due to automotive catalysts are restricted to a narrow range of roadside soils, whereas those due to PGE-processing plants display large-area dispersion. The emitted PGE-containing particles from automotive catalysts are subject to transport by wind and water, whereas those from PGE-processing plants are preferentially transported by wind; this points to a different aerodynamic particle size. Pt, Pd, and Rh concentrations along motorways depend on the amount of traffic and the driving characteristics.

  12. Unbound motion on a Schwarzschild background: Practical approaches to frequency domain computations

    NASA Astrophysics Data System (ADS)

    Hopper, Seth

    2018-03-01

    Gravitational perturbations due to a point particle moving on a static black hole background are naturally described in Regge-Wheeler gauge. The first-order field equations reduce to a single master wave equation for each radiative mode. The master function satisfying this wave equation is a linear combination of the metric perturbation amplitudes with a source term arising from the stress-energy tensor of the point particle. The original master functions were found by Regge and Wheeler (odd parity) and Zerilli (even parity). Subsequent work by Moncrief and then Cunningham, Price and Moncrief introduced new master variables which allow time domain reconstruction of the metric perturbation amplitudes. Here, I explore the relationship between these different functions and develop a general procedure for deriving new higher-order master functions from ones already known. The benefit of higher-order functions is that their source terms always converge faster at large distance than their lower-order counterparts. This makes for a dramatic improvement in both the speed and accuracy of frequency domain codes when analyzing unbound motion.
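
In the frequency domain, each radiative (ℓ, m) mode of the perturbation described above reduces to a single master wave equation of the schematic form

```latex
\left[\frac{d^{2}}{dr_{*}^{2}} + \omega^{2} - V_{\ell}(r)\right]\Psi_{\ell m \omega}(r_{*}) = S_{\ell m \omega}(r),
```

where r_* is the tortoise coordinate, V_ℓ is the Regge-Wheeler (odd-parity) or Zerilli (even-parity) potential, and the source term S is built from the point particle's stress-energy tensor. The higher-order master functions discussed in the abstract differ in how Ψ is assembled from the metric perturbation amplitudes, and hence in how rapidly S falls off at large distance.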

  13. Optimal pupil design for confocal microscopy

    NASA Astrophysics Data System (ADS)

    Patel, Yogesh G.; Rajadhyaksha, Milind; DiMarzio, Charles A.

    2010-02-01

    Confocal reflectance microscopy may enable screening and diagnosis of skin cancers noninvasively and in real time, as an adjunct to biopsy and pathology. Current instruments are large, complex, and expensive. A simpler, confocal line-scanning microscope may accelerate the translation of confocal microscopy into clinical and surgical dermatology. A confocal reflectance microscope may use a beamsplitter, transmitting and detecting through the full pupil, or a divided pupil (theta configuration), with half used for transmission and half for detection. The divided pupil may offer better sectioning and contrast. We present a Fourier optics model and compare the on-axis irradiance of a confocal point-scanning microscope in both pupil configurations, optimizing the profile of a Gaussian beam in a circular or semicircular aperture. We repeat both calculations with a cylindrical lens, which focuses the source to a line. The variable parameter is the fill factor, h, the ratio of the 1/e² diameter of the Gaussian beam to the diameter of the full aperture. The optimal values of h for point scanning are 0.90 (full aperture) and 0.66 (half-aperture). For line scanning, the fill factors are 1.02 (full) and 0.52 (half). Additional parameters to consider are the optimal location of the point-source beam in the divided-pupil configuration, the optimal line width for the line source, and the width of the aperture in the divided-pupil configuration. Additional figures of merit are field of view and sectioning. Use of optimal designs is critical in comparing the experimental performance of the different configurations.
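
The full-pupil point-scanning optimum can be reproduced with a simple scalar-diffraction estimate: for a unit-power Gaussian beam truncated by a circular aperture, the on-axis focal irradiance is proportional to h²(1 − e^(−1/h²))², which peaks near h ≈ 0.89, consistent with the reported value of 0.90. This one-line model is a sketch, not the paper's full Fourier-optics calculation.

```python
import numpy as np

def axial_irradiance(h):
    """Relative on-axis focal irradiance versus fill factor h (the 1/e^2
    beam diameter divided by the aperture diameter) for a unit-power
    Gaussian beam truncated by a circular aperture.  Scalar-diffraction
    sketch; a small beam wastes aperture, a large beam wastes power."""
    return h ** 2 * (1.0 - np.exp(-1.0 / h ** 2)) ** 2

# scan fill factors and locate the maximum numerically
fills = np.linspace(0.3, 2.0, 2000)
best = fills[np.argmax(axial_irradiance(fills))]
```

The numerical optimum (about 0.89) reflects the usual trade-off: under-filling the pupil enlarges the focal spot, while over-filling clips power at the aperture edge.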

  14. Pulsed x-ray imaging of high-density objects using a ten picosecond high-intensity laser driver

    NASA Astrophysics Data System (ADS)

    Rusby, D. R.; Brenner, C. M.; Armstrong, C.; Wilson, L. A.; Clarke, R.; Alejo, A.; Ahmed, H.; Butler, N. M. H.; Haddock, D.; Higginson, A.; McClymont, A.; Mirfayzi, S. R.; Murphy, C.; Notley, M.; Oliver, P.; Allott, R.; Hernandez-Gomez, C.; Kar, S.; McKenna, P.; Neely, D.

    2016-10-01

    Point-like sources of X-rays that are pulsed (sub-nanosecond), high energy (up to several MeV) and bright are very promising for industrial and security applications where imaging through large and dense objects is required. Highly penetrating X-rays can be produced by electrons that have been accelerated by a high-intensity laser pulse incident onto a thin solid target. We have used a pulse length of 10 ps to accelerate electrons and create a bright X-ray source. The bremsstrahlung temperature was measured for laser intensities of 8.5-12×10¹⁸ W/cm². These X-rays were subsequently used to image high-density materials using an image plate and a pixelated scintillator system.

  15. The acoustic field in the ionosphere caused by an underground nuclear explosion

    NASA Astrophysics Data System (ADS)

    Krasnov, V. M.; Drobzheva, Ya. V.

    2005-07-01

    The problem of describing the generation and propagation of an infrasonic wave emitted by a finite extended source in the inhomogeneous absorbing atmosphere is the focus of this paper. It is of interest since the role of infrasonic waves in the energy balance of the upper atmosphere remains largely unknown. We present an algorithm which allows adaptation of a point source model for calculating the infrasonic field from an underground nuclear explosion at ionospheric altitudes. Our calculations appear to agree remarkably well with HF Doppler sounding data measured for underground nuclear explosions at the Semipalatinsk Test Site. We show that the temperature and ionospheric electron density perturbations caused by an acoustic wave from an underground nuclear explosion can reach 10% of background levels.

  16. Near-field interferometry of a free-falling nanoparticle from a point-like source

    NASA Astrophysics Data System (ADS)

    Bateman, James; Nimmrichter, Stefan; Hornberger, Klaus; Ulbricht, Hendrik

    2014-09-01

    Matter-wave interferometry performed with massive objects elucidates their wave nature and thus tests the quantum superposition principle at large scales. Whereas standard quantum theory places no limit on particle size, alternative, yet untested theories—conceived to explain the apparent quantum to classical transition—forbid macroscopic superpositions. Here we propose an interferometer with a levitated, optically cooled and then free-falling silicon nanoparticle in the mass range of one million atomic mass units, delocalized over >150 nm. The scheme employs the near-field Talbot effect with a single standing-wave laser pulse as a phase grating. Our analysis, which accounts for all relevant sources of decoherence, indicates that this is a viable route towards macroscopic high-mass superpositions using available technology.
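
The scales involved can be checked with two textbook relations: the de Broglie wavelength λ = h/(mv) and the Talbot time T = m·d²/h for a grating of period d, which sets the free-fall time needed for near-field revivals. The specific numbers below (particle velocity, grating period) are illustrative assumptions, not the parameters of the proposed experiment.

```python
H = 6.62607015e-34    # Planck constant, J s
AMU = 1.66053907e-27  # atomic mass unit, kg

def de_broglie(mass_amu, velocity):
    """de Broglie wavelength lambda = h / (m v), in metres."""
    return H / (mass_amu * AMU * velocity)

def talbot_time(mass_amu, grating_period):
    """Talbot time T = m d^2 / h, in seconds, for grating period d (m)."""
    return mass_amu * AMU * grating_period ** 2 / H

# illustrative numbers only: a 1e6 amu particle moving at 1 m/s, and an
# optical phase grating with a hypothetical ~100 nm period
lam = de_broglie(1e6, 1.0)
t_talbot = talbot_time(1e6, 100e-9)
```

Even at a leisurely 1 m/s, the de Broglie wavelength of a million-amu particle is below a picometre, which is why the near-field Talbot scheme (sensitive on the scale of the grating period rather than the wavelength itself) is attractive for such massive objects.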

  17. The Study of Clusters of Galaxies and Large Scale Structures

    NASA Technical Reports Server (NTRS)

    1998-01-01

    Many research projects have been initiated and completed under support of this program; the results are summarized below. The work on the ROSAT Deep Survey has been successfully completed, and a number of interesting results have been established within this joint MPE, Cal Tech, JHU, ST ScI, ESO collaboration. First, a very large fraction, 70-80 percent, of the X-ray background has been directly resolved into point sources. We have derived a new log N-log S for X-ray sources and have measured a source density of 970 sources per square degree at a limiting flux level. Care was taken in these studies to accurately model and measure the effects of source confusion. This was possible because of our observing strategy, which included both deep PSPC and HRI observations. No evidence of a population of narrow emission line galaxies has been established, but some evidence for the evolution of low luminosity AGN (Seyfert galaxies) has been reported. The work on the ROSAT All Sky Survey Northern Cluster Survey has been substantially concluded, but publication of the list has been held up by the need to analyze newly re-calibrated data; this should result in publication over the next year. During the past year we submitted a paper to the Astrophysical Journal which utilized a sample of clusters originally selected from the ROSAT All-sky survey at redshifts greater than 0.3. This sample was studied with ASCA to determine temperatures and luminosities.

  18. Emissions of Volatile Organic Compounds (VOCs) Associated with Natural Gas Production in the Uintah Basin, Utah

    NASA Astrophysics Data System (ADS)

    Warneke, C.; Geiger, F.; Zahn, A.; Graus, M.; De Gouw, J. A.; Gilman, J. B.; Lerner, B. M.; Roberts, J. M.; Edwards, P. M.; Dube, W. P.; Brown, S. S.; Peischl, J.; Ryerson, T. B.; Williams, E. J.; Petron, G.; Kofler, J.; Sweeney, C.; Karion, A.; Dlugokencky, E. J.

    2012-12-01

    Technological advances such as hydraulic fracturing have led to a rapid increase in the production of natural gas from several basins in the Rocky Mountain West, including the Denver-Julesburg basin in Colorado, the Uintah basin in Utah and the Upper Green River basin in Wyoming. There are significant concerns about the impact of natural gas production on the atmosphere, including (1) emissions of methane, which determine the net climate impact of this energy source, (2) emissions of reactive hydrocarbons and nitrogen oxides, and their contribution to photochemical ozone formation, and (3) emissions of air toxics with direct health effects. The Energy & Environment - Uintah Basin Wintertime Ozone Study (UBWOS) in 2012 was focused on addressing these issues. During UBWOS, measurements of volatile organic compounds (VOCs) were made using proton-transfer-reaction mass spectrometry (PTR-MS) instruments from a ground site and a mobile laboratory. Measurements at the ground site showed that mixing ratios of VOCs related to oil and gas extraction were greatly enhanced in the Uintah basin, including multi-day periods of elevated mixing ratios and concentrated short-term plumes. Diurnal variations were observed, with large mixing ratios during the night caused by low nighttime mixing heights, and a shift in wind direction during the day. The mobile laboratory sampled a wide variety of individual parts of the gas production infrastructure, including active gas wells and various processing plants. Included in those point sources was a new well that was sampled by the mobile laboratory 11 times within two weeks. This new well had previously been hydraulically fractured and had an active flow-back pond. Very high mixing ratios of aromatics were observed close to the flow-back pond. The measurements of the mobile laboratory are used to determine the source composition of the individual point sources, and those are compared to the VOC enhancement ratios observed at the ground site.
The source composition of most point sources was similar to the typical enhancement ratios observed at the ground site, whereas the new well with the flow-back pond showed a somewhat different composition.

  19. Open Source Cloud-Based Technologies for BIM

    NASA Astrophysics Data System (ADS)

    Logothetis, S.; Karachaliou, E.; Valari, E.; Stylianidis, E.

    2018-05-01

    This paper presents a Cloud-based open source system for storing and processing data from a 3D survey approach. More specifically, we provide an online service for viewing, storing and analysing BIM. Cloud technologies were used to develop a web interface as a BIM data centre, which can handle large BIM data using a server. The server can be accessed by many users through various electronic devices anytime and anywhere, so they can view online 3D models using browsers. Nowadays, Cloud computing is progressively engaged in facilitating BIM-based collaboration between the multiple stakeholders and disciplinary groups involved in complicated Architectural, Engineering and Construction (AEC) projects. In addition, the development of Open Source Software (OSS) has been growing rapidly and its use is becoming increasingly widespread. Although BIM and Cloud technologies are extensively known and used, there is a lack of integrated open source Cloud-based platforms able to support all stages of BIM processes. The present research aims to create an open source Cloud-based BIM system that is able to handle geospatial data. In this effort, only open source tools will be used: from the starting point of creating the 3D model with FreeCAD to its online presentation through BIMserver. Python plug-ins will be developed to link the two software packages, and will be distributed freely to a large community of professionals. The research work will be completed by benchmarking four Cloud-based BIM systems, all of which present remarkable results: Autodesk BIM 360, BIMserver, Graphisoft BIMcloud and Onuma System.

  20. Remotely measuring populations during a crisis by overlaying two data sources

    PubMed Central

    Bharti, Nita; Lu, Xin; Bengtsson, Linus; Wetter, Erik; Tatem, Andrew J.

    2015-01-01

    Background: Societal instability and crises can cause rapid, large-scale movements. These movements are poorly understood and difficult to measure but strongly impact health. Data on these movements are important for planning response efforts. We retrospectively analyzed movement patterns surrounding a 2010 humanitarian crisis caused by internal political conflict in Côte d'Ivoire using two different methods. Methods: We used two remote measures, nighttime lights satellite imagery and anonymized mobile phone call detail records, to assess average population sizes as well as dynamic population changes. These data sources detect movements across different spatial and temporal scales. Results: The two data sources showed strong agreement in average measures of population sizes. Because the spatiotemporal resolution of the data sources differed, we were able to obtain measurements on long- and short-term dynamic elements of populations at different points throughout the crisis. Conclusions: Using complementary, remote data sources to measure movement shows promise for future use in humanitarian crises. We conclude with challenges of remotely measuring movement and provide suggestions for future research and methodological developments. PMID:25733558

  1. A Comparative Analysis of Vibrio cholerae Contamination in Point-of-Drinking and Source Water in a Low-Income Urban Community, Bangladesh

    PubMed Central

    Ferdous, Jannatul; Sultana, Rebeca; Rashid, Ridwan B.; Tasnimuzzaman, Md.; Nordland, Andreas; Begum, Anowara; Jensen, Peter K. M.

    2018-01-01

    Bangladesh is a cholera-endemic country with a population at high risk of cholera. Toxigenic and non-toxigenic Vibrio cholerae (V. cholerae) can cause cholera and cholera-like diarrheal illness and outbreaks. Drinking water is one of the primary routes of cholera transmission in Bangladesh. The aim of this study was to conduct a comparative assessment of the presence of V. cholerae between point-of-drinking water and source water, and to investigate the variability of the virulence profile, using molecular methods, in a densely populated low-income settlement of Dhaka, Bangladesh. Water samples were collected and tested for V. cholerae from “point-of-drinking” and “source” in 477 study households during routine visits at 6-week intervals over a period of 14 months. We studied the virulence profiles of V. cholerae-positive water samples using 22 different virulence gene markers present in toxigenic O1/O139 and non-O1/O139 V. cholerae using polymerase chain reaction (PCR). A total of 1,463 water samples were collected, with 1,082 samples from point-of-drinking water in 388 households and 381 samples from 66 water sources. V. cholerae was detected in 10% of point-of-drinking water samples and in 9% of source water samples. Twenty-three percent of households and 38% of the sources were positive for V. cholerae in at least one visit. Samples collected from point-of-drinking water and linked sources at a 7-day interval showed significantly higher odds (P < 0.05) of V. cholerae presence in point-of-drinking water than in source water [OR = 17.24 (95% CI = 7.14–42.89)]. Based on the 7-day interval data, 53% (17/32) of source water samples were negative for V. cholerae while the linked point-of-drinking water samples were positive. There were significantly higher odds (p < 0.05) of the presence of V. cholerae O1 [OR = 9.13 (95% CI = 2.85–29.26)] and V. cholerae O139 [OR = 4.73 (95% CI = 1.19–18.79)] in source water samples than in point-of-drinking water samples. Contamination of water at the point of drinking is less likely to depend on contamination at the water source. Hygiene education interventions and programs should focus on water at the point of drinking, including repeated cleaning of drinking vessels, which is of paramount importance in preventing cholera. PMID:29616005
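Odds ratios with 95% confidence intervals of the kind reported above (e.g. OR = 17.24, 95% CI = 7.14–42.89) are conventionally computed from a 2×2 contingency table with a Wald interval on the log odds ratio. A minimal sketch; the cell counts in the usage comment are hypothetical, not the study's data:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI from a 2x2 table:
    a = exposed cases, b = exposed non-cases,
    c = unexposed cases, d = unexposed non-cases."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lower = math.exp(math.log(or_) - z * se)
    upper = math.exp(math.log(or_) + z * se)
    return or_, lower, upper

# Hypothetical counts: 10/20 positive at point-of-drinking vs 5/25 at source.
print(odds_ratio_ci(10, 10, 5, 20))
```

The Wald interval is only one of several options (exact and score intervals exist); it is shown here because it is the simplest closed-form construction.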

  2. Searches for point sources in the Galactic Center region

    NASA Astrophysics Data System (ADS)

    di Mauro, Mattia; Fermi-LAT Collaboration

    2017-01-01

    Several groups have demonstrated the existence of an excess in the gamma-ray emission around the Galactic Center (GC) with respect to the predictions from a variety of Galactic Interstellar Emission Models (GIEMs) and point source catalogs. The origin of this excess, peaked at a few GeV, is still under debate. A possible interpretation is that it comes from a population of unresolved Millisecond Pulsars (MSPs) in the Galactic bulge. We investigate the detection of point sources in the GC region using new tools that the Fermi-LAT Collaboration is developing in the context of searches for Dark Matter (DM) signals. These new tools perform very fast scans, iteratively testing for additional point sources at each pixel of the region of interest. We also show how to discriminate between point sources and structural residuals from the GIEM. We apply these methods to the GC region considering different GIEMs and testing the DM and MSP interpretations of the GC excess. Additionally, we create a list of promising MSP candidates that could represent the brightest sources of an MSP bulge population.
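A toy analogue of such an iterative pixel-by-pixel search (not the Fermi-LAT tools themselves, which evaluate a full likelihood test statistic at each pixel) is a greedy peak scan over a residual map: repeatedly record the brightest pixel above a detection threshold and mask its neighborhood. The threshold and mask size below are illustrative assumptions:

```python
def iterative_point_source_scan(residual_map, threshold, psf_halfwidth=1):
    """Greedy scan: repeatedly pick the brightest pixel above threshold,
    record it as a candidate point source, and zero out its neighborhood
    (a crude stand-in for subtracting the point-spread function)."""
    grid = [row[:] for row in residual_map]  # work on a copy
    sources = []
    while True:
        peak = max((v, i, j) for i, row in enumerate(grid)
                   for j, v in enumerate(row))
        if peak[0] < threshold:
            break
        v, i, j = peak
        sources.append((i, j, v))
        for di in range(-psf_halfwidth, psf_halfwidth + 1):
            for dj in range(-psf_halfwidth, psf_halfwidth + 1):
                if 0 <= i + di < len(grid) and 0 <= j + dj < len(grid[0]):
                    grid[i + di][j + dj] = 0.0
    return sources
```

In a real analysis each candidate would be kept only if adding it significantly improves the likelihood; here the threshold on residual brightness plays that role.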

  3. Seabird colonies as relevant sources of pollutants in Antarctic ecosystems: Part 1 - Trace elements.

    PubMed

    Cipro, C V Z; Bustamante, P; Petry, M V; Montone, R C

    2018-08-01

    Global distillation is classically identified as the main process responsible for contaminant inputs to polar ecosystems. Mercury (Hg) and other trace elements (TEs) also have natural sources, whereas the biologically mediated input is typically ignored. However, bioaccumulation and biomagnification, combined with the fact that seabirds gather in large colonies and excrete on land, might represent an important local input of TEs. A previous work suggested these colonies as sources of not only nutrients but also organic contaminants. To evaluate a similar hypothesis for TEs, samples of lichen (n = 55), mosses (n = 58) and soil (n = 37) were collected at 13 locations within the South Shetlands Archipelago during the austral summers of 2013-14 and 2014-15. They were divided into "colony" (within the colony itself for soil and bordering it for vegetation) and "control" (at least 50 m away from colony interference) groups, and analysed for TEs (As, Cd, Co, Cr, Cu, Fe, Hg, Mn, Ni, Pb, Se, V, and Zn) and stable isotopes (C and N). In most cases, soil seems the best matrix for assessing colonies as TE sources, as it presented more differences between control and colony sites than vegetation. Colonies are clearly local sources of organic matter, Cd, Hg and likely of As, Se and Zn. Conversely, Co, Cr, Ni and Pb presumably come from other sources, natural or anthropogenic. In general, isotopes were more useful for interpreting vegetation data due to fractionation of absorbed animal-derived organic matter. Other local Hg sources could be inferred from high levels at control sites, location and wind patterns. Copyright © 2018 Elsevier Ltd. All rights reserved.

  4. Methane sources in Hong Kong - identification by mobile measurement and isotopic analysis

    NASA Astrophysics Data System (ADS)

    Fisher, Rebecca; Brownlow, Rebecca; Lowry, David; Lanoisellé, Mathias; Nisbet, Euan

    2017-04-01

    Hong Kong (22.4°N, 114.1°E) has a wide variety of natural and anthropogenic sources of methane within a small, densely populated area (1106 km2, population ˜7.3 million). These include emissions from important source categories that have previously been poorly studied in tropical regions, such as agriculture and wetlands. According to inventories (EDGAR v.4.2), anthropogenic methane emissions are mainly from solid waste disposal, wastewater disposal and fugitive leaks from oil and gas. Methane mole fraction was mapped across Hong Kong during a mobile measurement campaign in July 2016. This technique allows rapid detection of the locations of large methane emissions, which can help focus efforts to reduce emissions. Methane is mostly emitted from large point sources, with the highest concentrations measured close to active landfill sites, sewage works and a gas processing plant. Air samples were collected close to sources (landfills, sewage works, a gas processing plant, wetland, rice, traffic, cows and water buffalo) and analysed by mass spectrometry to determine the δ13C isotopic signatures, extending the database of δ13C isotopic signatures of methane from tropical regions. Isotopic signatures of methane sources in Hong Kong range from -70 ‰ (cows) to -37 ‰ (gas processing). Regular sampling of air for methane mole fraction and δ13C has recently begun at the Swire Institute of Marine Science, situated at Cape d'Aguilar in the southeast of Hong Kong Island. This station receives air from important source regions: southerly marine air from the South China Sea in summer and northerly continental air in winter, and measurements will allow an integrated assessment of emissions from the wider region.
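A standard way to derive a source δ13C signature from air samples spanning a range of mole fractions is a Keeling plot: regress δ13C against 1/[CH4], and the intercept estimates the source signature. This is offered as a sketch of that common technique, not as the specific procedure used in the study; the background and source values in the test of concept are synthetic:

```python
def keeling_intercept(ch4, d13c):
    """Ordinary least-squares fit of d13C (permil) against 1/[CH4];
    the y-intercept estimates the d13C signature of the added source."""
    x = [1.0 / c for c in ch4]
    n = len(x)
    mx = sum(x) / n
    my = sum(d13c) / n
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, d13c))
             / sum((xi - mx) ** 2 for xi in x))
    return my - slope * mx  # intercept = source signature

# Synthetic two-component mixing: background 1900 ppb at -47 permil,
# plus varying amounts of a source at -55 permil.
bg_c, bg_d, src_d = 1900.0, -47.0, -55.0
added = (100.0, 500.0, 1000.0)
ch4 = [bg_c + a for a in added]
d13c = [(bg_c * bg_d + a * src_d) / c for a, c in zip(added, ch4)]
print(keeling_intercept(ch4, d13c))  # recovers the source signature
```

For two-component mixing, δ13C is exactly linear in 1/[CH4], which is why the intercept recovers the source value in this synthetic case.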

  5. Evaluation of Rock Surface Characterization by Means of Temperature Distribution

    NASA Astrophysics Data System (ADS)

    Seker, D. Z.; Incekara, A. H.; Acar, A.; Kaya, S.; Bayram, B.; Sivri, N.

    2017-12-01

    Rocks occur in many different types formed over many years. Close-range photogrammetry is a technique widely used and often preferred over other conventional methods. In this method, overlapping photographs are the basic data source for the point cloud, which in turn is the main data source for the 3D model and offers analysts the possibility of automation. Due to the irregular and complex structures of rocks, representing their surfaces with a large number of points is more effective. Color differences on the rock surfaces, whether caused by weathering or occurring naturally, make it possible to produce a sufficient number of points from the photographs. Objects such as small trees, shrubs and weeds on and around the surface also contribute to this. These differences and properties are important for the efficient operation of the pixel-matching algorithms that generate an adequate point cloud from photographs. In this study, the possibility of using temperature distribution to interpret the roughness of a rock surface, one of the parameters representing the surface, was investigated. A small rock, 3 m x 1 m in size, located at the ITU Ayazaga Campus was selected as the study object. Two different methods were used. The first is the production of a choropleth map by interpolation, using the temperature values of control points marked on the object, which were also used in the 3D model. The 3D object model was created with the help of terrestrial photographs and 12 control points marked on the object and coordinated. Temperature values of the control points were measured with an infrared thermometer and used as the basic data source for creating the choropleth map by interpolation. Temperature values range from 32 to 37.2 degrees. In the second method, the 3D object model was produced by means of terrestrial thermal photographs. For this purpose, several terrestrial photographs were taken with a thermal camera and a 3D object model showing the temperature distribution was created. The temperature distributions in the two applications are almost identical in position. Areas on the rock surface where roughness values are higher than in the surroundings can be clearly identified. When the temperature distributions produced by both methods are evaluated, it is observed that as the roughness on the surface increases, the temperature increases.

  6. Simulations of cm-wavelength Sunyaev-Zel'dovich galaxy cluster and point source blind sky surveys and predictions for the RT32/OCRA-f and the Hevelius 100-m radio telescope

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lew, Bartosz; Kus, Andrzej; Birkinshaw, Mark

    We investigate the effectiveness of blind surveys for radio sources and galaxy cluster thermal Sunyaev-Zel'dovich effects (TSZEs) using the four-pair, beam-switched OCRA-f radiometer on the 32-m radio telescope in Poland. The predictions are based on mock maps that include the cosmic microwave background, TSZEs from hydrodynamical simulations of large scale structure formation, and unresolved radio sources. We validate the mock maps against observational data, and examine the limitations imposed by simplified physics. We estimate the effects of source clustering towards galaxy clusters from NVSS source counts around Planck-selected cluster candidates, and include appropriate correlations in our mock maps. The study allows us to quantify the effects of halo line-of-sight alignments, source confusion, and telescope angular resolution on the detections of TSZEs. We perform a similar analysis for the planned 100-m Hevelius radio telescope (RTH) equipped with a 49-beam radio camera and operating at frequencies up to 22 GHz. We find that RT32/OCRA-f will be suitable for small-field blind radio source surveys, and will detect 33 (+17/−11) new radio sources brighter than 0.87 mJy at 30 GHz in a 1 deg² field at >5σ CL during a one-year, non-continuous, observing campaign, taking account of Polish weather conditions. It is unlikely that any galaxy cluster will be detected at 3σ CL in such a survey. A 60-deg² survey at 15 GHz with the RTH, with field coverage of 2² beams per pixel, would find <1.5 galaxy clusters per year brighter than 60 μJy (at 3σ CL), and would detect about 3.4 × 10⁴ point sources brighter than 1 mJy at 5σ CL, with confusion causing flux density errors ≲2% (20%) in 68% (95%) of the detected sources. A primary goal of the planned RTH will be a wide-area (π sr) radio source survey at 15 GHz. This survey will detect nearly 3 × 10⁵ radio sources at 5σ CL down to 1.3 mJy, and tens of galaxy clusters, in one year of operation with typical weather conditions. Confusion will affect the measured flux densities by ≲1.5% (16%) for 68% (95%) of the point sources. We also gauge the impact of the RTH by investigating its performance if equipped with the existing RT32 receivers, and the performance of the RT32 equipped with the RTH radio camera.

  7. Recent updates in developing a statistical pseudo-dynamic source-modeling framework to capture the variability of earthquake rupture scenarios

    NASA Astrophysics Data System (ADS)

    Song, Seok Goo; Kwak, Sangmin; Lee, Kyungbook; Park, Donghee

    2017-04-01

    Predicting the intensity and variability of strong ground motions is a critical element of seismic hazard assessment. The characteristics and variability of the earthquake rupture process may be a dominant factor in determining the intensity and variability of near-source strong ground motions. Song et al. (2014) demonstrated that the variability of earthquake rupture scenarios can be effectively quantified in the framework of 1-point and 2-point statistics of earthquake source parameters, constrained by rupture dynamics and past events. The developed pseudo-dynamic source modeling schemes were also validated against recorded ground motion data of past events and empirical ground motion prediction equations (GMPEs) on the broadband platform (BBP) developed by the Southern California Earthquake Center (SCEC). Recently, we improved the computational efficiency of the developed pseudo-dynamic source-modeling scheme by adopting the nonparametric co-regionalization algorithm, originally introduced and applied in geostatistics. We also investigated the effect of the earthquake rupture process on near-source ground motion characteristics in the framework of 1-point and 2-point statistics, focusing particularly on the forward directivity region. Finally, we will discuss whether pseudo-dynamic source modeling can reproduce the variability (standard deviation) of empirical GMPEs, and the efficiency of 1-point and 2-point statistics in addressing the variability of ground motions.

  8. Trends in PM2.5 emissions, concentrations and apportionments in Detroit and Chicago

    NASA Astrophysics Data System (ADS)

    Milando, Chad; Huang, Lei; Batterman, Stuart

    2016-03-01

    PM2.5 concentrations throughout much of the U.S. have decreased over the last 15 years, but emissions and concentration trends can vary by location and source type. Such trends should be understood to inform air quality management and policies. This work examines trends in emissions, concentrations and source apportionments in two large Midwest U.S. cities: Detroit, Michigan, and Chicago, Illinois. Annual and seasonal trends were investigated using National Emission Inventory (NEI) data for 2002 to 2011, speciated ambient PM2.5 data from 2001 to 2014, apportionments from positive matrix factorization (PMF) receptor modeling, and quantile regression. Over the study period, county-wide data suggest emissions from point sources decreased (Detroit) or held constant (Chicago), while emissions from on-road mobile sources were constant (Detroit) or increased (Chicago); however, changes in methodology limit the interpretation of inventory trends. Ambient concentration data also suggest source and apportionment trends, e.g., annual median concentrations of PM2.5 in the two cities declined by 3.2-3.6%/yr (faster than national trends), and sulfate concentrations (due to coal-fired facilities and other point source emissions) declined even faster; in contrast, organic and elemental carbon (tracers of gasoline and diesel vehicle exhaust) declined more slowly or held constant. The PMF models identified nine sources in Detroit and eight in Chicago, the most important being secondary sulfate, secondary nitrate and vehicle emissions. A minor crustal dust source, metals sources, and a biomass source also were present in both cities. These apportionments showed that the median relative contributions from secondary sulfate sources decreased by 4.2-5.5% per year in Detroit and Chicago, while contributions from metals sources, biomass sources, and vehicles increased by 1.3-9.2% per year. 
This first application of quantile regression to trend analyses of speciated PM2.5 data reveals that source contributions to PM2.5 varied as PM2.5 concentrations decreased, and that the fraction of PM2.5 due to emissions from vehicles and other local emissions has increased. Each data source has uncertainties, but emissions, monitoring and PMF data provide complementary information that can help to discern trends and identify contributing sources. Study results emphasize the need to target specific sources in policies and regulations aimed at decreasing PM2.5 concentrations in urban areas.
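Quantile regression, as applied above to speciated PM2.5 trends, fits a trend line by minimizing the pinball (check) loss for a chosen quantile τ rather than the squared error. A brute-force sketch over a candidate grid of intercepts and slopes; real analyses solve this with linear programming (e.g. statsmodels' QuantReg), and the grid here is purely illustrative:

```python
def pinball_loss(y_true, y_pred, tau):
    """Check loss: residuals above the line are weighted tau,
    residuals below the line are weighted (1 - tau)."""
    return sum(tau * (yt - yp) if yt >= yp else (tau - 1) * (yt - yp)
               for yt, yp in zip(y_true, y_pred))

def quantile_trend(x, y, tau, slopes, intercepts):
    """Pick (intercept, slope) minimizing the pinball loss for
    quantile tau over a candidate grid (brute force)."""
    best = None
    for a in intercepts:
        for b in slopes:
            loss = pinball_loss(y, [a + b * xi for xi in x], tau)
            if best is None or loss < best[0]:
                best = (loss, a, b)
    return best[1], best[2]

# Noise-free toy data y = 1 + 2x: every quantile line coincides.
print(quantile_trend([0.0, 1.0, 2.0, 3.0], [1.0, 3.0, 5.0, 7.0],
                     0.5, slopes=[1.0, 1.5, 2.0, 2.5],
                     intercepts=[0.0, 0.5, 1.0, 1.5]))
```

Fitting several values of τ (e.g. 0.1, 0.5, 0.9) is what lets a trend analysis ask whether low- and high-concentration days are declining at different rates.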

  9. Point and Compact Hα Sources in the Interior of M33

    NASA Astrophysics Data System (ADS)

    Moody, J. Ward; Hintz, Eric G.; Joner, Michael D.; Roming, Peter W. A.; Hintz, Maureen L.

    2017-12-01

    A variety of interesting objects, such as Wolf-Rayet stars, tight OB associations, planetary nebulae, X-ray binaries, etc., can be discovered as point or compact sources in Hα surveys. How these objects are distributed through a galaxy sheds light on the galaxy's star formation rate and history, mass distribution, and dynamics. The nearby galaxy M33 is an excellent place to study the distribution of Hα-bright point sources in a flocculent spiral galaxy. We have reprocessed an archived WIYN continuum-subtracted Hα image of the inner 6.5′ × 6.5′ of M33 and, employing both eye and machine searches, have tabulated sources with a flux greater than approximately 10⁻¹⁵ erg cm⁻² s⁻¹. We have effectively recovered previously mapped H II regions and have identified 152 unresolved point sources and 122 marginally resolved compact sources, of which 39 have not been previously identified in any archive. An additional 99 Hα sources were found to have sufficient archival flux values to generate a spectral energy distribution (SED). Using the SED, flux values, Hα flux value, and compactness, we classified 67 of these sources.

  10. [Estimation of nonpoint source pollutant loads and optimization of the best management practices (BMPs) in the Zhangweinan River basin].

    PubMed

    Xu, Hua-Shan; Xu, Zong-Xue; Liu, Pin

    2013-03-01

    One of the key techniques in establishing and implementing a TMDL (total maximum daily load) is to use a hydrological model to quantify non-point source pollutant loads, establish BMP scenarios, and reduce those loads. Non-point source pollutant loads in different year types (wet, normal and dry years) were estimated using the SWAT model in the Zhangweinan River basin, and the spatial distribution characteristics of the loads were analyzed on the basis of the simulation results. During wet years, total nitrogen (TN) and total phosphorus (TP) accounted for 0.07% and 27.24% of the total non-point source pollutant loads, respectively. Spatially, agricultural and residential land on steep slopes contributes most of the non-point source pollutant load in the basin. Relative to the baseline period, 47 BMP scenarios were set up to simulate the reduction efficiency for 5 kinds of pollutants (organic nitrogen, organic phosphorus, nitrate nitrogen, dissolved phosphorus and mineral phosphorus) in 8 priority-control subbasins. By comparing the cost-effectiveness of the different BMP scenarios, constructing vegetation-type ditches was identified as the best measure to reduce TN and TP, with unit pollutant reduction costs of 16.11-151.28 yuan x kg(-1) for TN and 100-862.77 yuan x kg(-1) for TP, making it the most cost-effective of the 47 BMP scenarios. The results could provide a scientific basis and technical support for environmental protection and the sustainable utilization of water resources in the Zhangweinan River basin.
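The cost-effectiveness comparison above amounts to ranking scenarios by cost per kilogram of pollutant removed. A minimal sketch; the scenario names and figures below are hypothetical, not the study's actual costs:

```python
def rank_bmps(scenarios):
    """scenarios: name -> (annual cost in yuan, pollutant reduction in kg).
    Returns names sorted by cost per kg reduced, most cost-effective first."""
    return sorted(scenarios,
                  key=lambda name: scenarios[name][0] / scenarios[name][1])

# Hypothetical BMP scenarios: (cost, kg of TN removed per year).
scenarios = {
    "vegetated_ditch": (16000.0, 1000.0),  # 16 yuan/kg
    "buffer_strip": (50000.0, 1000.0),     # 50 yuan/kg
    "retention_pond": (30000.0, 500.0),    # 60 yuan/kg
}
print(rank_bmps(scenarios))
```

The same ratio (cost divided by mass removed) is what the reported 16.11-151.28 yuan x kg(-1) figures express for the vegetation-ditch measure.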

  11. Site correction of stochastic simulation in southwestern Taiwan

    NASA Astrophysics Data System (ADS)

    Lun Huang, Cong; Wen, Kuo Liang; Huang, Jyun Yan

    2014-05-01

    The peak ground acceleration (PGA) of a disastrous earthquake is of concern in both civil engineering and seismology. At present, ground motion prediction equations are widely used by engineers for PGA estimation. However, the local site effect is another important factor in strong motion prediction. For example, in 1985 Mexico City, 400 km from the epicenter, suffered massive damage due to seismic wave amplification in the local alluvial layers (Anderson et al., 1986). Past studies have used the stochastic method and shown that it performs well in simulating ground motion at rock sites (Beresnev and Atkinson, 1998a; Roumelioti and Beresnev, 2003). In this study, site correction was conducted using empirical transfer functions applied to the rock-site response from the stochastic point-source (Boore, 2005) and finite-fault (Boore, 2009) methods. The errors between the simulated and observed Fourier spectra and PGA were calculated. We further compared the estimated PGA to the result calculated from a ground motion prediction equation. The earthquake data used in this study were recorded by the Taiwan Strong Motion Instrumentation Program (TSMIP) from 1991 to 2012; the study area is located in southwestern Taiwan. The empirical transfer function was generated by calculating the spectral ratio between an alluvial site and a rock site (Borcherdt, 1970). Due to the lack of a reference rock-site station in this area, the rock-site ground motion was instead generated with a stochastic point-source model. Several target events were then chosen for stochastic point-source simulation of the halfspace response, and the empirical transfer function for each station was multiplied by the simulated halfspace response. Finally, we focused on two target events: the 1999 Chi-Chi earthquake (Mw=7.6) and the 2010 Jiashian earthquake (Mw=6.4). Considering that a large event may involve a complex rupture mechanism, the asperity and delay time of each sub-fault must be taken into account. Both the stochastic point-source and the finite-fault model were used to check the result of our correction.
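The empirical transfer function described above is the ratio of the amplitude spectra recorded at the alluvial site and at the rock site. A self-contained sketch using a plain DFT; a real analysis would use windowed, smoothed FFT spectra rather than this minimal version:

```python
import cmath

def dft_amplitude(x):
    """Amplitude spectrum of a real record (first n//2 bins of a plain DFT)."""
    n = len(x)
    return [abs(sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                    for t in range(n)))
            for k in range(n // 2)]

def empirical_transfer_function(soil_record, rock_record):
    """Spectral ratio (alluvial site / rock site): the classic empirical
    transfer function. Bins with zero rock amplitude are returned as 0."""
    soil = dft_amplitude(soil_record)
    rock = dft_amplitude(rock_record)
    return [s / r if r > 0 else 0.0 for s, r in zip(soil, rock)]

# If the soil record is exactly twice the rock record, the ratio is 2
# at every usable frequency bin.
rock = [1.0, -0.5, 2.0, 0.25, -1.5, 0.75, 0.1, -2.0]
soil = [2.0 * v for v in rock]
print(empirical_transfer_function(soil, rock))
```

Multiplying this ratio into a simulated rock-site (halfspace) spectrum, as the study does, imposes the empirical site amplification on the synthetic motion.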

  12. Potency backprojection

    NASA Astrophysics Data System (ADS)

    Okuwaki, R.; Kasahara, A.; Yagi, Y.

    2017-12-01

    The backprojection (BP) method has been one of the most powerful tools for tracking the seismic-wave sources of large and mega-earthquakes. The BP method projects waveforms onto a possible source point by stacking them with the theoretical travel-time shifts between the source point and the stations. Following the BP method, the hybrid backprojection (HBP) method was developed to enhance the depth resolution of projected images and mitigate the dummy imaging of depth phases, shortcomings of the BP method, by stacking cross-correlation functions of the observed waveforms and theoretically calculated Green's functions (GFs). The signal intensity of the BP/HBP image at a source point is related to how much of the observed waveform was radiated from that point. Since the amplitude of the GF associated with the slip rate increases with depth, as the rigidity increases with depth, the intensity of the BP/HBP image inherently has a depth dependence. To make a direct comparison of the BP/HBP image with the corresponding slip distribution inferred from a waveform inversion, and to discuss the rupture properties along the fault drawn from the high- and low-frequency waveforms with the BP/HBP methods and the waveform inversion, respectively, it is desirable to have variants of the BP/HBP methods that directly image the potency-rate-density distribution. Here we propose new formulations of the BP/HBP methods, which image the distribution of potency-rate density by introducing alternative normalizing factors in the conventional formulations. For the BP method, the observed waveform is normalized by the maximum amplitude of the P-phase of the corresponding GF. For the HBP method, we normalize the cross-correlation function by the squared sum of the GF. The normalized waveforms or cross-correlation functions are then stacked over all stations to enhance the signal-to-noise ratio. We will present performance tests of the new formulations using synthetic waveforms and real data from the Mw 8.3 2015 Illapel, Chile earthquake, and further discuss the limitations of the new BP/HBP methods proposed in this study when they are used to explore the rupture properties of earthquakes.
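The core BP operation, shifting each station's record by its theoretical travel time to a trial source point and stacking, can be sketched as follows. This is a toy, unnormalized version; a full BP additionally grids over many trial points, applies the normalizations discussed above, and may use nth-root or correlation stacking:

```python
def backproject(waveforms, shifts):
    """Shift each station's record by its theoretical travel time to a
    candidate source point (in samples) and stack linearly; a large
    stack amplitude suggests energy radiated from that point."""
    n = min(len(w) - s for w, s in zip(waveforms, shifts))
    stack = [0.0] * n
    for w, s in zip(waveforms, shifts):
        for t in range(n):
            stack[t] += w[t + s]
    return [v / len(waveforms) for v in stack]

# Two stations see the same pulse, arriving 2 samples later at station B.
wA = [0.0, 0.0, 1.0, 0.0, 0.0, 0.0]
wB = [0.0, 0.0, 0.0, 0.0, 1.0, 0.0]
print(max(backproject([wA, wB], [0, 2])))  # correct shifts: coherent stack
print(max(backproject([wA, wB], [0, 0])))  # wrong shifts: incoherent stack
```

Only the travel-time shifts appropriate to the true source point align the pulses, which is why the stack amplitude maps out where the radiation came from.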

  13. Sensor-Based Optimized Control of the Full Load Instability in Large Hydraulic Turbines

    PubMed Central

    Presas, Alexandre; Valero, Carme; Egusquiza, Eduard

    2018-01-01

    Hydropower plants are of paramount importance for the integration of intermittent renewable energy sources in the power grid. In order to match the energy generated and consumed, large hydraulic turbines have to work under off-design conditions, which may lead to dangerous unstable operating points involving the hydraulic, mechanical and electrical system. Under these conditions, the stability of the grid and the safety of the power plant itself can be compromised. For many Francis turbines, one of these critical points, which usually limits the maximum output power, is the full load instability. Therefore, these machines usually work far away from this unstable point, reducing the effective operating range of the unit. In order to extend the operating range of the machine, working closer to this point with a reasonable safety margin, it is of paramount importance to monitor and to control relevant parameters of the unit, which have to be obtained with an accurate sensor acquisition strategy. Within the framework of a large EU project, field tests in a large Francis turbine located in Canada (rated power of 444 MW) have been performed. Many different sensors were used to monitor several working parameters of the unit over its full operating range. For these tests, more than 80 signals, including ten different types of sensors and several operating signals that define the operating point of the unit, were simultaneously acquired. The present study focuses on the optimization of the acquisition strategy, which includes the type, number, location and acquisition frequency of the sensors and the corresponding signal analysis to detect the full load instability and to prevent the unit from reaching this point. A systematic approach to determine this strategy has been followed. It has been found that some indicators obtained with different types of sensors are linearly correlated with the oscillating power. 
The optimized strategy has been determined based on the correlation characteristics (linearity, sensitivity and reactivity), the simplicity of the installation and the acquisition frequency necessary. Finally, an economical and easily implementable protection system based on the resulting optimized acquisition strategy is proposed. This system, which can be used in a generic Francis turbine with a similar full load instability, permits one to extend the operating range of the unit by working close to the instability with a reasonable safety margin. PMID:29601512

  14. Sensor-Based Optimized Control of the Full Load Instability in Large Hydraulic Turbines.

    PubMed

    Presas, Alexandre; Valentin, David; Egusquiza, Mònica; Valero, Carme; Egusquiza, Eduard

    2018-03-30

    Hydropower plants are of paramount importance for the integration of intermittent renewable energy sources in the power grid. In order to match the energy generated and consumed, Large hydraulic turbines have to work under off-design conditions, which may lead to dangerous unstable operating points involving the hydraulic, mechanical and electrical system. Under these conditions, the stability of the grid and the safety of the power plant itself can be compromised. For many Francis Turbines one of these critical points, that usually limits the maximum output power, is the full load instability. Therefore, these machines usually work far away from this unstable point, reducing the effective operating range of the unit. In order to extend the operating range of the machine, working closer to this point with a reasonable safety margin, it is of paramount importance to monitor and to control relevant parameters of the unit, which have to be obtained with an accurate sensor acquisition strategy. Within the framework of a large EU project, field tests in a large Francis Turbine located in Canada (rated power of 444 MW) have been performed. Many different sensors were used to monitor several working parameters of the unit for all its operating range. Particularly for these tests, more than 80 signals, including ten type of different sensors and several operating signals that define the operating point of the unit, were simultaneously acquired. The present study, focuses on the optimization of the acquisition strategy, which includes type, number, location, acquisition frequency of the sensors and corresponding signal analysis to detect the full load instability and to prevent the unit from reaching this point. A systematic approach to determine this strategy has been followed. It has been found that some indicators obtained with different types of sensors are linearly correlated with the oscillating power. 
The optimized strategy has been determined based on the correlation characteristics (linearity, sensitivity and reactivity), the simplicity of the installation and the necessary acquisition frequency. Finally, an economical and easily implementable protection system based on the resulting optimized acquisition strategy is proposed. This system, which can be used in a generic Francis turbine with a similar full load instability, permits one to extend the operating range of the unit by working close to the instability with a reasonable safety margin.
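
The screening step described above, ranking sensor indicators by how linearly they track the oscillating power, can be sketched as follows. This is an illustrative reconstruction, not the paper's actual pipeline; the indicator values and the `indicator_correlation` helper are hypothetical.

```python
import numpy as np

def indicator_correlation(indicator, oscillating_power):
    """Pearson correlation and least-squares slope between a sensor-derived
    indicator (e.g. an RMS vibration level per operating point) and the
    measured power oscillation. Both inputs are 1-D arrays sampled at the
    same set of operating points."""
    r = np.corrcoef(indicator, oscillating_power)[0, 1]
    # Sensitivity: slope of the linear fit (indicator change per unit
    # change in oscillating power); np.polyfit returns highest degree first.
    slope, _ = np.polyfit(oscillating_power, indicator, 1)
    return r, slope

# Hypothetical data: a vibration indicator rising with the power oscillation.
power_osc = np.array([0.5, 1.0, 2.0, 3.5, 5.0])   # MW, peak-to-peak
vib_rms = np.array([0.11, 0.19, 0.42, 0.70, 1.02])  # arbitrary units
r, slope = indicator_correlation(vib_rms, power_osc)
```

An indicator with `r` near 1 and a usefully large slope is a good candidate for the protection system, since a threshold on it maps directly to a threshold on the power oscillation.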

  15. The Third EGRET Catalog of High-Energy Gamma-Ray Sources

    NASA Technical Reports Server (NTRS)

    Hartman, R. C.; Bertsch, D. L.; Bloom, S. D.; Chen, A. W.; Deines-Jones, P.; Esposito, J. A.; Fichtel, C. E.; Friedlander, D. P.; Hunter, S. D.; McDonald, L. M.; hide

    1998-01-01

    The third catalog of high-energy gamma-ray sources detected by the EGRET telescope on the Compton Gamma Ray Observatory includes data from 1991 April 22 to 1995 October 3 (Cycles 1, 2, 3, and 4 of the mission). In addition to including more data than the second EGRET catalog (Thompson et al. 1995) and its supplement (Thompson et al. 1996), this catalog uses completely reprocessed data (to correct a number of mostly minimal errors and problems). The 271 sources (E greater than 100 MeV) in the catalog include the single 1991 solar flare bright enough to be detected as a source, the Large Magellanic Cloud, five pulsars, one probable radio galaxy detection (Cen A), and 66 high-confidence identifications of blazars (BL Lac objects, flat-spectrum radio quasars, or unidentified flat-spectrum radio sources). In addition, 27 lower-confidence potential blazar identifications are noted. Finally, the catalog contains 170 sources not yet identified firmly with known objects, although potential identifications have been suggested for a number of those. A figure is presented that gives approximate upper limits for gamma-ray sources at any point in the sky, as well as information about sources listed in the second catalog and its supplement which do not appear in this catalog.

  16. The X-Ray Globular Cluster Population in NGC 1399

    NASA Technical Reports Server (NTRS)

    Angelini, Lorella; Loewenstein, Michael; Mushotzky, Richard F.; White, Nicholas E. (Technical Monitor)

    2001-01-01

    We report on X-ray sources detected in the Chandra images of the elliptical galaxy NGC 1399 and identified with globular clusters (GCs). The 8' x 8' Chandra image shows that a large fraction of the 2-10 keV X-ray emission is resolved into point sources, down to a luminosity threshold of 5 x 10 (exp 37) ergs s(exp -1). These sources are most likely Low Mass X-ray Binaries (LMXBs). More than 70% of the X-ray sources in a region imaged by the Hubble Space Telescope (HST) are located within GCs. Many of these sources have super-Eddington luminosities (for an accreting neutron star), and their average luminosity is higher than that of the remaining sources. This association suggests that, in giant elliptical galaxies, luminous X-ray binaries preferentially form in GCs. The spectral properties of the GC and non-GC sources are in most cases similar to those of LMXBs in our Galaxy. Two of the brightest sources, one of which is in a GC, have much softer spectra, as seen in black holes in the high state. The "apparent" super-Eddington luminosity may in many cases be due to multiple LMXB systems within an individual GC, but some of the most luminous systems may contain massive black holes.
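
The "super-Eddington" criterion above follows from the standard Eddington limit for spherical accretion of hydrogen, L_Edd = 4*pi*G*M*m_p*c / sigma_T, roughly 1.26 x 10^38 (M/M_sun) erg/s. A minimal check, with CGS constants, of why a ~1.4 M_sun neutron star cannot comfortably power the brighter sources:

```python
import math

# CGS physical constants
G = 6.674e-8        # gravitational constant, cm^3 g^-1 s^-2
c = 2.998e10        # speed of light, cm/s
m_p = 1.673e-24     # proton mass, g
sigma_T = 6.652e-25 # Thomson cross-section, cm^2
M_sun = 1.989e33    # solar mass, g

def eddington_luminosity(mass_solar):
    """Eddington luminosity in erg/s for a mass given in solar masses,
    assuming spherical accretion of pure hydrogen."""
    return 4 * math.pi * G * mass_solar * M_sun * m_p * c / sigma_T

l_edd_ns = eddington_luminosity(1.4)  # ~1.8e38 erg/s for a neutron star
```

A source well above `l_edd_ns` therefore points to either several unresolved LMXBs in one cluster or a more massive accretor, which is exactly the dichotomy the abstract raises.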

  17. Skyshine at neutron energies less than or equal to 400 MeV

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alsmiller, A.G. Jr.; Barish, J.; Childs, R.L.

    1980-10-01

    The dose equivalent at an air-ground interface as a function of distance from an assumed azimuthally symmetric point source of neutrons can be calculated as a double integral. The integration is over the source strength as a function of energy and polar angle, weighted by an importance function that depends on the source variables and on the distance from the source to the field point. The neutron importance function for a source 15 m above the ground emitting only into the upper hemisphere has been calculated using the two-dimensional discrete ordinates code, DOT, and the first collision source code, GRTUNCL, in the adjoint mode. This importance function is presented for neutron energies less than or equal to 400 MeV, for source cosine intervals of 1 to 0.8, 0.8 to 0.6, 0.6 to 0.4, 0.4 to 0.2, and 0.2 to 0, and for various distances from the source to the field point. As part of the adjoint calculations a photon importance function is also obtained. This importance function for photon energies less than or equal to 14 MeV and for various source cosine intervals and source-to-field point distances is also presented. These importance functions may be used to obtain skyshine dose equivalent estimates for any known source energy-angle distribution.
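
Once the importance function is tabulated on the (energy, cosine-interval) grid, the double integral the abstract describes reduces to a double sum. A minimal sketch, with entirely hypothetical source-strength and importance values (the real tables come from the DOT/GRTUNCL adjoint calculations):

```python
import numpy as np

def skyshine_dose(source_strength, importance):
    """Dose equivalent at one field point as the discretized double integral
    over energy groups E and source polar-angle cosine bins mu:

        D = sum_E sum_mu  S(E, mu) * I(E, mu)

    source_strength: neutrons emitted per energy group and cosine bin
    importance:      dose per unit source on the same (E, mu) grid,
                     tabulated for one source-to-field-point distance
    """
    return np.sum(source_strength * importance)

# Hypothetical 3 energy groups x 5 cosine bins (1-0.8, 0.8-0.6, ..., 0.2-0)
S = np.array([[1e6, 8e5, 6e5, 4e5, 2e5],
              [5e5, 4e5, 3e5, 2e5, 1e5],
              [1e5, 8e4, 6e4, 4e4, 2e4]])   # neutrons per bin
I = np.full((3, 5), 1e-12)                  # Sv per source neutron (assumed)
dose = skyshine_dose(S, I)
```

Repeating the sum with importance tables for different source-to-field-point distances gives the dose-versus-distance curve.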

  18. Outlier Resistant Predictive Source Encoding for a Gaussian Stationary Nominal Source.

    DTIC Science & Technology

    1987-09-18

    breakdown point and influence function. The proposed sequence of predictive encoders attains strictly positive breakdown point and uniformly bounded influence function, at the expense of increased mean difference-squared distortion and differential entropy, at the Gaussian nominal source.

  19. Gender and ethnic differences in young adolescents' sources of cigarettes.

    PubMed

    Robinson, L A; Klesges, R C; Zbikowski, S M

    1998-01-01

    To identify the sources used by young adolescents to obtain cigarettes. In early 1994 a survey assessing usual sources of cigarettes and characteristics of the respondents was administered in homeroom classes. A large urban, predominantly African American school system. A population-based sample of 6967 seventh graders averaging 13 years of age. Reports of usual sources of cigarettes. At this age level, young smokers were more likely to get cigarettes from friends (31.2%) than buy them in stores (14.3%). However, the odds of purchasing varied for different groups of children. Regular smokers were much more likely (48.3%) to have purchased cigarettes than experimental smokers (9.6%), p < 0.001. Girls were less likely to have bought their cigarettes than boys (p < 0.001), and black smokers were less likely to have purchased cigarettes than white children (p < 0.001). Results suggested that family members who smoke may constitute a more important source of tobacco products than previously recognised, particularly for young girls. In this middle-school sample, peers provided the major point of cigarette distribution. However, even at this age, direct purchase was not uncommon. Sources of cigarettes varied significantly with gender, ethnicity, and smoking rate.

  20. A GIS-based multi-source and multi-box modeling approach (GMSMB) for air pollution assessment--a North American case study.

    PubMed

    Wang, Bao-Zhen; Chen, Zhi

    2013-01-01

    This article presents a GIS-based multi-source and multi-box modeling approach (GMSMB) to predict the spatial concentration distributions of airborne pollutants on local and regional scales. In this method, an extended multi-box model combined with a multi-source and multi-grid Gaussian model is developed within the GIS framework to examine the contributions from both point- and area-source emissions. By using GIS, a large amount of data required for air quality modeling, including emission sources, air quality monitoring, meteorological data, and spatial location information, is brought into an integrated modeling environment. This allows the spatial variation in source distribution and meteorological conditions to be analyzed in greater quantitative detail. The developed modeling approach has been applied to predict the spatial concentration distribution of four air pollutants (CO, NO(2), SO(2) and PM(2.5)) for the State of California. The modeling results are compared with the monitoring data. Good agreement is obtained, demonstrating that the developed modeling approach can deliver an effective air pollution assessment on both regional and local scales to support air pollution control and management planning.
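
The point-source half of such an approach typically rests on the standard Gaussian plume equation with ground reflection, with multiple sources superposed at each receptor grid cell. A minimal sketch of that building block; the constants, receptor, and `total_concentration` helper below are illustrative assumptions, not the paper's GMSMB formulation:

```python
import numpy as np

def gaussian_plume(q, u, y, z, h, sigma_y, sigma_z):
    """Gaussian plume concentration (g/m^3) at crosswind offset y and
    height z, for a point source of strength q (g/s), wind speed u (m/s),
    and effective release height h. sigma_y, sigma_z are the dispersion
    parameters already evaluated at the receptor's downwind distance.
    Includes the standard ground-reflection (image source) term."""
    lateral = np.exp(-y**2 / (2.0 * sigma_y**2))
    vertical = (np.exp(-(z - h)**2 / (2.0 * sigma_z**2))
                + np.exp(-(z + h)**2 / (2.0 * sigma_z**2)))
    return q / (2.0 * np.pi * u * sigma_y * sigma_z) * lateral * vertical

def total_concentration(sources, u, y, z, sigma_y, sigma_z):
    """Superpose several point sources (q, y_src, h) at one receptor --
    the multi-source part of the approach."""
    return sum(gaussian_plume(q, u, y - ys, z, h, sigma_y, sigma_z)
               for (q, ys, h) in sources)

# Hypothetical ground-level receptor on the centerline of the first source
c0 = gaussian_plume(100.0, 5.0, 0.0, 0.0, 0.0, 50.0, 25.0)
c_two = total_concentration([(100.0, 0.0, 0.0), (50.0, 100.0, 20.0)],
                            5.0, 0.0, 0.0, 50.0, 25.0)
```

In a GIS-coupled model, this evaluation would be repeated for every source-receptor pair on the grid, with sigma_y and sigma_z looked up from stability-class curves at each downwind distance.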
