Sample records for variable point sources

  1. Recent updates in developing a statistical pseudo-dynamic source-modeling framework to capture the variability of earthquake rupture scenarios

    NASA Astrophysics Data System (ADS)

    Song, Seok Goo; Kwak, Sangmin; Lee, Kyungbook; Park, Donghee

    2017-04-01

    Predicting the intensity and variability of strong ground motions is a critical element of seismic hazard assessment. The characteristics and variability of the earthquake rupture process may be a dominant factor in determining the intensity and variability of near-source strong ground motions. Song et al. (2014) demonstrated that the variability of earthquake rupture scenarios can be effectively quantified in the framework of 1-point and 2-point statistics of earthquake source parameters, constrained by rupture dynamics and past events. The developed pseudo-dynamic source-modeling schemes were also validated against recorded ground motion data from past events and empirical ground motion prediction equations (GMPEs) on the Broadband Platform (BBP) developed by the Southern California Earthquake Center (SCEC). Recently, we improved the computational efficiency of the pseudo-dynamic source-modeling scheme by adopting the nonparametric co-regionalization algorithm, initially introduced and applied in geostatistics. We also investigated the effect of the earthquake rupture process on near-source ground motion characteristics in the framework of 1-point and 2-point statistics, focusing particularly on the forward-directivity region. Finally, we discuss whether pseudo-dynamic source modeling can reproduce the variability (standard deviation) of empirical GMPEs, and the efficiency of 1-point and 2-point statistics in addressing the variability of ground motions.
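
    The 1-point and 2-point statistics mentioned above can be illustrated with a toy calculation: treat a synthetic along-strike slip profile as the source parameter, take its mean and standard deviation as the 1-point statistics, and its normalized spatial autocorrelation as the 2-point statistic. This is only a minimal sketch with made-up numbers, not the authors' actual pseudo-dynamic scheme:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 256                      # grid points along strike
white = rng.standard_normal(n)

# Impose a correlation length by smoothing with a Gaussian kernel,
# mimicking the spatial coherence of a slip distribution.
kernel = np.exp(-np.linspace(-3.0, 3.0, 31) ** 2)
kernel /= kernel.sum()
slip = 2.0 + np.convolve(white, kernel, mode="same")  # mean slip of 2 m

# 1-point statistics: the marginal distribution of the parameter.
mean, sd = slip.mean(), slip.std()

# 2-point statistics: normalized spatial autocorrelation vs. lag.
dev = slip - mean
acf = np.correlate(dev, dev, mode="full")[n - 1:]
acf = acf / acf[0]
```

    In this framing, constraining a rupture-scenario generator means matching both the marginal distribution (`mean`, `sd`) and the decay of `acf` with lag to values derived from dynamic rupture models and past events.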

  2. NON-POINT SOURCE POLLUTION

    EPA Science Inventory

    Non-point source pollution is a diffuse source that is difficult to measure and is highly variable due to different rain patterns and other climatic conditions. In many areas, however, non-point source pollution is the greatest source of water quality degradation. Presently, stat...

  3. Measuring Spatial Variability of Vapor Flux to Characterize Vadose-zone VOC Sources: Flow-cell Experiments

    DOE PAGES

    Mainhagu, Jon; Morrison, C.; Truex, Michael J.; ...

    2014-08-05

    A method termed vapor-phase tomography has recently been proposed to characterize the distribution of volatile organic contaminant mass in vadose-zone source areas, and to measure associated three-dimensional distributions of local contaminant mass discharge. The method is based on measuring the spatial variability of vapor flux; inherent to its effectiveness is therefore the premise that the magnitudes and temporal variability of vapor concentrations measured at different monitoring points within the interrogated area will be a function of the geospatial positions of the points relative to the source location. A series of flow-cell experiments was conducted to evaluate this premise. A well-defined source zone was created by injection and extraction of a non-reactive gas (SF6). Spatial and temporal concentration distributions obtained from the tests were compared to simulations produced with a mathematical model describing advective and diffusive transport. Tests were conducted to characterize both areal and vertical components of the application. Decreases in concentration over time were observed for monitoring points located on the opposite side of the source zone from the local extraction point, whereas increases were observed for monitoring points located between the local extraction point and the source zone. The results illustrate that comparison of temporal concentration profiles obtained at various monitoring points gives a general indication of the source location with respect to the extraction and monitoring points.

  4. Spherical-earth Gravity and Magnetic Anomaly Modeling by Gauss-legendre Quadrature Integration

    NASA Technical Reports Server (NTRS)

    Von Frese, R. R. B.; Hinze, W. J.; Braile, L. W.; Luca, A. J. (Principal Investigator)

    1981-01-01

    The anomalous potentials of gravity and magnetic fields, and their spatial derivatives, on a spherical Earth were calculated for an arbitrary body represented by an equivalent point-source distribution of gravity poles or magnetic dipoles. The distribution of equivalent point sources was determined directly from the coordinate limits of the source volume. Variable integration limits for an arbitrarily shaped body are derived by interpolation of points which approximate the body's surface envelope. The versatility of the method is enhanced by its ability to treat physical property variations within the source volume and to consider variable magnetic fields over the source and observation surface. A number of examples verify and illustrate the capabilities of the technique, including preliminary modeling of potential field signatures for Mississippi embayment crustal structure at satellite elevations.

  5. Spherical-earth gravity and magnetic anomaly modeling by Gauss-Legendre quadrature integration

    NASA Technical Reports Server (NTRS)

    Von Frese, R. R. B.; Hinze, W. J.; Braile, L. W.; Luca, A. J.

    1981-01-01

    Gauss-Legendre quadrature integration is used to calculate the anomalous potential of gravity and magnetic fields and their spatial derivatives on a spherical earth. The procedure involves representation of the anomalous source as a distribution of equivalent point gravity poles or point magnetic dipoles. The distribution of equivalent point sources is determined directly from the volume limits of the anomalous body. The variable limits of integration for an arbitrarily shaped body are obtained from interpolations performed on a set of body points which approximate the body's surface envelope. The versatility of the method is shown by its ability to treat physical property variations within the source volume as well as variable magnetic fields over the source and observation surface. Examples are provided which illustrate the capabilities of the technique, including a preliminary modeling of potential field signatures for the Mississippi embayment crustal structure at 450 km.
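
    The quadrature scheme described in this and the preceding record can be sketched in a few lines: replace a uniform rectangular prism by a 3-D grid of Gauss-Legendre point masses, with nodes mapped from [-1, 1] to the body's coordinate limits, and sum their vertical attractions. This is a minimal flat-Earth illustration with arbitrary prism dimensions and density, not the authors' spherical-Earth code:

```python
import numpy as np
from numpy.polynomial.legendre import leggauss

G = 6.674e-11  # gravitational constant (m^3 kg^-1 s^-2)

def prism_gz(x1, x2, y1, y2, z1, z2, rho, n=8):
    """Vertical gravity (m/s^2) at the origin from a uniform prism,
    computed by replacing the body with an n x n x n grid of
    Gauss-Legendre point masses (equivalent point sources)."""
    u, w = leggauss(n)  # nodes and weights on [-1, 1]

    def mapped(a, b):
        # Map the standard nodes/weights to the interval [a, b].
        return 0.5 * (b - a) * u + 0.5 * (a + b), 0.5 * (b - a) * w

    X, WX = mapped(x1, x2)
    Y, WY = mapped(y1, y2)
    Z, WZ = mapped(z1, z2)
    gz = 0.0
    for xi, wxi in zip(X, WX):
        for yi, wyi in zip(Y, WY):
            for zi, wzi in zip(Z, WZ):
                r3 = (xi**2 + yi**2 + zi**2) ** 1.5
                gz += wxi * wyi * wzi * zi / r3  # point-mass term
    return G * rho * gz

# Arbitrary test body: a 200 m x 200 m x 100 m prism buried 100 m deep.
coarse = prism_gz(-100, 100, -100, 100, 100, 200, rho=500.0, n=6)
fine = prism_gz(-100, 100, -100, 100, 100, 200, rho=500.0, n=12)
```

    Because the integrand is smooth when the observation point lies outside the body, the quadrature converges rapidly; `coarse` and `fine` agree to several digits, which is why relatively few equivalent point sources suffice in this approach.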

  6. Uncertainty analysis of the simulations of effects of discharging treated wastewater to the Red River of the North at Fargo, North Dakota, and Moorhead, Minnesota

    USGS Publications Warehouse

    Wesolowski, Edwin A.

    1996-01-01

    Two separate studies to simulate the effects of discharging treated wastewater to the Red River of the North at Fargo, North Dakota, and Moorhead, Minnesota, have been completed. In the first study, the Red River at Fargo Water-Quality Model was calibrated and verified for ice-free conditions. In the second study, the Red River at Fargo Ice-Cover Water-Quality Model was verified for ice-cover conditions. To better understand and apply the Red River at Fargo Water-Quality Model and the Red River at Fargo Ice-Cover Water-Quality Model, the uncertainty associated with simulated constituent concentrations and property values was analyzed and quantified using the Enhanced Stream Water Quality Model-Uncertainty Analysis. The Monte Carlo simulation and first-order error analysis methods were used to analyze the uncertainty in simulated values for six constituents and properties at sites 5, 10, and 14 (in upstream-to-downstream order). The constituents and properties analyzed for uncertainty are specific conductance, total organic nitrogen (reported as nitrogen), total ammonia (reported as nitrogen), total nitrite plus nitrate (reported as nitrogen), 5-day carbonaceous biochemical oxygen demand for ice-cover conditions and ultimate carbonaceous biochemical oxygen demand for ice-free conditions, and dissolved oxygen. Results are given in detail for both ice-cover and ice-free conditions for specific conductance, total ammonia, and dissolved oxygen.
    The sensitivity and uncertainty of the simulated constituent concentrations and property values to input variables differ substantially between ice-cover and ice-free conditions. During ice-cover conditions, simulated specific-conductance values are most sensitive to the headwater-source specific-conductance values upstream of site 10 and the point-source specific-conductance values downstream of site 10. These headwater-source and point-source specific-conductance values also are the key sources of uncertainty. Simulated total ammonia concentrations are most sensitive to the point-source total ammonia concentrations at all three sites. Other input variables that contribute substantially to the variability of simulated total ammonia concentrations are the headwater-source total ammonia and the instream reaction coefficient for biological decay of total ammonia to total nitrite. Simulated dissolved-oxygen concentrations at all three sites are most sensitive to the headwater-source dissolved-oxygen concentration. This input variable is the key source of variability for simulated dissolved-oxygen concentrations at sites 5 and 10. Headwater-source and point-source dissolved-oxygen concentrations are the key sources of variability for simulated dissolved-oxygen concentrations at site 14.
    During ice-free conditions, simulated specific-conductance values at all three sites are most sensitive to the headwater-source specific-conductance values. Headwater-source specific-conductance values also are the key source of uncertainty. The input variables to which total ammonia and dissolved oxygen are most sensitive vary from site to site and may or may not correspond to the input variables that contribute the most to the variability. The input variables that contribute the most to the variability of simulated total ammonia concentrations are point-source total ammonia, the instream reaction coefficient for biological decay of total ammonia to total nitrite, and Manning's roughness coefficient. The input variables that contribute the most to the variability of simulated dissolved-oxygen concentrations are the reaeration rate, the sediment oxygen demand rate, and headwater-source algae as chlorophyll a.
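
    As a toy illustration of the two uncertainty methods named above (not the report's actual river model), the sketch below propagates uncertainty in two inputs of a hypothetical first-order decay term by first-order error analysis and by Monte Carlo simulation, and compares the resulting standard deviations:

```python
import numpy as np

def decay(c0, k, t=2.0):
    # Toy first-order decay (e.g., a BOD-like in-stream decay term).
    return c0 * np.exp(-k * t)

# Nominal input values and standard deviations (hypothetical).
c0, k, t = 10.0, 0.3, 2.0
s_c0, s_k = 0.5, 0.05

# First-order error analysis: propagate input variances through the
# partial derivatives of the model evaluated at the nominal inputs.
dfdc0 = np.exp(-k * t)
dfdk = -c0 * t * np.exp(-k * t)
sd_first_order = np.hypot(dfdc0 * s_c0, dfdk * s_k)

# Monte Carlo simulation: sample the inputs and run the model.
rng = np.random.default_rng(0)
out = decay(rng.normal(c0, s_c0, 100_000), rng.normal(k, s_k, 100_000))
sd_monte_carlo = out.std()
```

    For a nearly linear model over the sampled input range the two estimates agree closely; when they diverge, the Monte Carlo result is the more trustworthy one, which is why the study ran both.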

  7. Strategies for satellite-based monitoring of CO2 from distributed area and point sources

    NASA Astrophysics Data System (ADS)

    Schwandner, Florian M.; Miller, Charles E.; Duren, Riley M.; Natraj, Vijay; Eldering, Annmarie; Gunson, Michael R.; Crisp, David

    2014-05-01

    Atmospheric CO2 budgets are controlled by the strengths, as well as the spatial and temporal variabilities, of CO2 sources and sinks. Natural CO2 sources and sinks are dominated by the vast areas of the oceans and the terrestrial biosphere. In contrast, anthropogenic and geogenic CO2 sources are dominated by distributed area and point sources, which may constitute as much as 70% of anthropogenic emissions (e.g., Duren & Miller, 2012) and over 80% of geogenic emissions (Burton et al., 2013). Comprehensive assessments of CO2 budgets necessitate robust and highly accurate satellite remote sensing strategies that address the competing and often conflicting requirements for sampling over disparate space and time scales.
    Spatial variability: The spatial distribution of anthropogenic sources is dominated by patterns of production, storage, transport and use. In contrast, geogenic variability is almost entirely controlled by endogenic geological processes, except where surface gas permeability is modulated by soil moisture. Satellite remote sensing solutions will thus have to vary greatly in spatial coverage and resolution to address distributed area sources and point sources alike.
    Temporal variability: While biogenic sources are dominated by diurnal and seasonal patterns, anthropogenic sources fluctuate over a greater variety of time scales, from diurnal, weekly and seasonal cycles, driven by both economic and climatic factors. Geogenic sources typically vary on time scales of days to months (geogenic sources sensu stricto are not fossil fuels but volcanoes and hydrothermal and metamorphic sources). Current ground-based monitoring networks for anthropogenic and geogenic sources record data on minute to weekly temporal scales. Satellite remote sensing solutions would have to capture temporal variability through revisit frequency or point-and-stare strategies. Space-based remote sensing offers the potential of global coverage by a single sensor.
    However, no single combination of orbit and sensor provides the full range of temporal sampling needed to characterize distributed area and point source emissions. For instance, point source emission patterns will vary with source strength, wind speed and direction. Because wind speed, direction and other environmental factors change rapidly, short-term variability should be sampled. For detailed target selection and pointing verification, important lessons have already been learned and strategies devised during JAXA's GOSAT mission (Schwandner et al., 2013). The fact that competing spatial and temporal requirements drive satellite remote sensing sampling strategies dictates a systematic, multi-factor consideration of potential solutions. Factors to consider include vista, revisit frequency, integration times, spatial resolution, and spatial coverage. No single satellite-based remote sensing solution can address this problem for all scales. It is therefore of paramount importance for the international community to develop and maintain a constellation of atmospheric CO2 monitoring satellites that complement each other in their temporal and spatial observation capabilities: Polar sun-synchronous orbits (fixed local solar time, no diurnal information) with agile pointing allow global sampling of known distributed area and point sources like megacities, power plants and volcanoes with daily to weekly temporal revisits and moderate to high spatial resolution. Extensive targeting of distributed area and point sources comes at the expense of reduced mapping or spatial coverage, and the important contextual information that comes with large-scale contiguous spatial sampling. Polar sun-synchronous orbits with push-broom swath-mapping but limited pointing agility may allow mapping of individual source plumes and their spatial variability, but will depend on fortuitous environmental conditions during the observing period.
    These solutions typically have longer times between revisits, limiting their ability to resolve temporal variations. Geostationary and non-sun-synchronous low-Earth orbits (precessing local solar time, diurnal information possible) with agile pointing have the potential to provide comprehensive mapping of distributed area sources such as megacities, with longer stare times and multiple revisits per day, at the expense of global access and spatial coverage. An ad hoc CO2 remote sensing constellation is emerging. NASA's OCO-2 satellite (launch July 2014) joins JAXA's GOSAT satellite in orbit. These will be followed by GOSAT-2 and NASA's OCO-3 on the International Space Station as early as 2017. Additional polar orbiting satellites (e.g., CarbonSat, under consideration at ESA) and geostationary platforms may also become available. However, the individual assets have been designed with independent science goals and requirements, and limited consideration of coordinated observing strategies. Every effort must be made to maximize the science return from this constellation. We discuss the opportunities to exploit the complementary spatial and temporal coverage provided by these assets as well as the crucial gaps in the capabilities of this constellation.
    References: Burton, M.R., Sawyer, G.M., and Granieri, D. (2013). Deep carbon emissions from volcanoes. Rev. Mineral. Geochem. 75: 323-354. Duren, R.M., Miller, C.E. (2012). Measuring the carbon emissions of megacities. Nature Climate Change 2, 560-562. Schwandner, F.M., Oda, T., Duren, R., Carn, S.A., Maksyutov, S., Crisp, D., Miller, C.E. (2013). Scientific Opportunities from Target-Mode Capabilities of GOSAT-2. NASA Jet Propulsion Laboratory, California Institute of Technology, Pasadena, CA, White Paper, 6 p., March 2013.

  8. A Peltier-based variable temperature source

    NASA Astrophysics Data System (ADS)

    Molki, Arman; Roof Baba, Abdul

    2014-11-01

    In this paper we propose a simple and cost-effective variable temperature source based on the Peltier effect using a commercially purchased thermoelectric cooler. The proposed setup can be used to quickly establish relatively accurate dry temperature reference points, which are necessary for many temperature applications such as thermocouple calibration.

  9. 1SXPS: A Deep Swift X-Ray Telescope Point Source Catalog with Light Curves and Spectra

    NASA Technical Reports Server (NTRS)

    Evans, P. A.; Osborne, J. P.; Beardmore, A. P.; Page, K. L.; Willingale, R.; Mountford, C. J.; Pagani, C.; Burrows, D. N.; Kennea, J. A.; Perri, M.; ...

    2013-01-01

    We present the 1SXPS (Swift-XRT point source) catalog of 151,524 X-ray point sources detected by the Swift-XRT in 8 yr of operation. The catalog covers 1905 sq deg distributed approximately uniformly on the sky. We analyze the data in two ways. First we consider all observations individually, for which we have a typical sensitivity of approximately 3 × 10^-13 erg cm^-2 s^-1 (0.3-10 keV). Then we co-add all data covering the same location on the sky: these images have a typical sensitivity of approximately 9 × 10^-14 erg cm^-2 s^-1 (0.3-10 keV). Our sky coverage is nearly 2.5 times that of 3XMM-DR4, although the catalog is a factor of approximately 1.5 less sensitive. The median position error is 5.5 arcsec (90% confidence), including systematics. Our source detection method improves on that used in previous X-ray Telescope (XRT) catalogs, and we report more than 68,000 new X-ray sources. The goals and observing strategy of the Swift satellite allow us to probe source variability on multiple timescales, and we find approximately 30,000 variable objects in our catalog. For every source we give positions, fluxes, time series (in four energy bands and two hardness ratios), estimates of the spectral properties, spectra and spectral fits for the brightest sources, and variability probabilities in multiple energy bands and timescales.

  10. An analysis of job placement patterns of black and non-black male and female undergraduates at the University of Virginia and Hampton Institute. Ph.D. Thesis - Virginia Univ.

    NASA Technical Reports Server (NTRS)

    Anderson, A. F.

    1974-01-01

    Research questions were proposed to determine the relationship between independent variables (race, sex, and institution attended) and dependent variables (number of job offers received, salary received, and willingness to recommend source of employer contact). The control variables were academic major, grade point average, placement registration, nonemployment activity, employer, and source of employer contact. An analysis of the results revealed no statistical significance of the institution attended as a predictor of job offers or salary, although significant relationships were found between race and sex and number of job offers received. It was found that academic major, grade point average, and source of employer contact were more useful than race in the prediction of salary. Sex and nonemployment activity were found to be the most important variables in the model. The analysis also indicated that Black students received more job offers than non-Black students.

  11. The source provenance of an obsidian Eden point from Sierra County, New Mexico

    DOE PAGES

    Dolan, Sean Gregory; Berryman, Judy; Shackley, M. Steven

    2016-01-02

    Eden projectile points associated with the Cody complex are underrepresented in the late Paleoindian record of the American Southwest. EDXRF analysis of an obsidian Eden point from a site in Sierra County, New Mexico demonstrates that this artifact is from the Cerro del Medio (Valles Rhyolite) source in the Jemez Mountains. Finally, we contextualize our results by examining variability in obsidian procurement practices beyond the Cody heartland in south-central New Mexico.

  12. Analyzing Variability in Landscape Nutrient Loading Using Spatially-Explicit Maps in the Great Lakes Basin

    NASA Astrophysics Data System (ADS)

    Hamlin, Q. F.; Kendall, A. D.; Martin, S. L.; Whitenack, H. D.; Roush, J. A.; Hannah, B. A.; Hyndman, D. W.

    2017-12-01

    Excessive loading of nitrogen and phosphorous to the landscape has caused biologically and economically damaging eutrophication and harmful algal blooms in the Great Lakes Basin (GLB) and across the world. We mapped source-specific loads of nitrogen and phosphorous to the landscape using broadly available data across the GLB. SENSMap (Spatially Explicit Nutrient Source Map) is a 30m resolution snapshot of nutrient loads ca. 2010. We use these maps to study variable nutrient loading and provide this information to watershed managers through NOAA's GLB Tipping Points Planner. SENSMap individually maps nutrient point sources and six non-point sources: 1) atmospheric deposition, 2) septic tanks, 3) non-agricultural chemical fertilizer, 4) agricultural chemical fertilizer, 5) manure, and 6) nitrogen fixation from legumes. To model source-specific loads at high resolution, SENSMap synthesizes a wide range of remotely sensed, surveyed, and tabular data. Using these spatially explicit nutrient loading maps, we can better calibrate local land use-based water quality models and provide insight to watershed managers on how to focus nutrient reduction strategies. Here we examine differences in dominant nutrient sources across the GLB, and how those sources vary by land use. SENSMap's high resolution, source-specific approach offers a different lens to understand nutrient loading than traditional semi-distributed or land use based models.

  13. Monitoring trends in bird populations: addressing background levels of annual variability in counts

    Treesearch

    Jared Verner; Kathryn L. Purcell; Jennifer G. Turner

    1996-01-01

    Point counting has been widely accepted as a method for monitoring trends in bird populations. Using a rigorously standardized protocol at 210 counting stations at the San Joaquin Experimental Range, Madera Co., California, we have been studying sources of variability in point counts of birds. Vegetation types in the study area have not changed during the 11 years of...

  14. Reproducibility of Interferon Gamma (IFN-γ) Release Assays. A Systematic Review

    PubMed Central

    Tagmouti, Saloua; Slater, Madeline; Benedetti, Andrea; Kik, Sandra V.; Banaei, Niaz; Cattamanchi, Adithya; Metcalfe, John; Dowdy, David; van Zyl Smit, Richard; Dendukuri, Nandini

    2014-01-01

    Rationale: Interferon gamma (IFN-γ) release assays for latent tuberculosis infection result in a larger-than-expected number of conversions and reversions in occupational screening programs, and reproducibility of test results is a concern. Objectives: Knowledge of the relative contribution and extent of the individual sources of variability (immunological, preanalytical, or analytical) could help optimize testing protocols. Methods: We performed a systematic review of studies published by October 2013 on all potential sources of variability of commercial IFN-γ release assays (QuantiFERON-TB Gold In-Tube and T-SPOT.TB). The included studies assessed test variability under identical conditions and under different conditions (the latter both overall and stratified by individual sources of variability). Linear mixed effects models were used to estimate within-subject SD. Measurements and Main Results: We identified a total of 26 articles, including 7 studies analyzing variability under the same conditions, 10 studies analyzing variability with repeat testing over time under different conditions, and 19 studies reporting individual sources of variability. Most data were on QuantiFERON (only three studies on T-SPOT.TB). A considerable number of conversions and reversions were seen around the manufacturer-recommended cut-point. The estimated range of variability of IFN-γ response in QuantiFERON under identical conditions was ±0.47 IU/ml (coefficient of variation, 13%) and ±0.26 IU/ml (30%) for individuals with an initial IFN-γ response in the borderline range (0.25–0.80 IU/ml). The estimated range of variability in noncontrolled settings was substantially larger (±1.4 IU/ml; 60%). Blood volume inoculated into QuantiFERON tubes and preanalytic delay were identified as key sources of variability. 
Conclusions: This systematic review shows substantial variability with repeat IFN-γ release assays testing even under identical conditions, suggesting that reversions and conversions around the existing cut-point should be interpreted with caution. PMID:25188809
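
    The within-subject variability summarized above can be estimated from duplicate measurements with a pooled-variance calculation. The sketch below uses hypothetical IFN-γ responses (IU/ml), not data from the review, and a simple pooled estimate rather than the linear mixed effects models the authors used:

```python
import numpy as np

# Hypothetical duplicate QuantiFERON IFN-gamma responses (IU/ml) per subject.
repeats = {
    "s1": [0.35, 0.45],
    "s2": [0.60, 0.50],
    "s3": [1.00, 1.20],
}

# Within-subject SD: square root of the mean per-subject variance.
per_subject_var = [np.var(v, ddof=1) for v in repeats.values()]
sw = float(np.sqrt(np.mean(per_subject_var)))

# Coefficient of variation relative to the grand mean of all measurements.
grand_mean = np.mean([x for v in repeats.values() for x in v])
cv = sw / grand_mean
```

    An observed conversion or reversion smaller than roughly twice this within-subject SD is hard to distinguish from test noise, which is the practical point of the review's cut-point caution.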

  15. Organic carbon sources and sinks in San Francisco Bay: variability induced by river flow

    USGS Publications Warehouse

    Jassby, Alan D.; Powell, T.M.; Cloern, James E.

    1993-01-01

    Sources and sinks of organic carbon for San Francisco Bay (California, USA) were estimated for 1980. Sources for the southern reach were dominated by phytoplankton and benthic microalgal production. River loading of organic matter was an additional important factor in the northern reach. Tidal marsh export and point sources played a secondary role. Autochthonous production in San Francisco Bay appears to be less than the mean for temperate-zone estuaries, primarily because turbidity limits microalgal production and the development of seagrass beds. Exchange between the Bay and the Pacific Ocean plays an unknown but potentially important role in the organic carbon balance. Interannual variability in the organic carbon supply was assessed for Suisun Bay, a northern reach subembayment that provides habitat for important fish species (delta smelt Hypomesus transpacificus and larval striped bass Morone saxatilis). The total supply fluctuated by an order of magnitude; depending on the year, either autochthonous sources (phytoplankton production) or allochthonous sources (riverine loading) could be dominant. The primary cause of the year-to-year change was variability of freshwater inflows from the Sacramento and San Joaquin rivers, and its magnitude was much larger than long-term changes arising from marsh destruction and point-source decreases. Although interannual variability of the total organic carbon supply could not be assessed for the southern reach, year-to-year changes in phytoplankton production were much smaller than in Suisun Bay, reflecting a relative lack of river influence.

  16. Controlling Continuous-Variable Quantum Key Distribution with Entanglement in the Middle Using Tunable Linear Optics Cloning Machines

    NASA Astrophysics Data System (ADS)

    Wu, Xiao Dong; Chen, Feng; Wu, Xiang Hua; Guo, Ying

    2017-02-01

    Continuous-variable quantum key distribution (CVQKD) can provide higher detection efficiency than discrete-variable quantum key distribution (DVQKD). In this paper, we demonstrate a controllable CVQKD with the entangled source in the middle, in contrast to traditional point-to-point CVQKD, where the entanglement source is usually created by one honest party and the Gaussian noise added on the reference partner of the reconciliation is uncontrollable. In order to harmonize the additive noise that originates in the middle and resist the effect of a malicious eavesdropper, we propose a controllable CVQKD protocol that performs a tunable linear optics cloning machine (LOCM) at one participant's side, say Alice's. Simulation results show that we can achieve optimal secret key rates by selecting the parameters of the tuned LOCM in the derived regions.

  17. LMC stellar X-ray sources observed with ROSAT. 1: X-ray data and search for optical counterparts

    NASA Technical Reports Server (NTRS)

    Schmidtke, P. C.; Cowley, A. P.; Frattare, L. M.; Mcgrath, T. K.

    1994-01-01

    Observations of Einstein Large Magellanic Cloud (LMC) X-ray point sources have been made with ROSAT's High-Resolution Imager to obtain accurate positions from which to search for optical counterparts. This paper is the first in a series reporting results of the ROSAT observations and subsequent optical observations. It includes the X-ray positions and fluxes, information about variability, optical finding charts for each source, a list of identified counterparts, and information about candidates which have been observed spectroscopically in each of the fields. Sixteen point sources were measured at greater than the 3 sigma level, while 15 other sources were either extended or less significant detections. About 50% of the sources are serendipitous detections (not found in previous surveys). More than half of the X-ray sources are variable. Sixteen of the sources have been optically identified or confirmed: six with foreground cool stars, four with Seyfert galaxies, two with supernova remnants (SNRs) in the LMC, and four with peculiar hot LMC stars. Presumably the latter are all binaries, although only one (CAL 83) has been previously studied in detail.

  18. Sources and transport of phosphorus to rivers in California and adjacent states, U.S., as determined by SPARROW modeling

    USGS Publications Warehouse

    Domagalski, Joseph L.; Saleh, Dina

    2015-01-01

    The SPARROW (SPAtially Referenced Regression on Watershed attributes) model was used to simulate annual phosphorus loads and concentrations in unmonitored stream reaches in California, U.S., and portions of Nevada and Oregon. The model was calibrated using de-trended streamflow and phosphorus concentration data at 80 locations. The model explained 91% of the variability in loads and 51% of the variability in yields for a base year of 2002. Point sources, geological background, and cultivated land were significant sources. Variables used to explain delivery of phosphorus from land to water were precipitation and soil clay content. Aquatic loss of phosphorus was significant in streams of all sizes, with the greatest decay predicted in small- and intermediate-sized streams. Geological sources, including volcanic rocks and shales, were the principal control on concentrations and loads in many regions. Some localized formations such as the Monterey shale of southern California are important sources of phosphorus and may contribute to elevated stream concentrations. Many of the larger point source facilities were located in downstream areas, near the ocean, and do not affect inland streams except for a few locations. Large areas of cultivated land result in phosphorus load increases, but do not necessarily increase the loads above those of geological background in some cases because of local hydrology, which limits the potential of phosphorus transport from land to streams.
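
    The kind of source-attribution regression SPARROW performs can be caricatured with ordinary least squares on synthetic data. The sketch below is purely illustrative (made-up attributes and coefficients, no reach network or aquatic-decay terms), showing only how a share of the variability in log-transformed loads is explained by source and delivery variables:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 80  # calibration reaches, matching the study's count

# Hypothetical watershed attributes.
point_src = rng.lognormal(0.0, 1.0, n)    # point-source P input
cultivated = rng.lognormal(1.0, 0.8, n)   # cultivated land area
precip = rng.normal(800.0, 150.0, n)      # mean annual precipitation (mm)

# Synthetic log-transformed loads with noise, so the fit has a known signal.
log_load = (0.6 * np.log1p(point_src) + 0.3 * np.log(cultivated)
            + 0.002 * precip + rng.normal(0.0, 0.3, n))

# Ordinary least squares of log-load on the source/delivery variables.
X = np.column_stack([np.ones(n), np.log1p(point_src),
                     np.log(cultivated), precip])
beta, *_ = np.linalg.lstsq(X, log_load, rcond=None)
resid = log_load - X @ beta
r2 = 1.0 - resid.var() / log_load.var()  # share of variability explained
```

    The fitted `r2` is the analogue of the "91% of the variability in loads" figure quoted above; the real model additionally routes loads downstream and applies in-stream decay along the reach network.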

  19. A Variable Frequency, Mis-Match Tolerant, Inductive Plasma Source

    NASA Astrophysics Data System (ADS)

    Rogers, Anthony; Kirchner, Don; Skiff, Fred

    2014-10-01

    Presented here is a survey and analysis of an inductively coupled, magnetically confined, singly ionized argon plasma generated by a square-wave, variable-frequency plasma source. The helicon-style antenna is driven directly by the class-D amplifier without a matching network, for increased efficiency while maintaining independent control of frequency and applied power at the feed point. The survey is compared to similar data taken using a traditional exciter / power amplifier / matching network source. Specifically, the flexibility of this plasma source in terms of independent control of electron plasma temperature and density is discussed in comparison to traditional source arrangements. Supported by US DOE Grant DE-FG02-99ER54543.

  20. Compact range for variable-zone measurements

    DOEpatents

    Burnside, Walter D.; Rudduck, Roger C.; Yu, Jiunn S.

    1988-08-02

    A compact range for testing antennas or radar targets includes a source for directing energy along a feedline toward a parabolic reflector. The reflected wave is a spherical wave with a radius dependent on the distance of the source from the focal point of the reflector.

  1. Compact range for variable-zone measurements

    DOEpatents

    Burnside, Walter D.; Rudduck, Roger C.; Yu, Jiunn S.

    1988-01-01

    A compact range for testing antennas or radar targets includes a source for directing energy along a feedline toward a parabolic reflector. The reflected wave is a spherical wave with a radius dependent on the distance of the source from the focal point of the reflector.

  2. CosmoQuest Transient Tracker: Open-source Photometry & Astrometry software

    NASA Astrophysics Data System (ADS)

    Myers, Joseph L.; Lehan, Cory; Gay, Pamela; Richardson, Matthew; CosmoQuest Team

    2018-01-01

    CosmoQuest is moving from online citizen science to observational astronomy with the creation of Transient Tracker. This open-source software is designed to identify asteroids and other transient/variable objects in image sets. Transient Tracker's features in final form will include astrometric and photometric solutions, identification of moving/transient objects, identification of variable objects, and lightcurve analysis. In this poster we present our initial v0.1 release and seek community input. This software builds on the existing NIH-funded ImageJ libraries; creation of this suite of open-source image manipulation routines is led by Wayne Rasband, and it is released primarily under the MIT license. In this release, we build on these libraries to add source identification for point and point-like sources and to do astrometry. Our materials are released under the Apache 2.0 license on GitHub (http://github.com/CosmoQuestTeam), and documentation can be found at http://cosmoquest.org/TransientTracker.

  3. Compact range for variable-zone measurements

    DOEpatents

    Burnside, W.D.; Rudduck, R.C.; Yu, J.S.

    1987-02-27

    A compact range for testing antennas or radar targets includes a source for directing energy along a feedline toward a parabolic reflector. The reflected wave is a spherical wave with a radius dependent on the distance of the source from the focal point of the reflector. 2 figs.

  4. Skyshine at neutron energies less than or equal to 400 MeV

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alsmiller, A.G. Jr.; Barish, J.; Childs, R.L.

    1980-10-01

    The dose equivalent at an air-ground interface as a function of distance from an assumed azimuthally symmetric point source of neutrons can be calculated as a double integral. The integration is over the source strength as a function of energy and polar angle, weighted by an importance function that depends on the source variables and on the distance from the source to the field point. The neutron importance function for a source 15 m above the ground emitting only into the upper hemisphere has been calculated using the two-dimensional discrete ordinates code DOT and the first-collision source code GRTUNCL in the adjoint mode. This importance function is presented for neutron energies less than or equal to 400 MeV, for source cosine intervals of 1 to 0.8, 0.8 to 0.6, 0.6 to 0.4, 0.4 to 0.2, and 0.2 to 0, and for various distances from the source to the field point. As part of the adjoint calculations a photon importance function is also obtained. This importance function for photon energies less than or equal to 14 MeV and for various source cosine intervals and source-to-field-point distances is also presented. These importance functions may be used to obtain skyshine dose-equivalent estimates for any known source energy-angle distribution.
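    The double integral described above can be sketched numerically: the dose equivalent is the source strength S(E, μ) integrated against an importance function I(E, μ; d) over energy and polar-angle cosine bins. The grid and table values below are hypothetical placeholders, not the DOT/GRTUNCL adjoint results.

```python
import numpy as np

# Hypothetical source strength S(E, mu) and importance function
# I(E, mu; d) on a coarse (energy bin x cosine bin) grid; real values
# would come from adjoint DOT/GRTUNCL calculations.
E_edges = np.array([0.0, 100.0, 200.0, 300.0, 400.0])    # MeV
mu_edges = np.array([0.0, 0.2, 0.4, 0.6, 0.8, 1.0])      # cos(theta)

S = np.ones((4, 5))          # flat source distribution (placeholder)
I = np.full((4, 5), 1e-12)   # flat importance function (placeholder)

def dose_equivalent(S, I, E_edges, mu_edges):
    """Midpoint-rule double integral of S * I over energy and cosine bins."""
    dE = np.diff(E_edges)[:, None]    # shape (nE, 1)
    dmu = np.diff(mu_edges)[None, :]  # shape (1, nMu)
    return float(np.sum(S * I * dE * dmu))

H = dose_equivalent(S, I, E_edges, mu_edges)
```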

  5. An Ultradeep Chandra Catalog of X-Ray Point Sources in the Galactic Center Star Cluster

    NASA Astrophysics Data System (ADS)

    Zhu, Zhenlin; Li, Zhiyuan; Morris, Mark R.

    2018-04-01

    We present an updated catalog of X-ray point sources in the inner 500″ (∼20 pc) of the Galactic center (GC), where the nuclear star cluster (NSC) stands, based on a total of ∼4.5 Ms of Chandra observations taken from 1999 September to 2013 April. This ultradeep data set offers unprecedented sensitivity for detecting X-ray sources in the GC, down to an intrinsic 2–10 keV luminosity of 1.0 × 10^31 erg s^-1. A total of 3619 sources are detected in the 2–8 keV band, among which ∼3500 are probable GC sources and ∼1300 are new identifications. The GC sources collectively account for ∼20% of the total 2–8 keV flux from the inner 250″ region where detection sensitivity is the greatest. Taking advantage of this unprecedented sample of faint X-ray sources that primarily traces the old stellar populations in the NSC, we revisit global source properties, including long-term variability, cumulative spectra, luminosity function, and spatial distribution. Based on the equivalent width and relative strength of the iron lines, we suggest that in addition to the arguably predominant population of magnetic cataclysmic variables (CVs), nonmagnetic CVs contribute substantially to the detected sources, especially in the lower-luminosity group. On the other hand, the X-ray sources have a radial distribution closely following the stellar mass distribution in the NSC, but much flatter than that of the known X-ray transients, which are presumably low-mass X-ray binaries (LMXBs) caught in outburst. This, together with the very modest long-term variability of the detected sources, strongly suggests that quiescent LMXBs are a minor (less than a few percent) population.

  6. Exploring the Variability of the Fermi LAT Blazar Population

    NASA Astrophysics Data System (ADS)

    Macomb, Daryl J.; Shrader, C. R.

    2014-01-01

    The flux variability of the approximately 2000 point sources cataloged by the Fermi Gamma-ray Space Telescope provides important clues to population characteristics. This is particularly true of the more than 1100 sources that are likely AGN. By characterizing the intrinsic flux variability and distinguishing it from flaring behavior, we can better address questions of flare amplitudes, durations, recurrence times, and temporal profiles. A better understanding of the responsible physical environments, such as the scale and location of the jet structures responsible for the high-energy emission, may emerge from such studies. Assessing these characteristics as a function of blazar sub-class is a further goal, in order to address questions about the fundamentals of blazar AGN physics. Here we report on progress made in categorizing blazar flare behavior, and we correlate these behaviors with blazar sub-type and other source parameters.

  7. The VLITE Post-Processing Pipeline

    NASA Astrophysics Data System (ADS)

    Richards, Emily E.; Clarke, Tracy; Peters, Wendy; Polisensky, Emil; Kassim, Namir E.

    2018-01-01

    A post-processing pipeline to adaptively extract and catalog point sources is being developed to enhance the scientific value and accessibility of data products generated by the VLA Low-band Ionosphere and Transient Experiment (VLITE) on the Karl G. Jansky Very Large Array (VLA). In contrast to other radio sky surveys, the commensal observing mode of VLITE results in varying depths, sensitivities, and spatial resolutions across the sky, depending on the configuration of the VLA, the location on the sky, and the time on source specified by the primary observer for their independent science objectives. Therefore, previously developed tools and methods for generating source catalogs and survey statistics are not always appropriate for VLITE's diverse and growing set of data. A raw catalog of point sources extracted from every VLITE image will be created from source fit parameters stored in a queryable database. Point sources will be measured using the Python Blob Detector and Source Finder software (PyBDSF; Mohan & Rafferty 2015). Sources in the raw catalog will be associated with previous VLITE detections in a resolution- and sensitivity-dependent manner, and cross-matched to other radio sky surveys to aid in the detection of transient and variable sources. Final data products will include separate, tiered point-source catalogs grouped by sensitivity limit and spatial resolution.

  8. Stochastic sensitivity analysis of nitrogen pollution to climate change in a river basin with complex pollution sources.

    PubMed

    Yang, Xiaoying; Tan, Lit; He, Ruimin; Fu, Guangtao; Ye, Jinyin; Liu, Qun; Wang, Guoqing

    2017-12-01

    It is increasingly recognized that climate change could impose both direct and indirect impacts on the quality of the water environment. Previous studies have mostly concentrated on evaluating the impacts of climate change on non-point source pollution in agricultural watersheds. Few studies have assessed the impacts of climate change on the water quality of river basins with complex point and non-point pollution sources. In view of this gap, this paper aims to establish a framework for stochastic assessment of the sensitivity of water quality to future climate change in a river basin with complex pollution sources. A sub-daily Soil and Water Assessment Tool (SWAT) model was developed to simulate the discharge, transport, and transformation of nitrogen from multiple point and non-point pollution sources in the upper Huai River basin of China. A weather generator was used to produce 50 years of synthetic daily weather data series for all 25 combinations of precipitation change (-10, 0, 10, 20, and 30%) and temperature increase (0, 1, 2, 3, and 4 °C) scenarios. The generated daily rainfall series was disaggregated to the hourly scale and then used to drive the sub-daily SWAT model to simulate the nitrogen cycle under the different climate change scenarios. Our results in the study region indicate that (1) both total nitrogen (TN) loads and concentrations are insensitive to temperature change; (2) TN loads are highly sensitive to precipitation change, while TN concentrations are moderately sensitive; (3) the impacts of climate change on TN concentrations are more spatiotemporally variable than the impacts on TN loads; and (4) the wide distributions of TN loads and concentrations under each individual climate change scenario illustrate the important role of climatic variability in affecting water quality conditions. In summary, the large variability in SWAT simulation results within and between climate change scenarios highlights the uncertainty of the impacts of climate change and the need to incorporate extreme conditions when managing the water environment and developing climate change adaptation and mitigation strategies.
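    The 25-member scenario matrix described above (five precipitation changes crossed with five temperature increases) can be generated mechanically. The perturbation values below are those stated in the abstract; the function name is illustrative.

```python
from itertools import product

precip_changes = [-10, 0, 10, 20, 30]   # percent
temp_increases = [0, 1, 2, 3, 4]        # degrees C

# Each scenario perturbs the 50-year synthetic daily weather series
# before it is disaggregated to hourly input for the sub-daily SWAT run.
scenarios = [
    {"dP_percent": dp, "dT_degC": dt}
    for dp, dt in product(precip_changes, temp_increases)
]

def perturb_daily_rainfall(rain_mm, dp_percent):
    """Scale a daily rainfall series by the scenario's precipitation change."""
    return [r * (1 + dp_percent / 100.0) for r in rain_mm]
```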

  9. Characterizing the size distribution of particles in urban stormwater by use of fixed-point sample-collection methods

    USGS Publications Warehouse

    Selbig, William R.; Bannerman, Roger T.

    2011-01-01

    The U.S. Geological Survey, in cooperation with the Wisconsin Department of Natural Resources (WDNR) and in collaboration with the Root River Municipal Stormwater Permit Group, monitored eight urban source areas representing six types of source areas in or near Madison, Wis., in an effort to improve characterization of particle-size distributions in urban stormwater by use of fixed-point sample-collection methods. The types of source areas were parking lot, feeder street, collector street, arterial street, rooftop, and mixed use. This information can then be used by environmental managers and engineers when selecting the most appropriate control devices for the removal of solids from urban stormwater. Mixed-use and parking-lot study areas had the lowest median particle sizes (42 and 54 μm, respectively), followed by the collector street study area (70 μm). Both arterial street and institutional roof study areas had similar median particle sizes of approximately 95 μm. Finally, the feeder street study area showed the largest median particle size, nearly 200 μm. Median particle sizes measured as part of this study were somewhat comparable to those reported in previous studies from similar source areas. The majority of particle mass in four of the six source areas consisted of silt and clay particles less than 32 μm in size. Distributions of particles ranging up to 500 μm were highly variable both within and between source areas. Results of this study suggest that substantial variability in the data can inhibit the development of a single particle-size distribution that is representative of stormwater runoff generated from a single source area or land use. Continued development of improved sample-collection methods, such as the depth-integrated sample arm, may reduce variability in particle-size distributions by mitigating the sediment bias inherent in a fixed-point sampler.
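    A median particle size (d50) like the 42-200 μm values quoted above is read off the cumulative particle-mass distribution. A minimal interpolation sketch follows; the size bins and mass fractions are made up for illustration, not the study's data.

```python
import numpy as np

def d50(sizes_um, cum_mass_fraction):
    """Interpolate the particle size at which 50% of the mass is finer."""
    return float(np.interp(0.5, cum_mass_fraction, sizes_um))

# Hypothetical cumulative mass-finer-than results.
sizes = np.array([2, 8, 32, 63, 125, 250, 500])            # micrometers
cum = np.array([0.05, 0.15, 0.40, 0.60, 0.80, 0.95, 1.0])  # mass fraction finer

median_size = d50(sizes, cum)   # falls between the 32 and 63 um bins
```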

  10. Color Variabilities of Spectrally Defined Red QSOs at z = 0.3–1.2

    NASA Astrophysics Data System (ADS)

    Chen, I.-Chenn; Hwang, Chorng-Yuan; Kaiser, Nick; Magnier, Eugene A.; Metcalfe, Nigel; Waters, Christopher

    2018-03-01

    We study the brightness and the color variabilities of 34 red and 122 typical quasi-stellar objects (QSOs) at z = 0.3–1.2 using data from the Pan-STARRS Medium Deep Survey. The red and the typical QSOs are selected based on the ratios of the flux densities at 3000 Å to those at 4000 Å in the rest frame. We find that 16 out of 34 red QSOs are identified as extended sources, which exhibit strong brightness and color variabilities at shorter wavelengths due to the contamination of the emission from their host galaxies. Some point-like QSOs with significant color variabilities are able to change their color classification according to our spectral definition. The timescales of the color variabilities for these point-like QSOs are within 4 years, suggesting that the size scales of the mechanisms producing the color variabilities are less than a few light years. The spectra of some extended and point-like red QSOs can be well fitted with the dust-reddened spectra of a typical QSO, while others are difficult to explain with dust reddening.
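    The red/typical classification above rests on the ratio of rest-frame flux densities at 3000 Å and 4000 Å. A sketch of such a selection follows; the threshold value is an assumption for illustration, since the abstract does not state the paper's actual cut.

```python
def restframe_wavelength(observed_angstrom, z):
    """Convert an observed wavelength to the QSO rest frame."""
    return observed_angstrom / (1.0 + z)

def is_red_qso(f3000, f4000, threshold=1.0):
    """Classify a QSO as 'red' when the 3000/4000 Angstrom flux-density
    ratio falls below an assumed threshold (bluer continua give higher
    ratios; dust reddening suppresses the 3000 Angstrom flux)."""
    return (f3000 / f4000) < threshold
```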

  11. Oxidative potential and inflammatory impacts of source apportioned ambient air pollution in Beijing.

    PubMed

    Liu, Qingyang; Baumgartner, Jill; Zhang, Yuanxun; Liu, Yanju; Sun, Yongjun; Zhang, Meigen

    2014-11-04

    Air pollution exposure is associated with a range of adverse health impacts. Knowledge of the chemical components and sources of air pollution most responsible for these health effects could lead to an improved understanding of the mechanisms of such effects and more targeted risk-reduction strategies. We measured daily ambient fine particulate matter (<2.5 μm in aerodynamic diameter; PM2.5) for 2 months in peri-urban and central Beijing, and assessed the contribution of its chemical components to the oxidative potential of ambient air pollution using the dithiothreitol (DTT) assay. The composition data were applied to a multivariate source apportionment model to determine the PM contributions of six sources or factors: a zinc factor, an aluminum factor, a lead point factor, a secondary source (e.g., SO4(2-), NO3(-)), an iron source, and a soil dust source. Finally, we assessed the relationship between reactive oxygen species (ROS) activity-related PM sources and inflammatory responses in human bronchial epithelial cells. In peri-urban Beijing, the soil dust source accounted for the largest fraction (47%) of measured ROS variability. In central Beijing, a secondary source explained the greatest fraction (29%) of measured ROS variability. The ROS activities of PM collected in central Beijing were exponentially associated with inflammatory responses in epithelial cells (R2 = 0.65-0.89). We also observed a high correlation between three ROS-related PM sources (a lead point factor, a zinc factor, and a secondary source) and expression of an inflammatory marker (r = 0.45-0.80). Our results suggest large differences in the contribution of different PM sources to ROS variability at the central versus peri-urban study sites in Beijing, and that secondary sources may play an important role in PM2.5-related oxidative potential and inflammatory health impacts.

  12. The Wilkinson Microwave Anisotropy Probe (WMAP) Source Catalog

    NASA Technical Reports Server (NTRS)

    Wright, E.L.; Chen, X.; Odegard, N.; Bennett, C.L.; Hill, R.S.; Hinshaw, G.; Jarosik, N.; Komatsu, E.; Nolta, M.R.; Page, L.; hide

    2008-01-01

    We present the list of point sources found in the WMAP 5-year maps. The technique used in the first-year and three-year analyses now finds 390 point sources, and the five-year source catalog is complete for regions of the sky away from the galactic plane to a 2 Jy limit, with SNR greater than 4.7 in all bands in the least covered parts of the sky. The noise at high frequencies is still mainly radiometer noise, but at low frequencies the CMB anisotropy is the largest uncertainty. A separate search of CMB-free V-W maps finds 99 sources, all but one of which can be identified with known radio sources. The sources seen by WMAP are not strongly polarized. Many of the WMAP sources show significant variability from year to year, with more than a 2:1 range between the minimum and maximum fluxes.

  13. A novel molecular index for secondary oil migration distance

    PubMed Central

    Zhang, Liuping; Li, Maowen; Wang, Yang; Yin, Qing-Zhu; Zhang, Wenzheng

    2013-01-01

    Determining oil migration distances from source rocks to reservoirs can greatly help in the search for new petroleum accumulations. Concentrations and ratios of polar organic compounds are known to change due to preferential sorption of these compounds in migrating oils onto immobile mineral surfaces. However, these compounds cannot be directly used as proxies for oil migration distances because of the influence of source variability. Here we show that for each source facies, the ratio of the concentration of a select polar organic compound to its initial concentration at a reference point is independent of source variability and correlates solely with migration distance from source rock to reservoir. Case studies serve to demonstrate that this new index provides a valid solution for determining source-reservoir distance and could lead to many applications in fundamental and applied petroleum geoscience studies. PMID:23965930
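    The abstract's index is a normalization: for a given source facies, the concentration of a selected polar compound divided by its initial concentration at a reference point cancels the source-dependent starting value, leaving only the migration-distance signal. A schematic sketch follows; the exponential decline law and the decay constant are illustrative assumptions, not the paper's calibration.

```python
import math

def migration_index(concentration, reference_concentration):
    """Ratio of a polar compound's concentration to its value at a
    reference point; source effects cancel in the ratio."""
    return concentration / reference_concentration

def distance_from_index(index, decay_per_km):
    """Invert an assumed exponential sorption-loss model C/C0 = exp(-k*d)
    to estimate migration distance d in km."""
    return -math.log(index) / decay_per_km
```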

  14. Detection of spatial fluctuations of non-point source fecal pollution in coral reef surrounding waters in southwestern Puerto Rico using PCR-based assays.

    PubMed

    Bonkosky, M; Hernández-Delgado, E A; Sandoz, B; Robledo, I E; Norat-Ramírez, J; Mattei, H

    2009-01-01

    Human fecal contamination of coral reefs is a major cause for concern. Conventional methods used to monitor microbial water quality cannot discriminate between different fecal pollution sources. Fecal coliforms, enterococci, and 16S rDNA PCR assays for human-specific Bacteroides (HF183, HF134), general Bacteroides-Prevotella (GB32), and the Clostridium coccoides group (CP) were used to test for the presence of non-point source fecal contamination across the southwestern Puerto Rico shelf. Inshore waters were highly turbid, consistently received fecal pollution from variable sources, and showed the highest frequency of positive molecular marker signals. Signals were also detected in offshore waters that were in compliance with existing microbiological quality regulations. Phylogenetic analysis showed that most isolates were of human fecal origin. The geographic extent of non-point source fecal pollution was large and impacted extensive coral reef systems. If not adequately addressed, this could have deleterious long-term impacts on public health, local fisheries, and tourism potential.

  15. Performance improvement of continuous-variable quantum key distribution with an entangled source in the middle via photon subtraction

    NASA Astrophysics Data System (ADS)

    Guo, Ying; Liao, Qin; Wang, Yijun; Huang, Duan; Huang, Peng; Zeng, Guihua

    2017-03-01

    A suitable photon-subtraction operation can be exploited to improve the maximal transmission of continuous-variable quantum key distribution (CVQKD) in point-to-point quantum communication. Unfortunately, the photon-subtraction operation has yet to address the transmission-improvement problem of practical quantum networks, where the entangled source is held by a third party, which may be controlled by a malicious eavesdropper, rather than by one of the trusted parties controlled by Alice or Bob. In this paper, we show that a solution can come from using a non-Gaussian operation, in particular the photon-subtraction operation, which provides a method to enhance the performance of entanglement-based (EB) CVQKD. Photon subtraction not only can lengthen the maximal transmission distance by increasing the signal-to-noise ratio but also can be easily implemented with existing technologies. Security analysis shows that CVQKD with an entangled source in the middle (ESIM) combined with photon subtraction can markedly increase the secure transmission distance in both direct and reverse reconciliation of the EB-CVQKD scheme, even if the entangled source originates from an untrusted party. Moreover, it can defend against the inner-source attack, a type of attack specific to an untrusted entangled source in the ESIM framework.

  16. Point source pollution and variability of nitrate concentrations in water from shallow aquifers

    NASA Astrophysics Data System (ADS)

    Nemčić-Jurec, Jasna; Jazbec, Anamarija

    2017-06-01

    Agriculture is one of several major sources of nitrate pollution, and therefore the EU Nitrate Directive, designed to decrease pollution, has been implemented. Point sources like septic systems and broken sewage systems also contribute to water pollution. Pollution of groundwater by nitrate from 19 shallow wells was studied in a typical agricultural region, middle Podravina, in northwest Croatia. The concentration of nitrate ranged from <0.1 to 367 mg/l in water from wells, and 29.8% of 253 total samples were above the maximum acceptable value of 50 mg/l (MAV). Among regions R1-R6, there was no statistically significant difference in nitrate concentrations (F = 1.98; p = 0.15) during the years 2002-2007. Average concentrations of nitrate in all 19 wells for all the analyzed years were between the recommended limit value of 25 mg/l (RLV) and the MAV, except in 2002 (when the concentration was under the RLV). The results of a repeated-measures ANOVA showed statistically significant differences between the wells at a point-source distance (proximity) of <10 m and the wells at a point-source distance of >20 m (F = 10.6; p < 0.001). Average annual concentrations of nitrate during the years studied are not statistically different, but the interaction between proximity and year is statistically significant (F = 2.07; p = 0.04). Results of k-means clustering confirmed a division into four clusters according to pollution. Principal component analysis showed that there is only one significant factor, proximity, which explains 91.6% of the total variability of nitrate. Differences in water quality were found as a result of different environmental factors. These results will contribute to the implementation of the Nitrate Directive in Croatia and the EU.

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dolan, Sean Gregory; Berryman, Judy; Shackley, M. Steven

    Eden projectile points associated with the Cody complex are underrepresented in the late Paleoindian record of the American Southwest. EDXRF analysis of an obsidian Eden point from a site in Sierra County, New Mexico, demonstrates that this artifact is from the Cerro del Medio (Valles Rhyolite) source in the Jemez Mountains. We contextualize our results by examining variability in obsidian procurement practices beyond the Cody heartland in south-central New Mexico.

  18. A multi-model approach to monitor emissions of CO2 and CO from an urban-industrial complex

    NASA Astrophysics Data System (ADS)

    Super, Ingrid; Denier van der Gon, Hugo A. C.; van der Molen, Michiel K.; Sterk, Hendrika A. M.; Hensen, Arjan; Peters, Wouter

    2017-11-01

    Monitoring urban-industrial emissions is often challenging because observations are scarce and regional atmospheric transport models are too coarse to represent the high spatiotemporal variability in the resulting concentrations. In this paper we apply a new combination of an Eulerian model (the Weather Research and Forecasting model with chemistry, WRF-Chem) and a Gaussian plume model (Operational Priority Substances, OPS). The modelled mixing ratios are compared to observed CO2 and CO mole fractions at four sites along a transect from an urban-industrial complex (Rotterdam, the Netherlands) towards rural conditions for October-December 2014. Urban plumes are well mixed at our semi-urban location, making this location suited for an integrated emission estimate over the whole study area. The signals at our urban measurement site (with average enhancements of 11 ppm CO2 and 40 ppb CO over the baseline) are highly variable due to the presence of distinct source areas dominated by road traffic/residential heating emissions or industrial activities. This causes different emission signatures that are translated into a large variability in observed ΔCO : ΔCO2 ratios, which can be used to identify dominant source types. We find that WRF-Chem is able to represent synoptic variability in CO2 and CO (e.g. the median CO2 mixing ratio is 9.7 ppm observed against 8.8 ppm modelled), but it fails to reproduce the hourly variability of daytime urban plumes at the urban site (R2 up to 0.05). For the urban site, adding a plume model to the framework is beneficial to adequately represent plume transport, especially from stack emissions. The explained variance in hourly, daytime CO2 enhancements from point source emissions increases from 30% with WRF-Chem to 52% with WRF-Chem in combination with the most detailed OPS simulation. The simulated variability in ΔCO : ΔCO2 ratios decreases drastically from 1.5 to 0.6 ppb ppm^-1, which agrees better with the observed standard deviation of 0.4 ppb ppm^-1. This is partly due to improved wind fields (increase in R2 of 0.10) but also due to improved point source representation (increase in R2 of 0.05) and dilution (increase in R2 of 0.07). Based on our analysis we conclude that a plume model with detailed and accurate dispersion parameters adds substantially to top-down monitoring of greenhouse gas emissions in urban environments with large point source contributions within a ~10 km radius of the observation sites.
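    The ΔCO : ΔCO2 ratio used above to fingerprint source types is simply the ratio of enhancements over a background (baseline) mole fraction. A minimal sketch with illustrative numbers built from the averages quoted in the abstract (~40 ppb CO and ~11 ppm CO2 over baseline):

```python
def enhancement_ratio(co_ppb, co2_ppm, co_baseline_ppb, co2_baseline_ppm):
    """Ratio of CO to CO2 enhancements over baseline, in ppb/ppm.
    Road traffic tends to give high ratios; efficient industrial
    combustion gives low ones."""
    d_co = co_ppb - co_baseline_ppb
    d_co2 = co2_ppm - co2_baseline_ppm
    return d_co / d_co2

ratio = enhancement_ratio(co_ppb=190.0, co2_ppm=411.0,
                          co_baseline_ppb=150.0, co2_baseline_ppm=400.0)
```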

  19. Plasticity, Variability and Age in Second Language Acquisition and Bilingualism

    PubMed Central

    Birdsong, David

    2018-01-01

    Much of what is known about the outcome of second language acquisition and bilingualism can be summarized in terms of inter-individual variability, plasticity and age. The present review looks at variability and plasticity with respect to their underlying sources, and at age as a modulating factor in variability and plasticity. In this context we consider critical period effects vs. bilingualism effects, early and late bilingualism, nativelike and non-nativelike L2 attainment, cognitive aging, individual differences in learning, and linguistic dominance in bilingualism. Non-uniformity is an inherent characteristic of both early and late bilingualism. This review shows how plasticity and age connect with biological and experiential sources of variability, and underscores the value of research that reveals and explains variability. In these ways the review suggests how plasticity, variability and age conspire to frame fundamental research issues in L2 acquisition and bilingualism, and provides points of reference for discussion of the present Frontiers in Psychology Research Topic. PMID:29593590

  20. Examining the infrared variable star population discovered in the Small Magellanic Cloud using the SAGE-SMC survey

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Polsdofer, Elizabeth; Marengo, M.; Seale, J.

    2015-02-01

    We present our study of the infrared variability of point sources in the Small Magellanic Cloud (SMC). We use data from the Spitzer Space Telescope Legacy Program "Surveying the Agents of Galaxy Evolution in the Tidally Stripped, Low Metallicity Small Magellanic Cloud" (SAGE-SMC) and the "Spitzer Survey of the Small Magellanic Cloud" (S3MC) survey, over three different epochs separated by several months to 3 years. Variability in the thermal infrared is identified using a combination of Spitzer's InfraRed Array Camera 3.6, 4.5, 5.8, and 8.0 μm bands and the Multiband Imaging Photometer for Spitzer 24 μm band. An error-weighted flux difference between each pair of the three epochs (a "variability index") is used to assess the variability of each source. A visual source inspection is used to validate the photometry and image quality. Out of ∼2 million sources in the SAGE-SMC catalog, 814 meet our variability criteria. We matched the list of variable star candidates to the catalogs of SMC sources classified by other methods available in the literature. Carbon-rich asymptotic giant branch (AGB) stars make up the majority (61%) of our variable sources, with about a third of all our sources classified as extreme AGB stars. We find a small but significant population of oxygen-rich (O-rich) AGB (8.6%), red supergiant (2.8%), and red giant branch (<1%) stars. Other matches to the literature include Cepheid variable stars (8.6%), early-type stars (2.8%), young stellar objects (5.8%), and background galaxies (1.2%). We found a candidate OH maser star, SSTISAGE1C J005212.88-730852.8, a variable O-rich AGB star that would be the first OH/IR star in the SMC, if confirmed. We also measured the infrared variability of a rare RV Tau variable (a post-AGB star) that has recently left the AGB phase. Fifty-nine variable stars from our list remain unclassified.
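    The "variability index" described above is an error-weighted flux difference between two epochs. A sketch of one common form follows; the exact SAGE-SMC definition and threshold may differ.

```python
import math

def variability_index(flux1, err1, flux2, err2):
    """Error-weighted flux difference between two epochs: a source is a
    variability candidate when |F1 - F2| is large relative to the
    combined photometric uncertainty."""
    return abs(flux1 - flux2) / math.sqrt(err1**2 + err2**2)

def is_variable_candidate(flux1, err1, flux2, err2, threshold=3.0):
    """Flag a source whose epoch-to-epoch change exceeds an assumed
    significance threshold in a given band."""
    return variability_index(flux1, err1, flux2, err2) > threshold
```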

  1. Kernel-PCA data integration with enhanced interpretability

    PubMed Central

    2014-01-01

    Background Nowadays, combining different sources of information to improve the available biological knowledge is a challenge in bioinformatics. Among the most powerful methods for integrating heterogeneous data types are kernel-based methods. Kernel-based data integration approaches consist of two basic steps: first, the right kernel is chosen for each data set; second, the kernels from the different data sources are combined to give a complete representation of the available data for a given statistical task. Results We analyze the integration of data from several sources of information using kernel PCA, from the point of view of reducing dimensionality. Moreover, we improve the interpretability of kernel PCA by adding to the plot the representation of the input variables that belong to any dataset. In particular, for each input variable or linear combination of input variables, we can represent the direction of maximum growth locally, which allows us to identify those samples with higher/lower values of the variables analyzed. Conclusions The integration of different datasets and the simultaneous representation of samples and variables together give us a better understanding of biological knowledge. PMID:25032747
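    The two-step scheme above (one kernel per data set, then a combination) can be sketched generically as a weighted sum of kernel matrices followed by kernel PCA. This is a standard construction, not the paper's exact procedure; the RBF kernel and uniform weights are illustrative choices.

```python
import numpy as np

def rbf_kernel(X, gamma=1.0):
    """RBF (Gaussian) kernel matrix for the rows of X."""
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2 * X @ X.T
    return np.exp(-gamma * d2)

def kernel_pca(K_list, weights, n_components=2):
    """Integrate several data sources by a weighted sum of their kernel
    matrices, then project samples onto the leading kernel principal
    components of the doubly centered combined kernel."""
    K = sum(w * K for w, K in zip(weights, K_list))
    n = K.shape[0]
    one = np.ones((n, n)) / n
    Kc = K - one @ K - K @ one + one @ K @ one   # double centering
    vals, vecs = np.linalg.eigh(Kc)              # ascending eigenvalues
    idx = np.argsort(vals)[::-1][:n_components]  # keep the largest
    vals, vecs = vals[idx], vecs[:, idx]
    return vecs * np.sqrt(np.maximum(vals, 0))   # sample projections
```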

  2. Second ROSAT all-sky survey (2RXS) source catalogue

    NASA Astrophysics Data System (ADS)

    Boller, Th.; Freyberg, M. J.; Trümper, J.; Haberl, F.; Voges, W.; Nandra, K.

    2016-04-01

    Aims: We present the second ROSAT all-sky survey source catalogue, hereafter referred to as the 2RXS catalogue. This is the second publicly released ROSAT catalogue of point-like sources obtained from the ROSAT all-sky survey (RASS) observations performed with the position-sensitive proportional counter (PSPC) between June 1990 and August 1991, and is an extended and revised version of the bright and faint source catalogues. Methods: We used the latest version of the RASS processing to produce overlapping X-ray images of 6.4° × 6.4° sky regions. To create a source catalogue, a likelihood-based detection algorithm was applied to these, which accounts for the variable point-spread function (PSF) across the PSPC field of view. Improvements in the background determination compared to 1RXS were also implemented. X-ray control images showing the source and background extraction regions were generated, which were visually inspected. Simulations were performed to assess the spurious source content of the 2RXS catalogue. X-ray spectra and light curves were extracted for the 2RXS sources, with spectral and variability parameters derived from these products. Results: We obtained about 135 000 X-ray detections in the 0.1-2.4 keV energy band down to a likelihood threshold of 6.5, as adopted in the 1RXS faint source catalogue. Our simulations show that the expected spurious content of the catalogue is a strong function of detection likelihood, and the full catalogue is expected to contain about 30% spurious detections. A more conservative likelihood threshold of 9, on the other hand, yields about 71 000 detections with a 5% spurious fraction. We recommend thresholds appropriate to the scientific application. X-ray images and overlaid X-ray contour lines provide an additional user product to evaluate the detections visually, and we performed our own visual inspections to flag uncertain detections. 
Intra-day variability in the X-ray light curves was quantified based on the normalised excess variance and a maximum amplitude variability analysis. X-ray spectral fits were performed using three basic models, a power law, a thermal plasma emission model, and black-body emission. Thirty-two large extended regions with diffuse emission and embedded point sources were identified and excluded from the present analysis. Conclusions: The 2RXS catalogue provides the deepest and cleanest X-ray all-sky survey catalogue in advance of eROSITA. The catalogue is only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (ftp://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/588/A103
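The normalised excess variance used above to quantify intra-day variability is conventionally defined as σ²_NXS = (S² − ⟨σ_err²⟩)/⟨x⟩²; a sketch under that standard definition (not necessarily the exact 2RXS implementation):

```python
import numpy as np

def normalized_excess_variance(rates, errors):
    """sigma_NXS^2 = (S^2 - <err^2>) / <x>^2 : variance in excess of
    the measurement noise, normalized by the squared mean rate.
    Near zero (or negative) for a constant source; positive for
    intrinsic variability."""
    x = np.asarray(rates, float)
    e = np.asarray(errors, float)
    s2 = np.var(x, ddof=1)          # sample variance of the light curve
    return (s2 - np.mean(e**2)) / np.mean(x)**2
```

For a perfectly constant measured rate the estimate is negative (the noise term dominates), while genuine variability drives it positive.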

  3. Developing Real-Time Emissions Estimates for Enhanced Air Quality Forecasting

    EPA Science Inventory

    Exploring the relationship between ambient temperature, energy demand, and electric generating unit point source emissions and potential techniques for incorporating real-time information on the modulating effects of these variables using the Mid-Atlantic/Northeast Visibility Uni...

  4. Chandra reveals a black hole X-ray binary within the ultraluminous supernova remnant MF 16

    NASA Astrophysics Data System (ADS)

    Roberts, T. P.; Colbert, E. J. M.

    2003-06-01

    We present evidence, based on Chandra ACIS-S observations of the nearby spiral galaxy NGC 6946, that the extraordinary X-ray luminosity of the MF 16 supernova remnant actually arises in a black hole X-ray binary. This conclusion is drawn from the point-like nature of the X-ray source, its X-ray spectrum closely resembling the spectrum of other ultraluminous X-ray sources thought to be black hole X-ray binary systems, and the detection of rapid hard X-ray variability from the source. We briefly discuss the nature of the hard X-ray variability, and the origin of the extreme radio and optical luminosity of MF 16 in light of this identification.

  5. Parameter estimation for slit-type scanning sensors

    NASA Technical Reports Server (NTRS)

    Fowler, J. W.; Rolfe, E. G.

    1981-01-01

    The Infrared Astronomical Satellite, scheduled for launch into a 900 km near-polar orbit in August 1982, will perform an infrared point source survey by scanning the sky with slit-type sensors. The description of position information is shown to require the use of a non-Gaussian random variable. Methods are described for deciding whether separate detections stem from a single common source, and a formalism is developed for the scan-to-scan problems of identifying multiple sightings of inertially fixed point sources and combining their individual measurements into a refined estimate. Several cases are given where the general theory yields results which are quite different from the corresponding Gaussian applications, showing that argument by Gaussian analogy would lead to error.
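For contrast with the paper's non-Gaussian treatment, the usual Gaussian-analogy estimate, an inverse-variance weighted mean of independent sightings, can be sketched as follows (illustrative values only; this is the naive baseline the paper argues against, not its method):

```python
def combine_gaussian(estimates):
    """Gaussian-analogy baseline: inverse-variance weighted mean of
    independent position estimates given as (value, variance) pairs.
    Returns the combined value and its variance."""
    w = [1.0 / var for _, var in estimates]
    value = sum(wi * v for wi, (v, _) in zip(w, estimates)) / sum(w)
    return value, 1.0 / sum(w)

# Two sightings of the same source, the second more precise:
pos, var = combine_gaussian([(10.0, 4.0), (12.0, 1.0)])
```

The combined variance is always smaller than the smallest input variance; the paper's point is that for slit-type sensors the position errors are not Gaussian, so this simple rule can mislead.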

  6. Image reduction pipeline for the detection of variable sources in highly crowded fields

    NASA Astrophysics Data System (ADS)

    Gössl, C. A.; Riffeser, A.

    2002-01-01

    We present a reduction pipeline for CCD (charge-coupled device) images which was built to search for variable sources in highly crowded fields like the M 31 bulge and to handle extensive databases due to large time series. We describe all steps of the standard reduction in detail with emphasis on the realisation of per-pixel error propagation: bias correction, treatment of bad pixels, flatfielding, and filtering of cosmic rays. The problems of conservation of the PSF (point spread function) and error propagation in our image alignment procedure as well as the detection algorithm for variable sources are discussed: we build difference images via image convolution with a technique called OIS (optimal image subtraction; Alard & Lupton 1998), proceed with an automatic detection of variable sources in noise-dominated images, and finally apply PSF-fitting relative photometry to the sources found. For the WeCAPP project (Riffeser et al. 2001) we achieve 3σ detections for variable sources with an apparent brightness of e.g. m = 24.9 mag at their minimum and a variation of Δm = 2.4 mag (or m = 21.9 mag brightness minimum and a variation of Δm = 0.6 mag) on a background signal of 18.1 mag/arcsec² based on a 500 s exposure with 1.5 arcsec seeing at a 1.2 m telescope. The complete per-pixel error propagation allows us to give accurate errors for each measurement.
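The per-pixel error propagation for a difference image follows from adding the two frames' errors in quadrature; a schematic version that ignores the convolution kernel of OIS and any alignment resampling:

```python
import numpy as np

def difference_image(img1, err1, img2, err2):
    """Difference image with per-pixel Gaussian error propagation:
    the errors of the two (already aligned) frames add in quadrature."""
    diff = img1 - img2
    err = np.sqrt(err1**2 + err2**2)
    return diff, err

def detect_variables(diff, err, nsigma=3.0):
    """Flag pixels deviating by more than nsigma in the
    noise-dominated difference image."""
    return np.abs(diff) > nsigma * err

# Two hypothetical 2-pixel frames; only the first pixel varied.
diff, err = difference_image(np.array([10.0, 10.0]), np.array([1.0, 1.0]),
                             np.array([4.0, 10.0]), np.array([1.0, 1.0]))
mask = detect_variables(diff, err)
```

In the real pipeline the same quadrature rule is carried through every reduction step (bias, flatfield, convolution), which is what makes the final per-measurement errors trustworthy.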

  7. Deciphering Sources of Variability in Clinical Pathology.

    PubMed

    Tripathi, Niraj K; Everds, Nancy E; Schultze, A Eric; Irizarry, Armando R; Hall, Robert L; Provencher, Anne; Aulbach, Adam

    2017-01-01

    The objectives of this session were to explore causes of variability in clinical pathology data due to preanalytical and analytical variables as well as study design and other procedures that occur in toxicity testing studies. The presenters highlighted challenges associated with such variability in differentiating test article-related effects from the effects of experimental procedures, and its impact on overall data interpretation. These presentations focused on preanalytical and analytical variables and study design-related factors and their influence on clinical pathology data, and on the importance of various factors that influence data interpretation, including statistical analysis and reference intervals. Overall, these presentations touched upon the potential effects of many variables on clinical pathology parameters, including animal physiology, sample collection processes, specimen handling and analysis, and study design, and offered discussion points on how to manage those variables to ensure accurate interpretation of clinical pathology data in toxicity studies. This article is a brief synopsis of presentations given in a session entitled "Deciphering Sources of Variability in Clinical Pathology-It's Not Just about the Numbers" that occurred at the 35th Annual Symposium of the Society of Toxicologic Pathology in San Diego, California.

  8. HIGHLY VARIABLE YOUNG MASSIVE STARS IN ATLASGAL CLUMPS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kumar, M. S. N.; Contreras Peña, C.; Lucas, P. W.

    High-amplitude variability in young stellar objects (YSOs) is usually associated with episodic accretion events. It has not been observed so far in massive YSOs. Here, the high-amplitude variable star sample of Contreras Peña et al. has been used to search for highly variable (ΔK ≥ 1 mag) sources coinciding with dense clumps mapped using the 850 μm continuum emission by the ATLASGAL survey. A total of 18 variable sources are centered on the submillimeter clump peaks and coincide (<1″) with a 24 μm point or compact (<10″) source. Of these 18 sources, 13 can be fit by YSO models. The 13 variable YSOs (VYSOs) have luminosities of ∼10³ L⊙, an average mass of 8 M⊙, and a range of ages up to 10⁶ yr. A total of 11 of these 13 VYSOs are located in the midst of infrared dark clouds. Nine of the 13 sources have ΔK > 2 mag, significantly higher compared to the mean variability of the entire VVV sample. The light curves of these objects sampled between 2010 and 2015 display rising, declining, or quasi-periodic behavior but no clear periodicity. Light-curve analysis using the Plavchan method shows that the most prominent phased signals have periods of a few hundred days. The nature and timescale of variations found in 6.7 GHz methanol maser emission in massive stars are similar to those of the VYSO light curves. We argue that the origin of the observed variability is episodic accretion. We suggest that the timescale of a few hundred days may represent the frequency at which a spiraling disk feeds dense gas to the young massive star.

  9. 3D facial landmarks: Inter-operator variability of manual annotation

    PubMed Central

    2014-01-01

    Background Manual annotation of landmarks is a known source of variance, which exists in all fields of medical imaging, influencing the accuracy and interpretation of results. However, the variability of human facial landmarks is only sparsely addressed in the current literature, as opposed to, e.g., the research fields of orthodontics and cephalometrics. We present a full facial 3D annotation procedure and a sparse set of manually annotated landmarks, in an effort to reduce operator time and minimize the variance. Method Facial scans from 36 voluntary unrelated blood donors from the Danish Blood Donor Study were randomly chosen. Six operators twice manually annotated 73 anatomical and pseudo-landmarks, using a three-step scheme producing a dense point correspondence map. We analyzed both the intra- and inter-operator variability, using mixed-model ANOVA. We then compared four sparse sets of landmarks in order to construct a dense correspondence map of the 3D scans with a minimum point variance. Results The anatomical landmarks of the eye were associated with the lowest variance, particularly the center of the pupils, whereas points of the jaw and eyebrows had the highest variation. We saw only marginal variability attributable to intra-operator differences and to the portraits. Using a sparse set of landmarks (n=14) that capture the whole face, the dense point mean variance was reduced from 1.92 to 0.54 mm. Conclusion The inter-operator variability was primarily associated with particular landmarks, where more loosely defined landmarks had the highest variability. The variables embedded in the portrait and the reliability of a trained operator had only marginal influence on the variability. Further, using 14 of the annotated landmarks we were able to reduce the variability and create a dense correspondence mesh capturing all facial features. PMID:25306436
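A simple way to quantify per-landmark inter-operator spread, sketched here as the mean squared distance of each operator's annotation to the across-operator mean; this is a stand-in illustration, not the paper's mixed-model ANOVA:

```python
import numpy as np

def landmark_variance(annotations):
    """annotations: array of shape (n_operators, n_landmarks, 3)
    holding each operator's 3D point for each landmark.
    Returns, per landmark, the mean squared distance of the
    operators' points to the across-operator mean position."""
    mean = annotations.mean(axis=0)                 # (n_landmarks, 3)
    d2 = np.sum((annotations - mean) ** 2, axis=2)  # (n_ops, n_landmarks)
    return d2.mean(axis=0)

# Two hypothetical operators annotating one landmark 2 mm apart:
ann = np.array([[[0.0, 0.0, 0.0]],
                [[2.0, 0.0, 0.0]]])
var_per_lm = landmark_variance(ann)
```

Ranking landmarks by this spread is one way to pick a sparse, low-variance subset like the 14-landmark set used in the study.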

  10. A Comparative Analysis of Vibrio cholerae Contamination in Point-of-Drinking and Source Water in a Low-Income Urban Community, Bangladesh.

    PubMed

    Ferdous, Jannatul; Sultana, Rebeca; Rashid, Ridwan B; Tasnimuzzaman, Md; Nordland, Andreas; Begum, Anowara; Jensen, Peter K M

    2018-01-01

    Bangladesh is a cholera endemic country with a population at high risk of cholera. Toxigenic and non-toxigenic Vibrio cholerae (V. cholerae) can cause cholera and cholera-like diarrheal illness and outbreaks. Drinking water is one of the primary routes of cholera transmission in Bangladesh. The aim of this study was to conduct a comparative assessment of the presence of V. cholerae between point-of-drinking water and source water, and to investigate the variability of virulence profiles using molecular methods in a densely populated low-income settlement of Dhaka, Bangladesh. Water samples were collected and tested for V. cholerae from "point-of-drinking" and "source" in 477 study households in routine visits at 6 week intervals over a period of 14 months. We studied the virulence profiles of V. cholerae positive water samples using 22 different virulence gene markers present in toxigenic O1/O139 and non-O1/O139 V. cholerae by polymerase chain reaction (PCR). A total of 1,463 water samples were collected, with 1,082 samples from point-of-drinking water in 388 households and 381 samples from 66 water sources. V. cholerae was detected in 10% of point-of-drinking water samples and in 9% of source water samples. Twenty-three percent of households and 38% of the sources were positive for V. cholerae in at least one visit. Samples collected from point-of-drinking and linked sources in a 7 day interval showed significantly higher odds (P < 0.05) of V. cholerae presence in point-of-drinking compared to source [OR = 17.24 (95% CI = 7.14-42.89)] water. Based on the 7 day interval data, 53% (17/32) of source water samples were negative for V. cholerae while linked point-of-drinking water samples were positive. There were significantly higher odds (p < 0.05) of the presence of V. cholerae O1 [OR = 9.13 (95% CI = 2.85-29.26)] and V. cholerae O139 [OR = 4.73 (95% CI = 1.19-18.79)] in source water samples than in point-of-drinking water samples. 
Contamination of water at the point-of-drinking is less likely to depend on the contamination at the water source. Hygiene education interventions and programs should focus and emphasize on water at the point-of-drinking, including repeated cleaning of drinking vessels, which is of paramount importance in preventing cholera.
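The odds ratios and confidence intervals quoted above have the standard 2×2-table form; a sketch using the conventional Woolf log-normal interval (the counts below are hypothetical, not the study's):

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """2x2 table: exposed positive a, exposed negative b,
    unexposed positive c, unexposed negative d.
    Returns OR = (a*d)/(b*c) and the Woolf (log-normal) 95% CI:
    exp(ln OR +/- z * sqrt(1/a + 1/b + 1/c + 1/d))."""
    orr = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(orr) - z * se)
    hi = math.exp(math.log(orr) + z * se)
    return orr, (lo, hi)

# Hypothetical counts: 10/100 positive when exposed, 5/200 otherwise.
orr, (lo, hi) = odds_ratio_ci(10, 90, 5, 195)
```

An interval excluding 1, as in the OR = 17.24 result above, is what supports the P < 0.05 statements in the abstract.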

  11. Sampling Singular and Aggregate Point Sources of Carbon Dioxide from Space Using OCO-2

    NASA Astrophysics Data System (ADS)

    Schwandner, F. M.; Gunson, M. R.; Eldering, A.; Miller, C. E.; Nguyen, H.; Osterman, G. B.; Taylor, T.; O'Dell, C.; Carn, S. A.; Kahn, B. H.; Verhulst, K. R.; Crisp, D.; Pieri, D. C.; Linick, J.; Yuen, K.; Sanchez, R. M.; Ashok, M.

    2016-12-01

    Anthropogenic carbon dioxide (CO2) sources increasingly tip the balance between natural carbon sources and sinks. Space-borne measurements offer opportunities to detect and analyze point source emission signals anywhere on Earth. Singular continuous point source plumes from power plants or volcanoes turbulently mix into their proximal background fields. In contrast, plumes of aggregate point sources such as cities, and transportation or fossil fuel distribution networks, mix into each other and may therefore result in broader and more persistent excess signals of total column averaged CO2 (XCO2). NASA's first satellite dedicated to atmospheric CO2 observation, the Orbiting Carbon Observatory-2 (OCO-2), launched in July 2014 and now leads the afternoon constellation of satellites (A-Train). While continuously collecting measurements in eight footprints across a narrow (<10 km) swath, it occasionally cross-cuts coincident emission plumes. For singular point sources like volcanoes and coal-fired power plants, we have developed OCO-2 data discovery tools and a proxy detection method for plumes using SO2-sensitive TIR imaging data (ASTER). This approach offers a path toward automating plume detections with subsequent matching and mining of OCO-2 data. We found several distinct singular-source CO2 signals. For aggregate point sources, we investigated whether OCO-2's multi-sounding swath observing geometry can reveal intra-urban spatial emission structures in the observed variability of XCO2 data. OCO-2 data demonstrate that we can detect localized excess XCO2 signals of 2 to 6 ppm against suburban and rural backgrounds. Compared to single-shot GOSAT soundings, which detected urban/rural XCO2 differences in megacities (Kort et al., 2012), the OCO-2 swath geometry opens up the path to future capabilities enabling urban characterization of greenhouse gases using hundreds of soundings over a city at each satellite overpass. California Institute of Technology

  12. Response of non-point source pollutant loads to climate change in the Shitoukoumen reservoir catchment.

    PubMed

    Zhang, Lei; Lu, Wenxi; An, Yonglei; Li, Di; Gong, Lei

    2012-01-01

    The impacts of climate change on streamflow and non-point source pollutant loads in the Shitoukoumen reservoir catchment are predicted by combining a general circulation model (HadCM3) with the Soil and Water Assessment Tool (SWAT) hydrological model. A statistical downscaling model was used to generate future local scenarios of meteorological variables such as temperature and precipitation. The downscaled meteorological variables were then used as input to the SWAT hydrological model, calibrated and validated with observations, and the corresponding changes of future streamflow and non-point source pollutant loads in the Shitoukoumen reservoir catchment were simulated and analyzed. Results show that daily temperature increases in three future periods (2010-2039, 2040-2069, and 2070-2099) relative to a baseline of 1961-1990, with a rate of increase of 0.63°C per decade. Annual precipitation also shows an apparent increase of 11 mm per decade. The calibration and validation results showed that the SWAT model was able to simulate the streamflow and non-point source pollutant loads well, with a coefficient of determination of 0.7 and a Nash-Sutcliffe efficiency of about 0.7 for both the calibration and validation periods. The future climate change has a significant impact on streamflow and non-point source pollutant loads. The annual streamflow shows a fluctuating upward trend from 2010 to 2099, with an increase rate of 1.1 m³ s⁻¹ per decade, and a significant upward trend in summer, with an increase rate of 1.32 m³ s⁻¹ per decade. The increase in summer contributes the most to the increase of annual load compared with other seasons. The annual NH₄⁺-N load into Shitoukoumen reservoir shows a significant downward trend, with a decrease rate of 40.6 t per decade. The annual TP load shows an insignificant increasing trend, with a change rate of 3.77 t per decade. 
The results of this analysis provide a scientific basis to support decision makers and strategies of adaptation to climate change.
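The Nash-Sutcliffe efficiency used above to judge the SWAT calibration is defined as NSE = 1 − Σ(obs − sim)² / Σ(obs − mean(obs))²; a minimal implementation:

```python
import numpy as np

def nash_sutcliffe(observed, simulated):
    """NSE = 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2).
    1 is a perfect fit; 0 means the model does no better than the
    observed mean; values around 0.7, as reported here, indicate a
    usable calibration."""
    obs = np.asarray(observed, float)
    sim = np.asarray(simulated, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)
```

Unlike the coefficient of determination, NSE penalizes systematic bias directly, which is why hydrological studies commonly report both.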

  13. Relation of watershed setting and stream nutrient yields at selected sites in central and eastern North Carolina, 1997-2008

    USGS Publications Warehouse

    Harden, Stephen L.; Cuffney, Thomas F.; Terziotti, Silvia; Kolb, Katharine R.

    2013-01-01

    Data collected between 1997 and 2008 at 48 stream sites were used to characterize relations between watershed settings and stream nutrient yields throughout central and eastern North Carolina. The focus of the investigation was to identify environmental variables in watersheds that influence nutrient export for supporting the development and prioritization of management strategies for restoring nutrient-impaired streams. Nutrient concentration data and streamflow data compiled for the 1997 to 2008 study period were used to compute stream yields of nitrate, total nitrogen (N), and total phosphorus (P) for each study site. Compiled environmental data (including variables for land cover, hydrologic soil groups, base-flow index, streams, wastewater treatment facilities, and concentrated animal feeding operations) were used to characterize the watershed settings for the study sites. Data for the environmental variables were analyzed in combination with the stream nutrient yields to explore relations based on watershed characteristics and to evaluate whether particular variables were useful indicators of watersheds having relatively higher or lower potential for exporting nutrients. Data evaluations included an examination of median annual nutrient yields based on a watershed land-use classification scheme developed as part of the study. An initial examination of the data indicated that the highest median annual nutrient yields occurred at both agricultural and urban sites, especially for urban sites having large percentages of point-source flow contributions to the streams. The results of statistical testing identified significant differences in annual nutrient yields when sites were analyzed on the basis of watershed land-use category. 
When statistical differences in median annual yields were noted, the results for nitrate, total N, and total P were similar in that highly urbanized watersheds (greater than 30 percent developed land use) and (or) watersheds with greater than 10 percent point-source flow contributions to streamflow had higher yields relative to undeveloped watersheds (having less than 10 and 15 percent developed and agricultural land uses, respectively) and watersheds with relatively low agricultural land use (between 15 and 30 percent). The statistical tests further indicated that the median annual yields for total P were statistically higher for watersheds with high agricultural land use (greater than 30 percent) compared to the undeveloped watersheds and watersheds with low agricultural land use. The total P yields also were higher for watersheds with low urban land use (between 10 and 30 percent developed land) compared to the undeveloped watersheds. The study data indicate that grouping and examining stream nutrient yields based on the land-use classifications used in this report can be useful for characterizing relations between watershed settings and nutrient yields in streams located throughout central and eastern North Carolina. Compiled study data also were analyzed with four regression tree models as a means of determining which watershed environmental variables or combination of variables result in basins that are likely to have high or low nutrient yields. The regression tree analyses indicated that some of the environmental variables examined in this study were useful for predicting yields of nitrate, total N, and total P. When the median annual nutrient yields for all 48 sites were evaluated as a group (Model 1), annual point-source flow yields had the greatest influence on nitrate and total N yields observed in streams, and annual streamflow yields had the greatest influence on yields of total P. 
The Model 1 results indicated that watersheds with higher annual point-source flow yields had higher annual yields of nitrate and total N, and watersheds with higher annual streamflow yields had higher annual yields of total P. When sites with high point-source flows (greater than 10 percent of total streamflow) were excluded from the regression tree analyses (Models 2–4), the percentage of forested land in the watersheds was identified as the primary environmental variable influencing stream yields for both total N and total P. Models 2, 3 and 4 did not identify any watershed environmental variables that could adequately explain the observed variability in the nitrate yields among the set of sites examined by each of these models. The results for Models 2, 3, and 4 indicated that watersheds with higher percentages of forested land had lower annual total N and total P yields compared to watersheds with lower percentages of forested land, which had higher median annual total N and total P yields. Additional environmental variables determined to further influence the stream nutrient yields included median annual percentage of point-source flow contributions to the streams, variables of land cover (percentage of forested land, agricultural land, and (or) forested land plus wetlands) in the watershed and (or) in the stream buffer, and drainage area. The regression tree models can serve as a tool for relating differences in select watershed attributes to differences in stream yields of nitrate, total N, and total P, which can provide beneficial information for improving nutrient management in streams throughout North Carolina and for reducing nutrient loads to coastal waters.
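A regression tree reduces to repeated single-variable splits that minimize within-group squared error; a one-node sketch with hypothetical point-source-flow and nitrogen-yield values (illustrative numbers, not the study's data):

```python
import numpy as np

def best_split(x, y):
    """One node of a regression tree: find the threshold on predictor x
    that minimizes the summed squared error of the two resulting
    group means."""
    best = (None, np.inf)
    for t in np.unique(x)[:-1]:          # candidate split points
        left, right = y[x <= t], y[x > t]
        sse = (((left - left.mean()) ** 2).sum()
               + ((right - right.mean()) ** 2).sum())
        if sse < best[1]:
            best = (t, sse)
    return best

# Hypothetical watersheds: those with >10% point-source flow export
# markedly more total N, so the tree should split near that boundary.
ps_flow = np.array([2.0, 4.0, 6.0, 15.0, 20.0, 30.0])   # % point-source flow
n_yield = np.array([1.1, 1.0, 1.2, 3.5, 3.8, 4.0])      # total N yield
threshold, _ = best_split(ps_flow, n_yield)
```

A full tree (like the study's Models 1-4) applies this search recursively over all candidate environmental variables, which is how point-source flow and forested-land percentage emerge as the dominant splits.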

  14. Evaluating Air-Quality Models: Review and Outlook.

    NASA Astrophysics Data System (ADS)

    Weil, J. C.; Sykes, R. I.; Venkatram, A.

    1992-10-01

    Over the past decade, much attention has been devoted to the evaluation of air-quality models with emphasis on model performance in predicting the high concentrations that are important in air-quality regulations. This paper stems from our belief that this practice needs to be expanded to 1) evaluate model physics and 2) deal with the large natural or stochastic variability in concentration. The variability is represented by the root-mean-square fluctuating concentration (σc) about the mean concentration (C) over an ensemble, a given set of meteorological, source, and other conditions. Most air-quality models used in applications predict C, whereas observations are individual realizations drawn from an ensemble. For σc comparable to or larger than C, large residuals exist between predicted and observed concentrations, which confuse model evaluations. This paper addresses ways of evaluating model physics in light of the large σc; the focus is on elevated point-source models. Evaluation of model physics requires the separation of the mean model error, the difference between the predicted and observed C, from the natural variability. A residual analysis is shown to be an effective way of doing this. Several examples demonstrate the usefulness of residuals as well as correlation analyses and laboratory data in judging model physics. In general, σc models and predictions of the probability density function of the fluctuating concentration (c) are in the developmental stage, with laboratory data playing an important role. Laboratory data from point-source plumes in a convection tank show that the density function approximates a self-similar distribution along the plume center plane, a useful result in a residual analysis. At present, there is one model, ARAP, that predicts C, σc, and the density function for point-source plumes. This model is more computationally demanding than other dispersion models (for C only) and must be demonstrated as a practical tool. 
However, it predicts an important quantity for applications: the uncertainty in the very high and infrequent concentrations. The uncertainty is large and is needed in evaluating operational performance and in predicting the attainment of air-quality standards.
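The residual analysis advocated above can be sketched as scaling observed-minus-predicted concentrations by the fluctuation magnitude and separating the mean (model bias) from the scatter (natural variability); a minimal illustration, not the paper's full procedure:

```python
import numpy as np

def residual_analysis(observed, predicted, sigma_c):
    """Scale residuals by sigma_c, the rms fluctuating concentration.
    If the model physics is right, the scaled residuals should average
    near zero (no mean model error), with the remaining spread
    attributable to natural ensemble variability."""
    r = ((np.asarray(observed, float) - np.asarray(predicted, float))
         / np.asarray(sigma_c, float))
    return r.mean(), r.std(ddof=1)

# Hypothetical case with a constant +1*sigma_c bias and no scatter:
bias, spread = residual_analysis([11.0, 13.0, 15.0],
                                 [10.0, 12.0, 14.0],
                                 [1.0, 1.0, 1.0])
```

A nonzero mean here points at model physics, while a large spread with near-zero mean is consistent with the stochastic variability the paper emphasizes.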

  15. Wind-tunnel Modelling of Dispersion from a Scalar Area Source in Urban-Like Roughness

    NASA Astrophysics Data System (ADS)

    Pascheke, Frauke; Barlow, Janet F.; Robins, Alan

    2008-01-01

    A wind-tunnel study was conducted to investigate ventilation of scalars from urban-like geometries at neighbourhood scale by exploring two different geometries: a uniform-height roughness and a non-uniform-height roughness, both with an equal plan and frontal density of λp = λf = 25%. In both configurations a sub-unit of the idealized urban surface was coated with a thin layer of naphthalene to represent area sources. The naphthalene sublimation method was used to measure directly the total area-averaged transport of scalars out of the complex geometries. At the same time, naphthalene vapour concentrations controlled by the turbulent fluxes were detected using a fast Flame Ionisation Detection (FID) technique. This paper describes the novel use of a naphthalene-coated surface as an area source in dispersion studies. Particular emphasis was also given to testing whether the concentration measurements were independent of Reynolds number. For low wind speeds, transfer from the naphthalene surface is determined by a combination of forced and natural convection. Compared with a propane point-source release, a 25% higher free-stream velocity was needed for the naphthalene area source to yield Reynolds-number-independent concentration fields. Ventilation transfer coefficients wT/U derived from the naphthalene sublimation method showed that, whilst there was enhanced vertical momentum exchange due to obstacle height variability, advection was reduced and dispersion from the source area was not enhanced. Thus, the height variability of a canopy is an important parameter when generalising urban dispersion. Fine-resolution concentration measurements in the canopy showed the effect of height variability on dispersion at street scale. Rapid vertical transport in the wake of individual high-rise obstacles was found to generate elevated point-like sources. A Gaussian plume model was used to analyse differences in the downstream plumes. 
Intensified lateral and vertical plume spread and plume dilution with height were found for the non-uniform-height roughness.
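The Gaussian plume model used above to analyse the downstream plumes has the standard reflecting-ground form, C = Q/(2π U σy σz) · exp(−y²/2σy²) · [exp(−(z−H)²/2σz²) + exp(−(z+H)²/2σz²)]; a sketch of that textbook formula (not the study's fitted parameters):

```python
import math

def gaussian_plume(q, u, y, z, sigma_y, sigma_z, h):
    """Ground-reflecting Gaussian plume concentration for a continuous
    point source of strength q at effective height h, mean wind speed u,
    crosswind offset y, receptor height z, and dispersion parameters
    sigma_y, sigma_z (which grow with downwind distance)."""
    lateral = math.exp(-y**2 / (2 * sigma_y**2))
    vertical = (math.exp(-(z - h)**2 / (2 * sigma_z**2))
                + math.exp(-(z + h)**2 / (2 * sigma_z**2)))
    return q / (2 * math.pi * u * sigma_y * sigma_z) * lateral * vertical

# Ground-level source, centerline receptor: image term doubles C.
c_centerline = gaussian_plume(1.0, 1.0, 0.0, 0.0, 1.0, 1.0, 0.0)
```

Fitting σy and σz per configuration is what lets the study compare lateral and vertical plume spread between the uniform- and non-uniform-height roughness.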

  16. Screening and validation of EXTraS data products

    NASA Astrophysics Data System (ADS)

    Carpano, Stefania; Haberl, F.; De Luca, A.; Tiengo, A.; Israel, G.; Rodriguez, G.; Belfiore, A.; Rosen, S.; Read, A.; Wilms, J.; Kreikenbohm, A.; Law-Green, D.

    2015-09-01

    The EXTraS project (Exploring the X-ray Transient and variable Sky) is aimed at fully exploring the serendipitous content of the XMM-Newton EPIC database in the time domain. The project is funded within the EU/FP7-Cooperation Space framework and is carried out by a collaboration including INAF (Italy), IUSS (Italy), CNR/IMATI (Italy), University of Leicester (UK), MPE (Germany) and ECAP (Germany). Its several tasks consist of characterising aperiodic variability for all 3XMM sources, searching for short-term periodic variability in hundreds of thousands of sources, detecting new transient sources that are missed by standard source detection and hence do not belong to the 3XMM catalogue, searching for long-term variability by measuring fluxes or upper limits for both pointed and slew observations, and finally performing multiwavelength characterisation and classification. Screening and validation of the different products is essential in order to reject flawed results generated by the automatic pipelines. We present here the screening tool we developed in the form of a Graphical User Interface and our plans for a systematic screening of the different catalogues.

  17. X-ray reflection from cold white dwarfs in magnetic cataclysmic variables

    NASA Astrophysics Data System (ADS)

    Hayashi, Takayuki; Kitaguchi, Takao; Ishida, Manabu

    2018-02-01

    We model X-ray reflection from white dwarfs (WDs) in magnetic cataclysmic variables (mCVs) using a Monte Carlo simulation. A point source with a power-law spectrum or a realistic post-shock accretion column (PSAC) source irradiates a cool and spherical WD. The PSAC source emits thermal spectra of various temperatures stratified along the column according to the PSAC model. In the point-source simulation, we confirm the following: a source harder and nearer to the WD enhances the reflection; higher iron abundance enhances the equivalent widths (EWs) of fluorescent iron Kα1, 2 lines and their Compton shoulder, and increases the cut-off energy of a Compton hump; significant reflection appears from an area that is more than 90° apart from the position right under the point X-ray source because of the WD curvature. The PSAC simulation reveals the following: a more massive WD basically enhances the intensities of the fluorescent iron Kα1, 2 lines and the Compton hump, except for some specific accretion rate, because the more massive WD makes a hotter PSAC from which higher-energy X-rays are preferentially emitted; a larger specific accretion rate monotonically enhances the reflection because it makes a hotter and shorter PSAC; the intrinsic thermal component hardens by occultation of the cool base of the PSAC by the WD. We quantitatively estimate the influences of the parameters on the EWs and the Compton hump with both types of source. We also calculate X-ray modulation profiles brought about by the WD spin. These depend on the angles of the spin axis from the line of sight and from the PSAC, and on whether the two PSACs can be seen. The reflection spectral model and the modulation model involve the fluorescent lines and the Compton hump and can directly be compared to the data, which allows us to estimate these geometrical parameters with unprecedented accuracy.

  18. A Comparative Analysis of Vibrio cholerae Contamination in Point-of-Drinking and Source Water in a Low-Income Urban Community, Bangladesh

    PubMed Central

    Ferdous, Jannatul; Sultana, Rebeca; Rashid, Ridwan B.; Tasnimuzzaman, Md.; Nordland, Andreas; Begum, Anowara; Jensen, Peter K. M.

    2018-01-01

    Bangladesh is a cholera endemic country with a population at high risk of cholera. Toxigenic and non-toxigenic Vibrio cholerae (V. cholerae) can cause cholera and cholera-like diarrheal illness and outbreaks. Drinking water is one of the primary routes of cholera transmission in Bangladesh. The aim of this study was to conduct a comparative assessment of the presence of V. cholerae between point-of-drinking water and source water, and to investigate the variability of virulence profiles using molecular methods, in a densely populated low-income settlement of Dhaka, Bangladesh. Water samples were collected and tested for V. cholerae from “point-of-drinking” and “source” in 477 study households in routine visits at 6-week intervals over a period of 14 months. We studied the virulence profiles of V. cholerae positive water samples using 22 different virulence gene markers present in toxigenic O1/O139 and non-O1/O139 V. cholerae using polymerase chain reaction (PCR). A total of 1,463 water samples were collected, with 1,082 samples from point-of-drinking water in 388 households and 381 samples from 66 water sources. V. cholerae was detected in 10% of point-of-drinking water samples and in 9% of source water samples. Twenty-three percent of households and 38% of the sources were positive for V. cholerae in at least one visit. Samples collected from point-of-drinking and linked sources at a 7-day interval showed significantly higher odds (P < 0.05) of V. cholerae presence in point-of-drinking water than in source water [OR = 17.24 (95% CI = 7.14–42.89)]. Based on the 7-day interval data, 53% (17/32) of source water samples were negative for V. cholerae while linked point-of-drinking water samples were positive. There were significantly higher odds (p < 0.05) of the presence of V. cholerae O1 [OR = 9.13 (95% CI = 2.85–29.26)] and V. cholerae O139 [OR = 4.73 (95% CI = 1.19–18.79)] in source water samples than in point-of-drinking water samples. Contamination of water at the point-of-drinking is thus less likely to depend on contamination at the water source. Hygiene education interventions and programs should focus on water at the point-of-drinking, including repeated cleaning of drinking vessels, which is of paramount importance in preventing cholera. PMID:29616005
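    The odds ratios quoted above follow the standard 2x2 contingency-table calculation with a Woolf (log-OR) confidence interval. A minimal sketch, using hypothetical cell counts for illustration only (the abstract does not report the underlying counts):

    ```python
    import math

    def odds_ratio_ci(a, b, c, d, z=1.96):
        """Odds ratio and Woolf 95% CI from a 2x2 table:
        a = exposed & positive, b = exposed & negative,
        c = unexposed & positive, d = unexposed & negative."""
        or_ = (a * d) / (b * c)
        se = math.sqrt(1/a + 1/b + 1/c + 1/d)   # SE of log(OR)
        lo = math.exp(math.log(or_) - z * se)
        hi = math.exp(math.log(or_) + z * se)
        return or_, lo, hi

    # Hypothetical counts, not taken from the study
    or_, lo, hi = odds_ratio_ci(20, 5, 4, 30)
    print(f"OR = {or_:.2f} (95% CI = {lo:.2f}-{hi:.2f})")
    ```

    Wide intervals such as the study's 7.14–42.89 typically reflect small cell counts, which this formula makes explicit through the 1/a + 1/b + 1/c + 1/d term.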

  19. Improving power to detect changes in blood miRNA expression by accounting for sources of variability in experimental designs.

    PubMed

    Daniels, Sarah I; Sillé, Fenna C M; Goldbaum, Audrey; Yee, Brenda; Key, Ellen F; Zhang, Luoping; Smith, Martyn T; Thomas, Reuben

    2014-12-01

    Blood miRNAs are a promising new area of disease research, but variability in miRNA measurements may limit detection of true-positive findings. Here, we measured sources of miRNA variability and determined whether repeated measures can improve power to detect fold-change differences between comparison groups. Blood from healthy volunteers (N = 12) was collected at three time points. The miRNAs were extracted by a method predetermined to give the highest miRNA yield. Nine different miRNAs were quantified using different qPCR assays and analyzed using mixed models to identify sources of variability. A larger number of miRNAs from a publicly available blood miRNA microarray dataset with repeated measures were used for a bootstrapping procedure to investigate effects of repeated measures on power to detect fold changes in miRNA expression for a theoretical case-control study. Technical variability in qPCR replicates was identified as a significant source of variability (P < 0.05) for all nine miRNAs tested. Variability was larger in the TaqMan qPCR assays (SD = 0.15-0.61) than in the qScript qPCR assays (SD = 0.08-0.14). Inter- and intraindividual and extraction variability also contributed significantly for two miRNAs. The bootstrapping procedure demonstrated that repeated measures (20%-50% of N) increased detection of a 2-fold change for approximately 10% to 45% more miRNAs. Statistical power to detect small fold changes in blood miRNAs can be improved by accounting for sources of variability using repeated measures and choosing appropriate methods to minimize variability in miRNA quantification. This study demonstrates the importance of including repeated measures in experimental designs for blood miRNA research. See all the articles in this CEBP Focus section, "Biomarkers, Biospecimens, and New Technologies in Molecular Epidemiology." ©2014 American Association for Cancer Research.
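    The power gain from repeated measures can be illustrated with a small Monte Carlo sketch: averaging k technical replicates shrinks the technical variance component by 1/k, raising the power of a two-group comparison. This is not the authors' bootstrapping procedure; the group sizes, variance components, and effect size below are all illustrative assumptions.

    ```python
    import math
    import random

    def simulated_power(n_per_group, k_repeats, effect=1.0,
                        sd_bio=0.5, sd_tech=0.6, n_sims=500, seed=1):
        """Monte Carlo power for detecting a log2 fold change `effect`
        when each subject's k technical replicates are averaged.
        Averaging reduces technical variance from sd_tech**2 to sd_tech**2 / k."""
        rng = random.Random(seed)
        sd_obs = math.sqrt(sd_bio**2 + sd_tech**2 / k_repeats)
        hits = 0
        for _ in range(n_sims):
            ctrl = [rng.gauss(0.0, sd_obs) for _ in range(n_per_group)]
            case = [rng.gauss(effect, sd_obs) for _ in range(n_per_group)]
            mean_c = sum(ctrl) / n_per_group
            mean_x = sum(case) / n_per_group
            var_c = sum((v - mean_c) ** 2 for v in ctrl) / (n_per_group - 1)
            var_x = sum((v - mean_x) ** 2 for v in case) / (n_per_group - 1)
            z = (mean_x - mean_c) / math.sqrt((var_c + var_x) / n_per_group)
            if abs(z) > 1.96:          # two-sided test at alpha = 0.05
                hits += 1
        return hits / n_sims

    print(simulated_power(10, 1))   # single measurement per subject
    print(simulated_power(10, 3))   # three averaged repeated measures
    ```

    With these assumed variance components, three averaged measurements per subject noticeably raise the detection rate for a 2-fold (one log2 unit) change, mirroring the abstract's conclusion.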

  20. DEVELOPING SEASONAL AMMONIA EMISSION ESTIMATES WITH AN INVERSE MODELING TECHNIQUE

    EPA Science Inventory

    Significant uncertainty exists in magnitude and variability of ammonia (NH3) emissions, which are needed for air quality modeling of aerosols and deposition of nitrogen compounds. Approximately 85% of NH3 emissions are estimated to come from agricultural non-point sources. We sus...

  1. A stepwise, multi-objective, multi-variable parameter optimization method for the APEX model

    USDA-ARS?s Scientific Manuscript database

    Proper parameterization enables hydrological models to make reliable estimates of non-point source pollution for effective control measures. The automatic calibration of hydrologic models requires significant computational power limiting its application. The study objective was to develop and eval...

  2. Body temperature variability (Part 1): a review of the history of body temperature and its variability due to site selection, biological rhythms, fitness, and aging.

    PubMed

    Kelly, Greg

    2006-12-01

    Body temperature is a complex, non-linear data point, subject to many sources of internal and external variation. While these sources of variation significantly complicate interpretation of temperature data, disregarding knowledge in favor of oversimplifying complex issues would represent a significant departure from practicing evidence-based medicine. Part 1 of this review outlines the historical work of Wunderlich on temperature and the origins of the concept that a healthy normal temperature is 98.6 degrees F (37.0 degrees C). Wunderlich's findings and methodology are reviewed and his results are contrasted with findings from modern clinical thermometry. Endogenous sources of temperature variability, including variations caused by site of measurement, circadian, menstrual, and annual biological rhythms, fitness, and aging are discussed. Part 2 will review the effects of exogenous masking agents - external factors in the environment, diet, or lifestyle that can influence body temperature, as well as temperature findings in disease states.

  3. VizieR Online Data Catalog: XMM-Newton point-source catalogue of the SMC (Sturm+, 2013)

    NASA Astrophysics Data System (ADS)

    Sturm, R.; Haberl, F.; Pietsch, W.; Ballet, J.; Hatzidimitriou, D.; Buckley, D. A. H.; Coe, M.; Ehle, M.; Filipovic, M. D.; La Palombara, N.; Tiengo, A.

    2013-07-01

    The XMM-Newton survey of the Small Magellanic Cloud (SMC) yields a complete coverage of the bar and eastern wing in the 0.2-12.0keV band. This catalogue comprises 3053 unique X-ray point sources and sources with moderate extent that have been reduced from 5236 individual detections found in observations between April 2000 and April 2010. Sources have a median position uncertainty of 1.3" (1σ) and limiting fluxes down to ~1*10-14erg/s/cm2 in the 0.2-4.5keV band, corresponding to 5*1033erg/s for sources in the SMC. Sources have been classified using hardness ratios, X-ray variability, and their multi-wavelength properties. In addition to the main field (5.58deg2), available outer fields have been included in the catalogue, yielding a total field area of 6.32deg2. X-ray sources with large extent (>40", e.g. supernova remnants and galaxy clusters) have been presented by Haberl et al. (2012, Cat. J/A+A/545/A128) (2 data files).
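    A common definition of the hardness ratio used for this kind of source classification is HR = (H - S) / (H + S) between counts in a soft and a hard energy band; a minimal sketch with hypothetical band counts (the catalogue's exact band definitions are not reproduced here):

    ```python
    def hardness_ratio(soft, hard):
        """HR = (H - S) / (H + S): -1 for a purely soft source,
        +1 for a purely hard one."""
        if soft + hard == 0:
            raise ValueError("no counts in either band")
        return (hard - soft) / (hard + soft)

    # Hypothetical counts for illustration
    print(hardness_ratio(soft=120, hard=40))   # soft-dominated source, HR < 0
    ```

    Thresholds in HR (often combined across several band pairs) then separate candidate classes such as supersoft sources, X-ray binaries, and background AGN.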

  4. A method for analyzing temporal patterns of variability of a time series from Poincare plots.

    PubMed

    Fishman, Mikkel; Jacono, Frank J; Park, Soojin; Jamasebi, Reza; Thungtong, Anurak; Loparo, Kenneth A; Dick, Thomas E

    2012-07-01

    The Poincaré plot is a popular two-dimensional, time series analysis tool because of its intuitive display of dynamic system behavior. Poincaré plots have been used to visualize heart rate and respiratory pattern variabilities. However, conventional quantitative analysis relies primarily on statistical measurements of the cumulative distribution of points, making it difficult to interpret irregular or complex plots. Moreover, the plots are constructed to reflect highly correlated regions of the time series, reducing the amount of nonlinear information that is presented and thereby hiding potentially relevant features. We propose temporal Poincaré variability (TPV), a novel analysis methodology that uses standard techniques to quantify the temporal distribution of points and to detect nonlinear sources responsible for physiological variability. In addition, the analysis is applied across multiple time delays, yielding a richer insight into system dynamics than the traditional circle return plot. The method is applied to data sets of R-R intervals and to synthetic point process data extracted from the Lorenz time series. The results demonstrate that TPV complements the traditional analysis and can be applied more generally, including Poincaré plots with multiple clusters, and more consistently than the conventional measures and can address questions regarding potential structure underlying the variability of a data set.
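    The conventional cumulative descriptors that TPV is contrasted with are the ellipse widths SD1 and SD2 of the return map (x_i, x_{i+τ}); scanning τ beyond 1 is what moves the analysis past the traditional circle return plot. A minimal sketch (not the authors' TPV implementation) with hypothetical R-R intervals:

    ```python
    import math

    def poincare_sd(series, tau=1):
        """SD1/SD2 of the Poincaré plot (x_i, x_{i+tau}).
        SD1: dispersion perpendicular to the identity line (short-term);
        SD2: dispersion along the identity line (long-term)."""
        x = series[:-tau]
        y = series[tau:]
        def pstd(v):
            m = sum(v) / len(v)
            return math.sqrt(sum((u - m) ** 2 for u in v) / len(v))
        sd1 = pstd([(b - a) / math.sqrt(2) for a, b in zip(x, y)])
        sd2 = pstd([(b + a) / math.sqrt(2) for a, b in zip(x, y)])
        return sd1, sd2

    # Hypothetical R-R intervals (ms); varying tau probes longer-range structure
    rr = [812, 790, 805, 821, 797, 810, 788, 802, 815, 794]
    for tau in (1, 2, 3):
        print(tau, poincare_sd(rr, tau))
    ```

    Because SD1/SD2 summarize only the cumulative point distribution, two series with very different temporal orderings can share the same values, which is the limitation TPV is designed to address.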

  5. Spectral Properties and Variability of BIS objects

    NASA Astrophysics Data System (ADS)

    Gaudenzi, S.; Nesci, R.; Rossi, C.; Sclavi, S.; Gigoyan, K. S.; Mickaelian, A. M.

    2017-10-01

    Through the analysis and interpretation of newly obtained data and literature data, we have clarified the nature of poorly investigated IRAS point sources classified as late-type stars, belonging to the Byurakan IRAS Stars catalog. From medium-resolution spectroscopy of 95 stars we have strongly revised 47 spectral types and newly classified 31 sources. Nine stars are of G or K type, four are N-type carbon stars on the Asymptotic Giant Branch, the others being M-type stars. From literature and new photometric observations we have studied their variability behaviour. For the regular variables we determined distances, absolute magnitudes and mass-loss rates. For the other stars we estimated the distances, which range between 1.3 and 10 kpc with a median of 2.8 kpc from the galactic plane, indicating that BIS stars mostly belong to the halo population.

  6. Evaluation of a stepwise, multi-objective, multi-variable parameter optimization method for the APEX model

    USDA-ARS?s Scientific Manuscript database

    Hydrologic models are essential tools for environmental assessment of agricultural non-point source pollution. The automatic calibration of hydrologic models, though efficient, demands significant computational power, which can limit its application. The study objective was to investigate a cost e...

  7. The Development and Application of Spatiotemporal Metrics for the Characterization of Point Source FFCO2 Emissions and Dispersion

    NASA Astrophysics Data System (ADS)

    Roten, D.; Hogue, S.; Spell, P.; Marland, E.; Marland, G.

    2017-12-01

    There is an increasing role for high-resolution CO2 emissions inventories across multiple arenas. The breadth of the applicability of high-resolution data is apparent from their use in atmospheric CO2 modeling, their potential for validation of space-based atmospheric CO2 remote sensing, and the development of climate change policy. This work focuses on increasing our understanding of the uncertainty in these inventories and the implications for their downstream use. The industrial point sources of emissions (power generating stations, cement manufacturing plants, paper mills, etc.) used in the creation of these inventories often have robust emissions characteristics beyond just their geographic location. Physical parameters of the emission sources such as the number of exhaust stacks, stack heights, stack diameters, exhaust temperatures, and exhaust velocities, as well as temporal variability and climatic influences, can be important in characterizing emissions. Emissions from large point sources can behave very differently from emissions from areal sources such as automobiles. For many applications geographic location is not an adequate characterization of emissions. This work demonstrates the sensitivities of atmospheric models to the physical parameters of large point sources and provides a methodology for quantifying parameter impacts at multiple locations across the United States. The sensitivities underscore the importance of location and timing and help to identify aspects that can guide efforts to reduce uncertainty in emissions inventories and increase the utility of the models.

  8. TiO2 dye sensitized solar cell (DSSC): linear relationship of maximum power point and anthocyanin concentration

    NASA Astrophysics Data System (ADS)

    Ahmadian, Radin

    2010-09-01

    This study investigated the relationship between anthocyanin concentration from different organic fruit species and output voltage and current in a TiO2 dye-sensitized solar cell (DSSC), and hypothesized that fruits with greater anthocyanin concentration produce a higher maximum power point (MPP), which would lead to higher current and voltage. Anthocyanin dye solutions were made by crushing fresh fruits with different anthocyanin content in 2 mL of de-ionized water and filtering. Using these test fruit dyes, multiple DSSCs were assembled such that light enters through the TiO2 side of the cell. The full current-voltage (I-V) co-variations were measured using a 500 Ω potentiometer as a variable load. Point-by-point current and voltage data pairs were measured at various incremental resistance values. The maximum power point (MPP) generated by the solar cell was defined as the dependent variable and the anthocyanin concentration in the fruit used in the DSSC as the independent variable. A regression model was used to investigate the linear relationship between the study variables. Regression analysis showed a significant linear relationship between MPP and anthocyanin concentration with a p-value of 0.007. Fruits like blueberry and black raspberry with the highest anthocyanin content generated higher MPP. In a DSSC, a linear model may predict MPP based on the anthocyanin concentration. This model is the first step to finding organic anthocyanin sources in nature with the highest dye concentration for generating energy.
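    The regression step above is a one-variable ordinary least-squares fit; a minimal sketch with hypothetical (concentration, MPP) pairs, since the study's raw data are not reproduced in the abstract:

    ```python
    def ols_fit(x, y):
        """Ordinary least squares for y = a + b*x; returns (intercept, slope)."""
        n = len(x)
        mx = sum(x) / n
        my = sum(y) / n
        sxx = sum((xi - mx) ** 2 for xi in x)
        sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
        b = sxy / sxx
        return my - b * mx, b

    # Hypothetical (anthocyanin in mg/100 g, MPP in microwatts) pairs
    conc = [25, 80, 150, 245, 365, 480]
    mpp = [3.1, 5.0, 7.9, 11.2, 15.8, 20.1]
    a, b = ols_fit(conc, mpp)
    print(f"MPP ≈ {a:.2f} + {b:.4f} * concentration")
    ```

    A positive fitted slope is what the study's significant linear relationship (p = 0.007) corresponds to; predicting MPP for a new fruit is then a single evaluation of a + b*concentration.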

  9. An efficient deterministic-probabilistic approach to modeling regional groundwater flow: 2. Application to Owens Valley, California

    USGS Publications Warehouse

    Guymon, Gary L.; Yen, Chung-Cheng

    1990-01-01

    The applicability of a deterministic-probabilistic model for predicting water tables in southern Owens Valley, California, is evaluated. The model is based on a two-layer deterministic model that is cascaded with a two-point probability model. To reduce the potentially large number of uncertain variables in the deterministic model, lumping of uncertain variables was evaluated by sensitivity analysis to reduce the total number of uncertain variables to three: hydraulic conductivity, storage coefficient or specific yield, and source-sink function. Results demonstrate that lumping of uncertain parameters reduces computational effort while providing sufficient precision for the case studied. The simulated spatial coefficients of variation for water table temporal position in most of the basin are small, which suggests that deterministic models can predict water tables in these areas with good precision. However, in several important areas where pumping occurs or the geology is complex, the simulated spatial coefficients of variation are overestimated by the two-point probability method.
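    A two-point probability model of this kind is in the spirit of Rosenblueth's point-estimate method: the deterministic model is run at the mu ± sigma corner of each uncertain input (2^n runs for n inputs) and the outputs are combined into an estimated mean and variance. A sketch under simplifying assumptions (uncorrelated, symmetrically distributed inputs; the toy head function stands in for the actual groundwater model):

    ```python
    from itertools import product
    import math

    def two_point_estimate(model, means, sds):
        """Rosenblueth-style point-estimate method: evaluate the
        deterministic model at every mu +/- sigma corner (2**n runs,
        equal weights) and return the estimated output mean and
        standard deviation."""
        outputs = []
        for signs in product((-1.0, 1.0), repeat=len(means)):
            args = [m + s * sd for m, s, sd in zip(means, signs, sds)]
            outputs.append(model(*args))
        n = len(outputs)
        mean = sum(outputs) / n
        var = sum((o - mean) ** 2 for o in outputs) / n
        return mean, math.sqrt(var)

    # Toy stand-in for the deterministic model: head response as a
    # function of conductivity K, storage S, and source-sink term Q
    def toy_head(K, S, Q):
        return Q / (K * S)

    mean, sd = two_point_estimate(toy_head,
                                  means=[10.0, 0.2, 5.0],
                                  sds=[2.0, 0.05, 1.0])
    print(mean, sd)
    ```

    With the three lumped variables of the study, this costs only 2^3 = 8 deterministic runs per time step, which is why lumping makes the cascaded approach computationally tractable.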

  10. An efficient deterministic-probabilistic approach to modeling regional groundwater flow: 2. Application to Owens Valley, California

    NASA Astrophysics Data System (ADS)

    Guymon, Gary L.; Yen, Chung-Cheng

    1990-07-01

    The applicability of a deterministic-probabilistic model for predicting water tables in southern Owens Valley, California, is evaluated. The model is based on a two-layer deterministic model that is cascaded with a two-point probability model. To reduce the potentially large number of uncertain variables in the deterministic model, lumping of uncertain variables was evaluated by sensitivity analysis to reduce the total number of uncertain variables to three: hydraulic conductivity, storage coefficient or specific yield, and source-sink function. Results demonstrate that lumping of uncertain parameters reduces computational effort while providing sufficient precision for the case studied. The simulated spatial coefficients of variation for water table temporal position in most of the basin are small, which suggests that deterministic models can predict water tables in these areas with good precision. However, in several important areas where pumping occurs or the geology is complex, the simulated spatial coefficients of variation are overestimated by the two-point probability method.

  11. [Multiple time scales analysis of spatial differentiation characteristics of non-point source nitrogen loss within watershed].

    PubMed

    Liu, Mei-bing; Chen, Xing-wei; Chen, Ying

    2015-07-01

    Identification of the critical source areas of non-point source pollution is an important means to control non-point source pollution within a watershed. In order to further reveal the impact of multiple time scales on the spatial differentiation characteristics of non-point source nitrogen loss, a SWAT model of the Shanmei Reservoir watershed was developed. Based on the simulated total nitrogen (TN) loss intensity of all 38 subbasins, the spatial distribution characteristics of nitrogen loss and the critical source areas were analyzed at three time scales: yearly average, monthly average, and rainstorm flood process. Furthermore, multiple linear correlation analysis was conducted to analyze the contributions of the natural environment and anthropogenic disturbance to nitrogen loss. The results showed that there were significant spatial differences in TN loss in the Shanmei Reservoir watershed at different time scales, and the degree of spatial differentiation of nitrogen loss was in the order of monthly average > yearly average > rainstorm flood process. TN loss load mainly came from the upland Taoxi subbasin, which was identified as the critical source area. At all time scales, land use types (such as farmland and forest) were the dominant factor affecting the spatial distribution of nitrogen loss, whereas precipitation and runoff affected nitrogen loss only in months without fertilization and in several storm-flood events occurring on dates without fertilization. This was mainly due to the significant spatial variation of land use and fertilization, as well as the low spatial variability of precipitation and runoff.

  12. AN ACCURACY ASSESSMENT OF MULTIPLE MID-ATLANTIC SUB-PIXEL IMPERVIOUS SURFACE MAPS

    EPA Science Inventory

    Anthropogenic impervious surfaces have an important relationship with non-point source pollution (NPS) in urban watersheds. The amount of impervious surface area in a watershed is a key indicator of landscape change. As a single variable, it serves to integrate a number of conc...

  13. SUBPIXEL-SCALE RAINFALL VARIABILITY AND THE EFFECTS ON SEPARATION OF RADAR AND GAUGE RAINFALL ERRORS

    EPA Science Inventory

    One of the primary sources of the discrepancies between radar-based rainfall estimates and rain gauge measurements is the point-area difference, i.e., the intrinsic difference in the spatial dimensions of the rainfall fields that the respective data sets are meant to represent. ...

  14. Audible acoustics in high-shear wet granulation: application of frequency filtering.

    PubMed

    Hansuld, Erin M; Briens, Lauren; McCann, Joe A B; Sayani, Amyn

    2009-08-13

    Previous work has shown analysis of audible acoustic emissions from high-shear wet granulation has potential as a technique for end-point detection. In this research, audible acoustic emissions (AEs) from three different formulations were studied to further develop this technique as a process analytical technology. Condenser microphones were attached to three different locations on a PMA-10 high-shear granulator (air exhaust, bowl and motor) to target different sound sources. Size, flowability and tablet break load data was collected to support formulator end-point ranges and interpretation of AE analysis. Each formulation had a unique total power spectral density (PSD) profile that was sensitive to granule formation and end-point. Analyzing total PSD in 10 Hz segments identified profiles with reduced run variability and distinct maxima and minima suitable for routine granulation monitoring and end-point control. A partial least squares discriminant analysis method was developed to automate selection of key 10 Hz frequency groups using variable importance to projection. The results support use of frequency refinement as a way forward in the development of acoustic emission analysis for granulation monitoring and end-point control.

  15. The X-Ray Binary Population of the Nearby Dwarf Starburst Galaxy IC 10: Variable and Transient X-Ray Sources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Laycock, Silas; Cappallo, Rigel; Williams, Benjamin F.

    We have monitored the Cassiopeia dwarf galaxy (IC 10) in a series of 10 Chandra ACIS-S observations to capture its variable and transient X-ray source population, which is expected to be dominated by High Mass X-ray Binaries (HMXBs). We present a sample of 21 X-ray sources that are variable between observations at the 3σ level, from a catalog of 110 unique point sources. We find four transients (flux variability ratio greater than 10) and a further eight objects with ratios >5. The observations span the years 2003–2010 and reach a limiting luminosity of >10^35 erg s^-1, providing sensitivity to X-ray binaries in IC 10 as well as flare stars in the foreground Milky Way. The nature of the variable sources is investigated from light curves, X-ray spectra, energy quantiles, and optical counterparts. The purpose of this study is to discover the composition of the X-ray binary population in a young starburst environment. IC 10 provides a sharp contrast in stellar population age (<10 Myr) when compared to the Magellanic Clouds (40–200 Myr) where most of the known HMXBs reside. We find 10 strong HMXB candidates, 2 probable background Active Galactic Nuclei, 4 foreground flare stars or active binaries, and 5 not yet classifiable sources. Complete classification of the sample requires optical spectroscopy for radial velocity analysis and deeper X-ray observations to obtain higher S/N spectra and search for pulsations. A catalog and supporting data set are provided.

  16. Distribution and Fate of Tributyltin in the United States Marine Environment

    DTIC Science & Technology

    Tributyltin (TBT) has been measured in water in 12 of 15 harbors studied during U.S. Navy baseline surveys. The highest concentrations of TBT (some...no detectable (5 ng dm-3) TBT. TBT monitoring studies with increased detection limits (1 ng dm-3) have documented a high degree of TBT variability...associated with tide, season, and intermittent point-source discharges. Although yacht harbors were shown to be the principal TBT source in most regions

  17. Predicting dense nonaqueous phase liquid dissolution using a simplified source depletion model parameterized with partitioning tracers

    NASA Astrophysics Data System (ADS)

    Basu, Nandita B.; Fure, Adrian D.; Jawitz, James W.

    2008-07-01

    Simulations of nonpartitioning and partitioning tracer tests were used to parameterize the equilibrium stream tube model (ESM) that predicts the dissolution dynamics of dense nonaqueous phase liquids (DNAPLs) as a function of the Lagrangian properties of DNAPL source zones. Lagrangian, or stream-tube-based, approaches characterize source zones with as few as two trajectory-integrated parameters, in contrast to the potentially thousands of parameters required to describe the point-by-point variability in permeability and DNAPL in traditional Eulerian modeling approaches. The spill and subsequent dissolution of DNAPLs were simulated in two-dimensional domains having different hydrologic characteristics (variance of the log conductivity field = 0.2, 1, and 3) using the multiphase flow and transport simulator UTCHEM. Nonpartitioning and partitioning tracers were used to characterize the Lagrangian properties (travel time and trajectory-integrated DNAPL content statistics) of DNAPL source zones, which were in turn shown to be sufficient for accurate prediction of source dissolution behavior using the ESM throughout the relatively broad range of hydraulic conductivity variances tested here. The results were found to be relatively insensitive to travel time variability, suggesting that dissolution could be accurately predicted even if the travel time variance was only coarsely estimated. Estimation of the ESM parameters was also demonstrated using an approximate technique based on Eulerian data in the absence of tracer data; however, determining the minimum amount of such data required remains for future work. Finally, the stream tube model was shown to be a more unique predictor of dissolution behavior than approaches based on the ganglia-to-pool model for source zone characterization.

  18. The gamma ray continuum spectrum from the galactic center disk and point sources

    NASA Technical Reports Server (NTRS)

    Gehrels, Neil; Tueller, Jack

    1992-01-01

    A light curve of gamma-ray continuum emission from point sources in the galactic center region is generated from balloon and satellite observations made over the past 25 years. The emphasis is on the wide field-of-view instruments which measure the combined flux from all sources within approximately 20 degrees of the center. These data have not been previously used for point-source analyses because of the unknown contribution from diffuse disk emission. In this study, the galactic disk component is estimated from observations made by the Gamma Ray Imaging Spectrometer (GRIS) instrument in Oct. 1988. Surprisingly, there are several times during the past 25 years when all gamma-ray sources (at 100 keV) within about 20 degrees of the galactic center are turned off or are in low emission states. This implies that the sources are all variable and few in number. The continuum gamma-ray emission below approximately 150 keV from the black hole candidate 1E1740.7-2942 is seen to turn off in May 1989 on a time scale of less than two weeks, significantly shorter than ever seen before. With the continuum below 150 keV turned off, the spectral shape derived from the HEXAGONE observation on 22 May 1989 is very peculiar, with a peak near 200 keV. This source was probably in its normal state for more than half of all observations since the mid-1960s. There are only two observations (in 1977 and 1979) for which the summed flux from the point sources in the region significantly exceeds that from 1E1740.7-2942 in its normal state.

  19. The geometrical precision of virtual bone models derived from clinical computed tomography data for forensic anthropology.

    PubMed

    Colman, Kerri L; Dobbe, Johannes G G; Stull, Kyra E; Ruijter, Jan M; Oostra, Roelof-Jan; van Rijn, Rick R; van der Merwe, Alie E; de Boer, Hans H; Streekstra, Geert J

    2017-07-01

    Almost all European countries lack contemporary skeletal collections for the development and validation of forensic anthropological methods. Furthermore, legal, ethical and practical considerations hinder the development of skeletal collections. A virtual skeletal database derived from clinical computed tomography (CT) scans provides a potential solution. However, clinical CT scans are typically generated with varying settings. This study investigates the effects of image segmentation and varying imaging conditions on the precision of virtual modelled pelves. An adult human cadaver was scanned using varying imaging conditions, such as scanner type and standard patient scanning protocol, slice thickness and exposure level. The pelvis was segmented from the various CT images resulting in virtually modelled pelves. The precision of the virtual modelling was determined per polygon mesh point. The fraction of mesh points resulting in point-to-point distance variations of 2 mm or less (95% confidence interval (CI)) was reported. Colour mapping was used to visualise modelling variability. At almost all (>97%) locations across the pelvis, the point-to-point distance variation is less than 2 mm (CI = 95%). In >91% of the locations, the point-to-point distance variation was less than 1 mm (CI = 95%). This indicates that the geometric variability of the virtual pelvis as a result of segmentation and imaging conditions rarely exceeds the generally accepted linear error of 2 mm. Colour mapping shows that areas with large variability are predominantly joint surfaces. Therefore, results indicate that segmented bone elements from patient-derived CT scans are a sufficiently precise source for creating a virtual skeletal database.

  20. Probing the X-Ray Binary Populations of the Ring Galaxy NGC 1291

    NASA Technical Reports Server (NTRS)

    Luo, B.; Fabbiano, G.; Fragos, T.; Kim, D. W.; Belczynski, K.; Brassington, N. J.; Pellegrini, S.; Tzanavaris, P.; Wang, J.; Zezas, A.

    2012-01-01

    We present Chandra studies of the X-ray binary (XRB) populations in the bulge and ring regions of the ring galaxy NGC 1291. We detect 169 X-ray point sources in the galaxy, 75 in the bulge and 71 in the ring, utilizing the four available Chandra observations totaling an effective exposure of 179 ks. We report photometric properties of these sources in a point-source catalog. Approximately 40% of the bulge sources and approximately 25% of the ring sources show >3σ long-term variability in their X-ray count rate. The X-ray colors suggest that a significant fraction of the bulge (approximately 75%) and ring (approximately 65%) sources are likely low-mass X-ray binaries (LMXBs). The spectra of the nuclear source indicate that it is a low-luminosity AGN with moderate obscuration; spectral variability is observed between individual observations. We construct 0.3-8.0 keV X-ray luminosity functions (XLFs) for the bulge and ring XRB populations, taking into account the detection incompleteness and background AGN contamination. We reach 90% completeness limits of approximately 1.5 x 10^37 and 2.2 x 10^37 erg/s for the bulge and ring populations, respectively. Both XLFs can be fit with a broken power-law model, and the shapes are consistent with those expected for populations dominated by LMXBs. We perform detailed population synthesis modeling of the XRB populations in NGC 1291, which suggests that the observed combined XLF is dominated by an old LMXB population. We compare the bulge and ring XRB populations, and argue that the ring XRBs are associated with a younger stellar population than the bulge sources, based on the relative over-density of X-ray sources in the ring, the generally harder X-ray color of the ring sources, the overabundance of luminous sources in the combined XLF, and the flatter shape of the ring XLF.

  1. On the Nature of Bright Infrared Sources in the Small Magellanic Cloud: Interpreting MSX through the Lens of Spitzer

    NASA Astrophysics Data System (ADS)

    Kraemer, Kathleen E.; Sloan, G. C.

    2015-01-01

    We compare infrared observations of the Small Magellanic Cloud (SMC) by the Midcourse Space Experiment (MSX) and the Spitzer Space Telescope to better understand what components of a metal-poor galaxy dominate radiative processes in the infrared. The SMC, at a distance of ~60 kpc and with a metallicity of ~0.1-0.2 solar, can serve as a nearby proxy for metal-poor galaxies at high redshift. The MSX Point Source Catalog contains 243 objects in the SMC that were detected at 8.3 microns, the most sensitive MSX band. Multi-epoch, multi-band mapping with Spitzer, supplemented with observations from the Two-Micron All-Sky Survey (2MASS) and the Wide-field Infrared Survey Explorer (WISE), provides variability information, and, together with spectra from Spitzer for ~15% of the sample, enables us to determine what these luminous sources are. How many remain simple point sources? What fraction break up into multiple stars? Which are star forming regions, with both bright diffuse emission and point sources? How do evolved stars and stellar remnants contribute at these wavelengths? What role do young stellar objects and HII regions play? Answering these questions sets the stage for understanding what we will see with the James Webb Space Telescope (JWST).

  2. Methane Seeps in the Gulf of Mexico: repeat acoustic surveying shows highly temporally and spatially variable venting

    NASA Astrophysics Data System (ADS)

    Beaumont, B. C.; Raineault, N.

    2016-02-01

    Scientists have recognized that natural seeps account for a large fraction of methane emissions. Despite their widespread occurrence in areas like the Gulf of Mexico, little is known about the temporal and site-scale spatial variability of venting over time. We used repeat acoustic surveys to compare multiple days of seep activity and determine the changes in the locus of methane emission and plume height. The Sleeping Dragon site was surveyed with an EM302 multibeam sonar on three consecutive days in 2014 and on four days within one week in 2015. The data revealed three distinct plume regions. The locus of venting varied by 10-60 meters at each site. The plume that exhibited the least spatial variability in venting was also the most temporally variable: it was present on one-third of the survey days in 2014 and three-quarters of the survey days in 2015, showing high day-to-day variability. The plume height was very consistent for this plume, whereas the other plumes were more consistent temporally but varied in maximum plume height detection by 25-85 m. The single locus of emission at the site with high day-to-day variability may be due to a single conduit for methane release, which is sometimes closed off by carbonate or clathrate hydrate formation. In addition to day-to-day temporal variability, the locus of emission at one site was observed to shift from a point source in 2014 to a diffuse source at a nearby location in 2015. ROV observations showed that one of the seep sites that had closed off temporarily experienced an explosive breakthrough of gas, releasing confined methane and blowing out rock. The mechanism that causes the on/off behavior of certain plumes, combined with the spatial variability of the locus of methane release shown in this study, may point to carbonate or hydrate formation in the seep plumbing system and should be further investigated.

  3. Analytic Expressions for the Gravity Gradient Tensor of 3D Prisms with Depth-Dependent Density

    NASA Astrophysics Data System (ADS)

    Jiang, Li; Liu, Jie; Zhang, Jianzhong; Feng, Zhibing

    2017-12-01

    Variable-density sources have received increasing attention in gravity modeling. In this paper, we compute the gravity gradient tensor of mass sources with variable density. 3D rectangular prisms, as simple building blocks, can be used to approximate 3D irregularly shaped sources well, and a polynomial function of depth can flexibly represent complicated density variations within each prism. We therefore derive closed-form analytic expressions for all components of the gravity gradient tensor due to a 3D right rectangular prism with an arbitrary-order polynomial density function of depth. The singularity of the expressions is analyzed: the singular points lie at the corners of the prism or on some of the lines through its edges in the lower half-space containing the prism. The expressions are validated, and their numerical stability is evaluated through numerical tests. Numerical examples with variable-density prism and basin models show that, within their range of numerical stability, the expressions are superior in computational accuracy and efficiency to the common solution that sums the effects of a collection of uniform subprisms, and they provide an effective method for computing the gravity gradient tensor of 3D irregularly shaped sources with complicated density variation. In addition, the tensor computed with variable density differs in magnitude from that computed with constant density, demonstrating the importance of modeling the gravity gradient tensor with variable density.

  4. NuSTAR Hard X-Ray Survey of the Galactic Center Region. II. X-Ray Point Sources

    NASA Technical Reports Server (NTRS)

    Hong, Jaesub; Mori, Kaya; Hailey, Charles J.; Nynka, Melania; Zhang, Shou; Gotthelf, Eric; Fornasini, Francesca M.; Krivonos, Roman; Bauer, Franz; Perez, Kerstin; et al.

    2016-01-01

    We present the first survey results of hard X-ray point sources in the Galactic Center (GC) region from NuSTAR. We have discovered 70 hard (3-79 keV) X-ray point sources in a 0.6 square-degree region around Sgr A* with a total exposure of 1.7 Ms, and 7 sources in the Sgr B2 field with 300 ks. We identify clear Chandra counterparts for 58 NuSTAR sources and assign candidate counterparts for the remaining 19. The NuSTAR survey reaches X-ray luminosities of approx. 4 x 10(exp 32) and approx. 8 x 10(exp 32) erg/s at the GC (8 kpc) in the 3-10 and 10-40 keV bands, respectively. The source list includes three persistent luminous X-ray binaries (XBs) and the likely runaway pulsar known as the Cannonball. New source-detection significance maps reveal a cluster of hard (>10 keV) X-ray sources near the Sgr A diffuse complex with no clear soft X-ray counterparts. The severe extinction observed in the Chandra spectra indicates that all the NuSTAR sources are in the central bulge or are of extragalactic origin. Spectral analysis of relatively bright NuSTAR sources suggests that magnetic cataclysmic variables constitute a large fraction (>40%-60%). Both the spectral analysis and the logN-logS distributions indicate that the X-ray spectra of the NuSTAR sources should have kT > 20 keV on average for a single-temperature thermal plasma model, or an average photon index of Gamma = 1.5-2 for a power-law model. These findings suggest that the GC X-ray source population may contain a larger fraction of XBs with high plasma temperatures than the field population.

  5. Oil spill contamination probability in the southeastern Levantine basin.

    PubMed

    Goldman, Ron; Biton, Eli; Brokovich, Eran; Kark, Salit; Levin, Noam

    2015-02-15

    Recent gas discoveries in the eastern Mediterranean Sea led to multiple operations with substantial economic interest, and with them there is a risk of oil spills and their potential environmental impacts. To examine the potential spatial distribution of this threat, we created seasonal maps of the probability of oil spill pollution reaching an area in the Israeli coastal and exclusive economic zones, given knowledge of its initial sources. We performed simulations of virtual oil spills using realistic atmospheric and oceanic conditions. The resulting maps show dominance of the alongshore northerly current, which causes the high probability areas to be stretched parallel to the coast, increasing contamination probability downstream of source points. The seasonal westerly wind forcing determines how wide the high probability areas are, and may also restrict these to a small coastal region near source points. Seasonal variability in probability distribution, oil state, and pollution time is also discussed. Copyright © 2014 Elsevier Ltd. All rights reserved.

  6. Copernicus observations of a number of galactic X-ray sources

    NASA Technical Reports Server (NTRS)

    Culhane, J. L.; Mason, K. O.; Sanford, P. W.; White, N. E.

    1976-01-01

    The Copernicus satellite was launched on 21 August 1972. The main experiment on board is the University of Princeton UV telescope. In addition a cosmic X-ray package of somewhat modest aperture was provided by the Mullard Space Science Laboratory (MSSL) of University College London. Following a brief description of the instrument, a list of galactic sources observed during the year is presented. Although the X-ray detection aperture is small, the ability to point the satellite for long periods of time with high accuracy makes Copernicus an ideal vehicle for the study of variable sources.

  7. College Students' Misconceptions of Environmental Issues Related to Global Warming.

    ERIC Educational Resources Information Center

    Groves, Fred H.; Pugh, Ava F.

    Students are currently exposed to world environmental problems--including global warming and the greenhouse effect--in science classes at various points during their K-12 and college experience. However, the amount and depth of exposure to these issues can be quite variable. Students are also exposed to sources of misinformation leading to…

  8. The Application of Function Points to Predict Source Lines of Code for Software Development

    DTIC Science & Technology

    1992-09-01

    there are some disadvantages. Software estimating tools are expensive. A single tool may cost more than $15,000 due to the high market value of the...term and Lang variables simultaneously only added marginal improvements over models with these terms included singly. Using all the available

  9. CHANDRA ACIS SURVEY OF X-RAY POINT SOURCES: THE SOURCE CATALOG

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Song; Liu, Jifeng; Qiu, Yanli

    The Chandra archival data are a valuable resource for studies on a variety of X-ray astronomy topics. In this paper, we utilize this wealth of information and present a uniformly processed data set, which can be used to address a wide range of scientific questions. The data analysis procedures are applied to 10,029 Advanced CCD Imaging Spectrometer observations, which produces 363,530 source detections belonging to 217,828 distinct X-ray sources. This number is twice the size of the Chandra Source Catalog (Version 1.1). The catalogs in this paper provide abundant estimates of the detected X-ray source properties, including source positions, counts, colors, fluxes, luminosities, variability statistics, etc. Cross-correlation of these objects with galaxies shows that 17,828 sources are located within the D25 isophotes of 1110 galaxies, and 7504 sources are located between the D25 and 2 D25 isophotes of 910 galaxies. Contamination analysis with the log N-log S relation indicates that 51.3% of objects within the 2 D25 isophotes are truly related to galaxies, and the “net” source fraction increases to 58.9%, 67.3%, and 69.1% for sources with luminosities above 10^37, 10^38, and 10^39 erg/s, respectively. Among the possible scientific uses of this catalog, we discuss the possibility of studying intra-observation variability, inter-observation variability, and supersoft sources (SSSs). About 17,092 detected sources above 10 counts are classified as variable within individual observations under the Kolmogorov-Smirnov (K-S) criterion (P_K-S < 0.01). There are 99,647 sources observed more than once and 11,843 sources observed 10 times or more, offering a wealth of data with which to explore long-term variability. There are 1638 individual objects (∼2350 detections) classified as SSSs. As a particularly interesting subclass, these SSSs require detailed studies of their X-ray spectra and optical spectroscopic follow-up to categorize them and pinpoint their properties. In addition, this survey can enable a wide range of statistical studies, such as X-ray activity in different types of stars, X-ray luminosity functions in different types of galaxies, and multi-wavelength identification and classification of different X-ray populations.
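
    The intra-observation variability criterion above, a Kolmogorov-Smirnov test with P_K-S < 0.01, amounts to comparing photon arrival times against a uniform distribution over the exposure; a stdlib-only sketch using the standard asymptotic p-value approximation might look like:

```python
import math

def ks_constant_rate(arrival_times, exposure):
    """One-sample K-S test of photon arrival times against a constant
    count rate (i.e., uniform arrivals over [0, exposure]).
    Returns (D, p) with the usual asymptotic p-value approximation."""
    t = sorted(arrival_times)
    n = len(t)
    d = 0.0
    for i, ti in enumerate(t):
        f = ti / exposure  # model CDF for a constant rate
        d = max(d, abs((i + 1) / n - f), abs(f - i / n))
    lam = (math.sqrt(n) + 0.12 + 0.11 / math.sqrt(n)) * d
    p = 2.0 * sum((-1) ** (k - 1) * math.exp(-2.0 * k * k * lam * lam)
                  for k in range(1, 101))
    return d, min(max(p, 0.0), 1.0)

# Evenly spread events are consistent with a constant rate...
_, p_steady = ks_constant_rate([(i + 0.5) / 200 * 1000 for i in range(200)], 1000.0)
# ...while events bunched into the first 10% of the exposure are not.
_, p_flare = ks_constant_rate([(i + 0.5) / 200 * 100 for i in range(200)], 1000.0)
```

    A source would be flagged as variable under the catalog's criterion when p < 0.01.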

  10. CCD photometry of 1218+304, 1219+28, and 1727+50: Point sources, associated nebulosity and broadband spectra

    NASA Technical Reports Server (NTRS)

    Weistrop, D.; Shaffer, D. B.; Mushotzky, R. F.; Reitsma, H. J.; Smith, B. A.

    1981-01-01

    Visual and far-red surface photometry were obtained for two X-ray emitting BL Lacertae objects, 1218+304 (2A1219+305) and 1727+50 (I Zw 187), as well as the highly variable object 1219+28 (ON 231, W Com). The intensity distribution for 1727+50 can be modeled using a central point source plus a de Vaucouleurs intensity law for an underlying galaxy. The broadband spectral energy distribution so derived is consistent with that expected for an elliptical galaxy. The spectral index of the point source is alpha = 0.97. Additional VLBI and X-ray data are also reported for 1727+50. There is nebulosity associated with the recently discovered object 1218+304, but none is found associated with 1219+28. A comparison of the results with observations at X-ray and radio frequencies suggests that all the emission from 1727+50 and 1218+304 can be interpreted as due solely to direct synchrotron emission. If this is the case, the data further imply the existence of relativistic motion effects and continuous particle injection.

  11. Chandra Deep X-ray Observation of a Typical Galactic Plane Region and Near-Infrared Identification

    NASA Technical Reports Server (NTRS)

    Ebisawa, K.; Tsujimoto, M.; Paizis, A.; Hamaguichi, K.; Bamba, A.; Cutri, R.; Kaneda, H.; Maeda, Y.; Sato, G.; Senda, A.

    2004-01-01

    Using the Chandra Advanced CCD Imaging Spectrometer imaging array (ACIS-I), we have carried out a deep hard X-ray observation of the Galactic plane region at (l,b) approx. (28.5 deg, 0.0 deg), where no discrete X-ray source had been reported previously. We have detected 274 new X-ray point sources (4 sigma confidence) as well as strong Galactic diffuse emission within two partially overlapping ACIS-I fields (approx. 250 sq arcmin in total). The point-source sensitivity was approx. 3 x 10(exp -15) ergs/s/sq cm in the hard X-ray band (2-10 keV) and approx. 2 x 10(exp -16) ergs/s/sq cm in the soft band (0.5-2 keV). The sum of all the detected point-source fluxes accounts for only approx. 10% of the total X-ray flux in the field of view. In order to explain the total X-ray flux by a superposition of fainter point sources, an extremely rapid increase of the source population would be required below our sensitivity limit, which is hard to reconcile with any source distribution in the Galactic plane. We therefore conclude that X-ray emission from the Galactic plane has a truly diffuse origin. Only 26 point sources were detected in both the soft and hard bands, indicating that there are two distinct classes of X-ray sources distinguished by their spectral hardness ratios. The surface number density of the hard sources is only slightly higher than that observed at high Galactic latitudes, strongly suggesting that the majority of the hard X-ray sources are active galaxies seen through the Galactic plane. Following the Chandra observation, we performed a near-infrared (NIR) survey with SOFI at ESO/NTT to identify these new X-ray sources. Since the Galactic plane is opaque in the NIR, we do not see background extragalactic sources in the NIR; in fact, only 22% of the hard sources had NIR counterparts, which are most likely of Galactic origin. The composite X-ray energy spectrum of those hard X-ray sources with NIR counterparts exhibits a narrow approx. 6.7 keV iron emission line, which is a signature of Galactic quiescent cataclysmic variables (CVs).

  12. A Study of Quasar Selection in the Supernova Fields of the Dark Energy Survey

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tie, S. S.; Martini, P.; Mudd, D.

    In this paper, we present a study of quasar selection using the supernova fields of the Dark Energy Survey (DES). We used a quasar catalog from an overlapping portion of the SDSS Stripe 82 region to quantify the completeness and efficiency of selection methods involving color, probabilistic modeling, variability, and combinations of color/probabilistic modeling with variability. In all cases, we considered only objects that appear as point sources in the DES images. We examine color selection methods based on the Wide-field Infrared Survey Explorer (WISE) mid-IR W1-W2 color, a mixture of WISE and DES colors (g - i and i - W1), and a mixture of VISTA Hemisphere Survey and DES colors (g - i and i - K). For probabilistic quasar selection, we used XDQSO, an algorithm that employs an empirical multi-wavelength flux model of quasars to assign quasar probabilities. Our variability selection uses the multi-band χ²-probability that sources are constant in the DES Year 1 griz-band light curves. The completeness and efficiency are calculated relative to an underlying sample of point sources that are detected in the required selection bands and pass our data quality and photometric error cuts. We conduct our analyses at two magnitude limits, i < 19.8 mag and i < 22 mag. For the subset of sources with W1 and W2 detections, the W1-W2 color or XDQSOz method combined with variability gives the highest completenesses of >85% for both i-band magnitude limits, and efficiencies of >80% to the bright limit and >60% to the faint limit; however, the giW1 and giW1+variability methods give the highest quasar surface densities. The XDQSOz method and combinations of W1W2/giW1/XDQSOz with variability are among the better selection methods when both high completeness and high efficiency are desired. We also present the OzDES Quasar Catalog of 1263 spectroscopically confirmed quasars from three years of OzDES observation in the 30 deg² of the DES supernova fields. Finally, the catalog includes quasars with redshifts up to z ~ 4 and brighter than i = 22 mag, although the catalog is not complete to this magnitude limit.
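
    Completeness and efficiency as defined here are simply the recall and precision of a cut measured against a labelled reference sample; a toy sketch using a hypothetical W1-W2 > 0.8 color cut (an illustrative threshold, with made-up magnitudes) is:

```python
def completeness_efficiency(objects, selector):
    """Completeness: fraction of true quasars the cut selects.
    Efficiency: fraction of selected objects that are true quasars."""
    selected = [o for o in objects if selector(o)]
    true_pos = sum(1 for o in selected if o["is_quasar"])
    n_quasars = sum(1 for o in objects if o["is_quasar"])
    completeness = true_pos / n_quasars if n_quasars else 0.0
    efficiency = true_pos / len(selected) if selected else 0.0
    return completeness, efficiency

# Toy point-source sample with labels from a reference quasar catalog;
# the magnitudes are invented for illustration only.
sample = [
    {"is_quasar": True,  "W1": 15.0, "W2": 14.0},   # red W1-W2: selected
    {"is_quasar": True,  "W1": 15.5, "W2": 14.6},   # red W1-W2: selected
    {"is_quasar": True,  "W1": 16.0, "W2": 15.7},   # blue W1-W2: missed
    {"is_quasar": False, "W1": 14.0, "W2": 13.8},   # star: rejected
    {"is_quasar": False, "W1": 16.2, "W2": 15.3},   # contaminant: selected
]
c, e = completeness_efficiency(sample, lambda o: o["W1"] - o["W2"] > 0.8)
```

    Tightening the cut trades completeness for efficiency, which is why the paper reports both at each magnitude limit.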

  13. A Study of Quasar Selection in the Supernova Fields of the Dark Energy Survey

    DOE PAGES

    Tie, S. S.; Martini, P.; Mudd, D.; ...

    2017-02-15

    In this paper, we present a study of quasar selection using the supernova fields of the Dark Energy Survey (DES). We used a quasar catalog from an overlapping portion of the SDSS Stripe 82 region to quantify the completeness and efficiency of selection methods involving color, probabilistic modeling, variability, and combinations of color/probabilistic modeling with variability. In all cases, we considered only objects that appear as point sources in the DES images. We examine color selection methods based on the Wide-field Infrared Survey Explorer (WISE) mid-IR W1-W2 color, a mixture of WISE and DES colors (g - i and i - W1), and a mixture of VISTA Hemisphere Survey and DES colors (g - i and i - K). For probabilistic quasar selection, we used XDQSO, an algorithm that employs an empirical multi-wavelength flux model of quasars to assign quasar probabilities. Our variability selection uses the multi-band χ²-probability that sources are constant in the DES Year 1 griz-band light curves. The completeness and efficiency are calculated relative to an underlying sample of point sources that are detected in the required selection bands and pass our data quality and photometric error cuts. We conduct our analyses at two magnitude limits, i < 19.8 mag and i < 22 mag. For the subset of sources with W1 and W2 detections, the W1-W2 color or XDQSOz method combined with variability gives the highest completenesses of >85% for both i-band magnitude limits, and efficiencies of >80% to the bright limit and >60% to the faint limit; however, the giW1 and giW1+variability methods give the highest quasar surface densities. The XDQSOz method and combinations of W1W2/giW1/XDQSOz with variability are among the better selection methods when both high completeness and high efficiency are desired. We also present the OzDES Quasar Catalog of 1263 spectroscopically confirmed quasars from three years of OzDES observation in the 30 deg² of the DES supernova fields. Finally, the catalog includes quasars with redshifts up to z ~ 4 and brighter than i = 22 mag, although the catalog is not complete to this magnitude limit.

  14. GIS Based Distributed Runoff Predictions in Variable Source Area Watersheds Employing the SCS-Curve Number

    NASA Astrophysics Data System (ADS)

    Steenhuis, T. S.; Mendoza, G.; Lyon, S. W.; Gerard Marchant, P.; Walter, M. T.; Schneiderman, E.

    2003-04-01

    Because the traditional Soil Conservation Service Curve Number (SCS-CN) approach continues to be used ubiquitously in GIS-based water quality models, new application methods are needed that are consistent with variable source area (VSA) hydrological processes in the landscape. Within an integrated GIS modeling environment, we developed a distributed approach for applying the traditional SCS-CN equation to watersheds where VSA hydrology is a dominant process. Spatial representation of hydrologic processes is important for watershed planning because restricting potentially polluting activities from runoff source areas is fundamental to controlling non-point source pollution. The methodology presented here uses the traditional SCS-CN method to predict runoff volume and the spatial extent of saturated areas, and uses a topographic index to distribute runoff source areas through watersheds. The resulting distributed CN-VSA method was incorporated into the existing GWLF water quality model and applied to sub-watersheds of the Delaware basin in the Catskill Mountains region of New York State. We found that the distributed CN-VSA approach provides a physically based method that gives realistic results for watersheds with VSA hydrology.
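
    For reference, the traditional SCS-CN equation being distributed here computes event runoff Q (inches) from precipitation P and the curve number CN via the potential maximum retention S = 1000/CN - 10 and the standard initial abstraction Ia = 0.2S:

```python
def scs_runoff(p_inches, cn):
    """Traditional SCS Curve Number event runoff (inches).
    S = 1000/CN - 10 is the potential maximum retention and
    Ia = 0.2*S the initial abstraction; Q = 0 until P exceeds Ia."""
    s = 1000.0 / cn - 10.0
    ia = 0.2 * s
    if p_inches <= ia:
        return 0.0
    return (p_inches - ia) ** 2 / (p_inches - ia + s)

# A 5-inch storm on CN = 80 ground yields roughly 2.9 inches of runoff.
q = scs_runoff(5.0, 80)
```

    The distributed CN-VSA method keeps this equation for the runoff volume but uses a topographic index to decide which parts of the watershed that runoff comes from.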

  15. Run-up Variability due to Source Effects

    NASA Astrophysics Data System (ADS)

    Del Giudice, Tania; Zolezzi, Francesca; Traverso, Chiara; Valfrè, Giulio; Poggi, Pamela; Parker, Eric J.

    2010-05-01

    This paper investigates the variability of tsunami run-up at a specific location due to uncertainty in earthquake source parameters. It is important to quantify this 'inter-event' variability for probabilistic assessments of tsunami hazard. In principle, this aspect of variability could be studied by comparing field observations at a single location from a number of tsunamigenic events caused by the same source. As such an extensive dataset does not exist, we decided to study the inter-event variability through numerical modelling. We attempt to answer the question 'What is the potential variability of tsunami wave run-up at a specific site, for a given magnitude earthquake occurring at a known location?'. The uncertainty is expected to arise from the lack of knowledge regarding the specific details of the fault rupture 'source' parameters. The following steps were followed: the statistical distributions of the main earthquake source parameters affecting the tsunami height were established by studying fault plane solutions of known earthquakes; a case study based on a possible tsunami impact on the Egyptian coast was set up and simulated, varying the geometrical parameters of the source; the simulation results were analyzed to derive relationships between run-up height and source parameters; using the derived relationships, a Monte Carlo simulation was performed to create the dataset needed to investigate the inter-event variability of the run-up height along the coast; finally, this inter-event variability was investigated. Given the distribution of source parameters and their variability, we studied how this variability propagates to the run-up height, using the Cornell 'Multi-grid coupled Tsunami Model' (COMCOT). The case study was based on the large thrust fault offshore the south-western Greek coast, thought to have been responsible for the infamous 1303 tsunami.
    Numerical modelling of the event was used to assess the impact on the North African coast. The effects of uncertainty in fault parameters were assessed by perturbing the base model and observing the variation in wave height along the coast. The tsunami wave run-up was computed at 4020 locations along the Egyptian coast between longitudes 28.7 E and 33.8 E. To assess the effects of fault parameter uncertainty, input model parameters were varied and the effects on run-up analyzed. The simulations show that, for a given point, there are linear relationships between run-up and both fault dislocation and rupture length. A superposition analysis shows that a linear combination of the effects of the different source parameters leads to a good approximation of the simulated results. This relationship is then used as the basis for a Monte Carlo simulation, performed for 1600 scenarios at each of the 4020 points along the coast. The coefficient of variation (the ratio between the standard deviation and the average of the run-up heights along the coast) ranges from 0.14 to 3.11, with an average value along the coast of 0.67. The coefficient of variation of normalized run-up has been compared with the standard deviation of the spectral acceleration attenuation laws used in probabilistic seismic hazard assessment studies; these values have a similar meaning, and the uncertainty in the two cases is similar. The 'rule of thumb' relationship between mean and sigma can be expressed as μ + σ ≈ 2μ. The implication is that the uncertainty in run-up estimation should give a range of values within approximately two times the average. This uncertainty should be considered in tsunami hazard analysis, such as inundation and risk maps, evacuation plans, and other related steps.
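
    The Monte Carlo step, propagating source-parameter distributions through the linear run-up relationships and summarizing the spread as a coefficient of variation, can be sketched as follows; the coefficients and parameter distributions are hypothetical, not those derived from the COMCOT runs.

```python
import random
import statistics

def runup_from_source(slip_m, length_km):
    """Hypothetical linear run-up relationship of the kind found by the
    superposition analysis: run-up responds linearly to fault dislocation
    (slip) and rupture length. The coefficients are illustrative only."""
    return 2.0 * slip_m + 0.05 * length_km

rng = random.Random(42)  # fixed seed for reproducibility
runups = [
    runup_from_source(rng.gauss(5.0, 1.5),     # slip: mean 5 m, sd 1.5 m
                      rng.gauss(100.0, 20.0))  # length: mean 100 km, sd 20 km
    for _ in range(1600)  # one draw per scenario, as in the study
]
mean_runup = statistics.fmean(runups)
cov = statistics.stdev(runups) / mean_runup  # coefficient of variation
```

    Repeating this at every coastal point gives the spatial distribution of the coefficient of variation reported in the abstract.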

  16. Combining stable isotopes with contamination indicators: A method for improved investigation of nitrate sources and dynamics in aquifers with mixed nitrogen inputs.

    PubMed

    Minet, E P; Goodhue, R; Meier-Augenstein, W; Kalin, R M; Fenton, O; Richards, K G; Coxon, C E

    2017-11-01

    Excessive nitrate (NO3-) concentration in groundwater raises health and environmental issues that must be addressed by all European Union (EU) member states under the Nitrates Directive and the Water Framework Directive. The identification of NO3- sources is critical to efficiently control or reverse the NO3- contamination that affects many aquifers. In that respect, the use of the stable isotope ratios 15N/14N and 18O/16O in NO3- (expressed as δ15N-NO3- and δ18O-NO3-, respectively) has long shown its value. However, limitations exist in complex environments where multiple nitrogen (N) sources coexist. This two-year study explores a method for improved NO3- source investigation in a shallow unconfined aquifer with mixed N inputs and a long-established NO3- problem. In this tillage-dominated area of free-draining soil and subsoil, the suspected NO3- sources were diffuse applications of artificial fertiliser and organic point sources (septic tanks and farmyards). Bearing in mind that artificial diffuse sources were ubiquitous, groundwater samples were first classified according to a combination of two indicators of point-source contamination: the presence/absence of organic point sources (i.e. septic tank and/or farmyard) near sampling wells and the exceedance/non-exceedance of a contamination threshold value for sodium (Na+) in groundwater. This classification identified three contamination groups: agricultural diffuse source but no point source (D+P-), agricultural diffuse and point source (D+P+), and agricultural diffuse but point source occurrence ambiguous (D+P±). Thereafter, δ15N-NO3- and δ18O-NO3- data were superimposed on the classification. As δ15N-NO3- was plotted against δ18O-NO3-, comparisons were made between the different contamination groups. Overall, both δ variables were significantly and positively correlated (p < 0.0001, r_s = 0.599, slope of 0.5), which is indicative of denitrification. An inspection of the contamination groups revealed that denitrification did not occur in the absence of point-source contamination (group D+P-). In fact, strong significant denitrification lines occurred only in the D+P+ and D+P± groups (p < 0.0001, r_s > 0.6, 0.53 ≤ slope ≤ 0.76), i.e. where point-source contamination was characterised or suspected. These lines originated from the 2-6‰ range for δ15N-NO3-, which suggests that i) NO3- contamination was dominated by an agricultural diffuse N source (most likely the large organic matter pool that has incorporated 15N-depleted nitrogen from artificial fertiliser in agricultural soils and whose nitrification is stimulated by ploughing and fertilisation) rather than by point sources, and ii) denitrification was possibly favoured by high dissolved organic carbon (DOC) from point sources. Combining contamination indicators with a large stable isotope dataset collected over a large study area could therefore improve our understanding of NO3- contamination processes in groundwater for better land use management. We hypothesise that in future research, additional contamination indicators (e.g. pharmaceutical molecules) could also be combined to disentangle NO3- contamination from animal and human wastes. Copyright © 2017 Elsevier Ltd. All rights reserved.
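
    The correlation statistic reported here (r_s) is Spearman's rank coefficient; for untied data it reduces to the classic 1 - 6*sum(d^2)/(n*(n^2 - 1)) formula, sketched below on made-up isotope values:

```python
def spearman_rs(x, y):
    """Spearman rank correlation for untied data:
    r_s = 1 - 6*sum(d_i^2) / (n*(n^2 - 1)), where d_i is the difference
    between the ranks of x_i and y_i.
    (Tied values would need average ranks; omitted here for brevity.)"""
    def ranks(values):
        order = sorted(range(len(values)), key=values.__getitem__)
        r = [0] * len(values)
        for rank, idx in enumerate(order, start=1):
            r[idx] = rank
        return r
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1.0 - 6.0 * d2 / (n * (n * n - 1))

# Made-up delta values: a monotonically increasing relationship, as a
# ~2:1 denitrification line would produce, gives r_s = 1.
d15n = [2.0, 3.1, 4.5, 5.2, 6.8]
d18o = [1.0, 1.6, 2.2, 2.6, 3.5]
rs = spearman_rs(d15n, d18o)
```

    Being rank-based, r_s detects the monotonic enrichment trend of denitrification without assuming the relationship is strictly linear.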

  17. Evidence for Legacy Contamination of Nitrate in Groundwater of North Carolina Using Monitoring and Private Well Data Models

    NASA Astrophysics Data System (ADS)

    Messier, K. P.; Kane, E.; Bolich, R.; Serre, M. L.

    2014-12-01

    Nitrate (NO3-) is a widespread contaminant of groundwater and surface water across the United States that has deleterious effects on human and ecological health. Legacy contamination, or past releases of NO3-, is thought to be impacting the current groundwater and surface water of North Carolina. This study develops a model for predicting point-level groundwater NO3- at a state scale for monitoring wells and private wells of North Carolina. A land use regression (LUR) model selection procedure known as constrained forward nonlinear regression and hyperparameter optimization (CFN-RHO) is developed for determining nonlinear model explanatory variables when they are known to be correlated. Bayesian Maximum Entropy (BME) is then used to integrate the LUR model into a LUR-BME model of spatially/temporally varying groundwater NO3- concentrations. LUR-BME achieves a leave-one-out cross-validation r2 of 0.74 and 0.33 for monitoring and private wells, respectively, effectively predicting within spatial covariance ranges. The major finding regarding legacy sources of NO3- in this study is that the LUR-BME models show the geographical extent of low-level contamination of deeper drinking-water aquifers is beyond that of the shallower monitoring wells. Groundwater NO3- in monitoring wells is highly variable, with many areas predicted above the current Environmental Protection Agency standard of 10 mg/L. In contrast, the private well results depict widespread, low-level NO3- concentrations. This evidence supports that, in addition to downward transport, there is also significant outward transport of groundwater NO3- in the drinking-water aquifer to areas outside the range of sources. Results indicate that the deeper aquifers are potentially acting as a reservoir that is not only deeper but also covers a larger geographical area than the reservoir formed by the shallow aquifers.
Results are of interest to agencies that regulate surface water and drinking water sources impacted by the effects of legacy NO3- sources. Additionally, the results can provide guidance on factors affecting the point-level variability of groundwater NO3- and areas where monitoring is needed to reduce uncertainty. Lastly, LUR-BME predictions can be integrated into surface water models for more accurate management of non-point sources of nitrogen.
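    The headline statistic of this record, a leave-one-out cross-validation r2, can be sketched generically. The refit below uses plain ordinary least squares on synthetic data and is illustrative only; the actual LUR-BME model is far richer than this.

```python
import numpy as np

def loocv_r2(x, y):
    """Leave-one-out cross-validated r^2 for a simple OLS fit y ~ 1 + x.
    Each point is predicted from a model fitted to all other points."""
    n = len(y)
    preds = np.empty(n)
    for i in range(n):
        mask = np.arange(n) != i
        A = np.column_stack([np.ones(mask.sum()), x[mask]])  # design matrix without point i
        coef, *_ = np.linalg.lstsq(A, y[mask], rcond=None)
        preds[i] = np.array([1.0, x[i]]) @ coef              # predict the held-out point
    ss_res = np.sum((y - preds) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1.0 - ss_res / ss_tot
```

    In practice the covariate x would be a land-use predictor and y the observed NO3- concentration; values near 1 indicate strong out-of-sample predictive skill, as reported for the monitoring-well network.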

  18. State-Level Point-of-Sale Tobacco News Coverage and Policy Progression Over a 2-Year Period.

    PubMed

    Myers, Allison E; Southwell, Brian G; Ribisl, Kurt M; Moreland-Russell, Sarah; Bowling, J Michael; Lytle, Leslie A

    2018-01-01

    Mass media content may play an important role in policy change. However, the empirical relationship between media advocacy efforts and tobacco control policy success has rarely been studied. We examined the extent to which newspaper content characteristics (volume, slant, frame, source, use of evidence, and degree of localization) that have been identified as important in past descriptive studies were associated with policy progression over a 2-year period in the context of point-of-sale (POS) tobacco control. We used regression analyses to test the relationships between newspaper content and policy progression from 2012 to 2014. The dependent variable was the level of implementation of state-level POS tobacco control policies at Time 2. Independent variables were newspaper article characteristics (volume, slant, frame, source, use of evidence, and degree of localization) and were collected via content analysis of the articles. State-level policy environment contextual variables were examined as confounders. Positive, significant bivariate relationships exist between characteristics of news content (e.g., high overall volume, public health source present, local quote and local angle present, and pro-tobacco control slant present) and Time 2 POS score. However, in a multivariate model controlling for other factors, significant relationships did not hold. Newspaper coverage can be a marker of POS policy progression. Whether media can influence policy implementation remains an important question. Future work should continue to tease out and confirm the unique characteristics of media content that are most associated with subsequent policy progression, in order to inform media advocacy efforts.

  19. Point count length and detection of forest neotropical migrant birds

    USGS Publications Warehouse

    Dawson, D.K.; Smith, D.R.; Robbins, C.S.; Ralph, C. John; Sauer, John R.; Droege, Sam

    1995-01-01

    Comparisons of bird abundances among years or among habitats assume that the rates at which birds are detected and counted are constant within species. We use point count data collected in forests of the Mid-Atlantic states to estimate detection probabilities for Neotropical migrant bird species as a function of count length. For some species, significant differences existed among years or observers in both the probability of detecting the species and in the rate at which individuals are counted. We demonstrate the consequence that variability in species' detection probabilities can have on estimates of population change, and discuss ways for reducing this source of bias in point count studies.

  20. Outdoor air pollution in close proximity to a continuous point source

    NASA Astrophysics Data System (ADS)

    Klepeis, Neil E.; Gabel, Etienne B.; Ott, Wayne R.; Switzer, Paul

Data are lacking on human exposure to air pollutants occurring in ground-level outdoor environments within a few meters of point sources. To better understand outdoor exposure to tobacco smoke from cigarettes or cigars, and exposure to other types of outdoor point sources, we performed more than 100 controlled outdoor monitoring experiments on a backyard residential patio in which we released pure carbon monoxide (CO) as a tracer gas for continuous time periods lasting 0.5-2 h. The CO was emitted from a single outlet at a fixed per-experiment rate of 120-400 cc min^-1 (~140-450 mg min^-1). We measured CO concentrations every 15 s at up to 36 points around the source along orthogonal axes. The CO sensors were positioned at standing or sitting breathing heights of 2-5 ft (up to 1.5 ft above and below the source) and at horizontal distances of 0.25-2 m. We simultaneously measured real-time air speed, wind direction, relative humidity, and temperature at single points on the patio. The ground-level air speeds on the patio were similar to those we measured during a survey of 26 outdoor patio locations in 5 nearby towns. The CO data exhibited a well-defined proximity effect similar to the indoor proximity effect reported in the literature. Average concentrations were approximately inversely proportional to distance. Average CO levels were approximately proportional to source strength, supporting generalization of our results to different source strengths. For example, we predict a cigarette smoker would cause average fine particle levels of approximately 70-110 μg m^-3 at horizontal distances of 0.25-0.5 m. We also found that average CO concentrations rose significantly as average air speed decreased. We fit a multiplicative regression model to the empirical data that predicts outdoor concentrations as a function of source emission rate, source-receptor distance, air speed and wind direction. 
The model described the data reasonably well, accounting for ˜50% of the log-CO variability in 5-min CO concentrations.
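    A multiplicative model of the kind described here becomes linear after taking logarithms, so it can be fitted by ordinary least squares. The sketch below demonstrates that approach on synthetic data with hypothetical exponents; it is not the authors' fitted model.

```python
import numpy as np

def fit_multiplicative(Q, d, u, C):
    """Fit the multiplicative form C = exp(b0) * Q**b1 * d**b2 * u**b3
    by least squares on log C = b0 + b1*log(Q) + b2*log(d) + b3*log(u)."""
    A = np.column_stack([np.ones_like(Q), np.log(Q), np.log(d), np.log(u)])
    coef, *_ = np.linalg.lstsq(A, np.log(C), rcond=None)
    return coef  # [b0, b1, b2, b3]
```

    An exponent b2 near -1 for distance d would correspond to the inverse-distance proximity effect reported in the abstract.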

  1. CONFIRMING THE RESULTS: AN ACCURACY ASSESSMENT OF REMOTE PRODUCTS, AN EXAMPLE COMPARING MULTIPLE MID-ATLANTIC SUB-PIXEL IMPERVIOUS SURFACE MAPS

    EPA Science Inventory

Anthropogenic impervious surfaces have an important relationship with non-point source pollution (NPS) in urban watersheds. The amount of impervious surface area in a watershed is a key indicator of landscape change. As a single variable, it serves to integrate a number of concur...

  2. General constraints on sampling wildlife on FIA plots

    USGS Publications Warehouse

    Bailey, L.L.; Sauer, J.R.; Nichols, J.D.; Geissler, P.H.; McRoberts, Ronald E.; Reams, Gregory A.; Van Deusen, Paul C.; McWilliams, William H.; Cieszewski, Chris J.

    2005-01-01

This paper reviews the constraints on sampling wildlife populations at FIA points. Wildlife sampling programs must have well-defined goals and provide information adequate to meet those goals. Investigators should choose a state variable based on information needs and the spatial sampling scale. We discuss estimation-based methods for three state variables: species richness, abundance, and patch occupancy. All methods incorporate two essential sources of variation: detection probability and spatial variation. FIA sampling imposes specific space and time criteria that may need to be adjusted to meet local wildlife objectives.

  3. A tale of four surveys: What have we learned about the variable sky?

    NASA Astrophysics Data System (ADS)

    Howell, S. B.

    2008-03-01

Four tales concerning a set of photometric imaging surveys are spun. The reader is led through a brief description of each survey, and major results are presented. The four surveys are summarized in a few simple "rules": 1) the fraction of point sources that are variable, relative to those found to be constant, increases as a power law as the photometric precision of the survey improves, and 2) this relationship can be formulated as a power-law function, granting the user predictive power.

  4. Calculated and measured brachytherapy dosimetry parameters in water for the Xoft Axxent X-Ray Source: an electronic brachytherapy source.

    PubMed

    Rivard, Mark J; Davis, Stephen D; DeWerd, Larry A; Rusch, Thomas W; Axelrod, Steve

    2006-11-01

    A new x-ray source, the model S700 Axxent X-Ray Source (Source), has been developed by Xoft Inc. for electronic brachytherapy. Unlike brachytherapy sources containing radionuclides, this Source may be turned on and off at will and may be operated at variable currents and voltages to change the dose rate and penetration properties. The in-water dosimetry parameters for this electronic brachytherapy source have been determined from measurements and calculations at 40, 45, and 50 kV settings. Monte Carlo simulations of radiation transport utilized the MCNP5 code and the EPDL97-based mcplib04 cross-section library. Inter-tube consistency was assessed for 20 different Sources, measured with a PTW 34013 ionization chamber. As the Source is intended to be used for a maximum of ten treatment fractions, tube stability was also assessed. Photon spectra were measured using a high-purity germanium (HPGe) detector, and calculated using MCNP. Parameters used in the two-dimensional (2D) brachytherapy dosimetry formalism were determined. While the Source was characterized as a point due to the small anode size, < 1 mm, use of the one-dimensional (1D) brachytherapy dosimetry formalism is not recommended due to polar anisotropy. Consequently, 1D brachytherapy dosimetry parameters were not sought. Calculated point-source model radial dose functions at gP(5) were 0.20, 0.24, and 0.29 for the 40, 45, and 50 kV voltage settings, respectively. For 1

  5. The influence of biological and technical factors on quantitative analysis of amyloid PET: Points to consider and recommendations for controlling variability in longitudinal data.

    PubMed

    Schmidt, Mark E; Chiao, Ping; Klein, Gregory; Matthews, Dawn; Thurfjell, Lennart; Cole, Patricia E; Margolin, Richard; Landau, Susan; Foster, Norman L; Mason, N Scott; De Santi, Susan; Suhy, Joyce; Koeppe, Robert A; Jagust, William

    2015-09-01

In vivo imaging of amyloid burden with positron emission tomography (PET) provides a means for studying the pathophysiology of Alzheimer's and related diseases. Measurement of subtle changes in amyloid burden requires quantitative analysis of image data. Reliable quantitative analysis of amyloid PET scans acquired at multiple sites and over time requires rigorous standardization of acquisition protocols, subject management, tracer administration, image quality control, and image processing and analysis methods. We review critical points in the acquisition and analysis of amyloid PET, identify ways in which technical factors can contribute to measurement variability, and suggest methods for mitigating these sources of noise. Improved quantitative accuracy could reduce the sample size necessary to detect intervention effects when amyloid PET is used as a treatment end point and allow more reliable interpretation of change in amyloid burden and its relationship to clinical course.

  6. Uncertainty Quantification of Water Quality in Tamsui River in Taiwan

    NASA Astrophysics Data System (ADS)

    Kao, D.; Tsai, C.

    2017-12-01

In Taiwan, modeling of non-point source pollution is unavoidably associated with uncertainty. The main purpose of this research is to better understand water contamination in the metropolitan Taipei area and to provide a new analysis method for government agencies or companies to establish related control and design measures. In this research, three methods are used to carry out the uncertainty analysis step by step with Mike 21, which is widely used for hydrodynamic and water quality modeling; the study area is the Tamsui River watershed. First, a sensitivity analysis is conducted, which can be used to rank the influence of parameters and variables such as dissolved oxygen, nitrate, ammonia, and phosphorus. Then we use first-order error analysis (FOEA) to determine the number of parameters that significantly affect the variability of the simulation results. Finally, a state-of-the-art method for uncertainty analysis called the perturbance moment method (PMM) is applied, which is more efficient than Monte Carlo simulation (MCS). For MCS, the calculations may become cumbersome when multiple uncertain parameters and variables are involved. For PMM, three representative points are used for each random variable, and the statistical moments (e.g., mean value, standard deviation) of the output can be expressed through the representative points and perturbance moments based on the parallel axis theorem. Under the assumption of independent parameters and variables, calculation time is significantly reduced for PMM compared with MCS at comparable modeling accuracy.
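    Three-point moment-propagation schemes of the kind described here can be illustrated with the standard three-node Gauss-Hermite rule for a normally distributed input. This is a generic sketch of the idea, not necessarily the exact PMM formulation used in the study.

```python
import numpy as np

def three_point_moments(f, mu, sigma):
    """Estimate the mean and standard deviation of f(X) for X ~ Normal(mu, sigma)
    using the 3-point Gauss-Hermite rule: nodes mu, mu +/- sqrt(3)*sigma
    with weights 2/3, 1/6, 1/6 (exact for polynomials up to degree 5)."""
    nodes = np.array([mu - np.sqrt(3) * sigma, mu, mu + np.sqrt(3) * sigma])
    weights = np.array([1 / 6, 2 / 3, 1 / 6])
    vals = np.array([f(x) for x in nodes])
    mean = weights @ vals
    var = weights @ (vals - mean) ** 2
    return mean, np.sqrt(var)
```

    Only three model runs per random variable are needed, which is why point-estimate methods can be far cheaper than Monte Carlo simulation when each run of a water quality model is expensive.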

  7. Lessons Learned from OMI Observations of Point Source SO2 Pollution

    NASA Technical Reports Server (NTRS)

    Krotkov, N.; Fioletov, V.; McLinden, Chris

    2011-01-01

The Ozone Monitoring Instrument (OMI) on NASA's Aura satellite makes global daily measurements of the total column of sulfur dioxide (SO2), a short-lived trace gas produced by fossil fuel combustion, smelting, and volcanoes. Although anthropogenic SO2 signals may not be detectable in a single OMI pixel, it is possible to see a source and determine its exact location by averaging a large number of individual measurements. We describe new techniques for spatial and temporal averaging that have been applied to the OMI SO2 data to determine the spatial distributions, or "fingerprints," of SO2 burdens from the top 100 pollution sources in North America. The technique requires averaging several years of OMI daily measurements to observe SO2 pollution from typical anthropogenic sources. We found that the largest point sources of SO2 in the U.S. produce elevated SO2 values over a relatively small area, within a 20-30 km radius. Therefore, one needs higher than OMI spatial resolution to monitor typical SO2 sources. The TROPOMI instrument on the ESA Sentinel-5 Precursor mission will have improved ground resolution (approximately 7 km at nadir) but is limited to one measurement per day. A pointable geostationary UVB spectrometer with variable spatial resolution and flexible sampling frequency could potentially achieve the goal of daily monitoring of SO2 point sources and resolve downwind plumes. This concept of taking measurements at high frequency to enhance weak signals needs to be demonstrated with a GEOCAPE precursor mission before 2020, which will help formulate GEOCAPE measurement requirements.

  8. IRAS variables as galactic structure tracers - Classification of the bright variables

    NASA Technical Reports Server (NTRS)

    Allen, L. E.; Kleinmann, S. G.; Weinberg, M. D.

    1993-01-01

The characteristics of the 'bright infrared variables' (BIRVs), a sample consisting of the 300 brightest stars in the IRAS Point Source Catalog with an IRAS variability index VAR of 98 or greater, are investigated with the purpose of establishing which of the IRAS variables are AGB stars (e.g., oxygen-rich Miras and carbon stars, as was assumed by Weinberg (1992)). Analysis of optical, infrared, and microwave spectroscopy of these stars indicates that, of the 88 stars in the BIRV sample identified with cataloged variables, 86 can be classified as Miras. A similar analysis performed for a color-selected sample of stars, using the color limits employed by Habing (1988) to select AGB stars, showed that, of the 52 percent of stars that could be classified, 38 percent are non-AGB stars, including H II regions, planetary nebulae, supergiants, and young stellar objects, indicating that studies using color-selected samples are subject to misinterpretation.

  9. Spatial and temporal variations of loads and sources of total and dissolved Phosphorus in a set of rivers (Western France).

    NASA Astrophysics Data System (ADS)

    Legeay, Pierre-Louis; Moatar, Florentina; Gascuel-Odoux, Chantal; Gruau, Gérard

    2015-04-01

In intensive agricultural regions with substantial livestock farming, long-term land application of phosphorus (P), both as chemical fertilizer and as animal waste, has resulted in elevated P contents in soils. Although high P concentrations in rivers are of major concern, few studies have assessed the spatiotemporal variability of P loads in rivers and the apportionment of point and nonpoint sources in total loads. Here we focus on Brittany (Western France), where, even though P is a major issue for human and drinking-water safety (cyanotoxins), environmental protection, and economic costs, owing to the proliferations of cyanobacteria that occur every year in this region, no regional-scale systematic study has been carried out so far. We selected a set of small rivers (stream order 3-5) with homogeneous agriculture and granitic catchments. By gathering data from three water quality monitoring networks covering more than 100 measurement stations, we provide a regional-scale quantification of the spatiotemporal variability of dissolved P (DP) and total P (TP) interannual loads from 1992 to 2012. Built on mean P load at low flows and statistical significance tests, we developed a new indicator, called the 'low-flow P load' (LFP-load), which allows us to determine the importance of domestic and industrial P sources in the total P load and to assess their spatiotemporal variability compared with agricultural sources. The calculation and mapping of DP and TP interannual load variations allow identification of the highest- and lowest-contributing catchments over the study period and of how the P loads of Brittany rivers have evolved through time. Both mean DP and TP loads have been divided by more than two over the last 20 years. 
Mean LFDP-load decreased by more than 60% and mean LFTP-load by more than 45% on average over the same period, showing that this marked temporal decrease in total load is largely due to the decrease in domestic and industrial P effluents. A global shift in the apportionment of P inputs to freshwaters has thus occurred in Brittany over the last 20 years, as agricultural nonpoint sources now contribute a greater portion of inputs, showing the efficiency of the recent control of point sources through improvement of wastewater treatment plants and removal of phosphates from detergents. The spatialized P loads provided by this study could give a basis for a better understanding of the factors that drive P transfers in Brittany soils and hotspots of P emissions, while the LFP-load indicator can be a tool to assess the effects of point-source P mitigation plans.
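    The low-flow load idea behind an indicator of this kind can be sketched simply: average the daily P load over only those days whose streamflow falls below a chosen percentile, when diffuse agricultural transport is minimal and point discharges dominate. The percentile threshold and the exact form are illustrative assumptions, not the authors' published definition.

```python
import numpy as np

def low_flow_p_load(flow, p_conc, flow_percentile=10):
    """Sketch of a 'low-flow P load' indicator: mean daily load (flow * concentration)
    computed only on days whose flow is at or below the given flow percentile."""
    flow = np.asarray(flow, dtype=float)
    p_conc = np.asarray(p_conc, dtype=float)
    thresh = np.percentile(flow, flow_percentile)
    low = flow <= thresh
    return np.mean(flow[low] * p_conc[low])
```

    Comparing this low-flow load with the total annual load then gives a rough apportionment between point (domestic, industrial) and nonpoint (agricultural) sources.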

  10. Uncertainty propagation for SPECT/CT-based renal dosimetry in 177Lu peptide receptor radionuclide therapy

    NASA Astrophysics Data System (ADS)

    Gustafsson, Johan; Brolin, Gustav; Cox, Maurice; Ljungberg, Michael; Johansson, Lena; Sjögreen Gleisner, Katarina

    2015-11-01

    A computer model of a patient-specific clinical 177Lu-DOTATATE therapy dosimetry system is constructed and used for investigating the variability of renal absorbed dose and biologically effective dose (BED) estimates. As patient models, three anthropomorphic computer phantoms coupled to a pharmacokinetic model of 177Lu-DOTATATE are used. Aspects included in the dosimetry-process model are the gamma-camera calibration via measurement of the system sensitivity, selection of imaging time points, generation of mass-density maps from CT, SPECT imaging, volume-of-interest delineation, calculation of absorbed-dose rate via a combination of local energy deposition for electrons and Monte Carlo simulations of photons, curve fitting and integration to absorbed dose and BED. By introducing variabilities in these steps the combined uncertainty in the output quantity is determined. The importance of different sources of uncertainty is assessed by observing the decrease in standard deviation when removing a particular source. The obtained absorbed dose and BED standard deviations are approximately 6% and slightly higher if considering the root mean square error. The most important sources of variability are the compensation for partial volume effects via a recovery coefficient and the gamma-camera calibration via the system sensitivity.
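    The importance-ranking strategy described here, observing the drop in output standard deviation when one source of variability is removed, can be sketched with a small Monte Carlo helper. The model and parameters below are synthetic placeholders, not the dosimetry-process model of the study.

```python
import numpy as np

def source_importance(model, sources, n=20000, seed=1):
    """Rank uncertainty sources by how much the output std decreases when each
    source is frozen at its mean. `sources` maps a name to the (mean, std) of
    an independent normal input; `model` maps a dict of sample arrays to outputs."""
    rng = np.random.default_rng(seed)

    def run(frozen=None):
        draws = {name: (np.full(n, mu) if name == frozen else rng.normal(mu, sd, n))
                 for name, (mu, sd) in sources.items()}
        return model(draws).std()

    full = run()  # std with all sources varying
    return full, {name: full - run(frozen=name) for name in sources}
```

    A large drop when a source is frozen marks it as dominant, which is how effects such as partial-volume compensation and camera calibration could be singled out.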

  11. How long bones grow children: Mechanistic paths to variation in human height growth.

    PubMed

    Lampl, Michelle; Schoen, Meriah

    2017-03-01

Eveleth and Tanner's descriptive documentation of worldwide variability in human growth provided evidence of the interaction between genetics and environment during development that has been foundational to the science of human growth. There remains a need, however, to describe the mechanistic foundations of variability in human height growth patterns. A review of research documenting cellular activities at the endochondral growth plate aims to show how the unique microenvironment and cell functions during the sequential phases of the chondrocyte lifecycle affect long bone elongation, a fundamental source of height growth. There are critical junctures within the chondrocytic differentiation cascade at which environmental influences are integrated and have the ability to influence progression to the hypertrophic chondrocyte phase, the primary driver of long bone elongation. Phenotypic differences in height growth patterns reflect variability in amplitude and frequency of discretely timed hypertrophic cellular expansion events, the cellular basis of saltation and stasis growth biology. Final height is a summary of the dynamic processes carried out by the growth plate cellular machinery. As these cell-level mechanisms unfold in an individual, time-specific manner, there are many critical points at which a genetic growth program can be enhanced or perturbed. Recognizing both the complexity and fluidity of this adaptive system questions the likelihood of a single, optimal growth pattern and instead identifies a larger bandwidth of saltatory frequencies for "normal" growth. Further inquiry into mechanistic sources of variability acting at critical organizational points of chondrogenesis can provide new opportunities for growth interventions.

  12. On the construction, comparison, and variability of airsheds for interpreting semivolatile organic compounds in passively sampled air.

    PubMed

    Westgate, John N; Wania, Frank

    2011-10-15

    Air mass origin as determined by back trajectories often aids in explaining some of the short-term variability in the atmospheric concentrations of semivolatile organic contaminants. Airsheds, constructed by amalgamating large numbers of back trajectories, capture average air mass origins over longer time periods and thus have found use in interpreting air concentrations obtained by passive air samplers. To explore some of their key characteristics, airsheds for 54 locations on Earth were constructed and compared for roundness, seasonality, and interannual variability. To avoid the so-called "pole problem" and to simplify the calculation of roundness, a "geodesic grid" was used to bin the back-trajectory end points. Departures from roundness were seen to occur at all latitudes and to correlate significantly with local slope but no strong relationship between latitude and roundness was revealed. Seasonality and interannual variability vary widely enough to imply that static models of transport are not sufficient to describe the proximity of an area to potential sources of contaminants. For interpreting an air measurement an airshed should be generated specifically for the deployment time of the sampler, especially when investigating long-term trends. Samples taken in a single season may not represent the average annual atmosphere, and samples taken in linear, as opposed to round, airsheds may not represent the average atmosphere in the area. Simple methods are proposed to ascertain the significance of an airshed or individual cell. It is recommended that when establishing potential contaminant source regions only end points with departure heights of less than ∼700 m be considered.

  13. Monthly variations of dew point temperature in the coterminous United States

    NASA Astrophysics Data System (ADS)

    Robinson, Peter J.

    1998-11-01

    The dew point temperature, Td, data from the surface airways data set of the U.S. National Climatic Data Center were used to develop a basic dew point climatology for the coterminous United States. Quality control procedures were an integral part of the analysis. Daily Td, derived as the average of eight observations at 3-hourly intervals, for 222 stations for the 1961-1990 period were used. The annual and seasonal pattern of average values showed a clear south-north decrease in the eastern portion of the nation, a trend which was most marked in winter. In the west, values decreased inland from the Pacific Coast. Inter-annual variability was generally low when actual mean values were high. A cluster analysis suggested that the area could be divided into six regions, two oriented north-south in the west, four aligned east-west in the area east of the Rocky Mountains. Day-to-day variability was low in all seasons in the two western clusters, but showed a distinct winter maximum in the east. This was explained in broad terms by consideration of air flow regimes, with the Pacific Ocean and the Gulf of Mexico acting as the major moisture sources. Comparison of values for pairs of nearby stations suggested that Td was rather insensitive to local moisture sources. Analysis of the patterns of occurrence of dew points exceeding the 95th percentile threshold indicated that extremes in summer tend to be localized and short-lived, while in winter they are more widespread and persistent.

  14. ROSAT HRI and ASCA Observations of the Spiral Galaxy NGC 6946 and its Northeast Complex of Luminous Supernova Remnants

    NASA Technical Reports Server (NTRS)

    Schlegel, E.; Swank, Jean (Technical Monitor)

    2001-01-01

Analyses of 80 ks ASCA (Advanced Satellite for Cosmology and Astrophysics) and 60 ks ROSAT HRI (High Resolution Imager) observations of the face-on spiral galaxy NGC 6946 are presented. The ASCA image is the first observation of this galaxy above approximately 2 keV. Diffuse emission may be present in the inner approximately 4', extending to energies above approximately 2-3 keV. In the HRI data, 14 pointlike sources are detected, the brightest two being a source very close to the nucleus and a source to the northeast that corresponds to a luminous complex of interacting supernova remnants (SNRs). We detect a point source that lies approximately 30" west of the SNR complex but with a luminosity of roughly 1/15 that of the SNR complex. None of the point sources show evidence of strong variability; weak variability would escape our detection. The ASCA spectrum of the SNR complex shows evidence for an emission line at approximately 0.9 keV that could be either Ne IX at approximately 0.915 keV or a blend of ion stages of Fe L-shell emission if the continuum is fitted with a power law. However, a two-component Raymond-Smith thermal spectrum with no lines gives an equally valid continuum fit and may be more physically plausible given the observed spectrum below 3 keV. Adopting this latter model, we derive a density for the SNR complex of 10-35 cm^-3, consistent with estimates inferred from optical emission-line ratios. The complex's extraordinary X-ray luminosity may be related more to the high density of the surrounding medium than to a small but intense interaction region where two of the complex's SNRs are apparently colliding.

  15. Farmers, Trust, and the Market Solution to Water Pollution: The Role of Social Embeddedness in Water Quality Trading

    ERIC Educational Resources Information Center

    Mariola, Matt J.

    2012-01-01

    Water quality trading (WQT) is a market arrangement in which a point-source water polluter pays farmers to implement conservation practices and claims the resulting benefits as credits toward meeting a pollution permit. Success rates of WQT programs nationwide are highly variable. Most of the literature on WQT is from an economic perspective…

  16. Einstein’s quadrupole formula from the kinetic-conformal Hořava theory

    NASA Astrophysics Data System (ADS)

    Bellorín, Jorge; Restuccia, Alvaro

We analyze the radiative and nonradiative linearized variables in a gravity theory within the family of nonprojectable Hořava theories: the Hořava theory at the kinetic-conformal point. There is no extra mode in this formulation; the theory shares the same number of degrees of freedom with general relativity. The large-distance effective action, which is the one we consider, can be given a generally covariant form under asymptotically flat boundary conditions: the Einstein-aether theory under the condition of hypersurface orthogonality on the aether vector. In the linearized theory, we find that only the transverse-traceless tensorial modes obey a sourced wave equation, as in general relativity. The remaining variables are nonradiative. The result is gauge-independent at the level of the linearized theory. For the case of a weak source, we find that the leading mode in the far zone is exactly Einstein's quadrupole formula of general relativity, if some coupling constants are properly identified. There are no monopoles or dipoles in this formulation, in distinction to the nonprojectable Hořava theory outside the kinetic-conformal point. We also discuss some constraints on the theory arising from the observational bounds on Lorentz-violating theories.

  17. Current aspects of Salmonella contamination in the US poultry production chain and the potential application of risk strategies in understanding emerging hazards.

    PubMed

    Rajan, Kalavathy; Shi, Zhaohao; Ricke, Steven C

    2017-05-01

    One of the leading causes of foodborne illness in poultry products is Salmonella enterica. Salmonella hazards in poultry may be estimated and possible control methods modeled and evaluated through the use of quantitative microbiological risk assessment (QMRA) models and tools. From farm to table, there are many possible routes of Salmonella dissemination and contamination in poultry. From the time chicks are hatched through growth, transportation, processing, storage, preparation, and finally consumption, the product could be contaminated through exposure to different materials and sources. Examination of each step of the process is necessary as well as an examination of the overall picture to create effective countermeasures against contamination and prevent disease. QMRA simulation models can use either point estimates or probability distributions to examine variables such as Salmonella concentrations at retail or at any given point of processing to gain insight on the chance of illness due to Salmonella ingestion. For modeling Salmonella risk in poultry, it is important to look at variables such as Salmonella transfer and cross contamination during processing. QMRA results may be useful for the identification and control of critical sources of Salmonella contamination.
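    The contrast between point estimates and probability distributions in QMRA can be sketched with the standard exponential dose-response model. The parameter value and dose distribution below are purely illustrative assumptions, not endorsed Salmonella risk parameters.

```python
import numpy as np

def p_illness(dose, r=0.00752):
    """Exponential dose-response model: P(ill) = 1 - exp(-r * dose).
    The value of r here is illustrative, not a validated Salmonella parameter."""
    return 1.0 - np.exp(-r * np.asarray(dose, dtype=float))

# Point estimate vs. probabilistic estimate of mean risk
rng = np.random.default_rng(0)
doses = rng.lognormal(mean=3.0, sigma=1.0, size=100000)  # variable exposure (CFU)
point = float(p_illness(np.exp(3.0)))                    # plug in the median dose
probabilistic = float(p_illness(doses).mean())           # average risk over the distribution
```

    Because the dose distribution is skewed, the risk averaged over the full distribution differs from the risk at a single representative dose, which is why QMRA results depend on whether variables are treated as point estimates or as distributions.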

  18. Experimental demonstration of interferometric imaging using photonic integrated circuits.

    PubMed

    Su, Tiehui; Scott, Ryan P; Ogden, Chad; Thurman, Samuel T; Kendrick, Richard L; Duncan, Alan; Yu, Runxiang; Yoo, S J B

    2017-05-29

    This paper reports design, fabrication, and demonstration of a silica photonic integrated circuit (PIC) capable of conducting interferometric imaging with multiple baselines around λ = 1550 nm. The PIC consists of four sets of five waveguides (total of twenty waveguides), each leading to a three-band spectrometer (total of sixty waveguides), after which a tunable Mach-Zehnder interferometer (MZI) constructs interferograms from each pair of the waveguides. A total of thirty sets of interferograms (ten pairs of three spectral bands) is collected by the detector array at the output of the PIC. The optical path difference (OPD) of each interferometer baseline is kept to within 1 µm to maximize the visibility of the interference measurement. We constructed an experiment to utilize the two baselines for complex visibility measurement on a point source and a variable width slit. We used the point source to demonstrate near unity value of the PIC instrumental visibility, and used the variable slit to demonstrate visibility measurement for a simple extended object. The experimental result demonstrates the visibility of baseline 5 and 20 mm for a slit width of 0 to 500 µm in good agreement with theoretical predictions.
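    The theoretical visibility curve for a uniformly illuminated slit follows from the van Cittert-Zernike theorem and can be sketched in a few lines. The slit-to-instrument distance in the usage below is an assumption; the abstract does not state it.

```python
import numpy as np

def slit_visibility(baseline_m, slit_width_m, distance_m, wavelength_m=1550e-9):
    """Fringe visibility of a uniformly illuminated slit (van Cittert-Zernike):
    V = |sinc(B * w / (lambda * z))|, using numpy's normalized sinc,
    where B is the baseline, w the slit width, and z the slit distance."""
    x = baseline_m * slit_width_m / (wavelength_m * distance_m)
    return np.abs(np.sinc(x))
```

    A point source (zero width) gives unit visibility for any baseline, and visibility falls off faster on the longer 20 mm baseline than on the 5 mm one as the slit widens, matching the qualitative behavior reported.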

  19. Micro and Macroscale Drivers of Nutrient Concentrations in Urban Streams in South, Central and North America.

    PubMed

    Loiselle, Steven A; Gasparini Fernandes Cunha, Davi; Shupe, Scott; Valiente, Elsa; Rocha, Luciana; Heasley, Eleanore; Belmont, Patricia Pérez; Baruch, Avinoam

    Global metrics of land cover and land use provide a fundamental basis to examine the spatial variability of human-induced impacts on freshwater ecosystems. However, microscale processes and site-specific conditions related to bank vegetation, pollution sources, adjacent land use and water uses can have important influences on ecosystem conditions, in particular in smaller tributary rivers. Compared to larger order rivers, these low-order streams and rivers are more numerous, yet often under-monitored. The present study explored the relationship of nutrient concentrations in 150 streams in 57 hydrological basins in South, Central and North America (Buenos Aires, Curitiba, São Paulo, Rio de Janeiro, Mexico City and Vancouver) with macroscale information available from global datasets and microscale data acquired by trained citizen scientists. Average sub-basin phosphate (P-PO4) concentrations were found to be well correlated with sub-basin attributes on both macro and microscales, while the relationships between sub-basin attributes and nitrate (N-NO3) concentrations were limited. A phosphate threshold for eutrophic conditions (>0.1 mg L-1 P-PO4) was exceeded in basins where microscale point source discharge points (e.g., residential, industrial, urban/road) were identified in more than 86% of stream reaches monitored by citizen scientists. The presence of bankside vegetation covaried (rho = -0.53) with lower phosphate concentrations in the ecosystems studied. Macroscale information on nutrient loading allowed for a strong separation between basins with and without eutrophic conditions. Most importantly, the combination of macroscale and microscale information acquired increased our ability to explain sub-basin variability of P-PO4 concentrations. 
The identification of microscale point sources and bank vegetation conditions by citizen scientists provided important information that local authorities could use to improve their management of lower order river ecosystems.

  20. Solution of the weighted symmetric similarity transformations based on quaternions

    NASA Astrophysics Data System (ADS)

    Mercan, H.; Akyilmaz, O.; Aydin, C.

    2017-12-01

    A new method based on the Gauss-Helmert model of adjustment is presented for the solution of similarity transformations, either 3D or 2D, in the frame of the errors-in-variables (EIV) model. The EIV model assumes that all the variables in the mathematical model are contaminated by random errors. The total least squares estimation technique may be used to solve the EIV model. Accounting for the heteroscedastic uncertainty in both the target and the source coordinates, which is the more common and general case in practice, leads to a more realistic estimation of the transformation parameters. The presented algorithm can handle heteroscedastic transformation problems, i.e., the positions of both the target and the source points may have full covariance matrices. Therefore, there is no limitation such as isotropic or homogeneous accuracy for the reference point coordinates. The developed algorithm takes advantage of the quaternion representation, which uniquely defines a 3D rotation matrix. The transformation parameters, scale, translations, and the quaternion (and hence the rotation matrix), along with their covariances, are iteratively estimated with rapid convergence. Moreover, a prior least squares (LS) estimate of the unknown transformation parameters is not required to start the iterations. We also show that the developed method can be used to estimate 2D similarity transformation parameters by simply treating the problem as a 3D transformation with zero values assigned to the z-components of both target and source points. The efficiency of the new algorithm is demonstrated with numerical examples and comparisons with the results of previous studies that use the same data set. Simulation experiments for the evaluation and comparison of the proposed and the conventional weighted LS (WLS) methods are also presented.
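
    A minimal unweighted version of the quaternion-based estimation can be sketched with Horn's closed-form absolute-orientation solution. This is a simplified stand-in for the paper's iterative weighted Gauss-Helmert/TLS estimator, shown only to make the quaternion parameterization of the rotation concrete.

```python
import numpy as np

def similarity_transform(src, dst):
    """Estimate scale s, rotation R, translation t with dst ~ s*R@src + t.

    Closed-form quaternion solution (Horn 1987), ordinary unweighted least
    squares -- not the weighted Gauss-Helmert/TLS estimator of the paper.
    """
    src = np.asarray(src, float)
    dst = np.asarray(dst, float)
    ca, cb = src.mean(axis=0), dst.mean(axis=0)
    A, B = src - ca, dst - cb
    S = A.T @ B                       # cross-covariance: S[i,j] = sum a_i*b_j
    Sxx, Sxy, Sxz = S[0]
    Syx, Syy, Syz = S[1]
    Szx, Szy, Szz = S[2]
    N = np.array([
        [Sxx+Syy+Szz, Syz-Szy,      Szx-Sxz,      Sxy-Syx],
        [Syz-Szy,     Sxx-Syy-Szz,  Sxy+Syx,      Szx+Sxz],
        [Szx-Sxz,     Sxy+Syx,     -Sxx+Syy-Szz,  Syz+Szy],
        [Sxy-Syx,     Szx+Sxz,      Syz+Szy,     -Sxx-Syy+Szz]])
    w, v = np.linalg.eigh(N)
    qw, qx, qy, qz = v[:, -1]         # eigenvector of the largest eigenvalue
    R = np.array([
        [1-2*(qy*qy+qz*qz), 2*(qx*qy-qw*qz),   2*(qx*qz+qw*qy)],
        [2*(qx*qy+qw*qz),   1-2*(qx*qx+qz*qz), 2*(qy*qz-qw*qx)],
        [2*(qx*qz-qw*qy),   2*(qy*qz+qw*qx),   1-2*(qx*qx+qy*qy)]])
    s = np.sum(B * (A @ R.T)) / np.sum(A * A)
    t = cb - s * R @ ca
    return s, R, t
```

    As in the abstract, a 2D problem can be fed to the same routine by padding the points with z = 0.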

  1. Micro and Macroscale Drivers of Nutrient Concentrations in Urban Streams in South, Central and North America

    PubMed Central

    Loiselle, Steven A.; Gasparini Fernandes Cunha, Davi; Shupe, Scott; Valiente, Elsa; Rocha, Luciana; Heasley, Eleanore; Belmont, Patricia Pérez; Baruch, Avinoam

    2016-01-01

    Global metrics of land cover and land use provide a fundamental basis to examine the spatial variability of human-induced impacts on freshwater ecosystems. However, microscale processes and site-specific conditions related to bank vegetation, pollution sources, adjacent land use and water uses can have important influences on ecosystem conditions, in particular in smaller tributary rivers. Compared to larger order rivers, these low-order streams and rivers are more numerous, yet often under-monitored. The present study explored the relationship of nutrient concentrations in 150 streams in 57 hydrological basins in South, Central and North America (Buenos Aires, Curitiba, São Paulo, Rio de Janeiro, Mexico City and Vancouver) with macroscale information available from global datasets and microscale data acquired by trained citizen scientists. Average sub-basin phosphate (P-PO4) concentrations were found to be well correlated with sub-basin attributes on both macro and microscales, while the relationships between sub-basin attributes and nitrate (N-NO3) concentrations were limited. A phosphate threshold for eutrophic conditions (>0.1 mg L-1 P-PO4) was exceeded in basins where microscale point source discharge points (e.g., residential, industrial, urban/road) were identified in more than 86% of stream reaches monitored by citizen scientists. The presence of bankside vegetation covaried (rho = -0.53) with lower phosphate concentrations in the ecosystems studied. Macroscale information on nutrient loading allowed for a strong separation between basins with and without eutrophic conditions. Most importantly, the combination of macroscale and microscale information acquired increased our ability to explain sub-basin variability of P-PO4 concentrations. 
The identification of microscale point sources and bank vegetation conditions by citizen scientists provided important information that local authorities could use to improve their management of lower order river ecosystems. PMID:27662192

  2. Hybrid Optimization Parallel Search PACKage

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2009-11-10

    HOPSPACK is open source software for solving optimization problems without derivatives. Application problems may have a fully nonlinear objective function, bound constraints, and linear and nonlinear constraints. Problem variables may be continuous, integer-valued, or a mixture of both. The software provides a framework that supports any derivative-free type of solver algorithm. Through the framework, solvers request parallel function evaluation, which may use MPI (multiple machines) or multithreading (multiple processors/cores on one machine). The framework provides a Cache and Pending Cache of saved evaluations that reduces execution time and facilitates restarts. Solvers can dynamically create other algorithms to solve subproblems, a useful technique for handling multiple start points and integer-valued variables. HOPSPACK ships with the Generating Set Search (GSS) algorithm, developed at Sandia as part of the APPSPACK open source software project.
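
    The GSS idea can be illustrated by a toy serial compass search: poll a fixed set of directions and contract the step when no poll point improves. This is only a sketch of the pattern-search principle; HOPSPACK itself adds parallel evaluation, caching, and constraint handling that this omits.

```python
import numpy as np

def gss_minimize(f, x0, step=1.0, tol=1e-6, max_evals=10_000):
    """Minimal derivative-free generating-set (compass) search.

    Polls +/- each coordinate direction; halves the step on a failed poll.
    A toy serial sketch of the GSS idea, not HOPSPACK's implementation.
    """
    x = np.asarray(x0, float)
    fx = f(x)
    evals = 1
    while step > tol and evals < max_evals:
        improved = False
        for d in range(x.size):
            for sign in (+1.0, -1.0):
                trial = x.copy()
                trial[d] += sign * step
                ft = f(trial)
                evals += 1
                if ft < fx:
                    x, fx, improved = trial, ft, True
                    break
            if improved:
                break
        if not improved:
            step *= 0.5          # contract pattern when no poll point improves
    return x, fx

# Usage: minimize a smooth quadratic without derivatives.
xmin, fmin = gss_minimize(lambda v: (v[0]-1)**2 + (v[1]+2)**2, [0.0, 0.0])
```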

  3. User's guide for RAM. Volume II. Data preparation and listings

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Turner, D.B.; Novak, J.H.

    1978-11-01

    The information presented in this user's guide is directed to air pollution scientists having an interest in applying air quality simulation models. RAM is a method of estimating short-term dispersion using the Gaussian steady-state model. These algorithms can be used for estimating air quality concentrations of relatively nonreactive pollutants for averaging times from an hour to a day from point and area sources. The algorithms are applicable for locations with level or gently rolling terrain where a single wind vector for each hour is a good approximation to the flow over the source area considered. Calculations are performed for each hour. Hourly meteorological data required are wind direction, wind speed, temperature, stability class, and mixing height. Emission information required of point sources consists of source coordinates, emission rate, physical height, stack diameter, stack gas exit velocity, and stack gas temperature. Emission information required of area sources consists of southwest corner coordinates, source side length, total area emission rate and effective area source height. Computation time is kept to a minimum by the manner in which concentrations from area sources are estimated, using a narrow plume hypothesis and using the area source squares as given rather than breaking down all sources into an area of uniform elements. Options are available to the user to allow use of three different types of receptor locations: (1) those whose coordinates are input by the user, (2) those whose coordinates are determined by the model and are downwind of significant point and area sources where maxima are likely to occur, and (3) those whose coordinates are determined by the model to give good area coverage of a specific portion of the region. Computation time is also decreased by keeping the number of receptors to a minimum. Volume II presents RAM example outputs, typical run streams, variable glossaries, and Fortran source codes.
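
    The Gaussian steady-state point-source formula underlying models like RAM can be sketched directly. The linear dispersion coefficients used here are crude assumed stand-ins for the stability-class-dependent sigma curves a real implementation uses.

```python
import numpy as np

def plume_concentration(Q, u, x, y, z, H, a=0.08, b=0.06):
    """Gaussian steady-state plume from an elevated point source.

    C(x,y,z) = Q/(2*pi*u*sy*sz) * exp(-y^2/(2*sy^2))
               * [exp(-(z-H)^2/(2*sz^2)) + exp(-(z+H)^2/(2*sz^2))]

    Q: emission rate [g/s], u: wind speed [m/s], x: downwind distance [m],
    y: crosswind offset [m], z: receptor height [m], H: effective stack
    height [m]. The linear sigmas sy = a*x, sz = b*x are assumed stand-ins.
    """
    sy, sz = a * x, b * x
    crosswind = np.exp(-y**2 / (2 * sy**2))
    # second vertical term reflects the plume off the ground (image source)
    vertical = np.exp(-(z - H)**2 / (2 * sz**2)) + np.exp(-(z + H)**2 / (2 * sz**2))
    return Q / (2 * np.pi * u * sy * sz) * crosswind * vertical

# Ground-level centreline concentration 1 km downwind of a 50 m stack.
c = plume_concentration(Q=100.0, u=5.0, x=1000.0, y=0.0, z=0.0, H=50.0)
```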

  4. Consumer-phase Salmonella enterica serovar enteritidis risk assessment for egg-containing food products.

    PubMed

    Mokhtari, Amirhossein; Moore, Christina M; Yang, Hong; Jaykus, Lee-Ann; Morales, Roberta; Cates, Sheryl C; Cowen, Peter

    2006-06-01

    We describe a one-dimensional probabilistic model of the role of domestic food handling behaviors on salmonellosis risk associated with the consumption of eggs and egg-containing foods. Six categories of egg-containing foods were defined based on the amount of egg contained in the food, whether eggs are pooled, and the degree of cooking practiced by consumers. We used bootstrap simulation to quantify uncertainty in risk estimates due to sampling error, and sensitivity analysis to identify key sources of variability and uncertainty in the model. Because of typical model characteristics such as nonlinearity, interaction between inputs, thresholds, and saturation points, Sobol's method, a novel sensitivity analysis approach, was used to identify key sources of variability. Based on the mean probability of illness, examples of foods from the food categories ranked from most to least risk of illness were: (1) home-made salad dressings/ice cream; (2) fried eggs/boiled eggs; (3) omelettes; and (4) baked foods/breads. For food categories that may include uncooked eggs (e.g., home-made salad dressings/ice cream), consumer handling conditions such as storage time and temperature after food preparation were the key sources of variability. In contrast, for food categories associated with undercooked eggs (e.g., fried/soft-boiled eggs), the initial level of Salmonella contamination and the log10 reduction due to cooking were the key sources of variability. Important sources of uncertainty varied with both the risk percentile and the food category under consideration. This work adds to previous risk assessments focused on egg production and storage practices, and provides a science-based approach to inform consumer risk communications regarding safe egg handling practices.
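
    The variance-based sensitivity analysis mentioned above (Sobol's method) can be illustrated with a generic Monte Carlo estimator on a toy model. The model, input distributions, and sample size are assumptions for demonstration, unrelated to the egg-handling risk model itself.

```python
import numpy as np

rng = np.random.default_rng(0)

def sobol_first_order(model, k, n=20_000):
    """Monte Carlo first-order Sobol indices (Saltelli-style estimator).

    A generic sketch of variance-based sensitivity analysis; inputs are
    taken as independent U(0,1) variables for simplicity.
    """
    A = rng.random((n, k))
    B = rng.random((n, k))
    fA, fB = model(A), model(B)
    var = np.var(np.concatenate([fA, fB]))
    S = np.empty(k)
    for i in range(k):
        ABi = A.copy()
        ABi[:, i] = B[:, i]        # resample only input i
        S[i] = np.mean(fB * (model(ABi) - fA)) / var
    return S

# Toy model: Y = 2*X1 + X2; analytically S1 = 0.8, S2 = 0.2.
S = sobol_first_order(lambda X: 2 * X[:, 0] + X[:, 1], k=2)
```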

  5. Optimisation of gelatin extraction from Unicorn leatherjacket (Aluterus monoceros) skin waste: response surface approach.

    PubMed

    Hanjabam, Mandakini Devi; Kannaiyan, Sathish Kumar; Kamei, Gaihiamngam; Jakhar, Jitender Kumar; Chouksey, Mithlesh Kumar; Gudipati, Venkateshwarlu

    2015-02-01

    Physical properties of gelatin extracted from Unicorn leatherjacket (Aluterus monoceros) skin, which is generated as a waste from fish processing industries, were optimised using Response Surface Methodology (RSM). A Box-Behnken design was used to study the combined effects of three independent variables, namely phosphoric acid (H3PO4) concentration (0.15-0.25 M), extraction temperature (40-50 °C) and extraction time (4-12 h), on responses including yield, gel strength and melting point of the gelatin. The optimum conditions derived by RSM were 0.2 M H3PO4, an extraction time of 9.01 h and a hot-water extraction temperature of 45.83 °C, giving a yield of 10.58%. The maximum achieved gel strength and melting point were 138.54 g and 22.61 °C, respectively. Extraction time was found to be the most influential variable, with a positive coefficient on yield and negative coefficients on gel strength and melting point. The results indicated that Unicorn leatherjacket skins can be a source of gelatin with mild gel strength and melting point.
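
    The RSM fitting step can be sketched as an ordinary least-squares fit of a full second-order polynomial, the model family a Box-Behnken design supports. The data below are synthetic (a known quadratic plus noise), not the gelatin measurements.

```python
import numpy as np

rng = np.random.default_rng(0)

def quad_features(X):
    """Design matrix: intercept, linear, interaction and squared terms."""
    x1, x2, x3 = X.T
    return np.column_stack([np.ones(len(X)), x1, x2, x3,
                            x1*x2, x1*x3, x2*x3, x1**2, x2**2, x3**2])

# Synthetic response over coded factor levels in [-1, 1]; the true surface
# is a known quadratic, so the fitted coefficients can be checked.
X = rng.uniform(-1.0, 1.0, size=(60, 3))
true = lambda x1, x2, x3: 10 - 2*(x1 - 0.3)**2 - (x2 + 0.2)**2 - 0.5*x3**2
y = true(*X.T) + rng.normal(0.0, 0.05, len(X))

beta, *_ = np.linalg.lstsq(quad_features(X), y, rcond=None)
y_hat = quad_features(X) @ beta
r2 = 1 - np.sum((y - y_hat)**2) / np.sum((y - y.mean())**2)
```

    In an actual RSM study the fitted surface would then be optimized over the factor ranges to locate conditions like the optimum reported above.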

  6. The interprocess NIR sampling as an alternative approach to multivariate statistical process control for identifying sources of product-quality variability.

    PubMed

    Marković, Snežana; Kerč, Janez; Horvat, Matej

    2017-03-01

    We present a new approach to identifying sources of variability within a manufacturing process by NIR measurements of samples of intermediate material after each consecutive unit operation (interprocess NIR sampling technique). In addition, we summarize the development of a multivariate statistical process control (MSPC) model for the production of an enteric-coated pellet product of the proton-pump inhibitor class. By developing provisional NIR calibration models, the identification of critical process points yields results comparable to the established MSPC modeling procedure. Both approaches are shown to lead to the same conclusion, identifying parameters of extrusion/spheronization and characteristics of lactose that have the greatest influence on the end-product's enteric coating performance. The proposed approach enables quicker and easier identification of variability sources during the manufacturing process, especially in cases when historical process data are not readily available. In the presented case, changes in lactose characteristics influence the performance of the extrusion/spheronization process step. The pellet cores produced using one (less suitable) lactose source were on average larger and more fragile, leading to breakage of the cores during subsequent fluid bed operations. These results were confirmed by additional experimental analyses illuminating the underlying mechanism of fracture of oblong pellets during the pellet coating process, leading to compromised film coating.

  7. Nonlinear Dynamic Models in Advanced Life Support

    NASA Technical Reports Server (NTRS)

    Jones, Harry

    2002-01-01

    To facilitate analysis, ALS systems are often assumed to be linear and time invariant, but they usually have important nonlinear and dynamic aspects. Nonlinear dynamic behavior can be caused by time varying inputs, changes in system parameters, nonlinear system functions, closed loop feedback delays, and limits on buffer storage or processing rates. Dynamic models are usually cataloged according to the number of state variables. The simplest dynamic models are linear, using only integration, multiplication, addition, and subtraction of the state variables. A general linear model with only two state variables can produce all the possible dynamic behavior of linear systems with many state variables, including stability, oscillation, or exponential growth and decay. Linear systems can be described using mathematical analysis. Nonlinear dynamics can be fully explored only by computer simulations of models. Unexpected behavior is produced by simple models having only two or three state variables with simple mathematical relations between them. Closed loop feedback delays are a major source of system instability. Exceeding limits on buffer storage or processing rates forces systems to change operating mode. Different equilibrium points may be reached from different initial conditions. Instead of one stable equilibrium point, the system may have several equilibrium points, oscillate at different frequencies, or even behave chaotically, depending on the system inputs and initial conditions. The frequency spectrum of an output oscillation may contain harmonics and the sums and differences of input frequencies, but it may also contain a stable limit cycle oscillation not related to input frequencies. We must investigate the nonlinear dynamic aspects of advanced life support systems to understand and counter undesirable behavior.
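
    A two-state linear model of the kind described can be simulated directly. The matrices below are arbitrary examples chosen to exhibit two of the behaviours listed: decaying oscillation (complex eigenvalues with negative real part) and exponential growth (positive real eigenvalues).

```python
import numpy as np

def simulate(A, x0, dt=0.001, steps=20_000):
    """Integrate the two-state linear system dx/dt = A @ x by Euler steps."""
    x = np.array(x0, float)
    trace = np.empty((steps, 2))
    for k in range(steps):
        trace[k] = x
        x = x + dt * (A @ x)
    return trace

# Eigenvalues -0.5 +/- 2i: a decaying oscillation.
damped = simulate(np.array([[-0.5, -2.0],
                            [ 2.0, -0.5]]), [1.0, 0.0])

# Eigenvalues +0.2, +0.2: exponential growth.
growth = simulate(np.array([[0.2, 0.0],
                            [0.0, 0.2]]), [1.0, 1.0])
```

    Varying only the matrix entries reproduces the whole linear repertoire the text mentions: stability, oscillation, and exponential growth or decay.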

  8. NuSTAR view of the central region of M31

    NASA Astrophysics Data System (ADS)

    Stiele, H.; Kong, A. K. H.

    2018-04-01

    Our neighbouring large spiral galaxy, the Andromeda galaxy (M31 or NGC 224), is an ideal target for studying the X-ray source population of a nearby galaxy. NuSTAR observed the central region of M31 in 2015, allowing study of the population of X-ray point sources at energies higher than 10 keV. Based on the source catalogue of the large XMM-Newton survey of M31, we identified counterparts to the XMM-Newton sources in the NuSTAR data. The NuSTAR data only contain sources of brightness comparable to (or even brighter than) the selected sources detected in the XMM-Newton data. We investigate hardness ratios, spectra, and long-term light curves of individual sources obtained from the NuSTAR data. Based on our spectral studies, we suggest four sources as possible X-ray binary candidates. The long-term light curves of seven sources that have been observed more than once show low (but significant) variability.

  9. Predicting Health Care Utilization in Marginalized Populations: Black, Female, Street-based Sex Workers

    PubMed Central

    Varga, Leah M.; Surratt, Hilary L.

    2014-01-01

    Background: Patterns of social and structural factors experienced by vulnerable populations may negatively affect willingness and ability to seek out health care services, and ultimately, their health. Methods: The outcome variable was utilization of health care services in the previous 12 months. Using Andersen's Behavioral Model for Vulnerable Populations, we examined self-reported data on utilization of health care services among a sample of 546 Black, street-based female sex workers in Miami, Florida. To evaluate the impact of each domain of the model on predicting health care utilization, domains were included in the logistic regression analysis by blocks, using the traditional variables first and then adding the vulnerable domain variables. Findings: The most consistent variables predicting health care utilization were having a regular source of care and self-rated health. The model that included only enabling variables was the most efficient model in predicting health care utilization. Conclusions: Any type of resource, link, or connection to or with an institution, or any consistent point of care contributes significantly to health care utilization behaviors. A consistent and reliable source for health care may increase health care utilization and subsequently decrease health disparities among vulnerable and marginalized populations, as well as contribute to public health efforts that encourage preventive health. PMID:24657047

  10. Radial Distribution of X-Ray Point Sources Near the Galactic Center

    NASA Astrophysics Data System (ADS)

    Hong, Jae Sub; van den Berg, Maureen; Grindlay, Jonathan E.; Laycock, Silas

    2009-11-01

    We present the log N-log S and spatial distributions of X-ray point sources in seven Galactic bulge (GB) fields within 4° of the Galactic center (GC). We compare the properties of 1159 X-ray point sources discovered in our deep (100 ks) Chandra observations of three low-extinction Window fields near the GC with the X-ray sources in the other GB fields centered around Sgr B2, Sgr C, the Arches Cluster, and Sgr A* using Chandra archival data. To reduce the systematic errors induced by the uncertain X-ray spectra of the sources coupled with field- and distance-dependent extinction, we classify the X-ray sources using quantile analysis and estimate their fluxes accordingly. The result indicates that the GB X-ray population is highly concentrated at the center, more heavily than the stellar distribution models. It extends out to more than 1.4° from the GC, and the projected density follows an empirical radial relation inversely proportional to the offset from the GC. We also compare the total X-ray and infrared surface brightness using the Chandra and Spitzer observations of the regions. The radial distribution of the total infrared surface brightness from the 3.6 μm band images appears to resemble the radial distribution of the X-ray point sources better than that predicted by the stellar distribution models. Assuming a simple power-law model for the X-ray spectra, the spectra appear intrinsically harder the closer the sources lie to the GC, but adding an iron emission line at 6.7 keV to the model allows the spectra of the GB X-ray sources to be largely consistent across the region. This implies that the majority of these GB X-ray sources may be of the same or similar type. Their X-ray luminosity and spectral properties support the idea that the most likely candidates are magnetic cataclysmic variables (CVs), primarily intermediate polars (IPs). 
Their observed number density is also consistent with the majority being IPs, provided the relative CV to star density in the GB is not smaller than the value in the local solar neighborhood.

  11. Maximum power point tracking algorithm based on sliding mode and fuzzy logic for photovoltaic sources under variable environmental conditions

    NASA Astrophysics Data System (ADS)

    Atik, L.; Petit, P.; Sawicki, J. P.; Ternifi, Z. T.; Bachir, G.; Della, M.; Aillerie, M.

    2017-02-01

    Solar panels have a nonlinear voltage-current characteristic, with a distinct maximum power point (MPP) that depends on environmental factors such as temperature and irradiation. In order to continuously harvest maximum power from the solar panels, they have to operate at their MPP despite the inevitable changes in the environment. Various methods for maximum power point tracking (MPPT) were developed and implemented in solar power electronic controllers to increase the efficiency of electricity production from renewables. In this paper, using the Matlab Simulink tools, we compare two different MPP tracking methods, fuzzy logic control (FL) and sliding mode control (SMC), considering their efficiency in solar energy production.
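
    For orientation, the classic perturb-and-observe (P&O) hill-climbing scheme, a simpler baseline than the FL and SMC controllers compared in the paper, can be sketched on a made-up panel curve. Both the panel model and its parameters are illustrative assumptions, not measured I-V data.

```python
import numpy as np

def pv_power(v, v_oc=40.0, i_sc=8.0):
    """Toy panel model: current falls off exponentially near open-circuit
    voltage v_oc. An assumed analytic curve, not a real I-V characteristic."""
    i = i_sc * (1.0 - np.exp((v - v_oc) / 3.0))
    return v * max(i, 0.0)

def mppt_po(v=20.0, dv=0.2, iters=200):
    """Climb the P-V curve: keep stepping in the direction that raised power."""
    p_prev = pv_power(v)
    step = dv
    for _ in range(iters):
        v += step
        p = pv_power(v)
        if p < p_prev:           # power dropped: reverse the perturbation
            step = -step
        p_prev = p
    return v, p_prev

v_mp, p_mp = mppt_po()
```

    P&O settles into a small oscillation around the MPP; the steady-state ripple and slow convergence under changing irradiance are exactly the weaknesses that motivate FL and SMC controllers.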

  12. Using Mobile Monitoring to Assess Spatial Variability in Urban Air Pollution Levels: Opportunities and Challenges (Invited)

    NASA Astrophysics Data System (ADS)

    Larson, T.

    2010-12-01

    Measuring air pollution concentrations from a moving platform is not a new idea. Historically, however, most information on the spatial variability of air pollutants has been derived from fixed site networks operating simultaneously over space. While this approach has obvious advantages from a regulatory perspective, with the increasing need to understand ever finer scales of spatial variability in urban pollution levels, the use of mobile monitoring to supplement fixed site networks has received increasing attention. Here we present examples of the use of this approach: 1) to assess existing fixed-site fine particle networks in Seattle, WA, including the establishment of new fixed-site monitoring locations; 2) to assess the effectiveness of a regulatory intervention, a wood stove burning ban, on the reduction of fine particle levels in the greater Puget Sound region; and 3) to assess spatial variability of both wood smoke and mobile source impacts in both Vancouver, B.C. and Tacoma, WA. Deducing spatial information from the inherently spatio-temporal measurements taken from a mobile platform is an area that deserves further attention. We discuss the use of “fuzzy” points to address the fine-scale spatio-temporal variability in the concentration of mobile source pollutants, specifically to deduce the broader distribution and sources of fine particle soot in the summer in Vancouver, B.C. We also discuss the use of principal component analysis to assess the spatial variability in multivariate, source-related features deduced from simultaneous measurements of light scattering, light absorption and particle-bound PAHs in Tacoma, WA. With increasing miniaturization and decreasing power requirements of air monitoring instruments, the number of simultaneous measurements that can easily be made from a mobile platform is rapidly increasing. 
Hopefully the methods used to design mobile monitoring experiments for differing purposes, and the methods used to interpret those measurements will keep pace.
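
    The principal component analysis step mentioned above can be sketched via SVD on synthetic data. The three channels here are stand-ins for light scattering, light absorption, and particle-bound PAHs, driven by one hidden common source; the coefficients and noise level are assumed.

```python
import numpy as np

rng = np.random.default_rng(0)

# One latent 'source strength' drives all three synthetic channels.
source = rng.lognormal(0.0, 1.0, 500)
X = np.column_stack([1.0 * source, 0.8 * source, 0.5 * source])
X += rng.normal(0.0, 0.05, X.shape)                # instrument noise

Xc = X - X.mean(axis=0)                            # centre each channel
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)  # PCA via SVD
explained = s**2 / np.sum(s**2)                    # variance fraction per PC
```

    With one dominant common source, the first principal component captures nearly all the variance, which is how PCA surfaces shared source-related features across simultaneous channels.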

  13. NOx Emissions from Large Point Sources: Variability in Ozone Production, Resulting Health Damages and Economic Costs

    NASA Astrophysics Data System (ADS)

    Mauzerall, D. L.; Sultan, B.; Kim, N.; Bradford, D.

    2004-12-01

    We present a proof-of-concept analysis of the measurement of the health damage of ozone (O3) produced from nitrogen oxides (NOx = NO + NO2) emitted by individual large point sources in the eastern United States. We use a regional atmospheric model of the eastern United States, the Comprehensive Air Quality Model with Extensions (CAMx), to quantify the variable impact that a fixed quantity of NOx emitted from individual sources can have on the downwind concentration of surface O3, depending on temperature and local biogenic hydrocarbon emissions. We also examine the dependence of resulting ozone-related health damages on the size of the exposed population. The investigation is relevant to the increasingly widely used "cap and trade" approach to NOx regulation, which presumes that shifts of emissions over time and space, holding the total fixed over the course of the summer O3 season, will have minimal effect on the environmental outcome. By contrast, we show that a shift of a unit of NOx emissions from one place or time to another could result in large changes in the health effects due to ozone formation and exposure. We indicate how the type of modeling carried out here might be used to attach externality-correcting prices to emissions. Charging emitters fees that are commensurate with the damage caused by their NOx emissions would create an incentive for emitters to reduce emissions at times and in locations where they cause the largest damage.

  14. Managing Environmental Stress: An Evaluation of Environmental Management of the Long Point Sandy Barrier, Lake Erie, Canada.

    PubMed

    Kreutzwiser; Gabriel

    2000-01-01

    This paper assesses the extent to which key geomorphic components, processes, and stresses have been reflected in the management of a coastal sandy barrier environment. The management policies and practices of selected agencies responsible for Long Point, a World Biosphere Reserve along Lake Erie, Canada, were evaluated for consistency with these principles of environmental management for sandy barriers: maintaining natural stresses essential to sandy barrier development and maintenance; protecting sediment sources, transfers, and storage; recognizing spatial variability and cyclicity of natural stresses, such as barrier overwash events; and accepting and planning for long-term evolutionary changes in the sandy barrier environment. Generally, management policies and practices have not respected the dynamic and sensitive environment of Long Point because of limited mandates of the agencies involved, inconsistent policies, and failure to apply or enforce existing policies. This is particularly evident with local municipalities and less so for the Canadian Wildlife Service, the federal agency responsible for managing National Wildlife Areas at the point. In the developed areas of Long Point, landward sediment transfers and sediment storage in dunes have been impacted by cottage development, shore protection, and maintenance of roads and parking lots. Additionally, agencies responsible for managing Long Point have no jurisdiction over sediment sources as far as 95 km away. Evolutionary change of sandy barriers poses the greatest challenge to environmental managers.

  15. The detection of carbon dioxide leaks using quasi-tomographic laser absorption spectroscopy measurements in variable wind

    DOE PAGES

    Levine, Zachary H.; Pintar, Adam L.; Dobler, Jeremy T.; ...

    2016-04-13

    Laser absorption spectroscopy (LAS) has been used over the last several decades for the measurement of trace gases in the atmosphere. For over a decade, LAS measurements from multiple sources and tens of retroreflectors have been combined with sparse-sample tomography methods to estimate the 2-D distribution of trace gas concentrations and underlying fluxes from point-like sources. In this work, we consider the ability of such a system to detect and estimate the position and rate of a single point leak which may arise as a failure mode for carbon dioxide storage. The leak is assumed to be at a constant rate, giving rise to a plume with a concentration and distribution that depend on the wind velocity. Lastly, we demonstrate the ability of our approach to detect a leak using numerical simulation and also present a preliminary measurement.

  16. VizieR Online Data Catalog: ANS UV Catalogue of Point Sources (Wesselius+ 1982)

    NASA Astrophysics Data System (ADS)

    Wesselius, P. R.; van Duinen, R. J.; de Jonge, A. R. W.; Aalders, J. W. G.; Luinge, W.; Wildeman, K. J.

    2001-08-01

    This catalog is a result of the observations made with the Astronomical Netherlands Satellite (ANS) which operated between October 1974 and April 1976. The ANS satellite observed in five UV channels centered around 150, 180, 220, 250 and 330 nm. The photometric bands are:

        Band designation          15N     15W     18      22      25      33
        Central wavelength (nm)   154.5   154.9   179.9   220.0   249.3   329.4
        Bandwidth (nm)            5.0     14.9    14.9    20.0    15.0    10.1

    The reported magnitudes were obtained from mean count rates converted to fluxes using the ANS absolute calibration of Wesselius et al. (1980A&A....85..221W). In addition to the ultraviolet magnitudes, the catalog contains positions taken from the satellite pointing, spectral types, and UBV data from other sources as well as comments on duplicity, variability, and miscellaneous notes concerning individual objects. (1 data file).

  17. Feedback data sources that inform physician self-assessment.

    PubMed

    Lockyer, Jocelyn; Armson, Heather; Chesluk, Benjamin; Dornan, Timothy; Holmboe, Eric; Loney, Elaine; Mann, Karen; Sargeant, Joan

    2011-01-01

    Self-assessment is a process of interpreting data about one's performance and comparing it to explicit or implicit standards. This study examined the external data sources physicians used to monitor themselves. Focus groups were conducted with physicians who participated in three practice improvement activities: a multisource feedback program; a program providing patient and chart audit data; and practice-based learning groups. We used grounded theory strategies to understand the external sources that stimulated self-assessment and how they worked. Data from seven focus groups (49 physicians) were analyzed. Physicians used information from structured programs, other educational activities, professional colleagues, and patients. Data were of varying quality, often from non-formal sources with implicit (not explicit) standards. Mandatory programs elicited variable responses, whereas data and activities the physicians selected themselves were more likely to be accepted. Physicians used the information to create a reference point against which they could weigh their performance, using it variably depending on their personal interpretation of its accuracy, application, and utility. Physicians use and interpret data and standards of varying quality to inform self-assessment. Physicians may benefit from regular and routine feedback and guidance on how to seek out data for self-assessment.

  18. Injecting Artificial Memory Errors Into a Running Computer Program

    NASA Technical Reports Server (NTRS)

    Bornstein, Benjamin J.; Granat, Robert A.; Wagstaff, Kiri L.

    2008-01-01

    Single-event upsets (SEUs), or bitflips, are computer memory errors caused by radiation. BITFLIPS (Basic Instrumentation Tool for Fault Localized Injection of Probabilistic SEUs) is a computer program that deliberately injects SEUs into another computer program, while the latter is running, for the purpose of evaluating the fault tolerance of that program. BITFLIPS was written as a plug-in extension of the open-source Valgrind debugging and profiling software. BITFLIPS can inject SEUs into any program that can be run on the Linux operating system, without needing to modify the program's source code. Further, if access to the original program source code is available, BITFLIPS offers fine-grained control over exactly when and which areas of memory (as specified via program variables) will be subjected to SEUs. The rate of injection of SEUs is controlled by specifying either a fault probability or a fault rate based on memory size and radiation exposure time, in units of SEUs per byte per second. BITFLIPS can also log each SEU that it injects and, if program source code is available, report the magnitude of the effect of the SEU on a floating-point value or other program variable.
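The probabilistic-injection idea can be sketched in a few lines. This is a hypothetical illustration of SEU injection into floating-point values, not the BITFLIPS/Valgrind implementation; the function names are invented for the example.

```python
import random
import struct

def flip_random_bit(value, rng):
    # Reinterpret the float as its 64-bit IEEE-754 pattern and flip one bit.
    (bits,) = struct.unpack("<Q", struct.pack("<d", value))
    bits ^= 1 << rng.randrange(64)
    return struct.unpack("<d", struct.pack("<Q", bits))[0]

def inject_seus(values, fault_probability, seed=0):
    """Corrupt each element with the given per-element probability, logging
    each injected upset (index, original value, corrupted value)."""
    rng = random.Random(seed)
    corrupted, log = [], []
    for i, v in enumerate(values):
        if rng.random() < fault_probability:
            new = flip_random_bit(v, rng)
            log.append((i, v, new))
            corrupted.append(new)
        else:
            corrupted.append(v)
    return corrupted, log
```

A fault rate in SEUs per byte per second would be converted to such a per-element probability from the memory size and exposure time before sampling.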

  19. Greenhouse Gas Sensing Using Small Unmanned Aerial Systems - Field Experiment Results and Future Directions

    NASA Astrophysics Data System (ADS)

    Aubrey, A. D.; Christensen, L. E.; Brockers, R.; Thompson, D. R.

    2014-12-01

    Requirements for greenhouse gas point source detection and quantification often require high spatial resolution on the order of meters. These applications, which help close the gap in emissions estimate uncertainties, also demand sensing with high sensitivity and in a fashion that accounts for spatiotemporal variability on the order of seconds to minutes. Low-cost vertical takeoff and landing (VTOL) small unmanned aerial systems (sUAS) provide a means to detect and identify the location of point source gas emissions while offering ease of deployment and high maneuverability. Our current fielded gas sensing sUAS platforms are able to provide instantaneous in situ concentration measurements at locations within line of sight of the operator. Recent results from field experiments demonstrating methane detection and plume characterization will be discussed here, including performance assessment conducted via a controlled release experiment in 2013. The logical extension of sUAS gas concentration measurement is quantification of flux rate. We will discuss the preliminary strategy for quantitative flux determination, including intrinsic challenges and heritage from airborne science campaigns, associated with this point source flux quantification. This system approach forms the basis for intelligent autonomous quantitative characterization of gas plumes, which holds great value for applications in commercial, regulatory, and safety environments.
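One common strategy for point-source flux quantification from in situ concentration measurements is a downwind mass balance. The sketch below is a simplified, hypothetical illustration of that idea (uniform wind, a crosswind plane gridded into equal cells), not the authors' flux-determination strategy.

```python
def mass_balance_flux(enhancements_g_m3, wind_speed_m_s, cell_area_m2):
    """Mass flux (g/s) through a crosswind plane downwind of a point source:
    sum the concentration enhancements over the sampled plume cross-section,
    then multiply by the mean wind speed and the area of each grid cell."""
    return sum(enhancements_g_m3) * wind_speed_m_s * cell_area_m2
```

For example, 50 cells each enhanced by 0.002 g/m^3, a 3 m/s wind, and 4 m^2 cells imply a source flux of 1.2 g/s under these assumptions.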

  20. 3D Seismic Imaging using Marchenko Methods

    NASA Astrophysics Data System (ADS)

    Lomas, A.; Curtis, A.

    2017-12-01

    Marchenko methods are novel, data driven techniques that allow seismic wavefields from sources and receivers on the Earth's surface to be redatumed to construct wavefields with sources in the subsurface - including complex multiply-reflected waves, and without the need for a complex reference model. In turn, this allows subsurface images to be constructed at any such subsurface redatuming points (image or virtual receiver points). Such images are then free of artefacts from multiply-scattered waves that usually contaminate migrated seismic images. Marchenko algorithms require as input the same information as standard migration methods: the full reflection response from sources and receivers at the Earth's surface, and an estimate of the first arriving wave between the chosen image point and the surface. The latter can be calculated using a smooth velocity model estimated using standard methods. The algorithm iteratively calculates a signal that focuses at the image point to create a virtual source at that point, and this can be used to retrieve the signal between the virtual source and the surface. A feature of these methods is that the retrieved signals are naturally decomposed into up- and down-going components. That is, we obtain both the signal that initially propagated upwards from the virtual source and arrived at the surface, separated from the signal that initially propagated downwards. Figure (a) shows a 3D subsurface model with a variable density but a constant velocity (3000m/s). Along the surface of this model (z=0) in both the x and y directions are co-located sources and receivers at 20-meter intervals. The redatumed signal in figure (b) has been calculated using Marchenko methods from a virtual source (1200m, 500m and 400m) to the surface. For comparison the true solution is given in figure (c), and shows a good match when compared to figure (b). 
While these redatuming and imaging methods are still in their infancy, having first been developed (in 2D) in 2012, we have extended them to 3D media and wavefields. We show that while wavefield effects may be more complex in 3D, Marchenko methods remain valid, and 3D images that are free of multiple-related artefacts are a realistic possibility.

  1. Study on Variability and Spectral Properties of Blazar 3C 273 with Long-term Multi-band Optical Monitoring from 2006 to 2015

    NASA Astrophysics Data System (ADS)

    Zeng, Wei; Zhao, Qing-Jiang; Dai, Ben-Zhong; Jiang, Ze-Jun; Geng, Xiong-Fei; Yang, Shen-Bang; Liu, Zhen; Wang, Dong-Dong; Feng, Zhang-Jing; Zhang, Li

    2018-02-01

    We present long-term optical multi-band photometric monitoring of the blazar 3C 273, from 2006 May 19 to 2015 March 31, with high temporal resolution in the BVRI bands. The source was in a steady state and showed very small variability, with fractional variability amplitudes of F_var = 0.457 ± 0.014%, 0.391 ± 0.012%, 0.264 ± 0.043% and 0.460 ± 0.014% in the B, V, R and I bands, respectively. The intra-night point-to-point fractional variability (F_pp) in each band is below 1.0%, and the F_pp variation amplitude increases from the B band to the I band. We find variability with a timescale of 5.8 ± 2.9 minutes in the I band on 2009 March 11. This fast variability requires a comoving magnetic field strength in the jet above 18 G for a Doppler factor δ_D ∼ 10. Using the discrete correlation function (DCF), the B- and I-band light curves are examined for correlation over the whole campaign. A correlation of low significance (∼99.73% confidence), with the I-band variations lagging those in the B band, is observed. The spectral behavior in the different variability episodes is studied. "Bluer-when-brighter" spectral behavior is present over the whole campaign, while there is an opposite tendency when F_V > 30.2 mJy. The weakness of the correlation between the B and I bands and the spectral analysis indicate that the optical radiation consists of two variable components.
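The fractional variability amplitude quoted above is conventionally the excess variance of the light curve, beyond measurement noise, normalized by the mean flux (e.g., Vaughan et al. 2003). A minimal sketch of that standard estimator:

```python
import math

def fractional_variability(flux, flux_err):
    """F_var = sqrt((S^2 - <sigma_err^2>) / <x>^2), where S^2 is the sample
    variance of the light curve and <sigma_err^2> is the mean squared
    measurement error; returns 0 when noise accounts for all the variance."""
    n = len(flux)
    mean = sum(flux) / n
    s2 = sum((x - mean) ** 2 for x in flux) / (n - 1)
    mean_err2 = sum(e * e for e in flux_err) / n
    excess = s2 - mean_err2
    return math.sqrt(excess) / mean if excess > 0 else 0.0
```

A flat light curve with finite error bars therefore yields F_var = 0, while genuine variability yields a positive fraction of the mean flux.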

  2. Vagal-dependent nonlinear variability in the respiratory pattern of anesthetized, spontaneously breathing rats

    PubMed Central

    Dhingra, R. R.; Jacono, F. J.; Fishman, M.; Loparo, K. A.; Rybak, I. A.

    2011-01-01

    Physiological rhythms, including respiration, exhibit endogenous variability associated with health, and deviations from this are associated with disease. Specific changes in the linear and nonlinear sources of breathing variability have not been investigated. In this study, we used information theory-based techniques, combined with surrogate data testing, to quantify and characterize the vagal-dependent nonlinear pattern variability in urethane-anesthetized, spontaneously breathing adult rats. Surrogate data sets preserved the amplitude distribution and linear correlations of the original data set, but nonlinear correlation structure in the data was removed. Differences in mutual information and sample entropy between original and surrogate data sets indicated the presence of deterministic nonlinear or stochastic non-Gaussian variability. With vagi intact (n = 11), the respiratory cycle exhibited significant nonlinear behavior in templates of points separated by time delays ranging from one sample to one cycle length. After vagotomy (n = 6), even though nonlinear variability was reduced significantly, nonlinear properties were still evident at various time delays. Nonlinear deterministic variability did not change further after subsequent bilateral microinjection of MK-801, an N-methyl-d-aspartate receptor antagonist, in the Kölliker-Fuse nuclei. Reversing the sequence (n = 5), blocking N-methyl-d-aspartate receptors bilaterally in the dorsolateral pons significantly decreased nonlinear variability in the respiratory pattern, even with the vagi intact, and subsequent vagotomy did not change nonlinear variability. Thus both vagal and dorsolateral pontine influences contribute to nonlinear respiratory pattern variability. Furthermore, breathing dynamics of the intact system are mutually dependent on vagal and pontine sources of nonlinear complexity. 
Understanding the structure and modulation of variability provides insight into disease effects on respiratory patterning. PMID:21527661
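Sample entropy, one of the measures used in this analysis, quantifies pattern irregularity as the negative log of the conditional probability that template matches of length m persist at length m+1. A minimal brute-force sketch for short series (tolerance r and template length m are illustrative defaults, not the study's settings):

```python
import math

def sample_entropy(x, m=2, r=0.2):
    """SampEn = -ln(A/B): B counts template pairs of length m whose Chebyshev
    distance is <= r; A counts the same pairs still matching at length m+1."""
    n = len(x)
    a = b = 0
    for i in range(n - m):
        for j in range(i + 1, n - m):
            if max(abs(x[i + k] - x[j + k]) for k in range(m)) <= r:
                b += 1
                if abs(x[i + m] - x[j + m]) <= r:
                    a += 1
    return math.inf if a == 0 or b == 0 else -math.log(a / b)
```

A perfectly regular (periodic) series scores 0, while an irregular series scores higher; in the surrogate-testing framework above, a drop in such measures after surrogation signals nonlinear structure.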

  3. Temporal variability in domestic point source discharges and their associated impact on receiving waters.

    PubMed

    Richards, Samia; Withers, Paul J A; Paterson, Eric; McRoberts, Colin W; Stutter, Marc

    2016-11-15

    Discharges from widely distributed small point sources of pollutants, such as septic tanks, contribute to the microbial and nutrient loading of streams and can pose risks to human health and stream ecology, especially during periods of ecological sensitivity. Here we present the first comprehensive data on the compositional variability of septic tank effluents (STE) as a potential source of water pollution during different seasons, and the associated links to their influence on stream waters. To determine STE parameters and nutrient variations, the biological and physicochemical properties of effluents sampled quarterly from 12 septic tank systems were investigated, with concurrent analyses of upstream and downstream receiving waters. The study revealed that effluents from the warmer, drier months of spring and summer were similar in composition, as were those from the colder, wetter months of autumn and winter. However, spring/summer effluents differed significantly (P<0.05) from autumn/winter effluents in concentrations of biological oxygen demand (BOD), arsenic, barium (Ba), cobalt, chromium, manganese, strontium (Sr), titanium, tungsten (W) and zinc (Zn). With the exception of BOD, Ba and Sr, which were greater in summer and spring, the concentrations of these parameters were greater in winter. Receiving stream waters also showed significant seasonal variation (P≤0.05) in alkalinity, BOD, dissolved organic carbon, sulphate, sulphur, lithium, W, Zn and Escherichia coli abundance. There was a clear significant influence of STE on downstream waters relative to upstream of the source (P<0.05) for total suspended solids, total particulate P and N, ammonium-N, coliforms and E. coli. This study found seasonal variation in STE, identified effluent discharges as a factor affecting adjacent stream quality, and calls for appropriate measures to reduce or redirect STE discharges away from water courses. Copyright © 2016 Elsevier B.V. All rights reserved.

  4. A Deep XMM-Newton Survey of M33: Point-source Catalog, Source Detection, and Characterization of Overlapping Fields

    NASA Astrophysics Data System (ADS)

    Williams, Benjamin F.; Wold, Brian; Haberl, Frank; Garofali, Kristen; Blair, William P.; Gaetz, Terrance J.; Kuntz, K. D.; Long, Knox S.; Pannuti, Thomas G.; Pietsch, Wolfgang; Plucinsky, Paul P.; Winkler, P. Frank

    2015-05-01

    We have obtained a deep eight-field XMM-Newton mosaic of M33 covering the galaxy out to the D25 isophote and beyond, to a limiting 0.2-4.5 keV unabsorbed flux of 5 × 10^-16 erg cm^-2 s^-1 (L > 4 × 10^34 erg s^-1 at the distance of M33). These data allow complete coverage of the galaxy with high sensitivity to soft sources such as diffuse hot gas and supernova remnants (SNRs). Here, we describe the methods we used to identify and characterize 1296 point sources in the eight fields. We compare our resulting source catalog to the literature, note variable sources, construct hardness ratios, classify soft sources, analyze the source density profile, and measure the X-ray luminosity function (XLF). As a result of the large effective area of XMM-Newton below 1 keV, the survey contains many new soft X-ray sources. The radial source density profile and XLF for the sources suggest that only ∼15% of the 391 bright sources with L > 3.6 × 10^35 erg s^-1 are likely to be associated with M33, and more than a third of these are known SNRs. The log(N)-log(S) distribution, when corrected for background contamination, is a relatively flat power law with a differential index of 1.5, which suggests that many of the other M33 sources may be high-mass X-ray binaries. Finally, we note the discovery of an interesting new transient X-ray source, which we are unable to classify.
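Hardness ratios of the kind constructed here are typically defined from counts in a soft band S and a hard band H. The sketch below is a generic illustration (the band choices and classification cut are invented, not the paper's definitions):

```python
def hardness_ratio(soft_counts, hard_counts):
    """HR = (H - S) / (H + S): ranges from -1 (all soft) to +1 (all hard)."""
    total = soft_counts + hard_counts
    return (hard_counts - soft_counts) / total if total else 0.0

def classify(soft_counts, hard_counts, soft_cut=-0.4):
    # Flag a source as "soft" (e.g., SNR or hot-gas candidate) below the cut.
    return "soft" if hardness_ratio(soft_counts, hard_counts) < soft_cut else "hard"
```

In practice X-ray surveys use several such ratios over multiple band pairs, with Bayesian treatments for low-count sources; this sketch shows only the basic arithmetic.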

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Butt, Y M; Romero, G E; Torres, D F

    We suggest that ultraluminous X-ray sources (ULXs) and some of the variable low-latitude EGRET gamma-ray sources may be two different manifestations of the same underlying phenomenon: high-mass microquasars with relativistic jets forming a small angle with the line of sight (i.e., microblazars). Microblazars with jets formed by relatively cool plasma (Lorentz factors for the leptons up to a few hundred) naturally lead to ULXs. If the jet contains very energetic particles (high-energy cutoff above Lorentz factors of several thousand), the result is a relatively strong gamma-ray source. As pointed out by Kaufman Bernadó, Romero & Mirabel (2002), a gamma-ray microblazar will always have an X-ray counterpart (although it might be relatively weak), whereas X-ray microblazars might have no gamma-ray counterparts.

  6. NuSTAR and Chandra Insight into the Nature of the 3-40 keV Nuclear Emission in NGC 253

    NASA Technical Reports Server (NTRS)

    Lehmer, Bret D.; Wik, Daniel R.; Hornschemeier, Ann E.; Ptak, Andrew; Antoniou, V.; Argo, M. K.; Bechtol, K.; Boggs, S.; Christensen, F. E.; Craig, W. W.; et al.

    2013-01-01

    We present results from three nearly simultaneous Nuclear Spectroscopic Telescope Array (NuSTAR) and Chandra monitoring observations between 2012 September 2 and 2012 November 16 of the local star-forming galaxy NGC 253. The 3-40 keV intensity of the inner ∼20 arcsec (∼400 pc) nuclear region, as measured by NuSTAR, varied by a factor of ∼2 across the three monitoring observations. The Chandra data reveal that the nuclear region contains three bright X-ray sources, including a luminous (L_2-10 keV ∼ a few × 10^39 erg/s) point source located ∼1 arcsec from the dynamical center of the galaxy (within the 3σ positional uncertainty of the dynamical center); this source drives the overall variability of the nuclear region at energies above ∼3 keV. We make use of the variability to measure the spectra of this single hard X-ray source when it was in bright states. The spectra are well described by an absorbed (N_H ≈ 1.6 × 10^23 cm^-2) broken power-law model with spectral slopes and break energies that are typical of ultraluminous X-ray sources (ULXs), but not active galactic nuclei (AGNs). A previous Chandra observation in 2003 showed a hard X-ray point source of similar luminosity to the 2012 source that was also near the dynamical center (≈0.4 arcsec away); however, this source was offset from the 2012 source position by ∼1 arcsec. We show that the probability of the 2003 and 2012 hard X-ray sources being unrelated is greater than 99.99% based on the Chandra spatial localizations. Interestingly, the Chandra spectrum of the 2003 source (3-8 keV) is shallower in slope than that of the 2012 hard X-ray source. 
Its proximity to the dynamical center and harder Chandra spectrum indicate that the 2003 source is a better AGN candidate than any of the sources detected in our 2012 campaign; however, we were unable to rule out a ULX nature for this source. Future NuSTAR and Chandra monitoring would be well equipped to break the degeneracy between the AGN and ULX nature of the 2003 source, if again caught in a high state.
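The broken power-law continuum used in such spectral fits can be sketched (ignoring the absorption term) as a photon spectrum that falls with one index below the break energy and another above it, joined continuously. The parameter values below are illustrative only, not the fitted values:

```python
def broken_power_law(energy_kev, norm, gamma1, gamma2, e_break_kev):
    """Photon flux density: norm * E^-gamma1 below the break; above it, the
    normalization is rescaled so the two segments join continuously at E_break."""
    if energy_kev <= e_break_kev:
        return norm * energy_kev ** (-gamma1)
    return norm * e_break_kev ** (gamma2 - gamma1) * energy_kev ** (-gamma2)
```

ULX-like spectra in this picture show a downturn (gamma2 > gamma1) at a break of several keV, which is one of the features distinguishing them from typical AGN power laws.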

  7. One Year of Monthly N and O Isotope Measurements in Nitrate from 18 Streamwater Monitoring Stations Within the Predominantly Pastoral Upper Manawatu Catchment, New Zealand

    NASA Astrophysics Data System (ADS)

    Baisden, W. T.; Douence, C.

    2010-12-01

    New Zealand's intensive pastoral agricultural systems have a significant impact on water quality due to nitrogen loading in rivers. A research programme has been designed to develop indicators of the sources and denitrification losses of nitrate in streamwater. This work describes the results of one year of monthly measurements at ~18 monitoring locations in the 1260 square km upper Manawatu River catchment. The catchment was chosen for study because it is among the most pastoral catchments in New Zealand, with little non-pastoral agriculture and limited forest area outside of the Tararua mountain range on the west side of the catchment. The use of N and O isotope ratios in nitrate has considerable potential to elucidate the sources and fate of nitrate with greater precision than in most other nations due to the lack of nitrate in atmospheric deposition and the lack of nitrates used as fertilizer. We measured N and O isotope ratios in nitrate plus nitrite using the cadmium-azide chemical denitrification method, and refer to the results as nitrate for brevity due to low nitrite concentrations. When examined as annual averages at each monitoring site, we found the lowest N and O isotope ratios in our only site draining native forest. All agricultural monitoring sites sit approximately on a 1:1 line, enriched in N-15 and O-18 by 2-6 per mil relative to the native forest subcatchment. The three main effluent point sources in the catchment demonstrated unexpected variability in isotope ratios. Two modern sewage treatment ponds had N and O isotope ratios close to those found in agricultural catchments, while a closed meat freezing factory effluent pond had isotope ratios strongly enriched in N-15 and O-18. 
The lack of summer low flows during the monitoring period, combined with the variability in isotope ratios from point sources, appeared to be responsible for our inability to clearly detect the effect of point sources in the isotope data from stations upstream and downstream of the point-source inputs. Month-to-month variation in some catchments sat near the 1:1 line expected for denitrification as the primary driver of variability in isotope ratios, but the data from many stations, including the river's main stem, were more complex. Overall, we are hopeful about the potential for the development of isotope indicators as planned. Specifically, our results tentatively support using the O isotope composition of soil water (a function of elevation and irrigation) and the N isotope composition of soil N (a function of agricultural intensity) to identify nitrate sources. While diffusion processes appear to suppress the isotope effect associated with denitrification, it may be observable and consistent in smaller and more uniform subcatchments. These smaller subcatchments will therefore become an increasing focus of our study. If successful, the indicators we intend to develop have the potential to work within a nitrogen cap-and-trade scheme for the catchment, providing an important efficiency tool to enable agricultural intensification in areas of effective N removal while targeting areas of poor nitrogen removal for decreased agricultural intensity or alternate land uses.

  8. The infrared counterpart of the eclipsing X-ray binary HO253 + 193

    NASA Technical Reports Server (NTRS)

    Zuckerman, B.; Becklin, E. E.; Mclean, I. S.; Patterson, Joseph

    1992-01-01

    We report the identification of the infrared counterpart of the pulsating X-ray source HO253 + 193. It is a highly reddened star varying in K light with a period near 3 hr, but an apparent even-odd effect in the light curve implies that the true period is 6.06 hr. Together with the recent report of X-ray eclipses at the latter period, this establishes the close binary nature of the source. Infrared minimum occurs at X-ray minimum, certifying that the infrared variability arises from the tidal distortion of the lobe-filling secondary. The absence of a point source at radio wavelengths, plus the distance derived from the infrared data, suggests that the binary system is accidentally located behind the dense core of the molecular cloud Lynds 1457. The eclipses and pulsations in the X-ray light curve, coupled with the hard X-ray spectrum and low luminosity, demonstrate that HO253 + 193 contains an accreting magnetic white dwarf, and hence belongs to the 'DQ Herculis' class of cataclysmic variables.

  9. Application of square-root filtering for spacecraft attitude control

    NASA Technical Reports Server (NTRS)

    Sorensen, J. A.; Schmidt, S. F.; Goka, T.

    1978-01-01

    Suitable digital algorithms are developed and tested for providing on-board precision attitude estimation and pointing control for potential use in the Landsat-D spacecraft. These algorithms provide pointing accuracy of better than 0.01 deg. To obtain the necessary precision with efficient software, a six-state-variable square-root Kalman filter combines two star-tracker measurements to update attitude estimates obtained from processing three gyro outputs. The validity of the estimation and control algorithms is established, and the sensitivity of their performance to various error sources and software parameters is investigated by detailed digital simulation. Spacecraft computer memory, cycle time, and accuracy requirements are estimated.
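The gyro-plus-star-tracker structure can be illustrated with a one-state scalar Kalman filter. The flight algorithm is a six-state square-root formulation; this sketch keeps only the predict/update logic, with invented noise parameters:

```python
def kalman_step(angle_est, p, gyro_rate, dt, q, star_meas=None, r=None):
    """One predict/update cycle of a scalar attitude filter: propagate the
    angle estimate with the gyro rate, then (optionally) correct it with a
    star-tracker angle measurement of variance r."""
    # Predict: integrate the gyro, grow the covariance by process noise q.
    angle_est += gyro_rate * dt
    p += q
    # Update: blend in the star-tracker measurement with Kalman gain k.
    if star_meas is not None:
        k = p / (p + r)
        angle_est += k * (star_meas - angle_est)
        p *= (1 - k)
    return angle_est, p
```

Square-root formulations propagate a factor of the covariance instead of p itself, which keeps the filter numerically well conditioned in limited-precision flight software; the measurement-blending logic is the same.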

  10. Highlighting Uncertainty and Recommendations for Improvement of Black Carbon Biomass Fuel-Based Emission Inventories in the Indo-Gangetic Plain Region.

    PubMed

    Soneja, Sutyajeet I; Tielsch, James M; Khatry, Subarna K; Curriero, Frank C; Breysse, Patrick N

    2016-03-01

    Black carbon (BC) is a major contributor to hydrological-cycle change and glacial retreat within the Indo-Gangetic Plain (IGP) and the surrounding region. However, significant variability exists among estimates of regional BC concentration. Existing inventories within the IGP suffer from limited representation of rural sources, reliance on idealized point-source estimates (e.g., utilization of emission factors or fuel-use estimates for cooking along with demographic information), and difficulty in distinguishing sources. Inventory development utilizes two approaches, termed top-down and bottom-up, which rely on various sources including transport models, emission factors, and remote sensing applications. Large discrepancies exist for BC source attribution throughout the IGP depending on the approach utilized. Cooking with biomass fuels, a major contributor to BC production, has great source-apportionment variability. Areas of cookstove and biomass-fuel research recognized as requiring attention to improve emission-inventory estimates include emission factors, particulate matter speciation, and better quantification of regional/economic sectors. However, limited attention has been given towards understanding ambient small-scale spatial variation of BC between cooking and non-cooking periods in low-resource environments. Understanding the indoor-to-outdoor relationship of BC emissions due to cooking at a local level is a top priority for improving emission inventories, as many health and climate applications rely upon accurate emission inventories.
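A bottom-up inventory of the kind discussed combines activity data (fuel burned) with per-sector emission factors. The sketch below is a generic illustration with invented sector names and numbers, not any of the inventories the paper reviews:

```python
def bottom_up_bc_emissions(sectors):
    """Total BC emitted (g) as the sum over sectors of fuel burned (kg)
    times that sector's BC emission factor (g BC per kg fuel)."""
    return sum(fuel_kg * ef_g_per_kg for _, fuel_kg, ef_g_per_kg in sectors)

# Hypothetical activity data: (sector, fuel burned in kg, EF in g BC/kg fuel).
example_sectors = [
    ("residential cooking (wood)", 5.0e6, 1.1),
    ("open agricultural burning", 2.0e6, 0.7),
]
```

The paper's point is that both inputs are uncertain: rural fuel-use totals are poorly sampled and emission factors vary strongly with stove and fuel type, so errors in either multiply directly into the inventory.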

  11. Infrared studies of galactic center x-ray sources

    NASA Astrophysics Data System (ADS)

    DeWitt, Curtis

    In this dissertation I use a variety of approaches to discover the nature of a subset of the nearly 10,000 X-ray point sources in the 2° x 0.8° region around the Galactic Center. I produced a JHKs source catalog of the 170 x 170 region around Sgr A*, an area containing 4339 of these X-ray sources, with the ISPI camera on the CTIO 4-m telescope. I cross-correlated the Chandra and ISPI catalogs to find potential near-infrared (NIR) counterparts to the X-ray sources. The extreme NIR source crowding in the field means that it is not possible to establish the authenticity of the matches with astrometry and photometry alone. I found 2137 IR/X-ray astrometrically matched sources; statistically I calculated that my catalog contains 289 +/- 13 true matches to soft X-ray sources and 154 +/- 39 matches to hard X-ray sources. However, the fraction of matches to hard sources that are spurious is 90%, compared to 40% for soft-source matches, making the hard-source NIR matches particularly challenging for spectroscopic follow-up. I statistically investigated the parameter space of matched sources and identified a set of 98 NIR matches to hard X-ray sources with reddenings consistent with the GC distance, which have a 45% probability of being true counterparts. I created two additional photometric catalogs of the GC region to investigate the variability of X-ray counterparts over a time baseline of several years. I found 48 variable NIR sources matched to X-ray sources, with 2 spectroscopically confirmed to be true counterparts (one in previous literature and one in this study). I took spectra of 46 of my best candidates for counterparts to X-ray sources toward the GC, and spectroscopically confirmed 4 sources as authentic physical counterparts on the basis of emission lines in the H- and K-band spectra. 
These sources include a Be high-mass X-ray binary located 16 pc in projection from Sgr A*; a hard X-ray symbiotic binary located 22 pc in projection from Sgr A*; an O-type supergiant at a distance of 3.7 kpc; and an O star at the Galactic Center distance. I also identified 3 foreground X-ray source counterparts within a distance of 1 kpc which do not show obvious emission features in their spectra. However, on the basis of the low surface density of unreddened sources along the line of sight to the Galactic Center and our previous statistical analysis (DeWitt et al., 2010), these can be securely identified as the true counterparts to their coincident X-ray point sources. Lastly, I used the results of my matching simulations to infer the presence of 7 +/- 2 true counterparts within a set of late-type giants that I observed without detectable emission features. I conclude from this work that the probable excess of red giant X-ray counterparts without emission lines needs to be confirmed both with larger samples of spectroscopically surveyed counterparts and with more advanced statistical simulations of the match authenticity. Also, the nature of the compact object in two of my counterpart discoveries, the Be HMXB and the symbiotic binary, can be strongly constrained with X-ray spectral fitting. Lastly, I conclude that spectroscopic surveys for new X-ray source counterparts in the GC may be able to increase their efficiency by specifically targeting photometric variables and very close astrometric matches of IR/X-ray sources.

  12. THE CELESTIAL REFERENCE FRAME AT 24 AND 43 GHz. II. IMAGING

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Charlot, P.; Boboltz, D. A.; Fey, A. L.

    2010-05-15

    We have measured the submilliarcsecond structure of 274 extragalactic sources at 24 and 43 GHz in order to assess their astrometric suitability for use in a high-frequency celestial reference frame (CRF). Ten sessions of observations with the Very Long Baseline Array have been conducted over the course of ~5 years, with a total of 1339 images produced for the 274 sources. There are several quantities that can be used to characterize the impact of intrinsic source structure on astrometric observations, including the source flux density, the flux density variability, the source structure index, the source compactness, and the compactness variability. A detailed analysis of these imaging quantities shows that (1) our selection of compact sources from 8.4 GHz catalogs yielded sources with flux densities, averaged over the sessions in which each source was observed, of about 1 Jy at both 24 and 43 GHz, (2) on average the source flux densities at 24 GHz varied by 20%-25% relative to their mean values, with variations in the session-to-session flux density scale being less than 10%, (3) sources were found to be more compact with less intrinsic structure at higher frequencies, and (4) variations of the core radio emission relative to the total flux density of the source are less than 8% on average at 24 GHz. We conclude that the reduction in the effects due to source structure gained by observing at higher frequencies will result in an improved CRF and a pool of high-quality fiducial reference points for use in spacecraft navigation over the next decade.
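Two of the imaging quantities named above, source compactness and its session-to-session variability, can be sketched as a flux ratio and its scatter. The definitions below are simplified illustrations of the idea, not the paper's exact formulae:

```python
def compactness(core_flux_jy, total_flux_jy):
    """Fraction of the total flux density contained in the unresolved core;
    values near 1 indicate a nearly point-like, astrometrically clean source."""
    return core_flux_jy / total_flux_jy

def compactness_variability(session_values):
    """RMS scatter of the per-session compactness about its mean."""
    n = len(session_values)
    mean = sum(session_values) / n
    return (sum((v - mean) ** 2 for v in session_values) / n) ** 0.5
```

Under these definitions, finding (4) corresponds to the core-to-total ratio varying by less than 8% of the total flux across sessions at 24 GHz.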

  13. Toward a probabilistic acoustic emission source location algorithm: A Bayesian approach

    NASA Astrophysics Data System (ADS)

    Schumacher, Thomas; Straub, Daniel; Higgins, Christopher

    2012-09-01

    Acoustic emissions (AE) are stress waves initiated by sudden strain releases within a solid body. These can be caused by internal mechanisms such as crack opening or propagation, crushing, or rubbing of crack surfaces. One application for the AE technique in the field of Structural Engineering is Structural Health Monitoring (SHM). With piezo-electric sensors mounted to the surface of the structure, stress waves can be detected, recorded, and stored for later analysis. An important step in quantitative AE analysis is the estimation of the stress wave source locations. Commonly, source location results are presented in a rather deterministic manner as spatial and temporal points, excluding information about uncertainties and errors. Due to variability in the material properties and uncertainty in the mathematical model, measures of uncertainty are needed beyond best-fit point solutions for source locations. This paper introduces a novel holistic framework for the development of a probabilistic source location algorithm. Bayesian analysis methods with Markov Chain Monte Carlo (MCMC) simulation are employed where all source location parameters are described with posterior probability density functions (PDFs). The proposed methodology is applied to an example employing data collected from a realistic section of a reinforced concrete bridge column. The selected approach is general and has the advantage that it can be extended and refined efficiently. Results are discussed and future steps to improve the algorithm are suggested.
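The core of such a probabilistic source-location scheme is a likelihood over arrival times plus an MCMC sampler. The sketch below uses a random-walk Metropolis sampler in normalized units (unit wave speed, an invented sensor layout and noise level); it illustrates the Bayesian idea, not the authors' algorithm:

```python
import math
import random

def log_likelihood(params, sensors, t_obs, v=1.0, sigma=0.01):
    """Gaussian log-likelihood of observed arrival times given a source at
    (x, y) emitting at time t0: predicted t_i = t0 + distance_i / v."""
    x, y, t0 = params
    ll = 0.0
    for (sx, sy), t in zip(sensors, t_obs):
        pred = t0 + math.hypot(x - sx, y - sy) / v
        ll -= (t - pred) ** 2 / (2 * sigma ** 2)
    return ll

def metropolis(sensors, t_obs, n_iter=20000, step=0.05, seed=1):
    """Random-walk Metropolis over (x, y, t0); returns the full chain, whose
    post-burn-in samples approximate the posterior PDF of the source location."""
    rng = random.Random(seed)
    cur = [0.5, 0.5, 0.0]
    cur_ll = log_likelihood(cur, sensors, t_obs)
    chain = []
    for _ in range(n_iter):
        prop = [c + rng.gauss(0, step) for c in cur]
        prop_ll = log_likelihood(prop, sensors, t_obs)
        # Accept uphill moves always, downhill moves with probability exp(Δll).
        if prop_ll > cur_ll or rng.random() < math.exp(prop_ll - cur_ll):
            cur, cur_ll = prop, prop_ll
        chain.append(cur)
    return chain
```

Reporting the spread of the post-burn-in samples, rather than a single best-fit point, is exactly the uncertainty information the paper argues deterministic source-location output omits.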

  14. Discovery of a Nonblazar Gamma-Ray Transient Source Near the Galactic Plane: GRO J1838-04

    NASA Technical Reports Server (NTRS)

    Tavani, M.; Oliversen, Ronald (Technical Monitor)

    2001-01-01

    We report the discovery of a remarkable gamma-ray transient source near the Galactic plane, GRO J1838-04. This source was serendipitously discovered by EGRET in 1995 June with a peak intensity of ~(4 +/- 1) x 10^-6 photons cm^-2 s^-1 (for photon energies larger than 100 MeV) and a 5.9-sigma significance. At that time, GRO J1838-04 was the second brightest gamma-ray source in the sky. A subsequent EGRET pointing in late 1995 September detected the source at a flux smaller than its peak value by a factor of ~7. We determine that no radio-loud, spectrally flat blazar is within the error box of GRO J1838-04. We discuss the origin of the gamma-ray transient source and show that interpretations in terms of active galactic nuclei or isolated pulsars are highly problematic. GRO J1838-04 provides strong evidence for the existence of a new class of variable gamma-ray sources.

  15. Demonstration of Technologies for Remote and in Situ Sensing of Atmospheric Methane Abundances - a Controlled Release Experiment

    NASA Astrophysics Data System (ADS)

    Aubrey, A. D.; Thorpe, A. K.; Christensen, L. E.; Dinardo, S.; Frankenberg, C.; Rahn, T. A.; Dubey, M.

    2013-12-01

    It is critical to constrain both natural and anthropogenic sources of methane to better predict its impact on global climate change. Key technologies for this assessment include those that can detect methane point sources and concentrated diffuse sources over large spatial scales. Airborne spectrometers can potentially fill this gap for large-scale remote sensing of methane, while in situ sensors, both ground-based and mounted on aerial platforms, can monitor and quantify at small to medium spatial scales. The Jet Propulsion Laboratory (JPL) and collaborators recently conducted a field test near Casper, WY, at the Rocky Mountain Oilfield Test Center (RMOTC). These tests focused on demonstrating the performance of remote and in situ sensors for quantification of point-source methane. Three controlled release points were set up at RMOTC, and over the course of six experiment days the point source flux rates were varied from 50 to 2400 liters per minute (LPM). During these releases, in situ sensors measured real-time methane concentrations from field towers downwind of the release points and from a small Unmanned Aerial System (sUAS), which characterized the spatiotemporal variability of the plume structure. Concurrent with these methane point source controlled releases, airborne sensor overflights were conducted using three aircraft. The NASA Carbon in Arctic Reservoirs Vulnerability Experiment (CARVE) participated with a payload consisting of a Fourier Transform Spectrometer (FTS) and an in situ methane sensor. Two imaging spectrometers provided assessment of optical and thermal infrared detection of methane plumes. The AVIRIS-next generation (AVIRIS-ng) sensor has been demonstrated for detection of atmospheric methane in the shortwave infrared region, specifically using the absorption features at ~2.3 μm. 
Detection of methane in the thermal infrared region was evaluated by flying the Hyperspectral Thermal Emission Spectrometer (HyTES), whose retrievals interrogate spectral features in the 7.5 to 8.5 μm region. Here we discuss preliminary results from the JPL activities during the RMOTC controlled release experiment, including the capabilities of airborne sensors for total-column atmospheric methane detection and comparisons with results from ground measurements and dispersion models. Potential application areas for these remote sensing technologies include assessment of anthropogenic and natural methane sources over wide spatial scales that represent significant unconstrained factors in the global methane budget.

  16. Statistical approaches for the determination of cut points in anti-drug antibody bioassays.

    PubMed

    Schaarschmidt, Frank; Hofmann, Matthias; Jaki, Thomas; Grün, Bettina; Hothorn, Ludwig A

    2015-03-01

    Cut points in immunogenicity assays are used to classify future specimens as anti-drug antibody (ADA) positive or negative. To determine a cut point during pre-study validation, drug-naive specimens are often analyzed on multiple microtiter plates, taking into account sources of future variability such as runs, days, analysts, gender, drug spiking, and the biological variability of the un-spiked specimens themselves. Five phenomena may complicate the statistical cut point estimation: i) drug-naive specimens may already contain ADA-positive specimens or yield signals that erroneously appear ADA-positive, ii) mean differences between plates may remain after normalization of observations by negative control means, iii) experimental designs may contain several factors in a crossed or hierarchical structure, iv) low sample sizes in such complex designs lead to low power for pre-tests on distribution, outliers, and variance structure, and v) the choice between the normal and log-normal distribution has a serious impact on the cut point. We discuss statistical approaches to account for these complex data: i) mixture models, which can be used to analyze sets of specimens containing an unknown, possibly large proportion of ADA-positive specimens, ii) random effects models, followed by the estimation of prediction intervals, which provide cut points while accounting for several factors, and iii) diagnostic plots, which allow post hoc assessment of model assumptions. All methods discussed are available in the corresponding R add-on package mixADA. Copyright © 2015 Elsevier B.V. All rights reserved.
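As a minimal sketch of one of the approaches above, the snippet below computes a parametric screening cut point as a one-sided 95% prediction bound under a log-normal assumption. It deliberately ignores the plate, run, and analyst structure that the random effects models handle, and the data are simulated, not from an actual assay.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical normalized assay signals from drug-naive specimens
# (simulated; illustrative only).
signals = rng.lognormal(mean=0.0, sigma=0.25, size=60)

log_s = np.log(signals)          # work on the log scale
n = len(log_s)
m, sd = log_s.mean(), log_s.std(ddof=1)

# One-sided 95% prediction bound: a future ADA-negative specimen should fall
# below this bound with 95% probability, accounting for estimation uncertainty.
t = stats.t.ppf(0.95, df=n - 1)
cut_log = m + t * sd * np.sqrt(1 + 1 / n)
cut_point = np.exp(cut_log)      # back-transform to the original scale
print(cut_point)
```

A plain 95th-percentile cut point (m + 1.645·sd) would be slightly lower; the prediction-bound form widens with small n, which matters for the low-sample-size designs the abstract mentions.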

  17. Mineral Element Contents in Commercially Valuable Fish Species in Spain

    PubMed Central

    Peña-Rivas, Luis; Ortega, Eduardo; López-Martínez, Concepción; Olea-Serrano, Fátima; Lorenzo, Maria Luisa

    2014-01-01

    The aim of this study was to measure selected metal concentrations in Trachurus trachurus, Trachurus picturatus, and Trachurus mediterraneus, which are widely consumed in Spain. Principal component analysis suggested that Cr was the variable mainly responsible for identifying T. trachurus, As and Sn for T. mediterraneus, and the remaining variables for T. picturatus. This well-defined discrimination between fish species based on mineral element content allows us to distinguish them by their metal profiles. Based on the samples collected, and recognizing the inferential limitation of the sample size of this study, the metal concentrations found are below the proposed limit values for human consumption. However, it should be taken into consideration that there are other dietary sources of these metals. In conclusion, metal contents in the fish species analyzed are acceptable for human consumption from both a nutritional and a toxicity point of view. PMID:24895678

  18. Environmental risk perception, environmental concern and propensity to participate in organic farming programmes.

    PubMed

    Toma, Luiza; Mathijs, Erik

    2007-04-01

    This paper aims to identify the factors underlying farmers' propensity to participate in organic farming programmes in a Romanian rural region that confronts non-point source pollution. For this, we employ structural equation modelling with latent variables using a specific data set collected through an agri-environmental farm survey in 2001. The model includes one 'behavioural intention' latent variable ('propensity to participate in organic farming programmes') and five 'attitude' and 'socio-economic' latent variables ('socio-demographic characteristics', 'economic characteristics', 'agri-environmental information access', 'environmental risk perception' and 'general environmental concern'). The results indicate that, overall, the model has an adequate fit to the data. All loadings are statistically significant, supporting the theoretical basis for assignment of indicators for each latent variable. The significance tests for the structural model parameters show 'environmental risk perception' as the strongest determinant of farmers' propensity to participate in organic farming programmes.

  19. Independent evaluation of point source fossil fuel CO2 emissions to better than 10%

    PubMed Central

    Turnbull, Jocelyn Christine; Keller, Elizabeth D.; Norris, Margaret W.; Wiltshire, Rachael M.

    2016-01-01

    Independent estimates of fossil fuel CO2 (CO2ff) emissions are key to ensuring that emission reductions and regulations are effective and provide needed transparency and trust. Point source emissions are a key target because a small number of power plants represent a large portion of total global emissions. Currently, emission rates are known only from self-reported data. Atmospheric observations have the potential to meet the need for independent evaluation, but useful results from this method have been elusive, due to challenges in distinguishing CO2ff emissions from the large and varying CO2 background and in relating atmospheric observations to emission flux rates with high accuracy. Here we use time-integrated observations of the radiocarbon content of CO2 (14CO2) to quantify the recently added CO2ff mole fraction at surface sites surrounding a point source. We demonstrate that both fast-growing plant material (grass) and CO2 collected by absorption into sodium hydroxide solution provide excellent time-integrated records of atmospheric 14CO2. These time-integrated samples allow us to evaluate emissions over a period of days to weeks with only a modest number of measurements. Applying the same time integration in an atmospheric transport model eliminates the need to resolve highly variable short-term turbulence. Together these techniques allow us to independently evaluate point source CO2ff emission rates from atmospheric observations with uncertainties of better than 10%. This uncertainty represents an improvement by a factor of 2 over current bottom-up inventory estimates and previous atmospheric observation estimates and allows reliable independent evaluation of emissions. PMID:27573818
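The underlying mass balance can be written down compactly. Fossil carbon contains no 14C (Δ14C = -1000 permil), so the recently added fossil-fuel CO2 follows from the depletion of the observed Δ14C relative to background. A common form of this relation is sketched below with illustrative numbers, not values from the paper.

```python
# Mass-balance estimate of the recently added fossil-fuel CO2 mole fraction
# from Delta-14C observations. Fossil carbon is 14C-free (Delta = -1000 permil),
# so depletion relative to background reveals the fossil addition.

def co2ff(co2_obs_ppm, delta_obs, delta_bg):
    """CO2ff ~= CO2_obs * (Dbg - Dobs) / (Dbg + 1000), deltas in permil."""
    return co2_obs_ppm * (delta_bg - delta_obs) / (delta_bg + 1000.0)

# Illustrative example: 410 ppm observed CO2, Delta-14C depleted from
# +20 permil (background) to +5 permil near the point source.
ff = co2ff(410.0, 5.0, 20.0)
print(round(ff, 2), "ppm")  # ~6.03 ppm of recently added fossil-fuel CO2
```

Because the deltas enter as a small difference, the permil-level precision of time-integrated 14CO2 samples translates directly into the CO2ff uncertainty, which is why integrated sampling is attractive here.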

  20. Independent evaluation of point source fossil fuel CO2 emissions to better than 10%.

    PubMed

    Turnbull, Jocelyn Christine; Keller, Elizabeth D; Norris, Margaret W; Wiltshire, Rachael M

    2016-09-13

    Independent estimates of fossil fuel CO2 (CO2ff) emissions are key to ensuring that emission reductions and regulations are effective and provide needed transparency and trust. Point source emissions are a key target because a small number of power plants represent a large portion of total global emissions. Currently, emission rates are known only from self-reported data. Atmospheric observations have the potential to meet the need for independent evaluation, but useful results from this method have been elusive, due to challenges in distinguishing CO2ff emissions from the large and varying CO2 background and in relating atmospheric observations to emission flux rates with high accuracy. Here we use time-integrated observations of the radiocarbon content of CO2 ((14)CO2) to quantify the recently added CO2ff mole fraction at surface sites surrounding a point source. We demonstrate that both fast-growing plant material (grass) and CO2 collected by absorption into sodium hydroxide solution provide excellent time-integrated records of atmospheric (14)CO2. These time-integrated samples allow us to evaluate emissions over a period of days to weeks with only a modest number of measurements. Applying the same time integration in an atmospheric transport model eliminates the need to resolve highly variable short-term turbulence. Together these techniques allow us to independently evaluate point source CO2ff emission rates from atmospheric observations with uncertainties of better than 10%. This uncertainty represents an improvement by a factor of 2 over current bottom-up inventory estimates and previous atmospheric observation estimates and allows reliable independent evaluation of emissions.

  1. Impact from Magnitude-Rupture Length Uncertainty on Seismic Hazard and Risk

    NASA Astrophysics Data System (ADS)

    Apel, E. V.; Nyst, M.; Kane, D. L.

    2015-12-01

    In probabilistic seismic hazard and risk assessments seismic sources are typically divided into two groups: fault sources (to model known faults) and background sources (to model unknown faults). In areas like the Central and Eastern United States and Hawaii the hazard and risk are driven primarily by background sources. Background sources can be modeled as areas, points or pseudo-faults. When background sources are modeled as pseudo-faults, magnitude-length or magnitude-area scaling relationships are required to construct these pseudo-faults. However, the uncertainty associated with these relationships is often ignored or discarded in hazard and risk models, particularly when fault sources are the dominant contributor. Conversely, in areas modeled only with background sources these uncertainties are much more significant. In this study we test the impact of using various relationships and the resulting epistemic uncertainties on the seismic hazard and risk in the Central and Eastern United States and Hawaii. It is common to use only one magnitude-length relationship when calculating hazard. However, Stirling et al. (2013) showed that for a given suite of magnitude-rupture length relationships the variability can be quite large. The 2014 US National Seismic Hazard Maps (Petersen et al., 2014) used one magnitude-rupture length relationship (Somerville et al., 2001) in the Central and Eastern United States, and did not consider variability in the seismogenic rupture plane width. Here we use a suite of metrics to compare the USGS approach with these variable uncertainty models to assess 1) the impact on hazard and risk and 2) the epistemic uncertainty associated with the choice of relationship. In areas where the seismic hazard is dominated by larger crustal faults (e.g. New Madrid) the choice of magnitude-rupture length relationship has little impact on the hazard or risk. 
However, away from these regions the choice of relationship is more significant and may approach the size of the uncertainty associated with the ground motion prediction equation suite.
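The sensitivity to the choice of scaling relationship can be illustrated with the generic form log10(L) = a + b·M. The two coefficient pairs below are hypothetical stand-ins for alternative published relationships, used only to show how the choice propagates into the pseudo-fault length for a given magnitude.

```python
# Generic magnitude-rupture length scaling: log10(L_km) = a + b * M.
# The coefficient pairs are hypothetical stand-ins for alternative published
# relationships, chosen only to illustrate the epistemic spread.
relationships = {
    "relation_A": (-2.44, 0.59),
    "relation_B": (-3.22, 0.69),
}

def rupture_length_km(mag, a, b):
    """Pseudo-fault length (km) implied by one scaling relationship."""
    return 10.0 ** (a + b * mag)

mag = 7.0
lengths = {name: rupture_length_km(mag, a, b)
           for name, (a, b) in relationships.items()}
spread = max(lengths.values()) / min(lengths.values())
print(lengths, spread)  # a ~20% spread in length for the same magnitude
```

Because ground motions near a pseudo-fault depend on source-to-site distance, a spread of this size in rupture length feeds directly into hazard at sites near the background source, which is the effect the study quantifies.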

  2. Azimuthal Dependence of the Ground Motion Variability from Scenario Modeling of the 2014 Mw6.0 South Napa, California, Earthquake Using an Advanced Kinematic Source Model

    NASA Astrophysics Data System (ADS)

    Gallovič, F.

    2017-09-01

    Strong ground motion simulations require a physically plausible earthquake source model. Here, I present the application of such a kinematic model introduced originally by Ruiz et al. (Geophys J Int 186:226-244, 2011). The model is constructed to inherently provide synthetics with the desired omega-squared spectral decay in the full frequency range. The source is composed of randomly distributed overlapping subsources with a fractal number-size distribution. The positions of the subsources can be constrained by prior knowledge of major asperities (stemming, e.g., from slip inversions), or can be completely random. From the earthquake physics point of view, the model includes a positive correlation between slip and rise time, as found in dynamic source simulations. Rupture velocity and rise time follow the local S-wave velocity profile, so that the rupture slows down and rise times increase close to the surface, avoiding unrealistically strong ground motions. Rupture velocity can also have random variations, which result in an irregular rupture front while satisfying the causality principle. This advanced kinematic broadband source model is freely available and can be easily incorporated into any numerical wave propagation code, as the source is described by spatially distributed slip rate functions, not requiring any stochastic Green's functions. The source model has been previously validated against the observed data of the very shallow unilateral 2014 Mw6 South Napa, California, earthquake; the model reproduces the observed data well, including the near-fault directivity (Seism Res Lett 87:2-14, 2016). The performance of the source model is shown here on scenario simulations for the same event. In particular, synthetics are compared with existing ground motion prediction equations (GMPEs), emphasizing the azimuthal dependence of the between-event ground motion variability. 
I propose a simple model reproducing the azimuthal variations of the between-event ground motion variability, providing an insight into possible refinements of GMPEs' functional forms.
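The fractal number-size distribution of subsources can be sketched as follows. This is a schematic of the general composite-source idea, not Ruiz et al.'s released code; the fault dimensions, level count, and halving/quadrupling rule are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

# Fault plane dimensions (km); illustrative values only.
L, W = 30.0, 15.0
n_levels = 5

# Fractal number-size distribution: at each level the subsource radius halves
# and the count quadruples (N(R) ~ R^-2), a common choice in composite-source
# models. Positions here are fully random (no asperity constraint).
subsources = []
R = W / 2
count = 1
for _ in range(n_levels):
    for _ in range(count):
        x, y = rng.uniform(0, L), rng.uniform(0, W)
        subsources.append((x, y, R))
    R /= 2
    count *= 4

total = len(subsources)
print(total)  # 1 + 4 + 16 + 64 + 256 = 341 overlapping subsources
```

Summing the slip contributions of overlapping subsources with this size hierarchy is what yields the omega-squared spectral decay the abstract describes; constraining the larger subsources' positions recovers the asperity-informed variant.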

  3. Spatial and temporal variability of contaminants within estuarine sediments and native Olympia oysters: A contrast between a developed and an undeveloped estuary

    USGS Publications Warehouse

    Granek, Elise F.; Conn, Kathleen E.; Nilsen, Elena B.; Pillsbury, Lori; Strecker, Angela L.; Rumrill, Steve; Fish, William

    2016-01-01

    Chemical contaminants can be introduced into estuarine and marine ecosystems from a variety of sources, including wastewater, agriculture and forestry practices, point and non-point discharges, runoff from industrial, municipal, and urban lands, accidental spills, and atmospheric deposition. The diversity of potential sources contributes to the likelihood of contaminated marine waters and sediments and increases the probability of uptake by marine organisms. Despite widespread recognition of direct and indirect pathways for contaminant deposition and organismal exposure in coastal systems, spatial and temporal variability in contaminant composition, deposition, and uptake patterns are still poorly known. We investigated these patterns for a suite of persistent legacy contaminants, including polychlorinated biphenyls (PCBs) and polybrominated diphenyl ethers (PBDEs), and chemicals of emerging concern (CECs), including pharmaceuticals, within two Oregon coastal estuaries (Coos and Netarts Bays). In the more urbanized Coos Bay, native Olympia oyster (Ostrea lurida) tissue had approximately twice the number of PCB congeners at over seven times the total concentration, yet fewer PBDEs at one-tenth the concentration, as compared to the more rural Netarts Bay. Different pharmaceutical suites were detected during each sampling season. Variability in contaminant types and concentrations across seasons and between species and media (organisms versus sediment) indicates the limitation of using indicator species and/or sampling annually to determine contaminant loads at a site or for specific species. The results indicate the prevalence of legacy contaminants and CECs in relatively undeveloped coastal environments, highlighting the need to improve policy and management actions to reduce contaminant releases into estuarine and marine waters and to deal with legacy compounds that remain long after prohibition of use. 
Our results point to the need for better understanding of the ecological and human health risks of exposure to the diverse cocktail of pollutants and harmful compounds that will continue to leach from estuarine sediments over time.

  4. A test of a linear model of glaucomatous structure-function loss reveals sources of variability in retinal nerve fiber and visual field measurements.

    PubMed

    Hood, Donald C; Anderson, Susan C; Wall, Michael; Raza, Ali S; Kardon, Randy H

    2009-09-01

    Retinal nerve fiber (RNFL) thickness and visual field loss data from patients with glaucoma were analyzed in the context of a model, to better understand individual variation in structure versus function. Optical coherence tomography (OCT) RNFL thickness and standard automated perimetry (SAP) visual field loss were measured in the arcuate regions of one eye of 140 patients with glaucoma and 82 normal control subjects. An estimate of within-individual (measurement) error was obtained by repeat measures made on different days within a short period in 34 patients and 22 control subjects. A linear model, previously shown to describe the general characteristics of the structure-function data, was extended to predict the variability in the data. For normal control subjects, between-individual error (individual differences) accounted for 87% and 71% of the total variance in OCT and SAP measures, respectively. SAP within-individual error increased and then decreased with increased SAP loss, whereas OCT error remained constant. The linear model with variability (LMV) described much of the variability in the data. However, 12.5% of the patients' points fell outside the 95% boundary. An examination of these points revealed factors that can contribute to the overall variability in the data. These factors include epiretinal membranes, edema, individual variation in field-to-disc mapping, and the location of blood vessels and degree to which they are included by the RNFL algorithm. The model and the partitioning of within- versus between-individual variability helped elucidate the factors contributing to the considerable variability in the structure-versus-function data.

  5. Investigating the effects of point source and nonpoint source pollution on the water quality of the East River (Dongjiang) in South China

    USGS Publications Warehouse

    Wu, Yiping; Chen, Ji

    2013-01-01

    Understanding the physical processes of point source (PS) and nonpoint source (NPS) pollution is critical to evaluate river water quality and identify major pollutant sources in a watershed. In this study, we used the physically based hydrological/water quality model, Soil and Water Assessment Tool, to investigate the influence of PS and NPS pollution on the water quality of the East River (Dongjiang in Chinese) in southern China. Our results indicate that NPS pollution was the dominant contribution (>94%) to nutrient loads except for mineral phosphorus (50%). A comprehensive Water Quality Index (WQI) computed using eight key water quality variables demonstrates that water quality is better upstream than downstream despite the higher level of ammonium nitrogen found in upstream waters. Also, the temporal (seasonal) and spatial distributions of nutrient loads clearly indicate the critical time period (from the late dry season to the early wet season) and pollution source areas within the basin (middle and downstream agricultural lands), which resource managers can use to accomplish substantial reductions in NPS pollutant loadings. Overall, this study improves our understanding of the relationship between human activities and pollutant loads and further contributes to decision support for local watershed managers seeking to protect water quality in this region. In particular, the methods presented, such as integrating the WQI with watershed modeling and identifying the critical time period and pollution source areas, can be valuable for other researchers worldwide.
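A composite index like the WQI can be sketched as a weighted average of per-variable sub-indices. The variable list, sub-index scores, and equal weights below are placeholders for illustration; the paper's exact WQI formulation is not specified here.

```python
# Weighted-average Water Quality Index sketch: each variable is first mapped
# to a 0-100 sub-index (100 = best), then combined with weights summing to 1.
# Variables, scores, and weights are illustrative placeholders.

def wqi(sub_indices, weights):
    assert abs(sum(weights.values()) - 1.0) < 1e-9  # weights must sum to 1
    return sum(sub_indices[k] * weights[k] for k in sub_indices)

sub_indices = {"DO": 85, "NH4-N": 60, "TN": 70, "TP": 55,
               "BOD": 75, "COD": 80, "pH": 95, "turbidity": 65}
weights = {k: 1.0 / len(sub_indices) for k in sub_indices}  # equal weights

score = wqi(sub_indices, weights)
print(round(score, 1))  # a single comparable score per site and season
```

Computing this score at each monitoring station and season gives the upstream-versus-downstream comparison the abstract describes, with the sub-index mapping absorbing each variable's units and thresholds.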

  6. Predicting health care utilization in marginalized populations: Black, female, street-based sex workers.

    PubMed

    Varga, Leah M; Surratt, Hilary L

    2014-01-01

    Patterns of social and structural factors experienced by vulnerable populations may negatively affect willingness and ability to seek out health care services, and ultimately, their health. The outcome variable was utilization of health care services in the previous 12 months. Using Andersen's Behavioral Model for Vulnerable Populations, we examined self-reported data on utilization of health care services among a sample of 546 Black, street-based, female sex workers in Miami, Florida. To evaluate the impact of each domain of the model on predicting health care utilization, domains were included in the logistic regression analysis by blocks using the traditional variables first and then adding the vulnerable domain variables. The most consistent variables predicting health care utilization were having a regular source of care and self-rated health. The model that included only enabling variables was the most efficient model in predicting health care utilization. Any type of resource, link, or connection to or with an institution, or any consistent point of care, contributes significantly to health care utilization behaviors. A consistent and reliable source for health care may increase health care utilization and subsequently decrease health disparities among vulnerable and marginalized populations, as well as contribute to public health efforts that encourage preventive health. Copyright © 2014 Jacobs Institute of Women's Health. Published by Elsevier Inc. All rights reserved.

  7. Influence of the South-to-North Water Transfer and the Yangtze River Mitigation Projects on the water quality of Han River, China

    NASA Astrophysics Data System (ADS)

    Liu, W.; Kuo, Y. M.

    2016-12-01

    The Middle Route of China's South-to-North Water Transfer (MSNW) and the Yangtze-Han River Water Diversion (YHWD) Projects have been in operation since 2014 and may degrade water quality in the Han River. Eleven water sampling sites distributed across the middle and lower reaches of the Han River watershed were monitored monthly between July 2014 and December 2015. Factor analysis and cluster analysis were applied to identify the major pollution types and the main variables influencing water quality in the Han River. The factor analysis distinguished three main pollution types (agricultural nonpoint source, organic, and phosphorus point source pollution) affecting the water quality of the Han River. Cluster analysis classified the sampling sites into four groups and determined their pollution sources for both the dry and wet seasons. Sites located in the central city receive point source pollution in both seasons. Water quality in the downstream Han River (excluding the central city sites) was influenced by nonpoint source pollution from the Jianghan Plain. Variations in water quality are associated with hydrological conditions arising from the operation of the engineering projects and with seasonal variability, especially in the dry season. Good water quality (Class III) mainly occurred when the flow rate exceeded 800 m3/s in the dry season, whereas average flow rates below 583 m3/s degraded water quality to Class V at almost all sites. Raising the flow rate discharged from the MSNW and YHWD Projects into the Han River can avoid degrading water quality, especially under low-flow conditions, and may decrease the probability of algal bloom occurrence in the Han River. Increasing the flow rate in the main Han River from 400 m3/s to 700 m3/s markedly improves its water quality. Investigating the relationships between water quality and flow rate for both projects can provide water quality management strategies for various flow conditions.

  8. Model Predictive Control techniques with application to photovoltaic, DC Microgrid, and a multi-sourced hybrid energy system

    NASA Astrophysics Data System (ADS)

    Shadmand, Mohammad Bagher

    Renewable energy sources continue to gain popularity. However, two major limitations prevent widespread adoption: the availability and variability of the electricity generated, and the cost of the equipment. The focus of this dissertation is Model Predictive Control (MPC) for optimally sized photovoltaic (PV), DC microgrid, and multi-sourced hybrid energy systems. The main applications considered are: maximum power point tracking (MPPT) by MPC, droop predictive control of a DC microgrid, MPC of a grid-interactive inverter, and MPC of a capacitor-less VAR compensator based on a matrix converter (MC). This dissertation first investigates a multi-objective optimization technique for a hybrid distribution system. The variability of a high-penetration PV scenario is also studied when incorporated into the microgrid concept. Emerging PV technologies have enabled the creation of contoured and conformal PV surfaces; the effect of using non-planar PV modules on variability is also analyzed. The proposed predictive control for reaching the maximum power point of isolated and grid-tied PV systems speeds up the control loop, since it predicts the error before the switching signal is applied to the converter. Because of the low conversion efficiency of PV cells, the system should always operate at the maximum possible power point to be economical; the proposed MPPT technique can therefore capture more energy than conventional MPPT techniques from the same amount of installed solar panels. Because of the MPPT requirement, the output voltage of the converter may vary. Therefore, a droop control is needed to feed multiple arrays of photovoltaic systems to a DC bus in a microgrid community. The development of a droop control technique by means of predictive control is another contribution of this dissertation. 
Reactive power, denoted as Volt-Ampere Reactive (VAR), has several undesirable consequences for the AC power system network, such as reduced power transfer capability and increased transmission losses, if not controlled appropriately. Inductive loads operating with a lagging power factor consume VARs, so load compensation techniques employing local capacitor banks supply the VARs needed by the load. Capacitors are highly unreliable components due to their failure modes and inherent aging. Approximately 60% of failures in power electronic devices such as voltage-source-inverter-based static synchronous compensators (STATCOMs) are due to the use of aluminum electrolytic DC capacitors. Therefore, a capacitor-less VAR compensator is desired. This dissertation also investigates a capacitor-less STATCOM-style reactive power compensator that uses only inductors combined with a predictively controlled matrix converter.
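The predictive MPPT idea, choosing the next operating point by evaluating a model's power prediction over a small candidate set rather than reacting to a measured error, can be sketched as follows. The PV power curve and step size are toy assumptions, not the dissertation's converter model.

```python
# Finite-set predictive MPPT sketch: at each step, predict PV power for a few
# candidate operating voltages using an (idealized) panel model, then apply
# the candidate with the highest predicted power.

def pv_power(v):
    """Toy PV curve: current falls off with voltage; a single power peak."""
    i = max(0.0, 8.0 * (1 - (v / 40.0) ** 6))
    return v * i

def mpc_mppt_step(v_now, dv=0.5):
    # Candidate operating voltages one step up, down, or unchanged.
    candidates = [v_now - dv, v_now, v_now + dv]
    return max(candidates, key=pv_power)

v = 20.0
for _ in range(60):
    v = mpc_mppt_step(v)
print(round(v, 1))  # settles near the maximum power point of the toy curve
```

Because the decision uses the model's prediction before the switching signal is applied, the loop avoids the measure-then-correct lag of conventional perturb-and-observe tracking, which is the speed-up the abstract claims.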

  9. Coronae on stars

    NASA Technical Reports Server (NTRS)

    Haisch, B. M.

    1986-01-01

    Three lines of evidence are noted to point to a flare heating source for stellar coronae: a strong correlation between time-averaged flare energy release and coronal X-ray luminosity, the high temperature flare-like component of the spectral signature of coronal X-ray emission, and the observed short time scale variability that indicates continuous flare activity. It is presently suggested that flares may represent only the extreme high energy tail of a continuous distribution of coronal energy release events.

  10. Information entropy to measure the spatial and temporal complexity of solute transport in heterogeneous porous media

    NASA Astrophysics Data System (ADS)

    Li, Weiyao; Huang, Guanhua; Xiong, Yunwu

    2016-04-01

    The complexity of the spatial structure of porous media and the randomness of groundwater recharge and discharge (rainfall, runoff, etc.) make groundwater movement complex, and the physical and chemical interactions between groundwater and porous media make solute transport in the medium even more complicated. An appropriate method to describe this complexity is essential when studying solute transport and transformation in porous media. Information entropy can measure uncertainty and disorder; we therefore used information entropy theory to investigate the connection between information entropy and the complexity of solute transport in heterogeneous porous media. Based on Markov theory, a two-dimensional stochastic field of hydraulic conductivity (K) was generated by transition probability. Flow and solute transport models were established under four conditions (instantaneous point source, continuous point source, instantaneous line source, and continuous line source). The spatial and temporal complexity of the solute transport process was characterized and evaluated using spatial moments and information entropy. Results indicated that the entropy increased as the complexity of the solute transport process increased. For the point sources, the one-dimensional entropy of solute concentration first increased and then decreased along the X and Y directions. As time increased, the entropy peak value remained essentially unchanged, while the peak position migrated along the flow direction (X direction) and approximately coincided with the centroid position. With increasing time, the spatial variability and complexity of the solute concentration increase, resulting in increases in the second-order spatial moment and the two-dimensional entropy. The information entropy of the line sources was higher than that of the point sources, and the solute entropy obtained from continuous input was higher than that from instantaneous input. 
As the average lithofacies length increased, media continuity increased, the complexity of flow and solute transport weakened, and the corresponding information entropy decreased. Longitudinal macrodispersivity declined slightly at early times and then rose. The spatial and temporal distribution of the solute had significant impacts on the information entropy, and the information entropy could reflect changes in the solute distribution. Information entropy thus appears to be a useful tool for characterizing the spatial and temporal complexity of solute migration and provides a reference for future research.
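The core entropy calculation can be sketched directly: treat the normalized concentration field as a probability distribution and evaluate its Shannon entropy. The Gaussian plumes below are illustrative stand-ins for transport-model output, not the study's simulations.

```python
import numpy as np

# Shannon entropy of a normalized concentration field: the plume is treated
# as a probability distribution p = c / sum(c), and H = -sum(p * ln p).
# A compact plume gives low entropy; a spread-out plume gives high entropy.

def plume_entropy(c):
    p = c / c.sum()
    p = p[p > 0]            # drop zero cells: 0 * log(0) -> 0 by convention
    return -np.sum(p * np.log(p))

x = np.linspace(-50.0, 50.0, 501)

early = np.exp(-x**2 / (2 * 2.0**2))    # compact plume shortly after injection
late = np.exp(-x**2 / (2 * 10.0**2))    # dispersed plume at a later time

h_early, h_late = plume_entropy(early), plume_entropy(late)
print(h_early < h_late)  # dispersion increases the entropy
```

The same formula applied to a 2-D field gives the two-dimensional entropy the abstract tracks alongside the second-order spatial moment.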

  11. On the Profitability of Variable Speed Pump-Storage-Power in Frequency Restoration Reserve

    NASA Astrophysics Data System (ADS)

    Filipe, Jorge; Bessa, Ricardo; Moreira, Carlos; Silva, Bernardo

    2017-04-01

    The increased penetration of renewable energy sources (RES) into the European power system has introduced significant variability and uncertainty into generation profiles, raising the need for ancillary services as well as other tools such as demand response, improved generation forecasting techniques, and changes to the market design. While RES can replace the energy produced by traditional centralized generation, it cannot displace that generation's capacity to provide ancillary services. Centralized generation capacity must therefore be retained to perform this function, leading to over-capacity issues and underutilization of the assets. Large-scale reversible hydro power plants represent the majority of the storage solutions installed in the power system. This technology comes with high investment costs, hence the constant search for methods to increase and diversify sources of revenue. Traditional fixed-speed pumped-storage units typically operate in the day-ahead market to perform price arbitrage and, in some specific cases, provide downward replacement reserve (RR). Variable-speed pumped-storage units can participate not only in RR but also in frequency restoration reserve (FRR), given their ability to control the operating point in pumping mode. This work presents an extended analysis of a complete bidding strategy for pumped-storage power, highlighting the economic advantages of variable-speed pump units over fixed-speed ones.

  12. Boosted Regression Tree Models to Explain Watershed ...

    EPA Pesticide Factsheets

    Boosted regression tree (BRT) models were developed to quantify the nonlinear relationships between landscape variables and nutrient concentrations in a mesoscale mixed-land-cover watershed during base-flow conditions. Factors that affect instream biological components, based on the Index of Biotic Integrity (IBI), were also analyzed. Seasonal BRT models at two spatial scales (watershed and riparian buffered area [RBA]) for nitrite-nitrate (NO2-NO3), total Kjeldahl nitrogen, and total phosphorus (TP), and annual models for the IBI score, were developed. Two primary factors emerged as important predictor variables: location within the watershed (i.e., geographic position, stream order, and distance to a downstream confluence) and percentage of urban land cover (at both scales). Latitude and longitude interacted with other factors to explain the variability in summer NO2-NO3 concentrations and IBI scores. BRT results also suggested that location might be associated with indicators of sources (e.g., land cover), runoff potential (e.g., soil and topographic factors), and processes not easily represented by spatial data indicators. Runoff indicators (e.g., Hydrological Soil Group D and topographic wetness indices) explained a substantial portion of the variability in nutrient concentrations, as did point sources for TP in the summer months. The results from our BRT approach can help prioritize areas for nutrient management in mixed-use and heavily impacted watersheds.
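    A BRT of the kind described can be sketched with scikit-learn's gradient boosting. The predictors and the nonlinear response below are synthetic stand-ins for the paper's landscape variables and nutrient concentrations, not its data; the parameter choices are illustrative only.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
n = 500
# Hypothetical predictors standing in for the paper's landscape variables.
urban_pct = rng.uniform(0, 60, n)      # % urban land cover
stream_order = rng.integers(1, 6, n)   # location-within-watershed proxy
wetness_idx = rng.normal(8, 2, n)      # topographic wetness index
X = np.column_stack([urban_pct, stream_order, wetness_idx])
# Nonlinear synthetic response standing in for NO2-NO3 concentration:
# linear urban effect plus a threshold jump above 30% urban cover.
y = (0.05 * urban_pct + np.where(urban_pct > 30, 1.5, 0.0)
     + 0.2 * stream_order + rng.normal(0, 0.3, n))

brt = GradientBoostingRegressor(n_estimators=300, learning_rate=0.05,
                                max_depth=3, subsample=0.7, random_state=0)
brt.fit(X, y)
# Relative influence of each predictor, analogous to BRT "importance".
print(dict(zip(["urban_pct", "stream_order", "wetness_idx"],
               brt.feature_importances_.round(3))))
```

    In this synthetic setup the urban-cover variable dominates the importances, mirroring the paper's finding that urban land cover was a key predictor.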

  13. NOx emissions from large point sources: variability in ozone production, resulting health damages and economic costs

    NASA Astrophysics Data System (ADS)

    Mauzerall, Denise L.; Sultan, Babar; Kim, Namsoug; Bradford, David F.

    We present a proof-of-concept analysis of the measurement of the health damage of ozone (O3) produced from nitrogen oxides (NOx = NO + NO2) emitted by individual large point sources in the eastern United States. We use a regional atmospheric model of the eastern United States, the Comprehensive Air quality Model with Extensions (CAMx), to quantify the variable impact that a fixed quantity of NOx emitted from individual sources can have on the downwind concentration of surface O3, depending on temperature and local biogenic hydrocarbon emissions. We also examine the dependence of the resulting O3-related health damages on the size of the exposed population. The investigation is relevant to the increasingly widely used "cap and trade" approach to NOx regulation, which presumes that shifts of emissions over time and space, holding the total fixed over the course of the summer O3 season, will have minimal effect on the environmental outcome. By contrast, we show that a shift of a unit of NOx emissions from one place or time to another could result in large changes in the resulting health effects due to O3 formation and exposure. We indicate how the type of modeling carried out here might be used to attach externality-correcting prices to emissions. Charging emitters fees commensurate with the damage caused by their NOx emissions would create an incentive for emitters to reduce emissions at the times and in the locations where they cause the largest damage.

  14. On constraining pilot point calibration with regularization in PEST

    USGS Publications Warehouse

    Fienen, M.N.; Muffels, C.T.; Hunt, R.J.

    2009-01-01

    Ground water model calibration has made great advances in recent years, with practical tools such as PEST being instrumental in making the latest techniques available to practitioners. As models and calibration tools become more sophisticated, however, the power of these tools can be misapplied, resulting in poor parameter estimates and/or nonoptimally calibrated models that do not suit their intended purpose. Here, we focus on an increasingly common technique for calibrating highly parameterized numerical models: pilot-point parameterization with Tikhonov regularization. Pilot points are a popular method for spatially parameterizing complex hydrogeologic systems; however, the additional flexibility offered by pilot points can become problematic if not constrained by Tikhonov regularization. The objective of this work is to explain and illustrate the specific roles played by control variables in the PEST software for Tikhonov regularization applied to pilot points. A recent study encountered difficulties implementing this approach, but examination of that analysis provides insight into the underlying sources of potential misapplication, from which some guidelines for overcoming them are developed. © 2009 National Ground Water Association.
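    The constraining role of Tikhonov regularization can be illustrated on a generic linearized inverse problem. This is a minimal sketch of preferred-value regularization on synthetic data, not PEST's implementation; the problem dimensions and the homogeneous preferred value are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical linearized forward model: 40 head observations, 100
# pilot-point parameters -- badly underdetermined without regularization.
G = rng.normal(size=(40, 100))
k_true = 1.0 + 0.1 * np.sin(np.linspace(0, 2 * np.pi, 100))  # smooth field
d = G @ k_true + rng.normal(0, 0.01, 40)

def tikhonov_solve(G, d, alpha, k0):
    """Minimize ||G k - d||^2 + alpha^2 ||k - k0||^2 (preferred-value
    regularization): k = k0 + (G^T G + alpha^2 I)^-1 G^T (d - G k0)."""
    n = G.shape[1]
    return k0 + np.linalg.solve(G.T @ G + alpha**2 * np.eye(n),
                                G.T @ (d - G @ k0))

# Unregularized minimum-norm fit vs. a solution pulled toward a
# homogeneous preferred value (the "soft knowledge" constraint).
k_unreg = np.linalg.lstsq(G, d, rcond=None)[0]
k_reg = tikhonov_solve(G, d, alpha=1.0, k0=np.ones(100))
print(np.linalg.norm(k_unreg - k_true), np.linalg.norm(k_reg - k_true))
```

    Both solutions fit the data, but only the regularized one stays close to the true field, which is the sense in which Tikhonov regularization constrains pilot-point flexibility.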

  15. Resilience, rapid transitions and regime shifts: fingerprinting the responses of Lake Żabińskie (NE Poland) to climate variability and human disturbance since 1000 AD

    NASA Astrophysics Data System (ADS)

    Tylmann, Wojciech; Hernández-Almeida, Iván; Grosjean, Martin; Gómez Navarro, Juan José; Larocque-Tobler, Isabelle; Bonk, Alicja; Enters, Dirk; Ustrzycka, Alicja; Piotrowska, Natalia; Przybylak, Rajmund; Wacnik, Agnieszka; Witak, Małgorzata

    2016-04-01

    Rapid ecosystem transitions and adverse effects on ecosystem services in response to combined climate and human impacts are of major concern. Yet few quantitative observational data exist, particularly for ecosystems that have a long history of human intervention. Here, we combine quantitative summer and winter climate reconstructions, climate model simulations, and proxies for three major environmental pressures (land use, nutrients and erosion) to explore the system dynamics, resilience, and the role of disturbance regimes in varved eutrophic Lake Żabińskie since AD 1000. Comparison of regional and global climate simulations with the quantitative climate reconstructions indicates that the proxy data capture noticeable naturally forced climate variability, while internal variability appears to be the dominant source of climate variability in the model simulations during most of the last millennium. Using different multivariate analyses and change point detection techniques, we identify ecosystem changes through time and shifts between rather stable states and highly variable ones, as expressed by the proxies for land use, erosion and productivity in the lake. Prior to AD 1600, the lake ecosystem was characterized by high stability and resilience against considerable observed natural climate variability. In contrast, lake-ecosystem conditions started to fluctuate at high frequency across a broad range of states after AD 1600. The period AD 1748-1868 represents the phase with the strongest human disturbance of the ecosystem. Analyses of the frequency of change points in the multi-proxy dataset suggest that the last 400 years were highly variable and flickering, with increasing vulnerability of the ecosystem to the combined effects of climate variability and anthropogenic disturbances. This led to significant rapid ecosystem transformations.
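    A simple least-squares mean-shift detector illustrates the kind of change point analysis mentioned above. The study's actual techniques are not specified here, and the synthetic series below only loosely mimics a stable regime followed by a shifted, more variable one (as described for the pre-/post-AD-1600 behavior).

```python
import numpy as np

def best_split(x):
    """Locate a single change point by maximizing the reduction in
    residual sum of squares under a piecewise-constant (mean-shift) model."""
    n = len(x)
    total = np.sum((x - x.mean())**2)
    best_k, best_gain = None, 0.0
    for k in range(2, n - 2):
        left, right = x[:k], x[k:]
        cost = np.sum((left - left.mean())**2) + np.sum((right - right.mean())**2)
        gain = total - cost
        if gain > best_gain:
            best_k, best_gain = k, gain
    return best_k, best_gain

rng = np.random.default_rng(2)
# Hypothetical proxy series: a stable regime, then a shifted, noisier regime.
series = np.concatenate([rng.normal(0.0, 0.3, 300),
                         rng.normal(1.5, 0.8, 200)])
k, _ = best_split(series)
print(k)   # detected change point, near index 300
```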

  16. Expected scientific performance of the three spectrometers on the extreme ultraviolet explorer

    NASA Technical Reports Server (NTRS)

    Vallerga, J. V.; Jelinsky, P.; Vedder, P. W.; Malina, R. F.

    1990-01-01

    The expected in-orbit performance of the three spectrometers included on the Extreme Ultraviolet Explorer astronomical satellite is presented. Recent calibrations of the gratings, mirrors, and detectors using monochromatic and continuum EUV light sources allow calculation of the spectral resolution and throughput of the instrument. An effective area range of 0.2 to 2.8 sq cm is achieved over the wavelength range 70-600 Å, with a peak spectral resolution (FWHM) of 360, assuming a spacecraft pointing knowledge of 10 arc seconds (FWHM). For a 40,000 sec observation, the average 3 sigma sensitivity to a monochromatic line source is 0.003 photons/sq cm s. Simulated observations of known classes of EUV sources, such as hot white dwarfs and cataclysmic variables, are also presented.
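    As a back-of-envelope check on how such a sensitivity figure arises, one can assume a background-limited detection. The effective area chosen here (mid-range of the quoted 0.2-2.8 sq cm) and the background rate are illustrative placeholders, not numbers from the text.

```python
import numpy as np

def line_sensitivity_3sigma(area_cm2, t_exp_s, bkg_rate_cts_s):
    """Background-limited 3-sigma sensitivity to a monochromatic line:
    the source must supply 3x the rms fluctuation of the background
    counts accumulated over the exposure."""
    bkg_counts = bkg_rate_cts_s * t_exp_s
    min_counts = 3.0 * np.sqrt(bkg_counts)
    return min_counts / (area_cm2 * t_exp_s)   # photons / sq cm / s

# Assumed (illustrative) background rate; exposure from the text.
f = line_sensitivity_3sigma(area_cm2=1.0, t_exp_s=40_000,
                            bkg_rate_cts_s=0.05)
print(f"{f:.4f} photons / sq cm / s")
```

    With these assumed inputs the estimate lands close to the quoted 0.003 photons/sq cm s, showing the scaling rather than reproducing the instrument model.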

  17. [Airports and air quality: a critical synthesis of the literature].

    PubMed

    Cattani, Giorgio; Di Menno di Bucchianico, Alessandro; Gaeta, Alessandra; Romani, Daniela; Fontana, Luca; Iavicoli, Ivo

    2014-01-01

    This work reviewed the existing literature on airport-related activities that could worsen surrounding air quality; its aim is to underline the progress made in recent-year studies, the knowledge emerging from new approaches, the development of semi-empirical analytical methods, and the questions still needing to be clarified. To estimate pollution levels, their spatial and temporal variability, and the sources' relative contributions, integrated assessments using both fixed-point measurements and model outputs are needed. The general picture emerging from the studies was a non-negligible and highly spatially variable (within 2-3 km of the fence line) airport contribution, even if it is often not dominant compared with other concomitant pollution sources. Results were highly airport-specific. Traffic volumes, landscape and meteorology were the key variables that drove the impacts; results were thus hardly exportable to other contexts. Airport-related pollutant sources were found to be characterized by unusual emission patterns (particularly ultrafine particles, black carbon and nitrogen oxides during take-off); high time-resolution measurements make it possible to depict the rapidly changing effect of take-off on air quality, which could not be adequately observed otherwise. Few studies have successfully used high time-resolution data as statistical model inputs to estimate the aircraft take-off contribution to observed average levels. These findings should not be neglected when the exposure of people living near airports is to be assessed.

  18. Runoff characteristics and non-point source pollution analysis in the Taihu Lake Basin: a case study of the town of Xueyan, China.

    PubMed

    Zhu, Q D; Sun, J H; Hua, G F; Wang, J H; Wang, H

    2015-10-01

    Non-point source pollution is a significant environmental issue in small watersheds in China. To study the effects of rainfall on pollutants transported by runoff, rainfall was monitored in Xueyan town in the Taihu Lake Basin (TLB) for over 12 consecutive months. The concentrations of different forms of nitrogen (N) and phosphorus (P), and chemical oxygen demand, were monitored in runoff and river water across different land use types. The results indicated that pollutant loads were highly variable. Most N losses due to runoff were found around industrial areas (printing factories), while residential areas exhibited the lowest nitrogen losses through runoff. Nitrate nitrogen (NO3-N) and ammonia nitrogen (NH4-N) were the dominant forms of soluble N around printing factories and hotels, respectively. The levels of N in river water were stable prior to the generation of runoff from a rainfall event, after which they were positively correlated to rainfall intensity. In addition, three sites with different areas were selected for a case study to analyze trends in pollutant levels during two rainfall events, using the AnnAGNPS model. The modeled results generally agreed with the observed data, which suggests that AnnAGNPS can be used successfully for modeling runoff nutrient loading in this region. The conclusions of this study provide important information on controlling non-point source pollution in TLB.

  19. The SWIFT AGN and Cluster Survey. I. Number Counts of AGNs and Galaxy Clusters

    NASA Astrophysics Data System (ADS)

    Dai, Xinyu; Griffin, Rhiannon D.; Kochanek, Christopher S.; Nugent, Jenna M.; Bregman, Joel N.

    2015-05-01

    The Swift active galactic nucleus (AGN) and Cluster Survey (SACS) uses 125 deg^2 of Swift X-ray Telescope serendipitous fields with variable depths surrounding γ-ray bursts to provide a medium depth (4 × 10^-15 erg cm^-2 s^-1) and area survey filling the gap between deep, narrow Chandra/XMM-Newton surveys and wide, shallow ROSAT surveys. Here, we present a catalog of 22,563 point sources and 442 extended sources and examine the number counts of the AGN and galaxy cluster populations. SACS provides excellent constraints on the AGN number counts at the bright end with negligible uncertainties due to cosmic variance, and these constraints are consistent with previous measurements. We use Wide-field Infrared Survey Explorer mid-infrared (MIR) colors to classify the sources. For AGNs we can roughly separate the point sources into MIR-red and MIR-blue AGNs, finding roughly equal numbers of each type in the soft X-ray band (0.5-2 keV), but fewer MIR-blue sources in the hard X-ray band (2-8 keV). The cluster number counts, with 5% uncertainties from cosmic variance, are also consistent with previous surveys but span a much larger continuous flux range. Deep optical or IR follow-up observations of this cluster sample will significantly increase the number of higher-redshift (z > 0.5) X-ray-selected clusters.

  20. Exploring X-Ray Binary Populations in Compact Group Galaxies With Chandra

    NASA Technical Reports Server (NTRS)

    Tzanavaris, P.; Hornschemeier, A. E.; Gallagher, S. C.; Lenkic, L.; Desjardins, T. D.; Walker, L. M.; Johnson, K. E.; Mulchaey, J. S.

    2016-01-01

    We obtain total galaxy X-ray luminosities, LX, originating from individually detected point sources in a sample of 47 galaxies in 15 compact groups of galaxies (CGs). For the great majority of our galaxies, we find that the detected point sources most likely are local to their associated galaxy, and are thus extragalactic X-ray binaries (XRBs) or nuclear active galactic nuclei (AGNs). For spiral and irregular galaxies, we find that, after accounting for AGNs and nuclear sources, most CG galaxies are either within the ±1σ scatter of the Mineo et al. LX-star formation rate (SFR) correlation or have higher LX than predicted by this correlation for their SFR. We discuss how these "excesses" may be due to low metallicities and high interaction levels. For elliptical and S0 galaxies, after accounting for AGNs and nuclear sources, most CG galaxies are consistent with the Boroson et al. LX-stellar mass correlation for low-mass XRBs, with larger scatter, likely due to residual effects such as AGN activity or hot gas. Assuming non-nuclear sources are low- or high-mass XRBs, we use appropriate XRB luminosity functions to estimate the probability that stochastic effects can lead to such extreme LX values. We find that, although stochastic effects do not in general appear to be important, for some galaxies there is a significant probability that high LX values can be observed due to strong XRB variability.
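    The stochastic argument (that a few bright binaries drawn from a luminosity function can strongly inflate or deflate a galaxy's total LX) can be illustrated by Monte Carlo draws from a truncated power-law XLF. The slope, luminosity range, and source counts below are placeholders, not the Mineo et al. or Boroson et al. values.

```python
import numpy as np

rng = np.random.default_rng(3)

def sample_powerlaw(n, lmin, lmax, gamma, rng):
    """Draw n luminosities from dN/dL ∝ L^-gamma on [lmin, lmax]
    by inverse-transform sampling (valid for gamma != 1)."""
    u = rng.random(n)
    a = 1.0 - gamma
    return (lmin**a + u * (lmax**a - lmin**a))**(1.0 / a)

def total_lx(n_xrb):
    # Placeholder XLF: slope 1.6, L in units of 1e36 erg/s, cutoff 1e4.
    return sample_powerlaw(n_xrb, 1.0, 1e4, 1.6, rng).sum()

# Fractional scatter in total LX across many realizations, for a galaxy
# hosting few XRBs vs. one hosting many.
few  = np.array([total_lx(10)   for _ in range(2000)])
many = np.array([total_lx(1000) for _ in range(2000)])
print(few.std() / few.mean(), many.std() / many.mean())
```

    The fractional scatter is far larger for the 10-XRB case, which is the sense in which stochasticity can produce occasional extreme LX values in small populations.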

  1. Multivariate Probabilistic Analysis of an Hydrological Model

    NASA Astrophysics Data System (ADS)

    Franceschini, Samuela; Marani, Marco

    2010-05-01

    Model predictions based on rainfall measurements and hydrological model results are often limited by the systematic error of measuring instruments, by the intrinsic variability of the natural processes, and by the uncertainty of the mathematical representation. We propose a means to identify such sources of uncertainty and to quantify their effects based on point-estimate approaches, as a valid alternative to cumbersome Monte Carlo methods. We present uncertainty analyses of the hydrologic response to selected meteorological events in the mountain streamflow-generating portion of the Brenta basin at Bassano del Grappa, Italy. The Brenta river catchment has a relatively uniform morphology and quite a heterogeneous rainfall pattern. In the present work, we evaluate two sources of uncertainty: data uncertainty (due to data handling and analysis) and model uncertainty (related to the formulation of the model). We thus evaluate the effects of the measurement error of tipping-bucket rain gauges, the uncertainty in estimating spatially distributed rainfall through block kriging, and the uncertainty associated with estimated model parameters. To this end, we coupled a deterministic model based on the geomorphological theory of the hydrologic response with probabilistic methods. In particular, we compare the results of Monte Carlo simulations (MCS) to the results obtained, under the same conditions, using Li's point estimate method (LiM). The LiM is a probabilistic technique that approximates the continuous probability distribution function of the considered stochastic variables by means of discrete points and associated weights. This makes it possible to reproduce results satisfactorily with only a few evaluations of the model function. The comparison between the LiM and MCS results highlights the pros and cons of using an approximating method.
The LiM is less computationally demanding than MCS, but it has limited applicability, especially when the model response is highly nonlinear. Higher-order approximations can provide more accurate estimates, but they reduce the numerical advantage of the LiM. The results of the uncertainty analysis identify the main sources of uncertainty in the computation of river discharge. In this particular case, the spatial variability of rainfall and the uncertainty in the model parameters are shown to have the greatest impact on discharge evaluation. This, in turn, highlights the need to support any estimated hydrological response with probability information and risk analysis results in order to provide a robust, systematic framework for decision making.
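    The point-estimate idea can be illustrated with the classic Rosenblueth two-point scheme, a simpler relative of Li's method: each stochastic input is represented by two points at mu ± sigma, and the model is evaluated at every combination (2^n points for n inputs, equal weights for symmetric inputs). The hydrologic "model" below is a hypothetical nonlinear stand-in, not the geomorphological model of the study.

```python
import numpy as np

def model(rain, k):
    """Hypothetical nonlinear hydrologic response:
    discharge from rainfall depth and a runoff parameter."""
    return k * rain**1.5

# Input uncertainty: rainfall depth and parameter, assumed independent.
mu = np.array([50.0, 0.8])    # means of (rain, k)
sd = np.array([10.0, 0.1])    # standard deviations

# Two-point estimate: evaluate at mu +/- sd for each variable (4 points).
signs = np.array([[s1, s2] for s1 in (-1, 1) for s2 in (-1, 1)])
pts = mu + signs * sd
g = np.array([model(r, k) for r, k in pts])
pem_mean, pem_std = g.mean(), g.std()

# Monte Carlo reference (the cumbersome alternative).
rng = np.random.default_rng(4)
r = rng.normal(mu[0], sd[0], 100_000)
k = rng.normal(mu[1], sd[1], 100_000)
mc = model(r, k)
print(pem_mean, mc.mean())   # the two mean estimates agree closely
```

    Four model evaluations here reproduce the mean and standard deviation that Monte Carlo needs 100,000 evaluations for, which is the efficiency argument made above; the trade-off appears when the response is far more strongly nonlinear.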

  2. Analysis of the environmental behavior of farmers for non-point source pollution control and management in a water source protection area in China.

    PubMed

    Wang, Yandong; Yang, Jun; Liang, Jiping; Qiang, Yanfang; Fang, Shanqi; Gao, Minxue; Fan, Xiaoyu; Yang, Gaihe; Zhang, Baowen; Feng, Yongzhong

    2018-08-15

    The environmental behavior of farmers plays an important role in exploring the causes of non-point source pollution and in taking scientific control and management measures. Based on the theory of planned behavior (TPB), the present study investigated the environmental behavior of farmers in the Water Source Area of the Middle Route of the South-to-North Water Diversion Project in China. Results showed that the TPB explained farmers' environmental behavior (SMC=0.26) and intention (SMC=0.36) well. Furthermore, the farmers' attitude towards behavior (AB), subjective norm (SN), and perceived behavioral control (PBC) positively and significantly influenced their environmental intention, and their environmental intention in turn affected their behavior. SN proved to be the key factor indirectly influencing the farmers' environmental behavior, while PBC had no significant direct effect. Moreover, with environmental knowledge as a moderator and gender and age as control variables, a moderated mediation analysis of the TPB constructs was conducted. It demonstrated that gender had a significant controlling effect on environmental behavior; that is, males engaged in more environmentally friendly behaviors. Age showed a significant negative controlling effect on pro-environmental intention and the opposite effect on pro-environmental behavior. In addition, environmental knowledge negatively moderated the relationship between PBC and environmental intention: PBC had a greater impact on the environmental intention of farmers with poor environmental knowledge than on that of farmers with ample environmental knowledge. Altogether, the present study provides a theoretical basis for non-point source pollution control and management. Copyright © 2018 Elsevier B.V. All rights reserved.

  3. Water quality assessment and apportionment of pollution sources using APCS-MLR and PMF receptor modeling techniques in three major rivers of South Florida.

    PubMed

    Haji Gholizadeh, Mohammad; Melesse, Assefa M; Reddi, Lakshmi

    2016-10-01

    In this study, principal component analysis (PCA), factor analysis (FA), and the absolute principal component score-multiple linear regression (APCS-MLR) receptor modeling technique were used to assess the water quality and to identify and quantify the potential pollution sources affecting the water quality of three major rivers of South Florida. For this purpose, a 15-year (2000-2014) dataset of 12 water quality variables covering 16 monitoring stations and approximately 35,000 observations was used. The PCA/FA method identified five and four potential pollution sources in the wet and dry seasons, respectively, and the effective mechanisms, rules and causes were explained. The APCS-MLR apportioned their contributions to each water quality variable. Results showed that point source pollution discharges from anthropogenic factors, due to the discharge of agricultural waste and domestic and industrial wastewater, were the major sources of river water contamination. The studied variables were also categorized into three groups of nutrients (total Kjeldahl nitrogen, total phosphorus, total phosphate, and ammonia-N), water murkiness-related parameters (total suspended solids, turbidity, and chlorophyll-a), and salt ions (magnesium, chloride, and sodium), and the average contributions of the different potential pollution sources to these categories were considered separately. The data matrix was also subjected to the PMF receptor model using the EPA PMF-5.0 program, and the two-way model described was used for the PMF analyses. Comparison of the results of the PMF and APCS-MLR models showed some significant differences in the estimated contribution of each potential pollution source, especially in the wet season. Eventually, it was concluded that the APCS-MLR receptor modeling approach appears to be more physically plausible for the current study.
It is believed that the results of apportionment could be very useful to the local authorities for the control and management of pollution and better protection of important riverine water quality. Copyright © 2016 Elsevier B.V. All rights reserved.
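    The APCS-MLR chain (PCA on standardized concentrations, conversion to absolute principal component scores via an artificial zero-concentration sample, then regression of each variable on the scores) can be sketched as follows. The two-source synthetic data are hypothetical and unrelated to the South Florida dataset.

```python
import numpy as np

rng = np.random.default_rng(5)
# Synthetic data: two hypothetical sources (e.g., agricultural runoff and
# wastewater) mixing into 4 water-quality variables over 300 samples.
S = rng.lognormal(0, 0.5, size=(300, 2))            # source strengths
profiles = np.array([[1.0, 0.2, 0.0, 0.5],          # source 1 signature
                     [0.1, 1.0, 0.8, 0.0]])         # source 2 signature
X = S @ profiles + rng.normal(0, 0.05, (300, 4))

# 1) PCA on standardized concentrations.
mu, sd = X.mean(0), X.std(0)
Z = (X - mu) / sd
eigval, eigvec = np.linalg.eigh(np.cov(Z.T))
order = np.argsort(eigval)[::-1][:2]                # keep 2 components
scores = Z @ eigvec[:, order]

# 2) Absolute principal component scores: subtract the score of an
# artificial sample with zero concentration for every variable.
z0 = (0.0 - mu) / sd
apcs = scores - z0 @ eigvec[:, order]

# 3) Regress each variable on the APCS; slope times mean APCS gives the
# average contribution of each source, the intercept the unexplained part.
A = np.column_stack([np.ones(len(apcs)), apcs])
coef, *_ = np.linalg.lstsq(A, X, rcond=None)
contrib = coef[1:] * apcs.mean(0)[:, None]          # (source, variable)
print(np.round(contrib, 2))
```

    By construction of the least-squares fit, the intercept plus the summed source contributions reproduce each variable's mean concentration, which is how APCS-MLR apportions average contributions.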

  4. The Neural Underpinnings of Cognitive Flexibility and their Disruption in Psychotic Illness

    PubMed Central

    Waltz, James A.

    2016-01-01

    Schizophrenia has long been associated with a variety of cognitive deficits, including reduced cognitive flexibility. More recent findings, however, point to tremendous inter-individual variability among patients on measures of cognitive flexibility/set-shifting. With an eye toward shedding light on potential sources of variability in set-shifting abilities among schizophrenia patients, I examine the neural substrates underlying probabilistic reversal learning (PRL), a paradigmatic measure of cognitive flexibility, as well as neuromodulatory influences upon these systems. Finally, I report on behavioral and neuroimaging studies of PRL in schizophrenia patients, discussing the potential influences of illness profile and antipsychotic medications on cognitive flexibility in schizophrenia. PMID:27282085

  5. Model-free data analysis for source separation based on Non-Negative Matrix Factorization and k-means clustering (NMFk)

    NASA Astrophysics Data System (ADS)

    Vesselinov, V. V.; Alexandrov, B.

    2014-12-01

    The identification of the physical sources causing spatial and temporal fluctuations of state variables such as river stage levels and aquifer hydraulic heads is challenging. The fluctuations can be caused by variations in natural and anthropogenic sources such as precipitation events, infiltration, groundwater pumping, barometric pressures, etc. Source identification and separation can be crucial for conceptualization of the hydrological conditions and characterization of system properties. If the original signals that cause the observed state-variable transients can be successfully "unmixed", decoupled physics models may then be applied to analyze the propagation of each signal independently. We propose a new model-free inverse analysis of transient data based on the Non-negative Matrix Factorization (NMF) method for Blind Source Separation (BSS), coupled with the k-means clustering algorithm, which we call NMFk. NMFk is capable of identifying a set of unique sources from a set of experimentally measured mixed signals, without any information about the sources, their transients, and the physical mechanisms and properties controlling the signal propagation through the system. A classical BSS conundrum is the so-called "cocktail-party" problem, where several microphones are recording the sounds in a ballroom (music, conversations, noise, etc.). Each of the microphones records a mixture of the sounds, and the goal of BSS is to "unmix" and reconstruct the original sounds from the microphone records. Similarly to the "cocktail-party" problem, our model-free analysis only requires information about state-variable transients at a number of observation points, m, where m > r, and r is the number of unknown unique sources causing the observed fluctuations. We apply the analysis to a dataset from the Los Alamos National Laboratory (LANL) site. We identify the sources as barometric-pressure and water-supply pumping effects and estimate their impacts.
We also estimate the location of the water-supply pumping wells based on the available data. The possible applications of the NMFk algorithm are not limited to hydrology problems; NMFk can be applied to any problem where temporal system behavior is observed at multiple locations and an unknown number of physical sources are causing these fluctuations.
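    The core factorization step of NMFk can be sketched with Lee-Seung multiplicative updates; full NMFk additionally reruns the factorization from many random starts and clusters the solutions with k-means to select a robust number of sources r. The mixing matrix and the two non-negative source signals below are hypothetical stand-ins for, e.g., barometric and pumping transients observed at five points.

```python
import numpy as np

def nmf(V, r, iters=2000, seed=0):
    """Basic NMF via Lee-Seung multiplicative updates: V ≈ W @ H,
    with all factors kept non-negative."""
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, r)) + 0.1
    H = rng.random((r, m)) + 0.1
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + 1e-12)
        W *= (V @ H.T) / (W @ H @ H.T + 1e-12)
    return W, H

t = np.linspace(0, 10, 400)
# Two hypothetical non-negative source transients.
sources = np.vstack([1 + np.sin(2 * np.pi * 0.3 * t),
                     np.clip(np.sign(np.sin(2 * np.pi * 0.11 * t)), 0, 1)])
# Mixing weights at 5 observation points (m = 5 > r = 2).
mix = np.array([[0.9, 0.1], [0.5, 0.5], [0.2, 0.8], [0.7, 0.3], [0.3, 0.7]])
V = mix @ sources

W, H = nmf(V, r=2)
# H rows recover the sources up to scaling and permutation.
err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
print(f"relative reconstruction error: {err:.3f}")
```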

  6. Photometric Calibration of Consumer Video Cameras

    NASA Technical Reports Server (NTRS)

    Suggs, Robert; Swift, Wesley, Jr.

    2007-01-01

    Equipment and techniques have been developed to implement a method of photometric calibration of consumer video cameras for imaging of objects that are sufficiently narrow or sufficiently distant to be optically equivalent to point or line sources. Heretofore, it has been difficult to calibrate consumer video cameras, especially in cases of image saturation, because they exhibit nonlinear responses with dynamic ranges much smaller than those of scientific-grade video cameras. The present method not only takes this difficulty in stride but also makes it possible to extend effective dynamic ranges to several powers of ten beyond saturation levels. The method will likely be primarily useful in astronomical photometry. There are also potential commercial applications in medical and industrial imaging of point or line sources in the presence of saturation. This development was prompted by the need to measure brightnesses of debris in amateur video images of the breakup of the Space Shuttle Columbia. The purpose of these measurements is to use the brightness values to estimate relative masses of debris objects. In most of the images, the brightness of the main body of Columbia was found to exceed the dynamic ranges of the cameras. A similar problem arose a few years ago in the analysis of video images of Leonid meteors. The present method is a refined version of the calibration method developed to solve the Leonid calibration problem. In this method, one performs an end-to-end calibration of the entire imaging system, including not only the imaging optics and imaging photodetector array but also analog tape recording and playback equipment (if used) and any frame grabber or other analog-to-digital converter (if used).
To automatically incorporate the effects of nonlinearity and any other distortions into the calibration, the calibration images are processed in precisely the same manner as are the images of meteors, space-shuttle debris, or other objects that one seeks to analyze. The light source used to generate the calibration images is an artificial variable star comprising a Newtonian collimator illuminated by a light source modulated by a rotating variable neutral-density filter. This source acts as a point source, the brightness of which varies at a known rate. A video camera to be calibrated is aimed at this source. Fixed neutral-density filters are inserted in or removed from the light path as needed to make the video image of the source appear to fluctuate between dark and saturated bright. The resulting video-image data are analyzed by use of custom software that computes the integrated signal in each video frame and determines the system response curve (measured output signal versus input brightness). These determinations constitute the calibration, which is thereafter used in automatic, frame-by-frame processing of the data from the video images to be analyzed.
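    The end-to-end calibration logic (record the system's output for a source of known, varying brightness, then invert the resulting response curve by interpolation) can be sketched as follows. The saturating response function here is an assumed stand-in for a real consumer camera's nonlinearity, not a measured curve.

```python
import numpy as np

# Simulated calibration: the artificial "variable star" ramps through a
# known brightness while the camera applies an unknown, saturating
# nonlinear response.
def camera(brightness):
    return 255.0 * (1.0 - np.exp(-brightness / 80.0))   # assumed nonlinearity

b_cal = np.linspace(1, 400, 200)   # known input brightnesses
s_cal = camera(b_cal)              # measured output signals

# The calibration is the measured response curve itself; inverting it by
# interpolation converts any later measurement back to brightness units,
# automatically absorbing the nonlinearity.
def signal_to_brightness(signal):
    return np.interp(signal, s_cal, b_cal)   # s_cal is monotone increasing

b_true = 150.0
recovered = signal_to_brightness(camera(b_true))
print(recovered)   # ≈ 150
```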

  7. nSTAT: Open-Source Neural Spike Train Analysis Toolbox for Matlab

    PubMed Central

    Cajigas, I.; Malik, W.Q.; Brown, E.N.

    2012-01-01

    Over the last decade there has been a tremendous advance in the analytical tools available to neuroscientists to understand and model neural function. In particular, the point-process generalized linear model (PP-GLM) framework has been applied successfully to problems ranging from neuro-endocrine physiology to neural decoding. However, the lack of freely distributed software implementations of published PP-GLM algorithms, together with the problem-specific modifications required for their use, limits wide application of these techniques. In an effort to make existing PP-GLM methods more accessible to the neuroscience community, we have developed nSTAT, an open-source neural spike train analysis toolbox for Matlab®. By adopting an Object-Oriented Programming (OOP) approach, nSTAT allows users to easily manipulate data by performing operations on objects that have an intuitive connection to the experiment (spike trains, covariates, etc.), rather than by dealing with data in vector/matrix form. The algorithms implemented within nSTAT address a number of common problems, including computation of peri-stimulus time histograms, quantification of the temporal response properties of neurons, and characterization of neural plasticity within and across trials. nSTAT provides a starting point for exploratory data analysis, allows for simple and systematic building and testing of point process models, and supports decoding of stimulus variables based on point process models of neural function. By providing an open-source toolbox, we hope to establish a platform that can be easily used, modified, and extended by the scientific community to address limitations of current techniques and to extend available techniques to more complex problems. PMID:22981419
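    In discrete time bins, the PP-GLM fit at the heart of such a toolbox reduces to a Poisson regression with a log link. A minimal numpy sketch (Newton's method on the log-likelihood, with synthetic spike counts; nSTAT itself is a Matlab toolbox, so this is only an illustration of the underlying model) is:

```python
import numpy as np

rng = np.random.default_rng(6)
# Synthetic spike-count data: log firing rate is linear in a stimulus
# covariate, the core PP-GLM assumption in discrete time bins.
n_bins = 5000
stim = rng.normal(0, 1, n_bins)
beta_true = np.array([np.log(0.2), 0.5])   # baseline log-rate, stimulus gain
X = np.column_stack([np.ones(n_bins), stim])
counts = rng.poisson(np.exp(X @ beta_true))

# Fit the Poisson GLM (log link) by Newton's method:
# gradient X^T (y - lambda), Hessian X^T diag(lambda) X.
beta = np.zeros(2)
for _ in range(25):
    lam = np.exp(X @ beta)
    grad = X.T @ (counts - lam)
    hess = X.T @ (X * lam[:, None])
    beta += np.linalg.solve(hess, grad)
print(beta)   # close to beta_true
```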

  8. Sources of Wind Variability at a Single Station in Complex Terrain During Tropical Cyclone Passage

    DTIC Science & Technology

    2013-12-01

    Mesoscale Prediction System CPA Closest point of approach ET Extratropical transition FNMOC Fleet Numerical Meteorology and Oceanography Center...forecasts. However, the TC forecast tracks and warnings they issue necessarily focus on the large-scale structure of the storm, and are not...winds at one station. Also, this technique is a storm-centered forecast, and even if the grid spacing is on the order of one kilometer, it is unlikely

  9. X-Ray Scattering Echoes and Ghost Halos from the Intergalactic Medium: Relation to the Nature of AGN Variability

    NASA Astrophysics Data System (ADS)

    Corrales, Lia

    2015-05-01

    X-ray bright quasars might be used to trace dust in the circumgalactic and intergalactic medium through the phenomenon of X-ray scattering, which is observed around Galactic objects whose light passes through a sufficient column of interstellar gas and dust. Of particular interest is the abundance of gray dust larger than 0.1 μm, which is difficult to detect at other wavelengths. To calculate X-ray scattering from large grains, one must abandon the traditional Rayleigh-Gans approximation. With the Mie solution, the X-ray scattering optical depth of the universe is ∼1%. This presents a great difficulty for distinguishing dust-scattered photons from the point-source image of Chandra, which is currently unsurpassed in imaging resolution. The variable nature of AGNs offers a solution to this problem, as scattered light takes a longer path and thus experiences a time delay with respect to non-scattered light. If an AGN dims significantly (≳3 dex) due to a major feedback event, the Chandra point-source image will be suppressed relative to the scattering halo, and an X-ray echo or ghost halo may become visible. I estimate the total number of scattering echoes visible to Chandra over the entire sky: N_ech ∼ 10^3 (ν_fb / yr^-1), where ν_fb is the characteristic frequency of feedback events capable of dimming an AGN quickly.

  10. Temporal variability patterns in solar radiation estimations

    NASA Astrophysics Data System (ADS)

    Vindel, José M.; Navarro, Ana A.; Valenzuela, Rita X.; Zarzalejo, Luis F.

    2016-06-01

    In this work, solar radiation estimations obtained from a satellite and from a numerical weather prediction model in mainland Spain have been compared. Similar comparisons have been carried out before, but in this case the methodology is different: the temporal variability of both sources of estimation has been compared with the annual evolution of the radiation associated with the different climate zones studied. The methodology is based on obtaining behavior patterns, using a Principal Component Analysis, that follow the annual evolution of solar radiation estimations. Indeed, the degree of adjustment to these patterns at each point (assessed from maps of correlation) may be associated with the annual radiation variation (assessed from the interquartile range), which is in turn associated with different climate zones. In addition, the goodness of each estimation source has been assessed by comparing it with ground measurements made by pyranometers. For the study, radiation data from the Satellite Application Facilities and data corresponding to the reanalysis carried out by the European Centre for Medium-Range Weather Forecasts have been used.
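The pattern-extraction step (a Principal Component Analysis of annual radiation cycles, then a map of how well each point correlates with the leading pattern) can be sketched roughly as follows; synthetic data stand in for the satellite and reanalysis estimates, and every name is illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic monthly radiation series for 20 sites: a mix of two climate
# "patterns" (a summer-peaked cycle and a flat cycle) plus noise.
months = np.arange(12)
pattern_a = np.sin(np.pi * months / 11.0)      # summer-peaked annual cycle
pattern_b = np.ones(12)                        # flat annual cycle
weights = rng.uniform(0.0, 1.0, size=20)
data = (np.outer(weights, pattern_a)
        + np.outer(1.0 - weights, pattern_b)
        + 0.05 * rng.standard_normal((20, 12)))

# PCA via SVD of the anomaly matrix (sites x months).
anomalies = data - data.mean(axis=0)
u, s, vt = np.linalg.svd(anomalies, full_matrices=False)
pc1 = vt[0]                                    # leading annual pattern

# "Map of correlation": adherence of each site to the leading pattern
# (absolute value, since the sign of a principal component is arbitrary).
corr = np.array([abs(np.corrcoef(site, pc1)[0, 1]) for site in data])
```

Sites with high adherence to a pattern would then be grouped into the same climate zone, mirroring the record's use of correlation maps.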

  11. Development of a neural-based forecasting tool to classify recreational water quality using fecal indicator organisms.

    PubMed

    Motamarri, Srinivas; Boccelli, Dominic L

    2012-09-15

    Users of recreational waters may be exposed to elevated pathogen levels through various point/non-point sources. Typical daily notifications rely on microbial analysis of indicator organisms (e.g., Escherichia coli) that require 18 or more hours to provide an adequate response. Modeling approaches, such as multivariate linear regression (MLR) and artificial neural networks (ANN), have been utilized to provide quick predictions of microbial concentrations for classification purposes, but generally suffer from high false negative rates. This study introduces the use of learning vector quantization (LVQ), a direct classification approach, for comparison with MLR and ANN approaches, and integrates input selection into model development with respect to primary and secondary water quality standards within the Charles River Basin (Massachusetts, USA) using meteorologic, hydrologic, and microbial explanatory variables. Integrating input selection into model development showed that discharge variables were the most important explanatory variables, while antecedent rainfall and time since previous events were also important. With respect to classification, all three models adequately represented the non-violated samples (>90%). The MLR approach had the highest false negative rates associated with classifying violated samples (41-62%, vs 13-43% for ANN and <16% for LVQ) when using five or more explanatory variables. The ANN performance was more similar to LVQ when a larger number of explanatory variables were utilized, but the ANN performance degraded toward MLR performance as explanatory variables were removed. Overall, the use of LVQ as a direct classifier provided the best overall classification ability with respect to violated/non-violated samples for both standards. Copyright © 2012 Elsevier Ltd. All rights reserved.
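A minimal LVQ1 sketch of the direct-classification idea the study compares against MLR and ANN: the update rule is the standard LVQ1 one, while the data, learning rate, and prototype initialization are illustrative choices, not the study's configuration:

```python
import numpy as np

def train_lvq1(X, y, prototypes, proto_labels, lr=0.1, epochs=30, seed=0):
    """LVQ1: pull the nearest prototype toward a sample of the same class,
    push it away otherwise. Returns the adapted prototypes."""
    rng = np.random.default_rng(seed)
    P = prototypes.copy()
    for _ in range(epochs):
        for i in rng.permutation(len(X)):
            j = np.argmin(((P - X[i]) ** 2).sum(axis=1))  # nearest prototype
            step = lr * (X[i] - P[j])
            P[j] += step if proto_labels[j] == y[i] else -step
    return P

def predict_lvq(X, prototypes, proto_labels):
    d = ((X[:, None, :] - prototypes[None, :, :]) ** 2).sum(axis=2)
    return proto_labels[np.argmin(d, axis=1)]

# Two well-separated synthetic classes, standing in for "non-violated"
# vs "violated" samples described by two explanatory variables.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.3, (40, 2)), rng.normal(3, 0.3, (40, 2))])
y = np.array([0] * 40 + [1] * 40)
labels = np.array([0, 1])
protos = train_lvq1(X, y, X[[0, 40]].astype(float), labels)
acc = (predict_lvq(X, protos, labels) == y).mean()
```

Because each prototype is pulled toward its own class and pushed away from the other, classification is a nearest-prototype lookup rather than a thresholded regression, which is why violated/non-violated decisions come out directly.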

  12. Measuring temporal stability of positron emission tomography standardized uptake value bias using long-lived sources in a multicenter network.

    PubMed

    Byrd, Darrin; Christopfel, Rebecca; Arabasz, Grae; Catana, Ciprian; Karp, Joel; Lodge, Martin A; Laymon, Charles; Moros, Eduardo G; Budzevich, Mikalai; Nehmeh, Sadek; Scheuermann, Joshua; Sunderland, John; Zhang, Jun; Kinahan, Paul

    2018-01-01

    Positron emission tomography (PET) is a quantitative imaging modality, but the computation of standardized uptake values (SUVs) requires several instruments to be correctly calibrated. Variability in the calibration process may lead to unreliable quantitation. Sealed source kits containing traceable amounts of [Formula: see text] were used to measure signal stability for 19 PET scanners at nine hospitals in the National Cancer Institute's Quantitative Imaging Network. Repeated measurements of the sources were performed on PET scanners and in dose calibrators. The measured scanner and dose calibrator signal biases were used to compute the bias in SUVs at multiple time points for each site over a 14-month period. Estimation of absolute SUV accuracy was confounded by bias from the solid phantoms' physical properties. On average, the intrascanner coefficient of variation for SUV measurements was 3.5%. Over the entire length of the study, single-scanner SUV values varied over a range of 11%. Dose calibrator bias was not correlated with scanner bias. Calibration factors from the image metadata were nearly as variable as scanner signal, and were correlated with signal for many scanners. SUVs often showed low intrascanner variability between successive measurements but were also prone to shifts in apparent bias, possibly in part due to scanner recalibrations that are part of regular scanner quality control. Biases of key factors in the computation of SUVs were not correlated and their temporal variations did not cancel out of the computation. Long-lived sources and image metadata may provide a check on the recalibration process.
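The two stability statistics quoted above, the intrascanner coefficient of variation and the range of single-scanner SUVs, are straightforward to compute; a sketch with hypothetical repeated SUV measurements of a sealed source:

```python
import numpy as np

# Hypothetical repeated SUV measurements of a long-lived sealed source
# on one scanner (the true value would be 1.0 if perfectly calibrated).
suv = np.array([1.02, 0.98, 1.01, 0.97, 1.05, 0.99])

cv = suv.std(ddof=1) / suv.mean()              # coefficient of variation
spread = (suv.max() - suv.min()) / suv.mean()  # range, as in the "11%" figure
```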

  13. The Nustar Spectrum of Mrk 335: Extreme Relativistic Effects Within Two Gravitational Radii of the Event Horizon?

    NASA Technical Reports Server (NTRS)

    Parker, M. L.; Wilkins, D. R.; Fabian, A. C.; Grupe, D.; Dauser, T.; Matt, G.; Harrison, F. A.; Brenneman, L.; Boggs, S. E.; Christensen, F. E.

    2014-01-01

    We present 3-50 keV NuSTAR observations of the active galactic nucleus Mrk 335 in a very low flux state. The spectrum is dominated by very strong features at the energies of the iron line (5-7 keV) and the Compton hump (10-30 keV). The source is variable during the observation, with the variability concentrated at low energies, suggesting either a relativistic reflection or a variable absorption scenario. In this work, we focus on the reflection interpretation, making use of new relativistic reflection models that self-consistently calculate the reflection fraction, relativistic blurring, and angle-dependent reflection spectrum for different coronal heights to model the spectra. We find that the spectra can be well fitted with relativistic reflection, and that the lowest flux state spectrum is described by reflection alone, suggesting the effects of extreme light-bending occurring within approx. 2 gravitational radii (RG) of the event horizon. The reflection fraction decreases sharply with increasing flux, consistent with a point source moving up to above 10 RG as the source brightens. We constrain the spin parameter to greater than 0.9 at the 3σ confidence level. By adding a spin-dependent upper limit on the reflection fraction to our models, we demonstrate that this can be a powerful way of constraining the spin parameter, particularly in reflection-dominated states. We also calculate a detailed emissivity profile for the iron line, and find that it closely matches theoretical predictions for a compact source within a few RG of the black hole.

  14. Trends in nutrient concentrations, loads, and yields in streams in the Sacramento, San Joaquin, and Santa Ana Basins, California, 1975-2004

    USGS Publications Warehouse

    Kratzer, Charles R.; Kent, Robert; Seleh, Dina K.; Knifong, Donna L.; Dileanis, Peter D.; Orlando, James L.

    2011-01-01

    A comprehensive database was assembled for the Sacramento, San Joaquin, and Santa Ana Basins in California on nutrient concentrations, flows, and point and nonpoint sources of nutrients for 1975-2004. Most of the data on nutrient concentrations (nitrate, ammonia, total nitrogen, orthophosphate, and total phosphorus) were from the U.S. Geological Survey's National Water Information System database (35.2 percent), the California Department of Water Resources (21.9 percent), the University of California at Davis (21.6 percent), and the U.S. Environmental Protection Agency's STOrage and RETrieval database (20.0 percent). Point-source discharges accounted for less than 1 percent of river flows in the Sacramento and San Joaquin Rivers, but accounted for close to 80 percent of the nonstorm flow in the Santa Ana River. Point sources accounted for 4 and 7 percent of the total nitrogen and total phosphorus loads, respectively, in the Sacramento River at Freeport for 1985-2004. Point sources accounted for 8 and 17 percent of the total nitrogen and total phosphorus loads, respectively, in the San Joaquin River near Vernalis for 1985-2004. The volume of wastewater discharged into the Santa Ana River increased almost three-fold over the study period. However, due to improvements in wastewater treatment, the total nitrogen load to the Santa Ana River from point sources in 2004 was approximately the same as in 1975 and the total phosphorus load in 2004 was less than in 1975. Nonpoint sources of nutrients estimated in this study included atmospheric deposition, fertilizer application, manure production, and tile drainage. The estimated dry deposition of nitrogen exceeded wet deposition in the Sacramento and San Joaquin Valleys and in the basin area of the Santa Ana Basin, with ratios of dry to wet deposition of 1.7, 2.8, and 9.8, respectively. 
Fertilizer application increased appreciably from 1987 to 2004 in all three California basins, while manure production increased in the San Joaquin Basin but decreased in the Sacramento and Santa Ana Basins from 1982 to 2002. Tile drainage accounted for 22 percent of the total nitrogen load in the San Joaquin River near Vernalis for 1985-2004. Nutrient loads and trends were calculated by using the log-linear multiple-regression model, LOADEST. Loads were calculated for water years 1975-2004 for 22 sites in the Sacramento Basin, 15 sites in the San Joaquin Basin, and 6 sites in the Santa Ana Basin. The average annual loads of total nitrogen and total phosphorus for 1985-2004 in subbasins of the Sacramento and San Joaquin Basins were divided by their drainage areas to calculate average annual yields. Total nitrogen yields were greater than 2.45 tons per square mile per year [(tons/mi2)/yr] in about 61 percent of the valley floor in the San Joaquin Basin compared with only about 12 percent of the valley floor in the Sacramento Basin. Total phosphorus yields were greater than 0.34 (tons/mi2)/yr in about 43 percent of the valley floor in the San Joaquin Basin compared with only about 5 percent in the valley floor of the Sacramento Basin. In a stepwise multiple linear-regression analysis of 30 subbasins in the Sacramento and San Joaquin Basins, the most important explanatory variables (out of 11 variables) for the response variable (total nitrogen yield) were the percentage of land use in (1) orchards and vineyards, (2) row crops, and (3) urban categories. For total phosphorus yield, the most important explanatory variable was the amount of fertilizer application plus manure production. Trends were evaluated for three time periods: 1975-2004, 1985-2004, and 1993-2004. Most trends in flow-adjusted concentrations of nutrients in the Sacramento Basin were downward for all three time periods. 
The decreasing nutrient trends in the American River at Sacramento and the Sacramento River at Freeport for 1975-2004 were attributed to the consolidation of wastewater in the Sacramento metropolitan area in December 1982 to
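LOADEST's core step is a log-linear regression of load against discharge and seasonal terms. A rough sketch on synthetic data; this omits LOADEST's retransformation bias correction and its full seven-parameter model, and every value below is illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic daily record: log discharge lnQ and log nutrient load lnL
# following a LOADEST-style log-linear model with a seasonal term.
t = np.arange(365) / 365.0                      # decimal time, years
lnQ = 1.0 + 0.5 * np.sin(2 * np.pi * t) + 0.1 * rng.standard_normal(365)
true_beta = np.array([-2.0, 1.3, 0.4, -0.2])    # intercept, lnQ, sin, cos
X = np.column_stack([np.ones_like(t), lnQ,
                     np.sin(2 * np.pi * t), np.cos(2 * np.pi * t)])
lnL = X @ true_beta + 0.05 * rng.standard_normal(365)

# Ordinary least squares in log space, as in LOADEST's regression step.
beta, *_ = np.linalg.lstsq(X, lnL, rcond=None)
annual_load = np.exp(X @ beta).sum()            # back-transform and sum
```

Fitting in log space keeps the multiplicative load-discharge relationship linear; in practice LOADEST also corrects the bias introduced when exponentiating the predictions.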

  15. Spatial variability of primary organic sources regulates ichthyofauna distribution despite seasonal influence in Terminos lagoon and continental shelf of Campeche, Mexico

    NASA Astrophysics Data System (ADS)

    Romo Rios, J. A.; Aguíñiga-García, S.; Sanchez, A.; Zetina-Rejón, M.; Arreguín-Sánchez, F.; Tripp-Valdéz, A.; Galeana-Cortazár, A.

    2013-05-01

    Human activities have strong impacts on the functioning of coastal ecosystems through their effect on the distribution of primary organic sources and the resulting biodiversity. Hence, it appears to be of utmost importance to quantify the contribution of primary producers to the spatial variability of sediment organic matter (SOM) and its associated ichthyofauna. The Terminos lagoon (Gulf of Mexico) is a tropical estuary severely impacted by human activities, yet of primary concern for its biodiversity, its habitats, and its resource supply. Stable isotope data (δ13C, δ15N) from mangrove, seaweed, seagrass, phytoplankton, ichthyofauna and SOM were sampled in four zones of the lagoon and the continental shelf through windy (November to February), dry (March to June) and rainy (July to October) seasons. The Stable Isotope Analysis in R (SIAR) mixing model was used to determine relative contributions of the autotrophic sources to the ichthyofauna and SOM. Analysis of variance of ichthyofauna isotopic values showed significant differences (P < 0.001) among the four zones of the lagoon despite the variability introduced by the windy, dry and rainy seasons. In the lagoon's river-discharge zone, the mangrove contribution was 40% to ichthyofauna and 84% to SOM. Alternative use of habitat by ichthyofauna was evidenced since, in the deep area of the lagoon (4 m), the contribution of mangrove to fish is 50%, while its contribution to SOM is 77%. Although phytoplankton (43%) and seaweed (41%) contributions to the adjacent continental shelf ichthyofauna were the main organic sources, there was a 37% mangrove contribution to SOM, demonstrating conspicuous terrigenous influence from the lagoon ecosystem. Our results point toward organic-source spatial variations that regulate fish distribution. In Terminos lagoon, the correlation (p-value = 0.2141, r = 0.79) between the abundances of Ariopsis felis and Sphoeroides testudineus and the seaweed and seagrass contributions (30-35%) during both dry and rainy seasons suggests that the spatial variability of organic sources could be central to the equilibrium state of these ecosystems. Keywords: sediment organic matter, mangrove, ecosystems, mixing model, trophic structure
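SIAR fits this kind of system in a Bayesian framework, but its underlying mass balance for two tracers and three sources can be shown deterministically. All end-member and mixture values below are hypothetical, not the study's data:

```python
import numpy as np

# Two-tracer, three-source mass balance: the mixture's (d13C, d15N) is a
# fraction-weighted average of the end-members, with fractions summing to 1.
sources = {          # (d13C, d15N) per end-member, illustrative values
    "mangrove":      (-28.0, 4.0),
    "seagrass":      (-12.0, 6.0),
    "phytoplankton": (-21.0, 8.0),
}
mixture = (-22.5, 5.9)   # measured composition of SOM or fish tissue

# Rows: d13C balance, d15N balance, and the constraint sum(f) = 1.
A = np.array([[s[0] for s in sources.values()],
              [s[1] for s in sources.values()],
              [1.0, 1.0, 1.0]])
b = np.array([mixture[0], mixture[1], 1.0])
fractions = np.linalg.solve(A, b)   # source contributions, summing to 1
```

With exactly three sources and two tracers the system is exactly determined; SIAR generalizes this to more sources and propagates isotopic uncertainty, which the deterministic solve cannot.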

  16. Data Applicability of Heritage and New Hardware For Launch Vehicle Reliability Models

    NASA Technical Reports Server (NTRS)

    Al Hassan, Mohammad; Novack, Steven

    2015-01-01

    Bayesian reliability requires the development of a prior distribution to represent degree of belief about the value of a parameter (such as a component's failure rate) before system specific data become available from testing or operations. Generic failure data are often provided in reliability databases as point estimates (mean or median). A component's failure rate is considered a random variable where all possible values are represented by a probability distribution. The applicability of the generic data source is a significant source of uncertainty that affects the spread of the distribution. This presentation discusses heuristic guidelines for quantifying uncertainty due to generic data applicability when developing prior distributions mainly from reliability predictions.
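One common way to encode generic-data applicability in a prior, consistent with the presentation's theme, is a lognormal distribution whose error factor (EF = 95th percentile / median) widens with applicability uncertainty; the EF convention and 1.645 z-score are standard in probabilistic risk assessment practice, but the numbers below are illustrative:

```python
import math

def lognormal_prior(median_rate, error_factor):
    """Build a lognormal prior for a failure rate from a generic point
    estimate. A larger error factor (EF = 95th percentile / median)
    expresses lower confidence that the generic source applies."""
    mu = math.log(median_rate)
    sigma = math.log(error_factor) / 1.645   # z-score of the 95th percentile
    mean = math.exp(mu + 0.5 * sigma ** 2)   # mean of a lognormal
    p95 = math.exp(mu + 1.645 * sigma)
    return {"mu": mu, "sigma": sigma, "mean": mean, "p95": p95}

# Generic database gives a median failure rate of 1e-5 per hour; low
# applicability confidence is expressed with an error factor of 10.
prior = lognormal_prior(median_rate=1e-5, error_factor=10.0)
```

The prior's mean exceeds its median because the inflated spread is skewed upward, which is exactly how applicability uncertainty widens the distribution without moving the generic point estimate.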

  17. Extended 60 μm Emission from Nearby Mira Variables

    NASA Astrophysics Data System (ADS)

    Bauer, W. H.; Stencel, R. E.

    1993-01-01

    Circumstellar dust envelopes around some optically visible late-type stars are so extensive that they are detectable as extended at an arc-minute scale by the IRAS survey observations (Stencel, Pesce and Bauer 1988, Astron. J 95, 141; Hawkins 1990, Astron. Ap. 229, L8). The width of the IRAS scan profiles at 10% of peak intensity is an indicator of source extension. Wyatt and Cahn (1983, Ap. J. 275, 225) presented a sample of 124 Mira variables in the solar neighborhood. Of this sample, 11 Miras which show silicate emission are bright enough at 60 microns for a significant determination of the width of a scan at 10% of peak flux. Individual scans and maps were examined in order to determine whether any observed extension was associated with the central star. Five stars showed significant extension apparently due to mass loss from the central star: R Leo, o Cet, U Ori, R Cas and R Hor. IRAS LRS spectra, point source fluxes and observed extensions of these sources are compared to the predictions of model dust shells which assume steady mass loss. This work was supported in part by NASA grant NAG 5-1213 to Wellesley College.

  18. Comparative evaluation of statistical and mechanistic models of Escherichia coli at beaches in southern Lake Michigan

    USGS Publications Warehouse

    Safaie, Ammar; Wendzel, Aaron; Ge, Zhongfu; Nevers, Meredith; Whitman, Richard L.; Corsi, Steven R.; Phanikumar, Mantha S.

    2016-01-01

    Statistical and mechanistic models are popular tools for predicting the levels of indicator bacteria at recreational beaches. Researchers tend to use one class of model or the other, and it is difficult to generalize statements about their relative performance due to differences in how the models are developed, tested, and used. We describe a cooperative modeling approach for freshwater beaches impacted by point sources, in which insights derived from mechanistic modeling were used to further improve the statistical models and vice versa. The statistical models provided a basis for assessing the mechanistic models, which were further improved using probability distributions to generate high-resolution time series data at the source, long-term “tracer” transport modeling based on observed electrical conductivity, better assimilation of meteorological data, and the use of unstructured grids to better resolve nearshore features. This approach resulted in improved models of comparable performance for both classes, including a parsimonious statistical model suitable for real-time predictions based on an easily measurable environmental variable (turbidity). The modeling approach outlined here can be used at other sites impacted by point sources and has the potential to improve water quality predictions, resulting in more accurate estimates of beach closures.

  19. An X-ray Investigation of the NGC 346 Field in the SMC (2): The Field Population

    NASA Technical Reports Server (NTRS)

    Naze, Y.; Hartwell, J. M.; Stevens, I. R.; Manfroid, J.; Marchenko, S.; Corcoran, M. F.; Moffat, A. F. J.; Skalkowski, G.; White, Nicholas E. (Technical Monitor)

    2002-01-01

    We present results from a Chandra observation of the NGC 346 cluster, the ionizing source of N66, the most luminous H II region and the largest star formation region in the SMC. In the first part of this investigation, we have analysed the X-ray properties of the cluster itself and the remarkable star HD 5980. But the field contains additional objects of interest. In total, 79 X-ray point sources were detected in the Chandra observation, and we investigate their characteristics here in detail. The sources possess rather high hardness ratios (HRs), and their cumulative luminosity function is steeper than the SMC's trend. Their absorption columns suggest that most of the sources belong to NGC 346. Using new UBVRI imaging with the ESO 2.2m telescope, we also discovered possible counterparts for 36 of these X-ray sources. Finally, some objects show X-ray and/or optical variability, and thus need further monitoring.

  20. Robust Variable Selection with Exponential Squared Loss.

    PubMed

    Wang, Xueqin; Jiang, Yunlu; Huang, Mian; Zhang, Heping

    2013-04-01

    Robust variable selection procedures through penalized regression have been gaining increased attention in the literature. They can be used to perform variable selection and are expected to yield robust estimates. However, to the best of our knowledge, the robustness of those penalized regression procedures has not been well characterized. In this paper, we propose a class of penalized robust regression estimators based on exponential squared loss. The motivation for this new procedure is that it enables us to characterize its robustness, which has not been done for the existing procedures, while its performance is near optimal and superior to some recently developed methods. Specifically, under defined regularity conditions, our estimators are √n-consistent and possess the oracle property. Importantly, we show that our estimators can achieve the highest asymptotic breakdown point of 1/2 and that their influence functions are bounded with respect to the outliers in either the response or the covariate domain. We performed simulation studies to compare our proposed method with some recent methods, using the oracle method as the benchmark. We consider common sources of influential points. Our simulation studies reveal that our proposed method performs similarly to the oracle method in terms of the model error and the positive selection rate even in the presence of influential points. In contrast, other existing procedures have a much lower non-causal selection rate. Furthermore, we re-analyze the Boston Housing Price Dataset and the Plasma Beta-Carotene Level Dataset, which are commonly used examples for regression diagnostics of influential points. Our analysis unravels the discrepancies of using our robust method versus the other penalized regression method, underscoring the importance of developing and applying robust penalized regression methods.
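The key property claimed above, boundedness of the exponential squared loss, is easy to demonstrate: a gross outlier contributes at most 1 to the objective, so a location estimate is barely perturbed. A sketch in which a grid search stands in for the paper's penalized estimation procedure:

```python
import numpy as np

def exp_squared_loss(residual, gamma=1.0):
    """Exponential squared loss: bounded in [0, 1), so a single large
    outlier contributes at most 1 to the objective, unlike squared loss."""
    return 1.0 - np.exp(-residual ** 2 / gamma)

# Location estimation: minimize the summed loss over a grid of candidates.
data = np.array([0.1, -0.2, 0.05, 0.15, -0.1, 50.0])  # one gross outlier
grid = np.linspace(-5, 55, 6001)
objective = exp_squared_loss(data[None, :] - grid[:, None]).sum(axis=1)
robust_est = grid[np.argmin(objective)]

mean_est = data.mean()   # the squared-loss solution, dragged by the outlier
```

The robust estimate stays near the cluster of clean observations around zero, while the ordinary mean is pulled far toward the outlier, which is the behavior the bounded influence function guarantees.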

  2. Cathodoluminescence microscopy and petrographic image analysis of aggregates in concrete pavements affected by alkali-silica reaction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stastna, A., E-mail: astastna@gmail.com; Sachlova, S.; Pertold, Z.

    2012-03-15

    Various microscopic techniques (cathodoluminescence, polarizing and electron microscopy) were combined with image analysis with the aim of determining a) the modal composition and degradation features within the concrete, and b) the petrographic characteristics and the geological types (rocks, and their provenance) of the aggregates. Concrete samples were taken from five different portions of Highway Nos. D1, D11, and D5 (the Czech Republic). Coarse and fine aggregates were found to be primarily composed of volcanic, plutonic, metamorphic and sedimentary rocks, as well as of quartz and feldspar aggregates of variable origins. The alkali-silica reaction was observed to be the main degradation mechanism, based upon the presence of microcracks and alkali-silica gels in the concrete. Use of cathodoluminescence enabled the identification of the source materials of the quartz aggregates, based upon their CL characteristics (i.e., color, intensity, microfractures, deformation, and zoning), which are difficult to distinguish employing only polarizing and electron microscopy. Highlights: ASR in concrete pavements on Highways Nos. D1, D5 and D11 (Czech Republic); cathodoluminescence was combined with various microscopic techniques and image analysis; ASR was attributed to aggregates; source materials of aggregates were identified based on cathodoluminescence characteristics; quartz comes from different volcanic, plutonic and metamorphic parent rocks.

  3. Free Electron coherent sources: From microwave to X-rays

    NASA Astrophysics Data System (ADS)

    Dattoli, Giuseppe; Di Palma, Emanuele; Pagnutti, Simonetta; Sabia, Elio

    2018-04-01

    The term Free Electron Laser (FEL) will be used, in this paper, to indicate a wide collection of devices aimed at providing coherent electromagnetic radiation from a beam of "free" electrons, not bound in atomic or molecular states. This article reviews the similarities that link different sources of coherent radiation across the electromagnetic spectrum from microwaves to X-rays, and compares the analogies with conventional laser sources. We develop a point of view that allows a unified analytical treatment of these devices through the introduction of appropriate global variables (e.g. gain, saturation intensity, inhomogeneous broadening parameters, longitudinal mode coupling strength), yielding a very effective way to determine the relevant design parameters. The paper also looks at more speculative aspects of FEL physics, which may address the relevance of quantum effects in the lasing process.

  4. Probing the X-ray Emission from the Massive Star Cluster Westerlund 2

    NASA Astrophysics Data System (ADS)

    Lopez, Laura

    2017-09-01

    We propose a 300 ks Chandra ACIS-I observation of the massive star cluster Westerlund 2 (Wd2). This region is teeming with high-energy emission from a variety of sources: colliding wind binaries, OB and Wolf-Rayet stars, two young pulsars, and an unidentified source of very high-energy (VHE) gamma-rays. Our Chandra program is designed to achieve several goals: 1) to take a complete census of Wd2 X-ray point sources and monitor variability; 2) to probe the conditions of the colliding winds in the binary WR 20a; 3) to search for an X-ray counterpart of the VHE gamma-rays; 4) to identify diffuse X-ray emission; 5) to compare results to other massive star clusters observed by Chandra. Only Chandra has the spatial resolution and sensitivity necessary for our proposed analyses.

  5. Estimating discharge and non-point source nitrate loading to streams from three end-member pathways using high-frequency water quality and streamflow data

    NASA Astrophysics Data System (ADS)

    Miller, M. P.; Tesoriero, A. J.; Hood, K.; Terziotti, S.; Wolock, D.

    2017-12-01

    The myriad hydrologic and biogeochemical processes occurring across space and time in watersheds are integrated and reflected in the quantity and quality of water in streams and rivers. Collection of high-frequency water quality data with sensors in surface waters provides new opportunities to disentangle these processes and quantify sources and transport of water and solutes in the coupled groundwater-surface water system. A new approach for separating the streamflow hydrograph into three components was developed and coupled with high-frequency specific conductance and nitrate data to estimate time-variable watershed-scale nitrate loading from three end-member pathways (dilute quickflow, concentrated quickflow, and slowflow groundwater) to two streams in central Wisconsin. Time-variable nitrate loads from the three pathways were estimated for periods of up to two years in a groundwater-dominated and a quickflow-dominated stream, using only streamflow and in-stream water quality data. The dilute and concentrated quickflow end-members were distinguished using high-frequency specific conductance data. Results indicate that dilute quickflow contributed less than 5% of the nitrate load at both sites, whereas 89±5% of the nitrate load at the groundwater-dominated stream was from slowflow groundwater, and 84±13% of the nitrate load at the quickflow-dominated stream was from concentrated quickflow. Concentrated quickflow nitrate concentrations varied seasonally at both sites, with peak concentrations in the winter that were 2-3 times greater than minimum concentrations during the growing season. Application of this approach provides an opportunity to assess stream vulnerability to non-point source nitrate loading and expected stream responses to current or changing conditions and practices in watersheds.
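The end-member split can be illustrated with a specific-conductance (SC) mass balance: once slowflow groundwater is known (e.g. from a hydrograph filter), SC divides the remaining quickflow into dilute and concentrated components. All end-member values below are hypothetical, not the study's calibrated values:

```python
import numpy as np

def split_quickflow(Q, Q_slow, sc_stream, sc_dilute, sc_conc, sc_gw):
    """Split total streamflow Q into dilute and concentrated quickflow,
    given slowflow groundwater Q_slow (e.g. from a baseflow filter) and a
    specific-conductance (SC) mass balance. End-member SCs are assumed."""
    Q_quick = Q - Q_slow
    # Water balance: Q_d + Q_c = Q_quick
    # SC balance:    Q*sc_stream = Q_slow*sc_gw + Q_d*sc_dilute + Q_c*sc_conc
    A = np.array([[1.0, 1.0], [sc_dilute, sc_conc]])
    b = np.array([Q_quick, Q * sc_stream - Q_slow * sc_gw])
    Q_d, Q_c = np.linalg.solve(A, b)
    return Q_d, Q_c

# One storm-period example (units: m3/s and uS/cm, values hypothetical).
Q_d, Q_c = split_quickflow(Q=10.0, Q_slow=6.0,
                           sc_stream=450.0, sc_dilute=50.0,
                           sc_conc=800.0, sc_gw=600.0)
```

Multiplying each pathway's discharge by its nitrate concentration then yields the pathway loads the record describes; applied at every time step of a high-frequency record, this produces the time-variable loading estimates.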

  6. Saco Bay, Maine: Sediment Budget for Late Twentieth Century to Present

    DTIC Science & Technology

    2016-02-01

    determined that sediment flux was variable, depending on bathymetry and input wave conditions. Despite these variations in conditions, there is no obvious...DETAILS, SACO BAY, MAINE V3. Last update: 11 September 2014. Units are yd3/year. Source1 = bluffs, river influx, wind. Sink1 = wind-blown loss or...Beach05 (B05), Pine Point QSource1 1,600 Wind transport (from Kelley et al. 2005). DeltaV 1,600 Dune accumulation 1859-1991 (from Kelley et al. 2005

  7. Signal or noise? Separating grain size-dependent Nd isotope variability from provenance shifts in Indus delta sediments, Pakistan

    NASA Astrophysics Data System (ADS)

    Jonell, T. N.; Li, Y.; Blusztajn, J.; Giosan, L.; Clift, P. D.

    2017-12-01

    Rare earth element (REE) radioisotope systems, such as neodymium (Nd), have been traditionally used as powerful tracers of source provenance, chemical weathering intensity, and sedimentary processes over geologic timescales. More recently, the effects of physical fractionation (hydraulic sorting) of sediments during transport have called into question the utility of Nd isotopes as a provenance tool. Is source terrane Nd provenance resolvable if sediment transport strongly induces noise? Can grain-size sorting effects be quantified? This study works to address such questions by utilizing grain size analysis, trace element geochemistry, and Nd isotope geochemistry of bulk and grain-size fractions (<63 μm, 63-125 μm, 125-250 μm) from the Indus delta of Pakistan. Here we evaluate how grain size effects drive Nd isotope variability and further resolve the total uncertainties associated with Nd isotope compositions of bulk sediments. Results from the Indus delta indicate bulk sediment ɛNd compositions are most similar to the <63 μm fraction as a result of strong mineralogical control on bulk compositions by silt- to clay-sized monazite and/or allanite. Replicate analyses determine that the best reproducibility (±0.15 ɛNd points) is observed in the 125-250 μm fraction. The bulk and finest fractions display the worst reproducibility (±0.3 ɛNd points). Standard deviations (2σ) indicate that bulk sediment uncertainties are no more than ±1.0 ɛNd points. This argues that excursions of ≥1.0 ɛNd points in any bulk Indus delta sediments must in part reflect an external shift in provenance irrespective of sample composition, grain size, and grain size distribution. Sample standard deviations (2s) estimate that any terrigenous bulk sediment composition should vary no greater than ±1.1 ɛNd points if provenance remains constant.
Findings from this study indicate that although there are grain-size dependent Nd isotope effects, they are minimal in the Indus delta such that resolvable provenance-driven trends can be identified in bulk sediment ɛNd compositions over the last 20 k.y., and that overall provenance trends remain consistent with previous findings.
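    The ɛNd notation used throughout this record is the deviation of a measured 143Nd/144Nd ratio from the CHUR reference, in parts per 10,000. A minimal sketch, using the commonly quoted CHUR value (0.512638; this constant is not stated in the abstract and is assumed here):

```python
CHUR_143ND_144ND = 0.512638  # commonly used CHUR reference ratio (assumed)

def epsilon_nd(ratio_143_144):
    """Epsilon-Nd: fractional deviation from CHUR, scaled by 10^4."""
    return (ratio_143_144 / CHUR_143ND_144ND - 1.0) * 1.0e4
```

    Under this convention, a measured ratio of 0.512125 corresponds to roughly -10 ɛNd points, so the ±0.15 to ±1.1 point uncertainties discussed above are small relative to typical provenance contrasts.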

  8. Evaluating watershed protection programs in New York City's Cannonsville Reservoir source watershed using SWAT-HS

    NASA Astrophysics Data System (ADS)

    Hoang, L.; Mukundan, R.; Moore, K. E.; Owens, E. M.; Steenhuis, T. S.

    2017-12-01

    New York City (NYC)'s reservoirs supply over one billion gallons of drinking water each day to over nine million consumers in NYC and upstate communities. The City has invested more than $1.5 billion in watershed protection programs to maintain a waiver from filtration for the Catskill and Delaware Systems. In the last 25 years, the NYC Department of Environmental Protection (NYCDEP) has implemented programs in cooperation with upstate communities that include nutrient management, crop rotations, improvement of barnyards and manure storage, implementing tertiary treatment for phosphorus (P) in wastewater treatment plants, and replacing failed septic systems in an effort to reduce P loads to water supply reservoirs. There have been several modeling studies evaluating the effect of agricultural Best Management Practices (BMPs) on P control in the Cannonsville watershed in the Delaware System. Although these studies showed that BMPs would reduce dissolved P losses, they were limited to farm-scale or watershed-scale estimates of reduction factors without consideration of the dynamic nature of overland flow and P losses from variable source areas. Recently, we developed the process-based SWAT-Hillslope (SWAT-HS) model, a modified version of the Soil and Water Assessment Tool (SWAT) that can realistically predict variable source runoff processes. The objective of this study is to use the SWAT-HS model to evaluate watershed protection programs addressing both point and non-point sources of P. SWAT-HS predicts streamflow very well for the Cannonsville watershed, with a daily Nash-Sutcliffe efficiency (NSE) of 0.85 at the watershed outlet and NSE values ranging from 0.56 to 0.82 at five other locations within the watershed. Based on good hydrological prediction, we applied the model to predict P loads using detailed P inputs that change over time due to the implementation of watershed protection programs.
Results from P model predictions provide improved projections of P loads and form a basis for evaluating the cumulative and individual effects of watershed protection programs.
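    The Nash-Sutcliffe efficiency quoted in this record compares model error against the variance of the observations: NSE = 1 is a perfect fit, and NSE = 0 means the model predicts no better than the observed mean. A minimal sketch of the standard definition:

```python
import numpy as np

def nse(observed, simulated):
    """Nash-Sutcliffe efficiency: 1 - SSE / variance of the observations."""
    obs = np.asarray(observed, dtype=float)
    sim = np.asarray(simulated, dtype=float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)
```

    By this measure, the daily value of 0.85 reported above means SWAT-HS removed 85% of the squared-error variance relative to simply predicting mean streamflow.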

  9. Vertical profiles of ozone, carbon monoxide, and dew-point temperature obtained during GTE/CITE 1, October-November 1983. [Chemical Instrumentation Test and Evaluation]

    NASA Technical Reports Server (NTRS)

    Fishman, Jack; Gregory, Gerald L.; Sachse, Glen W.; Beck, Sherwin M.; Hill, Gerald F.

    1987-01-01

    A set of 14 pairs of vertical profiles of ozone and carbon monoxide, obtained with fast-response instrumentation, is presented. Most of these profiles, which were measured in the remote troposphere, also have supporting fast-response dew-point temperature profiles. The data suggest that the continental boundary layer is a source of tropospheric ozone, even in October and November, when photochemical activity should be rather small. In general, the small-scale vertical variations of CO and O3 are in phase. At low latitudes this relationship defines levels in the atmosphere where midlatitude air is being transported to lower latitudes, since lower dew-point temperatures accompany these higher CO and O3 concentrations. A set of profiles which is suggestive of interhemispheric transport is also presented. Independent meteorological analyses support these interpretations.

  10. The Chandra Source Catalog: Source Variability

    NASA Astrophysics Data System (ADS)

    Nowak, Michael; Rots, A. H.; McCollough, M. L.; Primini, F. A.; Glotfelty, K. J.; Bonaventura, N. R.; Chen, J. C.; Davis, J. E.; Doe, S. M.; Evans, J. D.; Fabbiano, G.; Galle, E.; Gibbs, D. G.; Grier, J. D.; Hain, R.; Hall, D. M.; Harbo, P. N.; He, X.; Houck, J. C.; Karovska, M.; Lauer, J.; McDowell, J. C.; Miller, J. B.; Mitschang, A. W.; Morgan, D. L.; Nichols, J. S.; Plummer, D. A.; Refsdal, B. L.; Siemiginowska, A. L.; Sundheim, B. A.; Tibbetts, M. S.; Van Stone, D. W.; Winkelman, S. L.; Zografou, P.

    2009-01-01

    The Chandra Source Catalog (CSC) contains fields of view that have been studied with individual, uninterrupted observations that span integration times ranging from 1 ksec to 160 ksec, a large number of which have received (multiple) repeat observations days to years later. The CSC thus offers an unprecedented look at the variability of the X-ray sky over a broad range of time scales, and across a wide diversity of variable X-ray sources: stars in the local galactic neighborhood, galactic and extragalactic X-ray binaries, Active Galactic Nuclei, etc. Here we describe the methods used to identify and quantify source variability within a single observation, and the methods used to assess the variability of a source when detected in multiple, individual observations. Three tests are used to detect source variability within a single observation: the Kolmogorov-Smirnov test and its variant, the Kuiper test, and a Bayesian approach originally suggested by Gregory and Loredo. The latter test not only provides an indicator of variability, but is also used to create a best estimate of the variable lightcurve shape. We assess the performance of these tests via simulation of statistically stationary, variable processes with arbitrary input power spectral densities (here we concentrate on results of red noise simulations) at a variety of mean count rates and fractional root mean square variabilities relevant to CSC sources. We also assess the false positive rate via simulations of constant sources whose sole source of fluctuation is Poisson noise. We compare these simulations to a preliminary assessment of the variability found in real CSC sources, and estimate the variability sensitivities of the CSC.
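    Of the three single-observation tests named above, the Kolmogorov-Smirnov test is the simplest to sketch: for a constant source, photon arrival times should be uniformly distributed over the exposure, so the empirical CDF of arrival times is compared against a uniform CDF. A toy check on synthetic (not CSC) arrival times:

```python
import numpy as np
from scipy.stats import kstest

# Synthetic, perfectly regular arrival times over a 1000 s exposure:
# a constant source, which the KS test should NOT flag as variable.
t = np.linspace(0.0, 1000.0, 201)

# Rescale the exposure window to [0, 1] and compare to the uniform CDF.
u = (t - t[0]) / (t[-1] - t[0])
stat, p = kstest(u, "uniform")
# Small statistic / large p-value: consistent with a constant source.
```

    A genuinely variable light curve (e.g., a flare concentrated in part of the exposure) would skew the empirical CDF and drive the p-value down; the Kuiper variant is applied the same way but is more sensitive to variations near the start and end of the exposure.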

  11. The Chandra Source Catalog: Source Variability

    NASA Astrophysics Data System (ADS)

    Nowak, Michael; Rots, A. H.; McCollough, M. L.; Primini, F. A.; Glotfelty, K. J.; Bonaventura, N. R.; Chen, J. C.; Davis, J. E.; Doe, S. M.; Evans, J. D.; Evans, I.; Fabbiano, G.; Galle, E. C.; Gibbs, D. G., II; Grier, J. D.; Hain, R.; Hall, D. M.; Harbo, P. N.; He, X.; Houck, J. C.; Karovska, M.; Lauer, J.; McDowell, J. C.; Miller, J. B.; Mitschang, A. W.; Morgan, D. L.; Nichols, J. S.; Plummer, D. A.; Refsdal, B. L.; Siemiginowska, A. L.; Sundheim, B. A.; Tibbetts, M. S.; van Stone, D. W.; Winkelman, S. L.; Zografou, P.

    2009-09-01

    The Chandra Source Catalog (CSC) contains fields of view that have been studied with individual, uninterrupted observations that span integration times ranging from 1 ksec to 160 ksec, a large number of which have received (multiple) repeat observations days to years later. The CSC thus offers an unprecedented look at the variability of the X-ray sky over a broad range of time scales, and across a wide diversity of variable X-ray sources: stars in the local galactic neighborhood, galactic and extragalactic X-ray binaries, Active Galactic Nuclei, etc. Here we describe the methods used to identify and quantify source variability within a single observation, and the methods used to assess the variability of a source when detected in multiple, individual observations. Three tests are used to detect source variability within a single observation: the Kolmogorov-Smirnov test and its variant, the Kuiper test, and a Bayesian approach originally suggested by Gregory and Loredo. The latter test not only provides an indicator of variability, but is also used to create a best estimate of the variable lightcurve shape. We assess the performance of these tests via simulation of statistically stationary, variable processes with arbitrary input power spectral densities (here we concentrate on results of red noise simulations) at a variety of mean count rates and fractional root mean square variabilities relevant to CSC sources. We also assess the false positive rate via simulations of constant sources whose sole source of fluctuation is Poisson noise. We compare these simulations to an assessment of the variability found in real CSC sources, and estimate the variability sensitivities of the CSC.

  12. [A landscape ecological approach for urban non-point source pollution control].

    PubMed

    Guo, Qinghai; Ma, Keming; Zhao, Jingzhu; Yang, Liu; Yin, Chengqing

    2005-05-01

    Urban non-point source pollution is a new problem that has emerged with the accelerating pace of urbanization. The particular character of urban land use and the increase in impervious surface area make urban non-point source pollution differ from agricultural non-point source pollution, and more difficult to control. Best Management Practices (BMPs) are the effective practices commonly applied to control urban non-point source pollution, mainly through local remediation measures that treat pollutants in surface runoff. Because of the close relationship between urban land-use patterns and non-point source pollution, it is rational to combine landscape ecological planning with local BMPs to control urban non-point source pollution. This requires, first, analyzing and evaluating the influence of landscape structure on water bodies, pollution sources, and pollutant-removal processes, so as to define the relationships between landscape spatial pattern and non-point source pollution and to identify the key polluted areas; and second, adjusting existing landscape structures or adding new landscape elements to form new landscape patterns, integrating BMPs into landscape planning and management to improve urban landscape heterogeneity and control urban non-point source pollution.

  13. Understanding extreme quasar optical variability with CRTS - I. Major AGN flares

    NASA Astrophysics Data System (ADS)

    Graham, Matthew J.; Djorgovski, S. G.; Drake, Andrew J.; Stern, Daniel; Mahabal, Ashish A.; Glikman, Eilat; Larson, Steve; Christensen, Eric

    2017-10-01

    There is a large degree of variety in the optical variability of quasars and it is unclear whether this is all attributable to a single (set of) physical mechanism(s). We present the results of a systematic search for major flares in active galactic nuclei (AGN) in the Catalina Real-time Transient Survey as part of a broader study into extreme quasar variability. Such flares are defined in a quantitative manner as excursions atop the normal, stochastic variability of quasars. We have identified 51 events from over 900 000 known quasars and high-probability quasar candidates, typically lasting 900 d and with a median peak amplitude of Δm = 1.25 mag. Characterizing the flare profile with a Weibull distribution, we find that nine of the sources are well described by a single-point single-lens model. This supports the proposal by Lawrence et al. that microlensing is a plausible physical mechanism for extreme variability. However, we attribute the majority of our events to explosive stellar-related activity in the accretion disc: superluminous supernovae, tidal disruption events and mergers of stellar mass black holes.

  14. Laser plasma x-ray source for ultrafast time-resolved x-ray absorption spectroscopy

    DOE PAGES

    Miaja-Avila, L.; O'Neil, G. C.; Uhlig, J.; ...

    2015-03-02

    We describe a laser-driven x-ray plasma source designed for ultrafast x-ray absorption spectroscopy. The source comprises a 1 kHz, 20 W, femtosecond pulsed infrared laser and a water target. We present the x-ray spectra as a function of laser energy and pulse duration. Additionally, we investigate the plasma temperature and photon flux as we vary the laser energy. We obtain a 75 μm FWHM x-ray spot size, containing ~10^6 photons/s, by focusing the produced x-rays with a polycapillary optic. Since the acquisition of x-ray absorption spectra requires the averaging of measurements from >10^7 laser pulses, we also present data on the source stability, including single pulse measurements of the x-ray yield and the x-ray spectral shape. In single pulse measurements, the x-ray flux has a measured standard deviation of 8%, where the laser pointing is the main cause of variability. Further, we show that the variability in x-ray spectral shape from single pulses is low, thus justifying the combining of x-rays obtained from different laser pulses into a single spectrum. Finally, we show a static x-ray absorption spectrum of a ferrioxalate solution as detected by a microcalorimeter array. Altogether, our results demonstrate that this water-jet based plasma source is a suitable candidate for laboratory-based time-resolved x-ray absorption spectroscopy experiments.

  15. Effectiveness of SWAT in characterizing the watershed hydrology in the snowy-mountainous Lower Bear Malad River (LBMR) watershed in Box Elder County, Utah

    NASA Astrophysics Data System (ADS)

    Salha, A. A.; Stevens, D. K.

    2015-12-01

    Distributed watershed models are essential for quantifying sediment and nutrient loads that originate from point and nonpoint sources. Such models are a primary means of generating pollutant estimates in ungaged watersheds and respond well at watershed scales by capturing the variability in soils, climatic conditions, land uses/covers and management conditions over extended periods of time. This effort evaluates the performance of the Soil and Water Assessment Tool (SWAT) model as a watershed-level tool to investigate, manage, and characterize the transport and fate of nutrients in the Lower Bear Malad River (LBMR) watershed (Subbasin HUC 16010204) in Utah. Water quality concerns have been documented and are primarily attributed to high phosphorus and total suspended sediment concentrations caused by agricultural and farming practices along with identified point sources (WWTPs). Input data such as a Digital Elevation Model (DEM), land use/land cover (LULC), soils, and climate data for 10 years (2000-2010) are used to quantify the LBMR streamflow. Such modeling is useful in developing the required water quality regulations such as Total Maximum Daily Loads (TMDL). Measured concentrations of nutrients were closely captured by simulated monthly nutrient concentrations based on the R2 and Nash-Sutcliffe fitness criteria. The model is expected to be able to identify contaminant non-point sources, identify areas of high pollution risk, locate optimal monitoring sites, and evaluate best management practices to cost-effectively reduce pollution and improve water quality as required by the LBMR watershed's TMDL.

  16. Environmental factors contributing to the accumulation of E. coli in the foreshore sand and porewater at freshwater beaches

    NASA Astrophysics Data System (ADS)

    Vogel, L. J.; Robinson, C. E.; Edge, T.; O'Carroll, D. M.

    2015-12-01

    E. coli concentrations in the foreshore sand and porewater (herein referred to as the foreshore reservoir) at beaches are often elevated relative to adjacent surface waters. There is limited understanding of the factors controlling the delivery and accumulation of E. coli in this reservoir. Understanding the buildup of E. coli, and related microbes, in the foreshore reservoir is important as it can act as a non-point source to surface waters and contribute a significant health risk to beach goers. Possible sources that contribute to high levels of E. coli in the foreshore reservoir include infiltration of lake water through wave runup, direct deposition of fecal sources (e.g. bird droppings), and shallow groundwater flow from inland sources (e.g. septic systems). The accumulation of E. coli in the foreshore reservoir is complex due to the dynamic interactions between the foreshore sand and porewater, and shallow waters. The objective of this study was to quantify the temporal variability of E. coli concentrations in the foreshore sand and porewater at freshwater beaches and to identify the environmental factors (e.g. temperature, rainfall, wind and wave conditions) controlling this variability. The temporal variability in E. coli concentrations in the foreshore reservoir was characterized by collecting samples (surface water, porewater, saturated and unsaturated foreshore sand) approximately once a week at three beaches along the Great Lakes from May-October 2014 and 2015. These beaches had different sand types ranging from fine to coarse. More frequent sampling was also conducted in July-August 2015, with samples collected daily over a 40 day period at one beach. The data were analyzed to determine the relationships between E. coli concentrations and environmental variables as well as changes in sand level profiles and groundwater level fluctuations. Insight into how and why E. coli accumulates in the foreshore reservoir is essential to develop effective strategies to reduce E. coli levels at beaches and to enable better prediction of beach water quality.

  17. Ideal engine durations for gamma-ray-burst-jet launch

    NASA Astrophysics Data System (ADS)

    Hamidani, Hamid; Takahashi, Koh; Umeda, Hideyuki; Okita, Shinpei

    2017-08-01

    Aiming to study gamma-ray-burst (GRB) engine duration, we present numerical simulations to investigate collapsar jets. We consider typical explosion energy (1052 erg) but different engine durations, in the widest domain to date from 0.1 to 100 s. We employ an adaptive mesh refinement 2D hydrodynamical code. Our results show that engine duration strongly influences jet nature. We show that the efficiency of launching and collimating relativistic outflow increases with engine duration up to an intermediate engine range, where it is highest; past this point, toward the long-engine range, the trend slightly reverses. We call this point where acceleration and collimation are highest the 'sweet spot' (˜10-30 s). Moreover, jet energy flux shows that variability is also high in this duration domain. We argue that not all engine durations can produce the collimated, relativistic and variable long GRB jets. Considering a typical progenitor and engine energy, we conclude that the ideal engine duration to reproduce a long GRB is ˜10-30 s, where the launch of relativistic, collimated and variable jets is favoured. We note that this duration domain makes a good link with a previous study suggesting that the bulk of the Burst and Transient Source Experiment's long GRBs is powered by ˜10-20 s collapsar engines.

  18. Through the Ring of Fire: Gamma-Ray Variability in Blazars by a Moving Plasmoid Passing a Local Source of Seed Photons

    NASA Astrophysics Data System (ADS)

    MacDonald, Nicholas R.; Marscher, Alan P.; Jorstad, Svetlana G.; Joshi, Manasvita

    2015-05-01

    Blazars exhibit flares across the electromagnetic spectrum. Many γ-ray flares are highly correlated with flares detected at optical wavelengths; however, a small subset appears to occur in isolation, with little or no variability detected at longer wavelengths. These “orphan” γ-ray flares challenge current models of blazar variability, most of which are unable to reproduce this type of behavior. We present numerical calculations of the time-variable emission of a blazar based on a proposal by Marscher et al. to explain such events. In this model, a plasmoid (“blob”) propagates relativistically along the spine of a blazar jet and passes through a synchrotron-emitting ring of electrons representing a shocked portion of the jet sheath. This ring supplies a source of seed photons that are inverse-Compton scattered by the electrons in the moving blob. The model includes the effects of radiative cooling, a spatially varying magnetic field, and acceleration of the blob's bulk velocity. Synthetic light curves produced by our model are compared to the observed light curves from an orphan flare that was coincident with the passage of a superluminal knot through the inner jet of the blazar PKS 1510-089. In addition, we present Very Long Baseline Array polarimetric observations that point to the existence of a jet sheath in PKS 1510-089, thus providing further observational support for the plausibility of our model. An estimate of the bolometric luminosity of the sheath within PKS 1510-089 is made, yielding L_sh ≈ 3 × 10^45 erg s^-1. This indicates that the sheath within PKS 1510-089 is potentially a very important source of seed photons.

  19. Discovery of variable VHE γ-ray emission from the binary system 1FGL J1018.6-5856

    NASA Astrophysics Data System (ADS)

    H. E. S. S. Collaboration; Abramowski, A.; Aharonian, F.; Ait Benkhali, F.; Akhperjanian, A. G.; Angüner, E. O.; Backes, M.; Balzer, A.; Becherini, Y.; Becker Tjus, J.; Berge, D.; Bernhard, S.; Bernlöhr, K.; Birsin, E.; Blackwell, R.; Böttcher, M.; Boisson, C.; Bolmont, J.; Bordas, P.; Bregeon, J.; Brun, F.; Brun, P.; Bryan, M.; Bulik, T.; Carr, J.; Casanova, S.; Chakraborty, N.; Chalme-Calvet, R.; Chaves, R. C. G.; Chen, A.; Chrétien, M.; Colafrancesco, S.; Cologna, G.; Conrad, J.; Couturier, C.; Cui, Y.; Davids, I. D.; Degrange, B.; Deil, C.; deWilt, P.; Djannati-Ataï, A.; Domainko, W.; Donath, A.; O'C. Drury, L.; Dubus, G.; Dutson, K.; Dyks, J.; Dyrda, M.; Edwards, T.; Egberts, K.; Eger, P.; Ernenwein, J.-P.; Espigat, P.; Farnier, C.; Fegan, S.; Feinstein, F.; Fernandes, M. V.; Fernandez, D.; Fiasson, A.; Fontaine, G.; Förster, A.; Füßling, M.; Gabici, S.; Gajdus, M.; Gallant, Y. A.; Garrigoux, T.; Giavitto, G.; Giebels, B.; Glicenstein, J. F.; Gottschall, D.; Goyal, A.; Grondin, M.-H.; Grudzińska, M.; Hadasch, D.; Häffner, S.; Hahn, J.; Hawkes, J.; Heinzelmann, G.; Henri, G.; Hermann, G.; Hervet, O.; Hillert, A.; Hinton, J. A.; Hofmann, W.; Hofverberg, P.; Hoischen, C.; Holler, M.; Horns, D.; Ivascenko, A.; Jacholkowska, A.; Jahn, C.; Jamrozy, M.; Janiak, M.; Jankowsky, F.; Jung-Richardt, I.; Kastendieck, M. A.; Katarzyński, K.; Katz, U.; Kerszberg, D.; Khélifi, B.; Kieffer, M.; Klepser, S.; Klochkov, D.; Kluźniak, W.; Kolitzus, D.; Komin, Nu.; Kosack, K.; Krakau, S.; Krayzel, F.; Krüger, P. P.; Laffon, H.; Lamanna, G.; Lau, J.; Lefaucheur, J.; Lefranc, V.; Lemière, A.; Lemoine-Goumard, M.; Lenain, J.-P.; Lohse, T.; Lopatin, A.; Lu, C.-C.; Lui, R.; Marandon, V.; Marcowith, A.; Mariaud, C.; Marx, R.; Maurin, G.; Maxted, N.; Mayer, M.; Meintjes, P. J.; Menzler, U.; Meyer, M.; Mitchell, A. M. 
W.; Moderski, R.; Mohamed, M.; Morå, K.; Moulin, E.; Murach, T.; de Naurois, M.; Niemiec, J.; Oakes, L.; Odaka, H.; Öttl, S.; Ohm, S.; de Oña Wilhelmi, E.; Opitz, B.; Ostrowski, M.; Oya, I.; Panter, M.; Parsons, R. D.; Arribas, M. Paz; Pekeur, N. W.; Pelletier, G.; Petrucci, P.-O.; Peyaud, B.; Pita, S.; Poon, H.; Prokoph, H.; Pühlhofer, G.; Punch, M.; Quirrenbach, A.; Raab, S.; Reichardt, I.; Reimer, A.; Reimer, O.; Renaud, M.; de los Reyes, R.; Rieger, F.; Romoli, C.; Rosier-Lees, S.; Rowell, G.; Rudak, B.; Rulten, C. B.; Sahakian, V.; Salek, D.; Sanchez, D. A.; Santangelo, A.; Sasaki, M.; Schlickeiser, R.; Schüssler, F.; Schulz, A.; Schwanke, U.; Schwemmer, S.; Seyffert, A. S.; Simoni, R.; Sol, H.; Spanier, F.; Spengler, G.; Spies, F.; Stawarz, Ł.; Steenkamp, R.; Stegmann, C.; Stinzing, F.; Stycz, K.; Sushch, I.; Tavernet, J.-P.; Tavernier, T.; Taylor, A. M.; Terrier, R.; Tluczykont, M.; Trichard, C.; Valerius, K.; van der Walt, J.; van Eldik, C.; van Soelen, B.; Vasileiadis, G.; Veh, J.; Venter, C.; Viana, A.; Vincent, P.; Vink, J.; Voisin, F.; Völk, H. J.; Vuillaume, T.; Wagner, S. J.; Wagner, P.; Wagner, R. M.; Weidinger, M.; Weitzel, Q.; White, R.; Wierzcholska, A.; Willmann, P.; Wörnlein, A.; Wouters, D.; Yang, R.; Zabalza, V.; Zaborov, D.; Zacharias, M.; Zdziarski, A. A.; Zech, A.; Zefi, F.; Żywucka, N.

    2015-05-01

    Re-observations with the HESS telescope array of the very high-energy (VHE) source HESS J1018-589 A that is coincident with the Fermi-LAT γ-ray binary 1FGL J1018.6-5856 have resulted in a source detection significance of more than 9σ and the detection of variability (χ2/ν of 238.3/155) in the emitted γ-ray flux. This variability confirms the association of HESS J1018-589 A with the high-energy γ-ray binary detected by Fermi-LAT and also confirms the point-like source as a new VHE binary system. The spectrum of HESS J1018-589 A is best fit with a power-law function with photon index Γ = 2.20 ± 0.14stat ± 0.2sys. Emission is detected up to ~20 TeV. The mean differential flux level is (2.9 ± 0.4) × 10-13 TeV-1 cm-2 s-1 at 1 TeV, equivalent to ~1% of the flux from the Crab Nebula at the same energy. Variability is clearly detected in the night-by-night light curve. When folded on the orbital period of 16.58 days, the rebinned light curve peaks in phase with the observed X-ray and high-energy phaseograms. The fit of the HESS phaseogram to a constant flux provides evidence of periodicity at the level of Nσ > 3σ. The shape of the VHE phaseogram and measured spectrum suggest a low-inclination, low-eccentricity system with a modest impact from VHE γ-ray absorption due to pair production (τ ≲ 1 at 300 GeV).

  20. Prediction of biological integrity based on environmental similarity--revealing the scale-dependent link between study area and top environmental predictors.

    PubMed

    Bedoya, David; Manolakos, Elias S; Novotny, Vladimir

    2011-03-01

    Indices of Biological Integrity (IBI) are considered valid indicators of the overall health of a water body because the biological community is an endpoint within natural systems. However, prediction of biological integrity using information from multi-parameter environmental observations is a challenging problem due to the hierarchical organization of the natural environment, the existence of nonlinear inter-dependencies among variables, as well as natural stochasticity and measurement noise. We present a method for predicting the Fish Index of Biological Integrity (IBI) using multiple environmental observations at the state-scale in Ohio. Instream (chemical and physical quality) and offstream parameters (regional and local upstream land uses, stream fragmentation, and point source density and intensity) are used for this purpose. The IBI predictions are obtained using the environmental site-similarity concept and following a simple-to-implement leave-one-out cross-validation approach. An IBI prediction for a sampling site is calculated by averaging the observed IBI scores of observations clustered in the most similar branch of a dendrogram--a hierarchical clustering tree of environmental observations--built using the rest of the observations. The standardized Euclidean distance is used to assess dissimilarity between observations. The constructed predictive model was able to explain 61% of the IBI variability statewide. Stream fragmentation and regional land use explained 60% of the variability; the remaining 1% was explained by instream habitat quality. Metrics related to local land use, water quality, and point source density and intensity did not improve the predictive model at the state-scale. The impact of local environmental conditions was evaluated by comparing local characteristics between well- and mispredicted sites. Significant differences in local land use patterns and upstream fragmentation density explained some of the model's over-predictions.
Local land use conditions explained some of the model's IBI under-predictions at the state-scale since none of the variables within this group were included in the best final predictive model. Under-predicted sites also had higher levels of downstream fragmentation. The proposed variable-ranking and predictive-modeling methodology is very well suited for the analysis of hierarchical environments, such as natural fresh water systems, with many cross-correlated environmental variables. It is computationally efficient, can be fully automated, does not make any pre-conceived assumptions about the variables' interdependency structure (such as linearity), and it is able to rank variables in a database and generate IBI predictions using only non-parametric, easy-to-implement hierarchical clustering. Copyright © 2011 Elsevier Ltd. All rights reserved.
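    The prediction scheme described above can be caricatured in a few lines: standardize each environmental variable, find the most similar sites under standardized Euclidean distance while leaving the target site out, and average their observed IBI scores. A simplified stand-in (k nearest sites instead of a dendrogram branch; all data hypothetical):

```python
import numpy as np

def loo_predict(features, scores, k=3):
    """Leave-one-out prediction: average the scores of the k most similar
    sites under standardized Euclidean distance. This k-NN averaging is a
    simplified stand-in for the dendrogram-branch averaging in the study."""
    X = np.asarray(features, dtype=float)
    y = np.asarray(scores, dtype=float)
    s = X.std(axis=0, ddof=1)          # per-variable scale for standardization
    preds = []
    for i in range(len(X)):
        d = np.sqrt((((X - X[i]) / s) ** 2).sum(axis=1))
        d[i] = np.inf                   # leave the target site out
        nearest = np.argsort(d)[:k]
        preds.append(y[nearest].mean())
    return np.array(preds)
```

    Standardizing by each variable's spread keeps large-ranged variables (e.g. fragmentation counts) from dominating the distance, which is the point of using standardized rather than raw Euclidean distance.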

  1. Using a topographic index to distribute variable source area runoff predicted with the SCS curve-number equation

    NASA Astrophysics Data System (ADS)

    Lyon, Steve W.; Walter, M. Todd; Gérard-Marchant, Pierre; Steenhuis, Tammo S.

    2004-10-01

    Because the traditional Soil Conservation Service curve-number (SCS-CN) approach continues to be used ubiquitously in water quality models, new application methods are needed that are consistent with variable source area (VSA) hydrological processes in the landscape. We developed and tested a distributed approach for applying the traditional SCS-CN equation to watersheds where VSA hydrology is a dominant process. Predicting the location of source areas is important for watershed planning because restricting potentially polluting activities from runoff source areas is fundamental to controlling non-point-source pollution. The method presented here used the traditional SCS-CN approach to predict runoff volume and spatial extent of saturated areas and a topographic index, like that used in TOPMODEL, to distribute runoff source areas through watersheds. The resulting distributed CN-VSA method was applied to two subwatersheds of the Delaware basin in the Catskill Mountains region of New York State and one watershed in south-eastern Australia to produce runoff-probability maps. Observed saturated-area locations in the watersheds agreed with those predicted by the distributed CN-VSA method. Results showed good agreement with those obtained from the previously validated soil moisture routing (SMR) model. When compared with the traditional SCS-CN method, the distributed CN-VSA method predicted a similar total volume of runoff, but vastly different locations of runoff generation. Thus, the distributed CN-VSA approach provides a physically based method that is simple enough to be incorporated into water quality models, and other tools that currently use the traditional SCS-CN method, while still adhering to the principles of VSA hydrology.
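    For reference, the traditional SCS-CN runoff equation that the distributed CN-VSA method reuses, in its US-customary form with the standard initial-abstraction convention Ia = 0.2S:

```python
def scs_runoff(p_in, cn):
    """SCS curve-number runoff depth (inches) for storm rainfall p_in (inches).

    S is the potential maximum retention, derived from the curve number CN;
    the conventional initial abstraction is Ia = 0.2 * S.
    """
    s = 1000.0 / cn - 10.0
    ia = 0.2 * s
    if p_in <= ia:
        return 0.0                      # all rainfall abstracted, no runoff
    return (p_in - ia) ** 2 / (p_in - ia + s)
```

    The distributed CN-VSA method keeps this equation for total runoff volume but uses a topographic wetness index to decide where in the watershed that runoff is generated, rather than assuming it is produced uniformly.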

  2. Aquatic exposures of chemical mixtures in urban environments: Approaches to impact assessment.

    PubMed

    de Zwart, Dick; Adams, William; Galay Burgos, Malyka; Hollender, Juliane; Junghans, Marion; Merrington, Graham; Muir, Derek; Parkerton, Thomas; De Schamphelaere, Karel A C; Whale, Graham; Williams, Richard

    2018-03-01

Urban regions of the world are expanding rapidly, placing additional stress on water resources. Urban water bodies serve many purposes, from washing and sources of drinking water to transport and conduits for storm drainage and effluent discharge. These water bodies receive chemical emissions arising from single or multiple point sources and from diffuse sources, which can be continuous, intermittent, or seasonal. Thus, aquatic organisms in these water bodies are exposed to temporally and compositionally variable mixtures. We have delineated source-specific signatures of these mixtures for diffuse urban runoff and urban point source exposure scenarios to support risk assessment and management of these mixtures. The first step in a tiered approach to assessing chemical exposure has been developed based on the event mean concentration concept, with chemical concentrations in runoff defined by the volumes of water leaving each surface and the chemical exposure mixture profiles for different urban scenarios. Although generalizations can be made about the chemical composition of urban sources and event mean exposure predictions for initial prioritization, such modeling needs to be complemented with biological monitoring data. It is highly unlikely that the current paradigm of routine regulatory chemical monitoring alone will provide a realistic appraisal of urban aquatic chemical mixture exposures. Future consideration is also needed of the role of nonchemical stressors in such highly modified urban water bodies. Environ Toxicol Chem 2018;37:703-714. © 2017 The Authors. Environmental Toxicology and Chemistry published by Wiley Periodicals, Inc. on behalf of SETAC.
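    The event mean concentration (EMC) at the core of the tiered approach is simply the flow-weighted average concentration over an event, i.e., total load divided by total runoff volume. A minimal sketch, with hypothetical surface types and concentration values (not figures from the study):

```python
def event_mean_concentration(flows, concs):
    """Flow-weighted mean concentration over paired samples:
    EMC = total load / total volume."""
    load = sum(q * c for q, c in zip(flows, concs))
    volume = sum(flows)
    return load / volume

def mixture_emc(surfaces):
    """Combine per-surface runoff volumes (m^3) and concentrations
    (mg/L) into a site-level EMC; all values are illustrative."""
    total_load = sum(v * c for v, c in surfaces.values())
    total_vol = sum(v for v, _ in surfaces.values())
    return total_load / total_vol

surfaces = {          # hypothetical urban catchment
    "roofs": (120.0, 0.05),   # (runoff volume, tracer concentration)
    "roads": (300.0, 0.40),
    "lawns": (80.0, 0.02),
}
site_emc = mixture_emc(surfaces)
```

    Repeating this per chemical yields the source-specific mixture signature for a scenario, which is the sense in which runoff volumes per surface define the exposure profile.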

  3. High-resolution grids of hourly meteorological variables for Germany

    NASA Astrophysics Data System (ADS)

    Krähenmann, S.; Walter, A.; Brienen, S.; Imbery, F.; Matzarakis, A.

    2018-02-01

We present a 1-km² gridded German dataset of hourly surface climate variables covering the period 1995 to 2012. The dataset comprises 12 variables including temperature, dew point, cloud cover, wind speed and direction, global and direct shortwave radiation, down- and up-welling longwave radiation, sea level pressure, relative humidity and vapour pressure. This dataset was constructed statistically from station data, satellite observations and model data. It is outstanding in terms of spatial and temporal resolution and in the number of climate variables. For each variable, we employed the most suitable gridding method and combined the best of several information sources, including station records, satellite-derived data and data from a regional climate model. A module to estimate urban heat island intensity was integrated for air and dew point temperature. Owing to the low density of available SYNOP stations, the gridded dataset does not capture all variations that may occur at a resolution of 1 km². This applies to areas of complex terrain (all the variables), and in particular to wind speed and the radiation parameters. To achieve maximum precision, we used all observational information when it was available. This, however, leads to inhomogeneities in station network density and affects the long-term consistency of the dataset. A first climate analysis for Germany was conducted. The Rhine River Valley, for example, exhibited more than 100 summer days in 2003, whereas in 1996, the number was low everywhere in Germany. The dataset is useful for various climate-related studies, hazard management, and solar or wind energy applications, and it is available via doi: 10.5676/DWD_CDC/TRY_Basis_v001.

  4. A pseudo-penalized quasi-likelihood approach to the spatial misalignment problem with non-normal data.

    PubMed

    Lopiano, Kenneth K; Young, Linda J; Gotway, Carol A

    2014-09-01

Spatially referenced datasets arising from multiple sources are routinely combined to assess relationships among various outcomes and covariates. The geographical units associated with the data, such as the geographical coordinates or areal-level administrative units, are often spatially misaligned, that is, observed at different locations or aggregated over different geographical units. As a result, the covariate is often predicted at the locations where the response is observed. The method used to align disparate datasets must be accounted for when subsequently modeling the aligned data. Here we consider the case where kriging is used to align datasets in point-to-point and point-to-areal misalignment problems when the response variable is non-normally distributed. If the relationship is modeled using generalized linear models, the additional uncertainty induced from using the kriging mean as a covariate introduces a Berkson error structure. In this article, we develop a pseudo-penalized quasi-likelihood algorithm to account for the additional uncertainty when estimating regression parameters and associated measures of uncertainty. The method is applied to a point-to-point example assessing the relationship between low birth weights and PM2.5 levels after the onset of the largest wildfire in Florida history, the Bugaboo scrub fire. A point-to-areal misalignment problem is presented in which the relationship between asthma events in Florida's counties and PM2.5 levels after the onset of the fire is assessed. Finally, the method is evaluated using a simulation study. Our results indicate that the method performs well in terms of coverage for 95% confidence intervals, whereas naive methods that ignore the additional uncertainty tend to underestimate the variability associated with parameter estimates. The underestimation is most profound in Poisson regression models. © 2014, The International Biometric Society.
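    The Berkson structure mentioned here arises because the true covariate equals the kriging prediction plus an error that is independent of the prediction. A small simulation, illustrative only (not the authors' pseudo-penalized quasi-likelihood), shows the linear-model intuition: the naive slope stays near its true value, but the residuals absorb the prediction error, extra variability that a naive analysis then misattributes:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20000
x_pred = rng.normal(10.0, 2.0, n)           # kriging mean at the response sites
x_true = x_pred + rng.normal(0.0, 1.0, n)   # Berkson: error independent of x_pred
y = 1.0 + 0.5 * x_true + rng.normal(0.0, 0.2, n)

# naive analysis: regress y on the kriging prediction
A = np.column_stack([np.ones(n), x_pred])
beta, *_ = np.linalg.lstsq(A, y, rcond=None)

# the slope stays near the true 0.5 (no attenuation under Berkson error),
# but the residual variance is roughly 0.5**2 * Var(x_true - x_pred) = 0.25
# on top of the true noise variance 0.04
resid_var = float(np.var(y - A @ beta))
```

    In nonlinear models such as Poisson regression the Berkson error additionally biases estimates, which is the case the article's algorithm addresses.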

  5. Inferring Models of Bacterial Dynamics toward Point Sources

    PubMed Central

    Jashnsaz, Hossein; Nguyen, Tyler; Petrache, Horia I.; Pressé, Steve

    2015-01-01

    Experiments have shown that bacteria can be sensitive to small variations in chemoattractant (CA) concentrations. Motivated by these findings, our focus here is on a regime rarely studied in experiments: bacteria tracking point CA sources (such as food patches or even prey). In tracking point sources, the CA detected by bacteria may show very large spatiotemporal fluctuations which vary with distance from the source. We present a general statistical model to describe how bacteria locate point sources of food on the basis of stochastic event detection, rather than CA gradient information. We show how all model parameters can be directly inferred from single cell tracking data even in the limit of high detection noise. Once parameterized, our model recapitulates bacterial behavior around point sources such as the “volcano effect”. In addition, while the search by bacteria for point sources such as prey may appear random, our model identifies key statistical signatures of a targeted search for a point source given any arbitrary source configuration. PMID:26466373

  6. A Method for Identifying Pollution Sources of Heavy Metals and PAH for a Risk-Based Management of a Mediterranean Harbour

    PubMed Central

    Moranda, Arianna

    2017-01-01

A procedure for assessing harbour pollution by heavy metals and PAH and the possible sources of contamination is proposed. The procedure is based on a ratio-matching method applied to the results of principal component analysis (PCA), and it allows discrimination between point and nonpoint sources. The approach can be adopted when many sources of pollution, both inside the harbour and outside but close to it, may contribute within a very narrow coastal ecosystem, and it was used to identify the possible point sources of contamination in a Mediterranean harbour (Port of Vado, Savona, Italy). 235 sediment samples were collected at 81 sampling points during four monitoring campaigns, and 28 chemicals were searched for within the collected samples. PCA of the total samples allowed the assessment of 8 main possible point sources, while the refining ratio-matching step identified 1 sampling point as a possible PAH source, 2 sampling points as Cd point sources, and 3 sampling points as C > 12 point sources. By map analysis it was possible to identify two internal sources of pollution directly related to terminal activities. The study is the continuation of a previous work aimed at assessing Savona-Vado Harbour pollution levels and suggested strategies to regulate the harbour activities. PMID:29270328

  7. A Method for Identifying Pollution Sources of Heavy Metals and PAH for a Risk-Based Management of a Mediterranean Harbour.

    PubMed

    Paladino, Ombretta; Moranda, Arianna; Seyedsalehi, Mahdi

    2017-01-01

A procedure for assessing harbour pollution by heavy metals and PAH and the possible sources of contamination is proposed. The procedure is based on a ratio-matching method applied to the results of principal component analysis (PCA), and it allows discrimination between point and nonpoint sources. The approach can be adopted when many sources of pollution, both inside the harbour and outside but close to it, may contribute within a very narrow coastal ecosystem, and it was used to identify the possible point sources of contamination in a Mediterranean harbour (Port of Vado, Savona, Italy). 235 sediment samples were collected at 81 sampling points during four monitoring campaigns, and 28 chemicals were searched for within the collected samples. PCA of the total samples allowed the assessment of 8 main possible point sources, while the refining ratio-matching step identified 1 sampling point as a possible PAH source, 2 sampling points as Cd point sources, and 3 sampling points as C > 12 point sources. By map analysis it was possible to identify two internal sources of pollution directly related to terminal activities. The study is the continuation of a previous work aimed at assessing Savona-Vado Harbour pollution levels and suggested strategies to regulate the harbour activities.
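    The PCA step can be sketched with a plain SVD on a standardized concentration matrix; the ratio-matching refinement is not reproduced here. The synthetic data below (a hidden point-source strength driving two tracers against an unrelated background metal) are purely illustrative:

```python
import numpy as np

def pca(X, k=2):
    """PCA via SVD on a standardized concentration matrix.
    Rows = sampling points, columns = chemicals."""
    Z = (X - X.mean(axis=0)) / X.std(axis=0)
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)
    scores = U[:, :k] * s[:k]        # sample scores on the leading components
    loadings = Vt[:k].T              # chemical loadings
    explained = s**2 / np.sum(s**2)  # fraction of variance per component
    return scores, loadings, explained[:k]

rng = np.random.default_rng(1)
source = rng.random(50)                      # hidden point-source strength
X = np.column_stack([
    5 * source + rng.normal(0, 0.1, 50),     # PAH-like tracer
    3 * source + rng.normal(0, 0.1, 50),     # Cd-like tracer
    rng.normal(1.0, 0.3, 50),                # unrelated background metal
])
scores, loadings, explained = pca(X, k=2)
```

    Sampling points with extreme scores on a component whose loadings single out a coherent chemical group are the candidates for point sources; the ratio-matching step then checks concentration ratios against candidate source signatures.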

  8. Local point sources that affect ground-water quality in the East Meadow area, Long Island, New York

    USGS Publications Warehouse

    Heisig, Paul M.

    1994-01-01

The extent and chemical characteristics of ground water affected by three local point sources--a stormwater basin, uncovered road-salt-storage piles, and an abandoned sewage-treatment plant--were delineated during a 3-year study of the chemical characteristics and migration of a body of reclaimed wastewater that was applied to the water-table aquifer during recharge experiments from October 1982 through January 1984 in East Meadow. The timing, magnitude, and chemical quality of recharge from these point sources are highly variable, and all sources have the potential to skew determinations of the quality of ambient ground water and of the reclaimed-wastewater plume if they are not taken into account. Ground water affected by recharge from the stormwater basin is characterized by low concentrations of nitrate + nitrite (less than 5 mg/L [milligrams per liter] as N) and sulfate (less than 40 mg/L) and is almost entirely within the upper glacial aquifer. The plume derived from the road-salt piles is narrow, has high concentrations of chloride (greater than 50 mg/L) and sodium (greater than 75 mg/L), and also is limited to the upper glacial aquifer. The sodium, in high concentrations, could react with aquifer material and exchange for sorbed cations such as calcium, potassium, and magnesium. Water affected by secondary-treated sewage from the abandoned treatment plant extends 152 feet below land surface into the upper part of the Magothy aquifer and longitudinally beyond the southern edge of the study area, 7,750 feet south of the recharge site. Ground water affected by secondary-treated sewage within the study area typically contains elevated concentrations of reactive chemical constituents, such as potassium and ammonium, and low concentrations of dissolved oxygen.
Conservative or minimally reactive constituents such as chloride and sodium have been transported out of the study area in the upper glacial aquifer and the intermediate (transitional) zone but remain in the less permeable upper part of the Magothy aquifer. Identification of the three point sources and delineation of their areas of influence improved definition of ambient ground-water quality and delineation of the reclaimed-wastewater plume.

  9. The Origin of Soft X-rays in DQ Herculis

    NASA Technical Reports Server (NTRS)

    White, Nicholas E. (Technical Monitor); Mukai, K.; Still, M.; Ringwald, F. A.

    2002-01-01

    DQ Herculis (Nova Herculis 1934) is a deeply eclipsing cataclysmic variable containing a magnetic white dwarf primary. The accretion disk is thought to block our line of sight to the white dwarf at all orbital phases due to its extreme inclination angle. Nevertheless, soft X-rays were detected from DQ Her with ROSAT PSPC. To probe the origin of these soft X-rays, we have performed Chandra ACIS observations. We confirm that DQ Her is an X-ray source. The bulk of the X-rays are from a point-like source and exhibit a shallow partial eclipse. We interpret this as due to scattering of the unseen central X-ray source, probably in an accretion disk wind. At the same time, we detect weak extended X-ray features around DQ Her, which we interpret as an X-ray emitting knot in the nova shell.

  10. The Ionosphere's Pocket Litter: Exploiting Crowd-Sourced Observations

    NASA Astrophysics Data System (ADS)

    Miller, E. S.; Frissell, N. A.; Kaeppler, S. R.; Demajistre, R.; Knuth, A. A.

    2015-12-01

One of the biggest challenges faced in developing and testing our understanding of the ionosphere is acquiring data that characterize the latitudinal and longitudinal variability of the ionosphere. While there are extensive networks of ground sites that sample the vertical distribution, we have rather poor coverage over the oceans and in parts of the southern hemisphere. Our ability to validate ionospheric models is limited by the lack of point measurements and of measurements that characterize horizontal gradients. In this talk, we discuss and demonstrate the use of various types of crowd-sourced information that enable us to extend our coverage over these regions. We will discuss new sources of these data, concepts for new experiments and the use of these data in assimilative models. We note that there are new, low-cost options for obtaining data that broaden participation beyond the aeronomy/ionospheric community.

  11. Direct comparison of linear and macrocyclic compound libraries as a source of protein ligands.

    PubMed

    Gao, Yu; Kodadek, Thomas

    2015-03-09

    There has been much discussion of the potential desirability of macrocyclic molecules for the development of tool compounds and drug leads. But there is little experimental data comparing otherwise equivalent macrocyclic and linear compound libraries as a source of protein ligands. In this Letter, we probe this point in the context of peptoid libraries. Bead-displayed libraries of macrocyclic and linear peptoids containing four variable positions and 0-2 fixed residues, to vary the ring size, were screened against streptavidin and the affinity of every hit for the target was measured. The data show that macrocyclization is advantageous, but only when the ring contains 17 atoms, not 20 or 23 atoms. This technology will be useful for conducting direct comparisons between many different types of chemical libraries to determine their relative utility as a source of protein ligands.

  12. Assessment of the relationship between rural non-point source pollution and economic development in the Three Gorges Reservoir Area.

    PubMed

    Zhang, Tong; Ni, Jiupai; Xie, Deti

    2016-04-01

This study investigates the relationship between rural non-point source (NPS) pollution and economic development in the Three Gorges Reservoir Area (TGRA) by using the Environmental Kuznets Curve (EKC) hypothesis for the first time. Five types of pollution indicators, namely, fertilizer input density (FD), pesticide input density (PD), agricultural film input density (AD), grain residues impact (GI), and livestock manure impact (MI), were selected as rural NPS pollutant variables. Rural net income per capita was used as the indicator of economic development. The relationship between the pollution load generated by agricultural inputs (consumption of fertilizer, pesticide, and agricultural film) and economic growth exhibited inverted U-shaped features. The predicted turning points for FD, PD, and AD were at rural net income per capita levels of 6167.64, 6205.02, and 4955.29 CNY, respectively, all of which have been surpassed. However, the relationships between agricultural waste outputs (grain residues and livestock manure) and economic growth were inconsistent with the EKC hypothesis, which reflects current trends in the agricultural economic structure of the TGRA. Given that several factors other than the level of economic development could influence pollutant generation in rural NPS, a further examination with long-run data support should be performed to understand the relationship between rural NPS pollution and income level.
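    Under the EKC hypothesis the turning point comes from a quadratic fit of pollution on income, with the vertex at -β₁/(2β₂); a negative quadratic coefficient gives the inverted U. A minimal sketch with synthetic data whose peak is placed at 6000 CNY (an assumed value for illustration, not one from the study):

```python
import numpy as np

def ekc_turning_point(income, pollution):
    """Fit pollution = b2*income^2 + b1*income + b0 and return the
    vertex -b1 / (2*b2); an interior maximum supports an inverted U."""
    b2, b1, b0 = np.polyfit(income, pollution, 2)
    if b2 >= 0:
        return None          # no inverted-U shape, EKC not supported
    return -b1 / (2.0 * b2)

# synthetic inverted-U with a known peak at 6000 CNY
income = np.linspace(1000.0, 12000.0, 60)
pollution = 40.0 - ((income - 6000.0) / 1000.0) ** 2

tp = ekc_turning_point(income, pollution)
```

    Comparing the fitted vertex with the observed income range shows whether the turning point has already been surpassed, as the abstract reports for FD, PD, and AD.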

  13. Altered network topology in pediatric traumatic brain injury

    NASA Astrophysics Data System (ADS)

    Dennis, Emily L.; Rashid, Faisal; Babikian, Talin; Mink, Richard; Babbitt, Christopher; Johnson, Jeffrey; Giza, Christopher C.; Asarnow, Robert F.; Thompson, Paul M.

    2017-11-01

    Outcome after a traumatic brain injury (TBI) is quite variable, and this variability is not solely accounted for by severity or demographics. Identifying sub-groups of patients who recover faster or more fully will help researchers and clinicians understand sources of this variability, and hopefully lead to new therapies for patients with a more prolonged recovery profile. We have previously identified two subgroups within the pediatric TBI patient population with different recovery profiles based on an ERP-derived (event-related potential) measure of interhemispheric transfer time (IHTT). Here we examine structural network topology across both patient groups and healthy controls, focusing on the `rich-club' - the core of the network, marked by high degree nodes. These analyses were done at two points post-injury - 2-5 months (post-acute), and 13-19 months (chronic). In the post-acute time-point, we found that the TBI-slow group, those showing longitudinal degeneration, showed hyperconnectivity within the rich-club nodes relative to the healthy controls, at the expense of local connectivity. There were minimal differences between the healthy controls and the TBI-normal group (those patients who show signs of recovery). At the chronic phase, these disruptions were no longer significant, but closer analysis showed that this was likely due to the loss of power from a smaller sample size at the chronic time-point, rather than a sign of recovery. We have previously shown disruptions to white matter (WM) integrity that persist and progress over time in the TBI-slow group, and here we again find differences in the TBI-slow group that fail to resolve over the first year post-injury.
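    Rich-club organization is commonly quantified by the coefficient φ(k), the density of edges among nodes whose degree exceeds k. A minimal sketch on a toy adjacency matrix (the network and threshold are illustrative, not the authors' connectomics pipeline):

```python
import numpy as np

def rich_club_coefficient(adj, k):
    """Density of connections among nodes with degree > k:
    phi(k) = 2 * E_k / (N_k * (N_k - 1))."""
    deg = adj.sum(axis=0)
    rich = np.where(deg > k)[0]
    n = len(rich)
    if n < 2:
        return np.nan                  # coefficient undefined
    sub = adj[np.ix_(rich, rich)]
    edges = sub.sum() / 2.0            # undirected: each edge counted twice
    return 2.0 * edges / (n * (n - 1))

# toy undirected network: a fully connected 3-node core (nodes 0-2)
# plus two peripheral nodes
adj = np.zeros((5, 5))
for i, j in [(0, 1), (0, 2), (1, 2), (2, 3), (3, 4)]:
    adj[i, j] = adj[j, i] = 1

phi = rich_club_coefficient(adj, k=1)
```

    Hyperconnectivity within the rich-club, as reported for the TBI-slow group, corresponds to an elevated φ(k) at high k relative to controls, typically normalized against degree-preserving random networks.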

  14. GRBs as standard candles: There is no “circularity problem” (and there never was)

    NASA Astrophysics Data System (ADS)

    Graziani, Carlo

    2011-02-01

    Beginning with the 2002 discovery of the "Amati Relation" of GRB spectra, there has been much interest in the possibility that this and other correlations of GRB phenomenology might be used to make GRBs into standard candles. One recurring apparent difficulty with this program has been that some of the primary observational quantities to be fit as "data" - to wit, the isotropic-equivalent prompt energy Eiso and the collimation-corrected "total" prompt energy Eγ - depend for their construction on the very cosmological models that they are supposed to help constrain. This is the so-called "circularity problem" of standard candle GRBs. This paper is intended to point out that the circularity problem is not in fact a problem at all, except to the extent that it amounts to a self-inflicted wound. It arises essentially because of an unfortunate choice of data variables - "source-frame" variables such as Eiso, which are unnecessarily encumbered by cosmological considerations. If, instead, the empirical correlations of GRB phenomenology which are formulated in source-variables are mapped to the primitive observational variables (such as fluence) and compared to the observations in that space, then all taint of circularity disappears. I also indicate here a set of procedures for encoding high-dimensional empirical correlations (such as between Eiso, Epk(src),tjet(src), and T45(src)) in a "Gaussian Tube" smeared model that includes both the correlation and its intrinsic scatter, and how that source-variable model may easily be mapped to the space of primitive observables, to be convolved with the measurement errors and fashioned into a likelihood. I discuss the projections of such Gaussian tubes into sub-spaces, which may be used to incorporate data from GRB events that may lack some element of the data (for example, GRBs without ascertained jet-break times). 
In this way, a large set of inhomogeneously observed GRBs may be assimilated into a single analysis, so long as each possesses at least two correlated data attributes.
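    The mapping from a source-frame quantity such as Eiso to a primitive observable (fluence) is S = Eiso(1+z) / (4π d_L²), where the luminosity distance carries all the cosmological dependence; this is the step the abstract argues should absorb the "circularity". A sketch assuming a flat ΛCDM cosmology with illustrative parameters (H0 = 70 km/s/Mpc, Ωm = 0.3; k-correction omitted):

```python
import math

C_KM_S = 299792.458  # speed of light, km/s

def luminosity_distance_cm(z, h0=70.0, om=0.3, n=2000):
    """Luminosity distance (cm) in flat LCDM, by trapezoidal
    integration of 1/E(z); parameters are illustrative."""
    ol = 1.0 - om
    dz = z / n
    integral = 0.0
    for i in range(n + 1):
        zi = i * dz
        e = math.sqrt(om * (1.0 + zi) ** 3 + ol)
        w = 0.5 if i in (0, n) else 1.0
        integral += w * dz / e
    d_mpc = (1.0 + z) * (C_KM_S / h0) * integral
    return d_mpc * 3.0857e24  # Mpc -> cm

def fluence_from_eiso(eiso_erg, z):
    """Observed fluence (erg/cm^2) implied by a source-frame
    isotropic-equivalent energy Eiso at redshift z."""
    dl = luminosity_distance_cm(z)
    return eiso_erg * (1.0 + z) / (4.0 * math.pi * dl ** 2)

s = fluence_from_eiso(1e52, z=1.0)
```

    Applying this mapping to a source-variable correlation model, rather than to the data, keeps the observables cosmology-free and removes the circularity.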

  15. Exposure Estimation and Interpretation of Occupational Risk: Enhanced Information for the Occupational Risk Manager

    PubMed Central

    Waters, Martha; McKernan, Lauralynn; Maier, Andrew; Jayjock, Michael; Schaeffer, Val; Brosseau, Lisa

    2015-01-01

The fundamental goal of this article is to describe, define, and analyze the components of the risk characterization process for occupational exposures. Current methods are described for the probabilistic characterization of exposure, including newer techniques that have increasing applications for assessing data from occupational exposure scenarios. In addition, because the probability of health effects reflects variability in the exposure estimate as well as in the dose-response curve, integrated consideration of the variability surrounding both components of the risk characterization provides greater information to the occupational hygienist. Probabilistic tools provide a more informed view of exposure than discrete point estimates for these inputs to the risk characterization process. Active use of such tools for exposure and risk assessment will lead to a scientifically supported worker health protection program. Understanding the bases for an occupational risk assessment and focusing on important sources of variability and uncertainty enable characterizing occupational risk in terms of a probability, rather than a binary decision of acceptable or unacceptable risk. A critical review of existing methods highlights several conclusions: (1) exposure estimates and the dose-response are affected by both variability and uncertainty, and a well-developed risk characterization reflects and communicates this consideration; (2) occupational risk is probabilistic in nature and most accurately considered as a distribution, not a point estimate; and (3) occupational hygienists have a variety of tools available to incorporate concepts of risk characterization into occupational health practice. PMID:26302336
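    The contrast between a point estimate and a probabilistic exposure characterization can be sketched with a lognormal exposure distribution, using the geometric mean (GM) and geometric standard deviation (GSD) parameterization standard in occupational hygiene; the numerical values are illustrative, not from the article:

```python
import numpy as np

def exceedance_probability(gm, gsd, oel, n=100000, seed=0):
    """Fraction of a lognormal exposure distribution above the OEL,
    estimated by Monte Carlo; GM/GSD parameterization."""
    rng = np.random.default_rng(seed)
    exposures = rng.lognormal(np.log(gm), np.log(gsd), n)
    return float(np.mean(exposures > oel))

# illustrative: geometric mean 0.5 mg/m^3, GSD 2.5, OEL 1.0 mg/m^3
p = exceedance_probability(gm=0.5, gsd=2.5, oel=1.0)
```

    Here the point estimate (the GM) sits comfortably below the OEL, yet roughly a fifth of exposures still exceed it, which is exactly the information a binary acceptable/unacceptable judgment discards.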

  16. An Assessment of the Impact of Hafting on Paleoindian Point Variability

    PubMed Central

    Buchanan, Briggs; O'Brien, Michael J.; Kilby, J. David; Huckell, Bruce B.; Collard, Mark

    2012-01-01

    It has long been argued that the form of North American Paleoindian points was affected by hafting. According to this hypothesis, hafting constrained point bases such that they are less variable than point blades. The results of several studies have been claimed to be consistent with this hypothesis. However, there are reasons to be skeptical of these results. None of the studies employed statistical tests, and all of them focused on points recovered from kill and camp sites, which makes it difficult to be certain that the differences in variability are the result of hafting rather than a consequence of resharpening. Here, we report a study in which we tested the predictions of the hafting hypothesis by statistically comparing the variability of different parts of Clovis points. We controlled for the potentially confounding effects of resharpening by analyzing largely unused points from caches as well as points from kill and camp sites. The results of our analyses were not consistent with the predictions of the hypothesis. We found that several blade characters and point thickness were no more variable than the base characters. Our results indicate that the hafting hypothesis does not hold for Clovis points and indicate that there is a need to test its applicability in relation to post-Clovis Paleoindian points. PMID:22666320

  17. Hydro power flexibility for power systems with variable renewable energy sources: an IEA Task 25 collaboration: Hydro power flexibility for power systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huertas-Hernando, Daniel; Farahmand, Hossein; Holttinen, Hannele

    2016-06-20

Hydro power is one of the most flexible sources of electricity production. Power systems with considerable amounts of flexible hydro power potentially offer easier integration of variable generation, e.g., wind and solar. However, operational constraints exist to ensure mid-/long-term security of supply while keeping river flows and reservoir levels within permitted limits. In order to properly assess the effective available hydro power flexibility and its value for storage, a detailed assessment of hydro power is essential. Due to the inherent uncertainty of the weather-dependent hydrological cycle, regulation constraints on the hydro system, and uncertainty of internal load as well as variable generation (wind and solar), this assessment is complex. Hence, it requires proper modeling of all the underlying interactions between hydro power and the power system, with a large share of other variable renewables. A summary of existing experience of wind integration in hydro-dominated power systems clearly points to the need for rigorous simulation methodologies. Recommendations include requirements for techno-economic models to correctly assess strategies for hydro power and pumped-storage dispatch. These models are based not only on seasonal water inflow variations but also on variable generation, over time horizons from very short term up to multiple years, depending on the studied system. Another important recommendation is to include a geographically detailed description of hydro power systems, river flows, and reservoirs as well as grid topology and congestion.

  18. The Norma arm region Chandra survey catalog: X-ray populations in the spiral arms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fornasini, Francesca M.; Tomsick, John A.; Bodaghee, Arash

    2014-12-01

We present a catalog of 1415 X-ray sources identified in the Norma Arm Region Chandra Survey (NARCS), which covers a 2° × 0.8° region in the direction of the Norma spiral arm to a depth of ≈20 ks. Of these sources, 1130 are point-like sources detected with ≥3σ confidence in at least one of three energy bands (0.5-10, 0.5-2, and 2-10 keV), five have extended emission, and the remainder are detected at low significance. Since most sources have too few counts to permit individual classification, they are divided into five spectral groups defined by their quantile properties. We analyze stacked spectra of X-ray sources within each group, in conjunction with their fluxes, variability, and infrared counterparts, to identify the dominant populations in our survey. We find that ∼50% of our sources are foreground sources located within 1-2 kpc, which is consistent with expectations from previous surveys. Approximately 20% of sources are likely located in the proximity of the Scutum-Crux and near Norma arms, while 30% are more distant, in the proximity of the far Norma arm or beyond. We argue that a mixture of magnetic and nonmagnetic cataclysmic variables dominates the Scutum-Crux and near Norma arms, while intermediate polars and high-mass stars (isolated or in binaries) dominate the far Norma arm. We also present the cumulative number count distribution for sources in our survey that are detected in the hard energy band. A population of very hard sources in the vicinity of the far Norma arm and active galactic nuclei dominate the hard X-ray emission down to f_X ≈ 10⁻¹⁴ erg cm⁻² s⁻¹, but the distribution curve flattens at fainter fluxes. We find good agreement between the observed distribution and predictions based on other surveys.
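    Quantile-based spectral classification replaces hardness ratios with quantiles of the detected photon energies, e.g., the normalized median and a quartile ratio. A minimal sketch with mock photon lists (the 0.5-10 keV band matches the survey; the source spectra and grouping threshold are illustrative assumptions):

```python
import numpy as np

def quantile_properties(energies, e_lo=0.5, e_hi=10.0):
    """Normalized median and quartile ratio of photon energies, in the
    spirit of quantile-based classification of faint X-ray sources."""
    q25, q50, q75 = np.percentile(energies, [25, 50, 75])
    norm = lambda q: (q - e_lo) / (e_hi - e_lo)
    return norm(q50), norm(q25) / norm(q75)

rng = np.random.default_rng(2)
soft = rng.uniform(0.5, 2.0, 500)    # mock soft-source photon energies (keV)
hard = rng.uniform(2.0, 10.0, 500)   # mock hard-source photon energies (keV)

m_soft, _ = quantile_properties(soft)
m_hard, _ = quantile_properties(hard)
```

    Sources cluster in this quantile space even when they have too few counts for spectral fitting, which is what permits grouping them before analyzing stacked spectra.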

  19. The Chandra Source Catalog: Source Properties and Data Products

    NASA Astrophysics Data System (ADS)

    Rots, Arnold; Evans, Ian N.; Glotfelty, Kenny J.; Primini, Francis A.; Zografou, Panagoula; Anderson, Craig S.; Bonaventura, Nina R.; Chen, Judy C.; Davis, John E.; Doe, Stephen M.; Evans, Janet D.; Fabbiano, Giuseppina; Galle, Elizabeth C.; Gibbs, Danny G., II; Grier, John D.; Hain, Roger; Hall, Diane M.; Harbo, Peter N.; He, Xiang Qun (Helen); Houck, John C.; Karovska, Margarita; Kashyap, Vinay L.; Lauer, Jennifer; McCollough, Michael L.; McDowell, Jonathan C.; Miller, Joseph B.; Mitschang, Arik W.; Morgan, Douglas L.; Mossman, Amy E.; Nichols, Joy S.; Nowak, Michael A.; Plummer, David A.; Refsdal, Brian L.; Siemiginowska, Aneta L.; Sundheim, Beth A.; Tibbetts, Michael S.; van Stone, David W.; Winkelman, Sherry L.

    2009-09-01

    The Chandra Source Catalog (CSC) is breaking new ground in several areas. There are two aspects that are of particular interest to the users: its evolution and its contents. The CSC will be a living catalog that becomes richer, bigger, and better in time while still remembering its state at each point in time. This means that users will be able to take full advantage of new additions to the catalog, while retaining the ability to back-track and return to what was extracted in the past. The CSC sheds the limitations of flat-table catalogs. Its sources will be characterized by a large number of properties, as usual, but each source will also be associated with its own specific data products, allowing users to perform mini custom analysis on the sources. Source properties fall in the spatial (position, extent), photometric (fluxes, count rates), spectral (hardness ratios, standard spectral fits), and temporal (variability probabilities) domains, and are all accompanied by error estimates. Data products cover the same coordinate space and include event lists, images, spectra, and light curves. In addition, the catalog contains data products covering complete observations: event lists, background images, exposure maps, etc. This work is supported by NASA contract NAS8-03060 (CXC).

  20. A Systematic Search for Short-term Variability of EGRET Sources

    NASA Technical Reports Server (NTRS)

    Wallace, P. M.; Griffis, N. J.; Bertsch, D. L.; Hartman, R. C.; Thompson, D. J.; Kniffen, D. A.; Bloom, S. D.

    2000-01-01

The 3rd EGRET Catalog of High-energy Gamma-ray Sources contains 170 unidentified sources, and there is great interest in the nature of these sources. One means of determining source class is the study of flux variability on time scales of days; pulsars are believed to be stable on these time scales, while blazars are known to be highly variable. In addition, previous work has demonstrated that 3EG J0241-6103 and 3EG J1837-0606 are candidates for a new gamma-ray source class. These sources near the Galactic plane display transient behavior but cannot be associated with any known blazars. Although many instances of flaring AGN have been reported, the EGRET database has not been systematically searched for occurrences of short-timescale (approximately 1 day) variability. These considerations have led us to conduct a systematic search for short-term variability in EGRET data, covering all viewing periods through proposal cycle 4. Six 3EG catalog sources are reported here to display variability on short time scales; four of them are unidentified. In addition, three non-catalog variable sources are discussed.
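    A standard screen for short-term variability compares daily fluxes with their weighted mean using a χ² statistic; values far above the degrees of freedom flag variable sources. The sketch below uses hypothetical fluxes, not EGRET data:

```python
import numpy as np

def variability_chi2(fluxes, errors):
    """Chi-square of daily fluxes against their weighted mean; values
    well above the degrees of freedom indicate variability."""
    f = np.asarray(fluxes, dtype=float)
    w = 1.0 / np.asarray(errors, dtype=float) ** 2
    mean = np.sum(w * f) / np.sum(w)
    chi2 = float(np.sum(w * (f - mean) ** 2))
    return chi2, len(f) - 1

steady = [10.0, 10.5, 9.8, 10.2]     # hypothetical daily fluxes
flaring = [10.0, 30.0, 9.5, 10.4]    # same source with a one-day flare
errors = [1.0, 1.0, 1.0, 1.0]

chi2_steady, dof = variability_chi2(steady, errors)
chi2_flare, _ = variability_chi2(flaring, errors)
```

    Comparing χ² with the χ² distribution for the given degrees of freedom converts this into a variability probability for each viewing period.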

  1. The effect of directivity in a PSHA framework

    NASA Astrophysics Data System (ADS)

    Spagnuolo, E.; Herrero, A.; Cultrera, G.

    2012-09-01

    We propose a method to introduce a refined representation of the ground motion in the framework of Probabilistic Seismic Hazard Analysis (PSHA). This study is especially oriented to the incorporation of a priori information about source parameters, focusing on the directivity effect and its influence on seismic hazard maps. Two strategies have been followed. The first considers the seismic source as an extended source, and is valid when the PSHA seismogenic sources are represented as fault segments. We show that the incorporation of variables related to the directivity effect can lead to variations of up to 20 per cent in the hazard level in the case of dip-slip faults with a uniform distribution of hypocentre locations, in terms of spectral acceleration response at 5 s with a 10 per cent probability of exceedance in 50 yr. The second concerns the more general problem of seismogenic areas, where each point is a seismogenic source having the same chance of nucleating a seismic event. In our approach the point source is associated with rupture-related parameters defined using a statistical description. As an example, we consider a source point in an area characterized by strike-slip faulting. With the introduction of the directivity correction, the modulation of the hazard map reaches values of up to 100 per cent (for strike-slip, unilateral faults). The introduction of directivity does not uniformly increase the hazard level; rather, it redistributes the estimates consistently with the fault orientation. A general increase appears only when no a priori information is available. However, good a priori knowledge now exists on the style of faulting, dip, and orientation of the faults associated with the majority of the seismogenic zones in present seismic hazard maps. The percentage of variation obtained depends strongly on the model chosen to represent the directivity effect analytically. Therefore, our aim is to emphasize the methodology by which all of the collected information can be readily converted into a more comprehensive and meaningful probabilistic seismic hazard formulation.

  2. Generation of optimal artificial neural networks using a pattern search algorithm: application to approximation of chemical systems.

    PubMed

    Ihme, Matthias; Marsden, Alison L; Pitsch, Heinz

    2008-02-01

    A pattern search optimization method is applied to the generation of optimal artificial neural networks (ANNs). Optimization is performed using a mixed-variable extension to the generalized pattern search method. This method offers the advantage that categorical variables, such as neural transfer functions and nodal connectivities, can be used as parameters in optimization. When used together with a surrogate, the resulting algorithm is highly efficient for expensive objective functions. Results demonstrate the effectiveness of this method in optimizing an ANN for the number of neurons, the type of transfer function, and the connectivity among neurons. The optimization method is applied to a chemistry approximation of practical relevance. In this application, temperature and a chemical source term are approximated as functions of two independent parameters using optimal ANNs. Comparison of the performance of optimal ANNs with conventional tabulation methods demonstrates equivalent accuracy with considerable savings in memory storage. The architecture of the optimal ANN for the approximation of the chemical source term consists of a fully connected feedforward network having four nonlinear hidden layers and 117 synaptic weights. An equivalent representation of the chemical source term using tabulation techniques would require a 500 x 500 grid point discretization of the parameter space.
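    The poll-and-refine loop at the heart of a generalized pattern search can be sketched in a few lines. The following is a minimal continuous-variable illustration of the idea only; the paper's mixed-variable extension additionally polls over categorical choices such as transfer functions, and the function names and step schedule here are illustrative assumptions, not the authors' implementation.

    ```python
    def pattern_search(f, x0, step=1.0, tol=1e-6, max_iter=10000):
        """Minimal generalized pattern search (poll step only): try +/- step
        along each coordinate; move on improvement, halve the step on failure."""
        x, fx = list(x0), f(x0)
        for _ in range(max_iter):
            if step <= tol:
                break
            improved = False
            for i in range(len(x)):
                for d in (step, -step):
                    y = list(x)
                    y[i] += d
                    fy = f(y)
                    if fy < fx:          # accept the first improving poll point
                        x, fx, improved = y, fy, True
                        break
                if improved:
                    break
            if not improved:
                step *= 0.5              # refine the mesh and poll again
        return x, fx

    # quadratic test objective with its minimum at (3, -1)
    xmin, fmin = pattern_search(lambda v: (v[0] - 3) ** 2 + (v[1] + 1) ** 2,
                                [0.0, 0.0])
    ```

    In the paper's setting, the expensive objective `f` would be the ANN training error, evaluated through a surrogate to cut cost.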

  3. Improving risk estimates of runoff producing areas: formulating variable source areas as a bivariate process.

    PubMed

    Cheng, Xiaoya; Shaw, Stephen B; Marjerison, Rebecca D; Yearick, Christopher D; DeGloria, Stephen D; Walter, M Todd

    2014-05-01

    Predicting runoff producing areas and their corresponding risks of generating storm runoff is important for developing watershed management strategies to mitigate non-point source pollution. However, few methods for making these predictions have been proposed, especially operational approaches that would be useful in areas where variable source area (VSA) hydrology dominates storm runoff. The objective of this study is to develop a simple approach to estimate spatially-distributed risks of runoff production. By considering the development of overland flow as a bivariate process, we incorporated both rainfall and antecedent soil moisture conditions into a method for predicting VSAs based on the Natural Resource Conservation Service-Curve Number equation. We used base-flow immediately preceding storm events as an index of antecedent soil wetness status. Using nine sub-basins of the Upper Susquehanna River Basin, we demonstrated that our estimated runoff volumes and extent of VSAs agreed with observations. We further demonstrated a method for mapping these areas in a Geographic Information System using a Soil Topographic Index. The proposed methodology provides a new tool for watershed planners for quantifying runoff risks across watersheds, which can be used to target water quality protection strategies. Copyright © 2014 Elsevier Ltd. All rights reserved.
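    The Curve Number equation on which the method builds maps storm rainfall to runoff depth once the watershed's retention parameter is fixed. A minimal sketch is given below; the `ia_ratio` default of 0.2 is the textbook convention, and the paper's bivariate extension additionally conditions on antecedent soil wetness (indexed by pre-storm base flow), which this sketch omits.

    ```python
    def scs_runoff(P, CN, ia_ratio=0.2):
        """NRCS Curve Number storm runoff depth (inches).
        P: storm rainfall (in); CN: curve number (0-100);
        S: potential maximum retention; Ia: initial abstraction."""
        S = 1000.0 / CN - 10.0
        Ia = ia_ratio * S
        if P <= Ia:
            return 0.0                   # all rainfall abstracted, no runoff
        return (P - Ia) ** 2 / (P - Ia + S)

    q = scs_runoff(3.0, 75)   # runoff depth from a 3 in storm on a CN 75 watershed
    ```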

  4. Nitrate variability in groundwater of North Carolina using monitoring and private well data models.

    PubMed

    Messier, Kyle P; Kane, Evan; Bolich, Rick; Serre, Marc L

    2014-09-16

    Nitrate (NO3-) is a widespread contaminant of groundwater and surface water across the United States that has deleterious effects on human and ecological health. This study develops a model for predicting point-level groundwater NO3- at a state scale for monitoring wells and private wells of North Carolina. A land use regression (LUR) model selection procedure is developed for determining nonlinear model explanatory variables when they are known to be correlated. Bayesian Maximum Entropy (BME) is used to integrate the LUR model to create a LUR-BME model of spatially/temporally varying groundwater NO3- concentrations. LUR-BME results in a leave-one-out cross-validation r2 of 0.74 and 0.33 for monitoring and private wells, respectively, effectively predicting within spatial covariance ranges. Results show significant differences in the spatial distribution of groundwater NO3- contamination in monitoring versus private wells; high NO3- concentrations in the southeastern plains of North Carolina; and wastewater treatment residuals and swine confined animal feeding operations as local sources of NO3- in monitoring wells. Results are of interest to agencies that regulate drinking water sources or monitor health outcomes from ingestion of drinking water. Lastly, LUR-BME model estimates can be integrated into surface water models for more accurate management of nonpoint sources of nitrogen.

  5. Influence of commercial information on prescription quantity in primary care.

    PubMed

    Caamaño, Francisco; Figueiras, Adolfo; Gestal-Otero, Juan Jesus

    2002-09-01

    In the last few years we have witnessed many publicly-financed health services reaching a crisis point. Thus, drug expenditure is nowadays one of the main concerns of health managers, and its containment one of the first goals of health authorities in western countries. The objective of this study is to identify the effect of the perceived quality of commercial information, its uses, and the influence physicians perceive it to have on prescription volume. A cross-sectional study of 405 primary care physicians was conducted in Galicia (north-west Spain). The independent variables (physician's education and speciality, physician's perception of the quality of available drug information sources, type of practice, and number of patients) were collected through a postal questionnaire. Environmental characteristics of the practice were obtained from secondary sources. Multiple regression models were constructed using two indicators of prescription volume as dependent variables. The response rate was 75.2%. Prescription volume was found to be associated with the perceived credibility of information provided by medical visitors, regulated physician training, and environmental characteristics of the practice (primary care team practice, urban environment). The study results suggest that in order to decrease prescription volume it is necessary to limit the role of pharmaceutical companies in physician training, improve physician education and training, and emphasize more objective sources of information.

  6. Unbound motion on a Schwarzschild background: Practical approaches to frequency domain computations

    NASA Astrophysics Data System (ADS)

    Hopper, Seth

    2018-03-01

    Gravitational perturbations due to a point particle moving on a static black hole background are naturally described in Regge-Wheeler gauge. The first-order field equations reduce to a single master wave equation for each radiative mode. The master function satisfying this wave equation is a linear combination of the metric perturbation amplitudes with a source term arising from the stress-energy tensor of the point particle. The original master functions were found by Regge and Wheeler (odd parity) and Zerilli (even parity). Subsequent work by Moncrief and then Cunningham, Price and Moncrief introduced new master variables which allow time domain reconstruction of the metric perturbation amplitudes. Here, I explore the relationship between these different functions and develop a general procedure for deriving new higher-order master functions from ones already known. The benefit of higher-order functions is that their source terms always converge faster at large distance than their lower-order counterparts. This makes for a dramatic improvement in both the speed and accuracy of frequency domain codes when analyzing unbound motion.

  7. Optimized Reduction of Unsteady Radial Forces in a Single-channel Pump for Wastewater Treatment

    NASA Astrophysics Data System (ADS)

    Kim, Jin-Hyuk; Cho, Bo-Min; Choi, Young-Seok; Lee, Kyoung-Yong; Peck, Jong-Hyeon; Kim, Seon-Chang

    2016-11-01

    A single-channel pump for wastewater treatment was optimized to reduce unsteady radial force sources caused by impeller-volute interactions. The steady and unsteady Reynolds-averaged Navier-Stokes equations using the shear-stress transport turbulence model were discretized by finite volume approximations and solved on tetrahedral grids to analyze the flow in the single-channel pump. The sweep area of radial force during one revolution and the distance of the sweep-area center of mass from the origin were selected as the objective functions; the two design variables were related to the internal flow cross-sectional area of the volute. These objective functions were integrated into one objective function by applying the weighting factor for optimization. Latin hypercube sampling was employed to generate twelve design points within the design space. A response-surface approximation model was constructed as a surrogate model for the objectives, based on the objective function values at the generated design points. The optimized results showed considerable reduction in the unsteady radial force sources in the optimum design, relative to those of the reference design.
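    Folding the two radial-force objectives into one scalar via a weighting factor, as described above, can be sketched as a weighted sum. Normalizing each objective by its reference-design value is an illustrative assumption here; the paper does not spell out its exact scalarization.

    ```python
    def combined_objective(sweep_area, com_distance, ref_area, ref_distance, w=0.5):
        """Weighted-sum scalarization of the two unsteady radial-force objectives:
        the sweep area of the radial force over one revolution, and the distance
        of the sweep-area center of mass from the origin, each normalized by the
        corresponding reference-design value."""
        return w * (sweep_area / ref_area) + (1.0 - w) * (com_distance / ref_distance)

    # a candidate design that shrinks both objectives relative to the reference
    F = combined_objective(0.8, 0.6, ref_area=1.0, ref_distance=1.0)
    ```

    The surrogate (response-surface) model is then fitted to values of this single objective at the sampled design points.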

  8. An improved DPSM technique for modelling ultrasonic fields in cracked solids

    NASA Astrophysics Data System (ADS)

    Banerjee, Sourav; Kundu, Tribikram; Placko, Dominique

    2007-04-01

    In recent years the Distributed Point Source Method (DPSM) has been used to model various ultrasonic, electrostatic and electromagnetic field problems. In conventional DPSM several point sources are placed near the transducer face, interface and anomaly boundaries. The ultrasonic or electromagnetic field at any point is computed by superimposing the contributions of strategically placed layers of point sources. The conventional DPSM modelling technique is modified in this paper so that the contributions of the point sources in the shadow region can be removed from the calculations. For this purpose the conventional point sources that radiate in all directions are replaced by Controlled Space Radiation (CSR) sources. CSR sources can take care of the shadow region problem to some extent; complete removal of the shadow region problem can be achieved by introducing artificial interfaces. Numerically synthesized fields obtained by the conventional DPSM technique, which gives no special consideration to the point sources in the shadow region, are compared with those from the proposed modified technique, which nullifies their contributions. One application of this research is the improved modelling of real-time ultrasonic non-destructive evaluation experiments.

  9. On the assessment of spatial resolution of PET systems with iterative image reconstruction

    NASA Astrophysics Data System (ADS)

    Gong, Kuang; Cherry, Simon R.; Qi, Jinyi

    2016-03-01

    Spatial resolution is an important metric for performance characterization in PET systems. Measuring spatial resolution is straightforward with a linear reconstruction algorithm, such as filtered backprojection, and can be performed by reconstructing a point source scan and calculating the full-width-at-half-maximum (FWHM) along the principal directions. With the widespread adoption of iterative reconstruction methods, it is desirable to quantify the spatial resolution using an iterative reconstruction algorithm. However, the task can be difficult because the reconstruction algorithms are nonlinear and the non-negativity constraint can artificially enhance the apparent spatial resolution if a point source image is reconstructed without any background. Thus, it was recommended that a background should be added to the point source data before reconstruction for resolution measurement. However, there has been no detailed study on the effect of the point source contrast on the measured spatial resolution. Here we use point source scans from a preclinical PET scanner to investigate the relationship between measured spatial resolution and the point source contrast. We also evaluate whether the reconstruction of an isolated point source is predictive of the ability of the system to resolve two adjacent point sources. Our results indicate that when the point source contrast is below a certain threshold, the measured FWHM remains stable. Once the contrast is above the threshold, the measured FWHM monotonically decreases with increasing point source contrast. In addition, the measured FWHM also monotonically decreases with iteration number for maximum likelihood estimation. Therefore, when measuring system resolution with an iterative reconstruction algorithm, we recommend using a low-contrast point source and a fixed number of iterations.
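    The basic FWHM measurement the abstract refers to can be sketched for a 1-D profile by interpolating linearly at the half-maximum crossings. The flat-background removal and interpolation details below are illustrative choices, not the paper's analysis pipeline.

    ```python
    import numpy as np

    def fwhm(profile, pixel_size=1.0):
        """Full-width-at-half-maximum of a 1-D point-source profile,
        with linear interpolation at the half-maximum crossings."""
        p = np.asarray(profile, dtype=float)
        p = p - p.min()                      # crude flat-background removal
        half = p.max() / 2.0
        above = np.where(p >= half)[0]
        left, right = above[0], above[-1]
        x_left = float(left)
        if left > 0:                         # interpolate the rising crossing
            x_left = left - 1 + (half - p[left - 1]) / (p[left] - p[left - 1])
        x_right = float(right)
        if right < p.size - 1:               # interpolate the falling crossing
            x_right = right + (p[right] - half) / (p[right] - p[right + 1])
        return (x_right - x_left) * pixel_size

    # Gaussian with sigma = 2 pixels; analytic FWHM = 2*sqrt(2*ln 2)*sigma ≈ 4.71
    x = np.arange(41)
    g = np.exp(-0.5 * ((x - 20) / 2.0) ** 2)
    ```

    For iterative reconstructions, the paper's point is that this number depends on the point source contrast and iteration count, so both must be reported alongside the FWHM.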

  10. PAH molecular diagnostic ratios applied to atmospheric sources: a critical evaluation using two decades of source inventory and air concentration data from the UK.

    PubMed

    Katsoyiannis, Athanasios; Sweetman, Andrew J; Jones, Kevin C

    2011-10-15

    Molecular diagnostic ratios (MDRs)-the ratios of defined pairs of individual compounds-have been widely used as markers of different source categories of polycyclic aromatic hydrocarbons (PAHs). However, it is well-known that variations in combustion conditions and environmental degradation processes can cause substantial variability in the emission and degradation of individual compounds, potentially undermining the application of MDRs as reliable source apportionment tools. The United Kingdom produces a national inventory of atmospheric emissions of PAHs, and has an ambient air monitoring program at a range of rural, semirural, urban, and industrial sites. The inventory and the monitoring data are available over the past 20 years (1990-2010), a time frame that has seen known changes in combustion type and source. Here we assess 5 MDRs that have been used in the literature as source markers. We examine the spatial and temporal variability in the ratios and consider whether they are responsive to known differences in source strength and types between sites (on rural-urban gradients) and to underlying changes in national emissions since 1990. We conclude that the use of these 5 MDRs produces contradictory results and that they do not respond to known differences (in time and space) in atmospheric emission sources. For example, at a site near a motorway and far from other evident emission sources, the use of MDRs suggests "non-traffic" emissions. The ANT/(ANT + PHE) ratio is strongly seasonal at some sites; it is the most susceptible MDR to atmospheric processes, so these results illustrate how weathering in the environment will undermine the effectiveness of MDRs as markers of source(s). We conclude that PAH MDRs can exhibit spatial and temporal differences, but they are not valid markers of known differences in source categories and type. 
Atmospheric sources of PAHs in the UK are probably not dominated by any single clear and strong source type, so the mixture of PAHs in air is quickly "blended" away from the influence of the few major point sources which exist and further weathered in the environment by atmospheric reactions and selective loss processes.
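    A molecular diagnostic ratio is simply one compound's concentration over a pair sum, as in the ANT/(ANT + PHE) ratio discussed above. The 0.10 threshold in the comment is the common literature convention for this ratio, not a value established by this study, which in fact questions the reliability of such cutoffs.

    ```python
    def mdr(conc_a, conc_b):
        """Molecular diagnostic ratio of the form A/(A+B),
        e.g. ANT/(ANT+PHE) for anthracene and phenanthrene."""
        return conc_a / (conc_a + conc_b)

    # ANT/(ANT+PHE) < 0.10 is conventionally read as petrogenic, > 0.10 as pyrogenic
    r = mdr(0.5, 9.5)   # a "petrogenic" reading under that convention
    ```

    The study's caution applies here: weathering and mixing in the atmosphere shift such ratios, so a value near the threshold says little about the true source mix.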

  11. Identification of Dust Source Regions at High-Resolution and Dynamics of Dust Source Mask over Southwest United States Using Remote Sensing Data

    NASA Astrophysics Data System (ADS)

    Sprigg, W. A.; Sahoo, S.; Prasad, A. K.; Venkatesh, A. S.; Vukovic, A.; Nickovic, S.

    2015-12-01

    Identification and evaluation of sources of aeolian mineral dust is a critical task in the simulation of dust. Recently, time series of space-based multi-sensor satellite images have been used to identify and monitor changes in land surface characteristics. Modeling of windblown dust requires precise delineation of mineral dust sources and their strength, which vary over a region as well as seasonally and inter-annually due to changes in land use and land cover. The southwest USA is one of the major dust-emission-prone zones of the North American continent, where dust is generated from low-lying dried-up areas with bare ground surface; these may be scattered or appear as point sources on high-resolution satellite images. In the current research, various satellite-derived variables have been integrated to produce a high-resolution dust source mask, at a grid size of 250 m, using data such as a digital elevation model, surface reflectance, vegetation cover, land cover class, and surface wetness. Previous dust source models have been adapted to produce a multi-parameter dust source mask using data from satellites such as Terra (Moderate Resolution Imaging Spectroradiometer - MODIS) and Landsat. The dust source mask model captures the topographically low regions with bare soil surfaces, dried-up river plains, and lakes, which form important sources of dust in the southwest USA. The study region is also one of the hottest regions of the USA, where surface dryness, land use (agricultural use), and vegetation cover change significantly, leading to major changes in the areal coverage of potential dust source regions. A dynamic high-resolution dust source mask has been produced from time series of satellite-derived data to address intra-annual changes in the areal extent of bare dry surfaces. A new dust source mask at 16-day intervals allows enhanced detection of potential dust source regions and can be employed in dust emission and transport models for better estimation of dust emission during dust storms, as well as in particulate air pollution studies, public health risk assessment tools, and decision support systems.

  12. Three millisecond pulsars in FERMI LAT unassociated bright sources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ransom, S. M.; Ray, P. S.; Camilo, F.

    2010-12-23

    We searched for radio pulsars in 25 of the non-variable, unassociated sources in the Fermi LAT Bright Source List with the Green Bank Telescope at 820 MHz. Here, we report the discovery of three radio and γ-ray millisecond pulsars (MSPs) from a high Galactic latitude subset of these sources. All of the pulsars are in binary systems, which would have made them virtually impossible to detect in blind γ-ray pulsation searches. They seem to be relatively normal, nearby (≤2 kpc) MSPs. These observations, in combination with the Fermi detection of γ-rays from other known radio MSPs, imply that most, if not all, radio MSPs are efficient γ-ray producers. The γ-ray spectra of the pulsars are power law in nature with exponential cutoffs at a few GeV, as has been found with most other pulsars. The MSPs have all been detected as X-ray point sources. Finally, their soft X-ray luminosities of ~10^30-10^31 erg s^-1 are typical of the rare radio MSPs seen in X-rays.

  13. Three Millisecond Pulsars in Fermi LAT Unassociated Bright Sources

    NASA Technical Reports Server (NTRS)

    Ransom, S. M.; Ray, P. S.; Camilo, F.; Roberts, M. S. E.; Celik, O.; Wolff, M. T.; Cheung, C. C.; Kerr, M.; Pennucci, T.; DeCesar, M. E.; hide

    2010-01-01

    We searched for radio pulsars in 25 of the non-variable, unassociated sources in the Fermi LAT Bright Source List with the Green Bank Telescope at 820 MHz. We report the discovery of three radio and gamma-ray millisecond pulsars (MSPs) from a high Galactic latitude subset of these sources. All of the pulsars are in binary systems, which would have made them virtually impossible to detect in blind gamma-ray pulsation searches. They seem to be relatively normal, nearby (<= 2 kpc) MSPs. These observations, in combination with the Fermi detection of gamma-rays from other known radio MSPs, imply that most, if not all, radio MSPs are efficient gamma-ray producers. The gamma-ray spectra of the pulsars are power law in nature with exponential cutoffs at a few GeV, as has been found with most other pulsars. The MSPs have all been detected as X-ray point sources. Their soft X-ray luminosities of approximately 10^30-10^31 erg/s are typical of the rare radio MSPs seen in X-rays.

  14. Energy storage requirements of dc microgrids with high penetration renewables under droop control

    DOE PAGES

    Weaver, Wayne W.; Robinett, Rush D.; Parker, Gordon G.; ...

    2015-01-09

    Energy storage is an important design component in microgrids with high-penetration renewable sources, needed to maintain the system because of the highly variable and sometimes stochastic nature of those sources. Storage devices can be distributed close to the sources and/or at the microgrid bus. In addition, storage requirements can be minimized with a centralized control architecture, but this creates a single point of failure. Distributed droop control enables a completely decentralized architecture, but the energy storage optimization becomes more difficult. Our paper presents an approach to droop control that enables the local and bus storage requirements to be determined. Given a priori knowledge of the design structure of a microgrid and the basic cycles of the renewable sources, we found droop settings of the sources that minimize both the bus voltage variations and the overall energy storage capacity required in the system. This approach can be used in the design phase of a microgrid with a decentralized control structure to determine appropriate droop settings as well as the sizing of energy storage devices.
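    For the DC case, the steady-state operating point under droop control V_i = V_ref,i - R_i * I_i follows directly from Kirchhoff's laws. A minimal sketch for droop-controlled sources feeding a resistive load is below; the numerical values are illustrative, not from the paper, and the paper's actual analysis covers the dynamic storage sizing this static snapshot omits.

    ```python
    def dc_droop_bus(v_refs, r_droops, r_load):
        """Steady-state bus voltage and per-source currents for DC sources under
        droop control V_i = V_ref_i - R_i * I_i feeding a resistive load r_load.
        Solves sum_i (V_ref_i - V)/R_i = V/r_load for the common bus voltage V."""
        g_total = 1.0 / r_load + sum(1.0 / r for r in r_droops)
        v_bus = sum(v / r for v, r in zip(v_refs, r_droops)) / g_total
        currents = [(v - v_bus) / r for v, r in zip(v_refs, r_droops)]
        return v_bus, currents

    # two 48 V sources: the source with half the droop resistance
    # supplies twice the current of the other
    v_bus, currents = dc_droop_bus([48.0, 48.0], [0.5, 1.0], r_load=4.0)
    ```

    The inverse relationship between droop resistance and load share is what lets droop settings trade bus voltage deviation against how hard each source (and its local storage) is worked.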

  15. Using vadose zone data and spatial statistics to assess the impact of cultivated land and dairy waste lagoons on groundwater contamination

    NASA Astrophysics Data System (ADS)

    Baram, S.; Ronen, Z.; Kurtzman, D.; Peeters, A.; Dahan, O.

    2013-12-01

    Land cultivation and dairy waste lagoons are considered to be nonpoint and point sources of groundwater contamination by chloride (Cl-) and nitrate (NO3-). The objective of this work is to introduce a methodology to assess the past and future impacts of such agricultural activities on regional groundwater quality. The method is based on mass balances and on spatial statistical analysis of Cl- and NO3- concentration distributions in the saturated and unsaturated zones. The method enables quantitative analysis of the relation between the locations of pollution point sources and the spatial variability in Cl- and NO3- concentrations in groundwater. The method was applied to the Beer-Tuvia region, Israel, where intensive dairy farming along with land cultivation has been practiced for over 50 years above the local phreatic aquifer. Mass balance calculations accounted for the various groundwater recharge and abstraction sources and sinks in the entire region. The mass balances showed that leachates from lagoons and the cultivated land have contributed 6.0 and 89.4 % of the total mass of Cl- added to the aquifer and 12.6 and 77.4 % of the total mass of NO3-. The chemical composition of the aquifer and vadose zone water suggested that irrigated agricultural activity in the region is the main contributor of Cl- and NO3- to the groundwater. A low spatial correlation between the Cl- and NO3- concentrations in the groundwater and the on-land location of the dairy farms strengthened this assumption, despite the dairy waste lagoon being a point source for groundwater contamination by Cl- and NO3-. Results demonstrate that analyzing vadose zone and groundwater data by spatial statistical analysis methods can significantly contribute to the understanding of the relations between groundwater contaminating sources, and to assessing appropriate remediation steps.

  16. Investigating the effects of methodological expertise and data randomness on the robustness of crowd-sourced SfM terrain models

    NASA Astrophysics Data System (ADS)

    Ratner, Jacqueline; Pyle, David; Mather, Tamsin

    2015-04-01

    Structure-from-motion (SfM) techniques are now widely available to quickly and cheaply generate digital terrain models (DTMs) from optical imagery. Topography can change rapidly during disaster scenarios and change the nature of local hazards, making ground-based SfM a particularly useful tool in hazard studies due to its low cost, accessibility, and potential for immediate deployment. Our study is designed to serve as an analogue to potential real-world use of the SfM method if employed for disaster risk reduction purposes. Experiments at a volcanic crater in Santorini, Greece, used crowd-sourced data collection to demonstrate the impact of user expertise and randomization of SfM data on the resultant DTM. Three groups of participants representing variable expertise levels utilized 16 different camera models, including four camera phones, to collect 1001 total photos in one hour of data collection. Datasets collected by each group were processed using the free and open source software VisualSFM. The point densities and overall quality of the resultant SfM point clouds were compared against each other and also against a LiDAR dataset for reference to the industry standard. Our results show that the point clouds are resilient to changes in user expertise and collection method and are comparable or even preferable in data density to LiDAR. We find that 'crowd-sourced' data collected by a moderately informed general public yields topography results comparable to those produced with data collected by experts. This means that in a real-world scenario involving participants with a diverse range of expertise levels, topography models could be produced from crowd-sourced data quite rapidly and to a very high standard. This could be beneficial to disaster risk reduction as a relatively quick, simple, and low-cost method to attain a rapidly updated knowledge of terrain attributes, useful for the prediction and mitigation of many natural hazards.

  17. Factors related to severe untreated tooth decay in rural adolescents: a case-control study for public health planning.

    PubMed

    Skaret, E; Weinstein, P; Milgrom, P; Kaakko, T; Getz, T

    2004-01-01

    In this case-control study of rural adolescents we identified factors to discriminate those who have high levels of tooth decay and receive treatment from those with similar levels who receive no treatment. The sample was drawn from all 12-20-year-olds (n = 439) in a rural high school in Washington State, U.S. The criterion for being included was 5 or more decayed, missing or filled teeth. The questionnaire included structure, history, cognition and expectation variables based on a model by Grembowski, Andersen and Chen. No structural variable was related to the dependent variable. Two of 10 history variables were related: perceived poor own dental health and perceived poor mother's dental health. Four of eight cognition variables were also predictive: negative beliefs about the dentist, not planning to go to a dentist even if having severe problems, not being in any club or playing on a sports team and not having a best friend. No relationship was found for the expectation variable 'usual source of care'. These data are consistent with the hypothesis that untreated tooth decay is associated with avoidance of care and point to the importance of history and cognition variables in planning efforts to improve oral health of rural adolescents.

  18. Pollutant source identification model for water pollution incidents in small straight rivers based on genetic algorithm

    NASA Astrophysics Data System (ADS)

    Zhang, Shou-ping; Xin, Xiao-kang

    2017-07-01

    Identification of pollutant sources in river pollution incidents is an important and difficult task in emergency rescue, and an intelligent optimization method can effectively compensate for the weaknesses of traditional methods. An intelligent model for pollutant source identification has been established using the basic genetic algorithm (BGA) as an optimization search tool and applying an analytic solution formula of the one-dimensional unsteady water quality equation to construct the objective function. Experimental tests show that the identification model is effective and efficient: the model can accurately determine pollutant amounts and positions for both single and multiple pollution sources. In particular, when the population size of the BGA is set to 10, the computed results agree well with analytic results for single-source amount and position identification, with relative errors of no more than 5 %. For cases with multiple point sources and multiple variables, some errors arise because many possible combinations of the pollution sources exist; however, with the help of previous experience to narrow the search scope, the relative errors of the identification results are less than 5 %, which shows that the established source identification model can be used to direct emergency responses.
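    The construction the abstract describes, an analytic solution of the 1-D unsteady water quality equation wrapped in a least-squares objective, can be sketched as follows. The advection-dispersion solution for an instantaneous point release is standard; the parameter values, observation layout, and the coarse grid search (a deterministic stand-in for the BGA's population search) are illustrative assumptions.

    ```python
    import math

    def concentration(x, t, M, x0, u=0.5, D=0.1, A=1.0):
        """Analytic solution of the 1-D unsteady advection-dispersion equation
        for an instantaneous release of mass M at position x0 in a uniform
        channel (velocity u, dispersion D, cross-section A), for t > 0."""
        return (M / (A * math.sqrt(4 * math.pi * D * t))
                * math.exp(-(x - x0 - u * t) ** 2 / (4 * D * t)))

    # synthetic observations generated from a "true" source: M = 2.0 at x0 = 1.0
    obs = [(x, t, concentration(x, t, 2.0, 1.0))
           for x in (3.0, 4.0) for t in (2.0, 4.0, 6.0)]

    def misfit(M, x0):
        """Objective function the genetic search would minimize."""
        return sum((concentration(x, t, M, x0) - c) ** 2 for x, t, c in obs)

    # coarse grid search over (M, x0) as a stand-in for the BGA
    best = min(((misfit(0.1 + 0.05 * i, 0.05 * j), 0.1 + 0.05 * i, 0.05 * j)
                for i in range(99) for j in range(61)),
               key=lambda r: r[0])
    ```

    The grid search recovers the true (M, x0) here because the misfit surface is smooth; the appeal of the GA in the paper is that it scales this search to multiple sources and variables.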

  19. Surface-water nutrient conditions and sources in the United States Pacific Northwest

    USGS Publications Warehouse

    Wise, D.R.; Johnson, H.M.

    2011-01-01

    The SPAtially Referenced Regressions On Watershed attributes (SPARROW) model was used to perform an assessment of surface-water nutrient conditions and to identify important nutrient sources in watersheds of the Pacific Northwest region of the United States (U.S.) for the year 2002. Our models included variables representing nutrient sources as well as landscape characteristics that affect nutrient delivery to streams. Annual nutrient yields were higher in watersheds on the wetter, west side of the Cascade Range compared to watersheds on the drier, east side. High nutrient enrichment (relative to the U.S. Environmental Protection Agency's recommended nutrient criteria) was estimated in watersheds throughout the region. Forest land was generally the largest source of total nitrogen stream load and geologic material was generally the largest source of total phosphorus stream load generated within the 12,039 modeled watersheds. These results reflected the prevalence of these two natural sources and the low input from other nutrient sources across the region. However, the combined input from agriculture, point sources, and developed land, rather than natural nutrient sources, was responsible for most of the nutrient load discharged from many of the largest watersheds. Our results provided an understanding of the regional patterns in surface-water nutrient conditions and should be useful to environmental managers in future water-quality planning efforts.

  20. Automated Reduction and Calibration of SCUBA Archive Data Using ORAC-DR

    NASA Astrophysics Data System (ADS)

    Jenness, T.; Stevens, J. A.; Archibald, E. N.; Economou, F.; Jessop, N.; Robson, E. I.; Tilanus, R. P. J.; Holland, W. S.

    The Submillimetre Common User Bolometer Array (SCUBA) instrument has been operating on the James Clerk Maxwell Telescope (JCMT) since 1997. The data archive is now sufficiently large that it can be used for investigating instrumental properties and the variability of astronomical sources. This paper describes the automated calibration and reduction scheme used to process the archive data with particular emphasis on the pointing observations. This is made possible by using the ORAC-DR data reduction pipeline, a flexible and extensible data reduction pipeline that is used on UKIRT and the JCMT.

  1. A numerical method for solving systems of linear ordinary differential equations with rapidly oscillating solutions

    NASA Technical Reports Server (NTRS)

    Bernstein, Ira B.; Brookshaw, Leigh; Fox, Peter A.

    1992-01-01

    The present numerical method for the accurate and efficient solution of systems of linear ordinary differential equations proceeds by numerically developing a set of basis solutions characterized by slowly varying dependent variables. The solutions thus obtained are shown to have a computational overhead largely independent of the small scale length which characterizes the solutions; in many cases, the technique obviates series solutions near singular points, and its known sources of error can be easily controlled without a substantial increase in computational time.

  2. Radio variability in complete samples of extragalactic radio sources at 1.4 GHz

    NASA Astrophysics Data System (ADS)

    Rys, S.; Machalski, J.

    1990-09-01

    Complete samples of extragalactic radio sources obtained in 1970-1975 and the sky survey of Condon and Broderick (1983) were used to select sources variable at 1.4 GHz, and to investigate the characteristics of variability in the whole population of sources at this frequency. The radio structures, radio spectral types, and optical identifications of the selected variables are discussed. Only compact flat-spectrum sources vary at 1.4 GHz, and all but four are identified with QSOs, BL Lacs, or other (unconfirmed spectroscopically) stellar objects. No correlation of degree of variability at 1.4 GHz with Galactic latitude or variability at 408 MHz has been found, suggesting that most of the 1.4-GHz variability is intrinsic and not caused by refractive scintillations. Numerical models of the variability have been computed.

  3. An update of the Pb isotope inventory in post leaded-petrol Singapore environments.

    PubMed

    Carrasco, Gonzalo; Chen, Mengli; Boyle, Edward A; Tanzil, Jani; Zhou, Kuanbo; Goodkin, Nathalie F

    2018-02-01

    Pb is a trace metal that tracks anthropogenic pollution in natural environments. Despite the recent leaded petrol phase-out around Southeast Asia, the region's growth has resulted in continued exposure of Pb from a variety of sources. In this study, sources of Pb into Singapore, a highly urbanised city-state situated in the central axis of Southeast Asia, are investigated using isotopic ratios and concentrations. We compiled data from our previous analyses of aerosols, incineration fly ash and sediments, with new data from analyses of soil from gas stations, water from runoff and round-island coastal seawater to obtain a spatio-temporal overview of sources of Pb into the Singapore environment. Using the ²⁰⁶Pb/²⁰⁷Pb ratio, we identified three main Pb source origins: natural Pb (1.215 ± 0.001), historic/remnant leaded petrol (1.123 ± 0.013), and present-day industrial and incinerated waste (1.148 ± 0.005). Deep reservoir sediments bore larger traces of Pb from leaded petrol, but present-day runoff waters and coastal seawater were a mix of industrial and natural sources with somewhat variable concentrations. We found temporal variability in Pb isotopic ratio in aerosols indicating alternating transboundary Pb sources to Singapore that correspond to seasonal changes in monsoon winds. By contrast, seasonal monsoon circulation did not significantly influence isotopic ratios of coastal seawater Pb. Instead, seawater Pb was driven more by location differences, suggesting stronger local-scale drivers of Pb such as point sources, water flushing, and isotope exchange. The combination of multiple historic and current sources of Pb shown in this study highlights the need for continued monitoring of Pb in Southeast Asia, especially in light of emerging industries and potential large sources of Pb such as coal combustion. Copyright © 2017 Elsevier Ltd. All rights reserved.

  4. Multi-Disciplinary Approach to Trace Contamination of Streams and Beaches

    USGS Publications Warehouse

    Nickles, James

    2008-01-01

    Concentrations of fecal-indicator bacteria in urban streams and ocean beaches in and around Santa Barbara occasionally can exceed public-health standards for recreation. The U.S. Geological Survey (USGS), working with the City of Santa Barbara, has used multi-disciplinary science to trace the sources of the bacteria. This research is helping local agencies take steps to improve recreational water quality. The USGS used an approach that combined traditional hydrologic and microbiological data, with state-of-the-art genetic, molecular, and chemical tracer analysis. This research integrated physical data on streamflow, ground water, and near-shore oceanography, and made extensive use of modern geophysical and isotopic techniques. Using those techniques, the USGS was able to evaluate the movement of water and the exchange of ground water with near-shore ocean water. The USGS has found that most fecal bacteria in the urban streams came from storm-drain discharges, with the highest concentrations occurring during storm flow. During low streamflow, the concentrations varied as much as three-fold, owing to variable contribution of non-point sources such as outdoor water use and urban runoff to streamflow. Fecal indicator bacteria along ocean beaches were from both stream discharge to the ocean and from non-point sources such as bird fecal material that accumulates in kelp and sand at the high-tide line. Low levels of human-specific Bacteroides, suggesting fecal material from a human source, were consistently detected on area beaches. One potential source, a local sewer line buried beneath the beach, was found not to be responsible for the fecal bacteria.

  5. Uncovering extreme AGN variability in serendipitous X-ray source surveys

    NASA Astrophysics Data System (ADS)

    Moran, Edward C.; Garcia Soto, Aylin; LaMassa, Stephanie; Urry, Meg

    2018-01-01

    Constraints on the duty cycle and duration of accretion episodes in active galactic nuclei (AGNs) are vital for establishing how most AGNs are fueled, which is essential for a complete picture of black hole/galaxy co-evolution. Perhaps the best handle we have on these activity parameters is provided by AGNs that have displayed dramatic changes in their bolometric luminosities and, in some cases, spectroscopic classifications. Given that X-ray emission is directly linked to black-hole accretion, X-ray surveys should provide a straightforward means of identifying AGNs that have undergone dramatic changes in their accretion states. However, it appears that such events are very rare, so wide-area surveys separated in time by many years are needed to maximize discovery rates. We have cross-correlated the Einstein IPC Two-Sigma Catalog with the ROSAT All-Sky Survey Faint Source Catalog to identify a sample of soft X-ray sources that varied by factors ranging from 7 to more than 100 over a ten-year timescale. When possible, we have constructed long-term X-ray light curves for the sources by combining the Einstein and RASS fluxes with those obtained from serendipitous pointed observations by ROSAT, Chandra, XMM, and Swift. Optical follow-up observations indicate that many of the extremely variable sources in our sample are indeed radio-quiet AGNs. Interestingly, the majority of objects that dimmed between ~1980 and ~1990 are still (or are again) broad-line AGNs rather than "changing-look" candidates with more subtle AGN signatures in their spectra, despite the fact that none of the sources examined thus far has returned to its highest observed luminosity. Future X-ray observations will provide the opportunity to characterize the X-ray behavior of these anonymous, extreme AGNs over a four-decade span.

  6. Spatial variability of mercury wet deposition in eastern Ohio: summertime meteorological case study analysis of local source influences

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Emily M. White; Gerald J. Keeler; Matthew S. Landis

    2009-07-01

    Extensive exploration of event precipitation data in the Ohio River Valley indicates that coal combustion emissions play an important role in mercury (Hg) wet deposition. During July-September 2006, an intensive study was undertaken to discern the degree of local source influence. Source-receptor relationships were explored by establishing a set of wet deposition sites in and around Steubenville, Ohio. For the three month period of study, volume-weighted mean Hg concentrations observed at the eight sites ranged from 10.2 to 22.3 ng L⁻¹, but this range increased drastically on an event basis with a maximum concentration of 89.4 ng L⁻¹ and a minimum concentration of 4.1 ng L⁻¹. A subset of events was explored in depth, and the degree of variability in Hg concentrations between sites was linked to the degree of local source enhancement. Samples collected at sites less than 1 km from coal-fired utility stacks (near-field) exhibited up to 72% enhancement in Hg concentrations over regionally representative samples on an event basis. Air mass transport and precipitating cell histories were traced in order to evaluate relationships between local point sources and receptor sites. It was found that the interaction of several dynamic atmospheric parameters combined to favor local Hg concentration enhancement over the more regional contribution. When significant meteorological factors (wind speed at time of maximum rain rate, wind speed 24 h prior to precipitation, mixing height, and observed ceiling) were explored, it was estimated that during summertime precipitation, 42% of Hg concentration in near-field samples could be attributed to the adjacent coal-fired utility source. 28 refs., 3 figs., 2 tabs.

  7. Volume 2 - Point Sources

    EPA Pesticide Factsheets

    Point source emission reference materials from the Emissions Inventory Improvement Program (EIIP). Provides point source guidance on planning, emissions estimation, data collection, inventory documentation and reporting, and quality assurance/quality control.

  8. Optimal pupil design for confocal microscopy

    NASA Astrophysics Data System (ADS)

    Patel, Yogesh G.; Rajadhyaksha, Milind; DiMarzio, Charles A.

    2010-02-01

    Confocal reflectance microscopy may enable screening and diagnosis of skin cancers noninvasively and in real-time, as an adjunct to biopsy and pathology. Current instruments are large, complex, and expensive. A simpler, confocal line-scanning microscope may accelerate the translation of confocal microscopy in clinical and surgical dermatology. A confocal reflectance microscope may use a beamsplitter, transmitting and detecting through the pupil, or a divided pupil, or theta configuration, with half used for transmission and half for detection. The divided pupil may offer better sectioning and contrast. We present a Fourier optics model and compare the on-axis irradiance of a confocal point-scanning microscope in both pupil configurations, optimizing the profile of a Gaussian beam in a circular or semicircular aperture. We repeat both calculations with a cylindrical lens which focuses the source to a line. The variable parameter is the fill-factor, h, the ratio of the 1/e² diameter of the Gaussian beam to the diameter of the full aperture. The optimal values of h for point-scanning are 0.90 (full aperture) and 0.66 (half-aperture). For line-scanning, the fill-factors are 1.02 (full) and 0.52 (half). Additional parameters to consider are the optimal location of the point-source beam in the divided-pupil configuration, the optimal line width for the line-source, and the width of the aperture in the divided-pupil configuration. Additional figures of merit are field-of-view and sectioning. Use of optimal designs is critical in comparing the experimental performance of the different configurations.
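    The optimal fill-factor for the full circular aperture can be reproduced with a short calculation. The sketch below is an illustrative model, not the authors' Fourier-optics code: at fixed total beam power it computes the on-axis focal amplitude of a Gaussian beam truncated by a circular pupil and scans the fill-factor h for the maximum, which falls near the quoted 0.90.

```python
import numpy as np

# On-axis focal amplitude for a Gaussian beam truncated by a circular pupil,
# at fixed total beam power. rho = r / (pupil radius); the field amplitude is
# E(rho) = exp(-rho^2 / h^2), so the 1/e^2 intensity diameter equals h times
# the pupil diameter (h is the fill-factor).
def on_axis_amplitude(h, n=20000):
    rho = (np.arange(n) + 0.5) / n                          # midpoint rule on [0, 1]
    aperture_integral = np.sum(np.exp(-rho**2 / h**2) * rho) / n
    total_power = h**2 / 4.0      # integral of exp(-2 rho^2/h^2) rho drho to infinity
    return aperture_integral / np.sqrt(total_power)

h_grid = np.linspace(0.3, 2.0, 1701)
amplitudes = np.array([on_axis_amplitude(h) for h in h_grid])
h_opt = h_grid[np.argmax(amplitudes)]
print(round(h_opt, 2))   # → 0.89, close to the paper's 0.90
```

Small h wastes aperture resolution (the pupil is under-filled); large h truncates most of the beam power; the trade-off peaks near h ≈ 0.89.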

  9. The Ionization Source in the Nucleus of M84

    NASA Technical Reports Server (NTRS)

    Bower, G. A.; Green, R. F.; Quillen, A. C.; Danks, A.; Malumuth, E. M.; Gull, T.; Woodgate, B.; Hutchings, J.; Joseph, C.; Kaiser, M. E.

    2000-01-01

    We have obtained new Hubble Space Telescope (HST) observations of M84, a nearby massive elliptical galaxy whose nucleus contains an approximately 1.5 × 10⁹ solar mass dark compact object, which presumably is a supermassive black hole. Our Space Telescope Imaging Spectrograph (STIS) spectrum provides the first clear detection of emission lines in the blue (e.g., [O II] λ3727, Hβ and [O III] λλ4959, 5007), which arise from a compact region approximately 0″.28 across centered on the nucleus. Our Near Infrared Camera and Multi-Object Spectrometer (NICMOS) images exhibit the best view through the prominent dust lanes evident at optical wavelengths and provide a more accurate correction for the internal extinction. The relative fluxes of the emission lines we have detected in the blue, together with those detected in the wavelength range 6295-6867 Å by Bower et al., indicate that the gas at the nucleus is photoionized by a nonstellar process, instead of hot stars. Stellar absorption features from cool stars at the nucleus are very weak. We update the spectral energy distribution of the nuclear point source and find that although it is roughly flat in most bands, the optical to UV continuum is very red, similar to the spectral energy distribution of BL Lac. Thus, the nuclear point source seen in high-resolution optical images is not a star cluster but is instead a nonstellar source. Assuming isotropic emission from this source, we estimate that the ratio of bolometric luminosity to Eddington luminosity is about 5 × 10⁻⁷. However, this could be underestimated if this source is a misaligned BL Lac object, which is a possibility suggested by the spectral energy distribution and the evidence of optical variability we describe.

  10. Using CSLD Method to Calculate COD Pollution Load of Wei River Watershed above Huaxian Section, China

    NASA Astrophysics Data System (ADS)

    Zhu, Lei; Song, JinXi; Liu, WanQing

    2017-12-01

    Huaxian Section is the last hydrological and water quality monitoring section of the Weihe River Watershed. The Weihe River Watershed above Huaxian Section is taken as the research objective in this paper, and COD is chosen as the water quality parameter. According to the discharge characteristics of point source and non-point source pollution, a new method for estimating pollution loads, the characteristic section load (CSLD) method, is suggested, and the point source and non-point source pollution loads of the Weihe River Watershed above Huaxian Section are calculated for the rainy, normal and dry seasons of the year 2007. The results show that the monthly point source pollution loads discharge stably, whereas the monthly non-point source pollution loads change greatly, and that the non-point source share of the total COD pollution load decreases, in turn, across the normal, rainy and dry periods.

  11. Calculating NH3-N pollution load of wei river watershed above Huaxian section using CSLD method

    NASA Astrophysics Data System (ADS)

    Zhu, Lei; Song, JinXi; Liu, WanQing

    2018-02-01

    Huaxian Section is the last hydrological and water quality monitoring section of the Weihe River Watershed, so it is taken as the research objective in this paper, and NH3-N is chosen as the water quality parameter. According to the discharge characteristics of point source and non-point source pollution, a new method for estimating pollution loads, the characteristic section load (CSLD) method, is suggested, and the point source and non-point source pollution loads of the Weihe River Watershed above Huaxian Section are calculated for the rainy, normal and dry seasons of the year 2007. The results show that the monthly point source pollution loads discharge stably, whereas the monthly non-point source pollution loads change greatly. The non-point source share of the total NH3-N pollution load decreases, in turn, across the normal, rainy and dry periods.

  12. Adjusted variable plots for Cox's proportional hazards regression model.

    PubMed

    Hall, C B; Zeger, S L; Bandeen-Roche, K J

    1996-01-01

    Adjusted variable plots are useful in linear regression for outlier detection and for qualitative evaluation of the fit of a model. In this paper, we extend adjusted variable plots to Cox's proportional hazards model for possibly censored survival data. We propose three different plots: a risk level adjusted variable (RLAV) plot in which each observation in each risk set appears, a subject level adjusted variable (SLAV) plot in which each subject is represented by one point, and an event level adjusted variable (ELAV) plot in which the entire risk set at each failure event is represented by a single point. The latter two plots are derived from the RLAV by combining multiple points. In each plot, the regression coefficient and standard error from a Cox proportional hazards regression are obtained by a simple linear regression through the origin fit to the coordinates of the pictured points. The plots are illustrated with a reanalysis of a dataset of 65 patients with multiple myeloma.

  13. The Stochastic X-Ray Variability of the Accreting Millisecond Pulsar MAXI J0911-655

    NASA Technical Reports Server (NTRS)

    Bult, Peter

    2017-01-01

    In this work, I report on the stochastic X-ray variability of the 340 Hz accreting millisecond pulsar MAXI J0911-655. Analyzing pointed observations of the XMM-Newton and NuSTAR observatories, I find that the source shows broad band-limited stochastic variability in the 0.01-10 Hz range, with a total fractional variability of approximately 24 percent rms in the 0.4-3 keV energy band that increases to approximately 40 percent rms in the 3-10 keV band. Additionally, a pair of harmonically related quasi-periodic oscillations (QPOs) is discovered. The fundamental frequency of this harmonic pair is observed between frequencies of 62 and 146 mHz. Like the band-limited noise, the amplitudes of the QPOs show a steep increase as a function of energy; this suggests that they share a similar origin, likely the inner accretion flow. Based on their energy dependence and frequency relation with respect to the noise terms, the QPOs are identified as low-frequency oscillations and discussed in terms of the Lense-Thirring precession model.

  14. Resolving key drivers of variability through an important circulation choke point in the western Mediterranean Sea; using gliders, models & satellite remote sensing

    NASA Astrophysics Data System (ADS)

    Heslop, Emma; Aguiar, Eva; Mourre, Baptiste; Juza, Mélanie; Escudier, Romain; Tintoré, Joaquín

    2017-04-01

    The Ibiza Channel plays an important role in the circulation of the Western Mediterranean Sea: it governs the north/south exchange of different water masses that are known to affect regional ecosystems, and it is influenced by variability in the different drivers that affect the sub-basins to the north (N) and south (S), making it a complex system. In this study we use a multi-platform approach to resolve the key drivers of this variability and gain insight into the inter-connection between the N and S of the Western Mediterranean Sea through this choke point. The 6-year glider time series from the quasi-continuous glider endurance line monitoring of the Ibiza Channel, undertaken by SOCIB (Balearic Coastal Ocean Observing and Forecasting System), is used as the base from which to identify key sub-seasonal to inter-annual patterns and shifts in water mass properties and transport volumes. The glider data indicate the following key components in the variability of the N/S flow of different water masses through the channel: regional winter mode water production, changes in intermediate water mass properties, northward flows of a fresher water mass, and the basin-scale circulation. To resolve the drivers of these components of variability, datasets from different sources (glider, model, altimetry and moorings) are combined. To the north, atmospheric forcing in the Gulf of Lions is a dominant driver, while to the south the mesoscale circulation patterns of the Atlantic Jet and Alboran gyres dominate the variability but do not appear to influence the fresher inflows. Evidence of a connection between the northern and southern sub-basins is nevertheless indicated. The study highlights the importance of sub-seasonal variability and the scale of rapid change possible in the Mediterranean, as well as the benefits of leveraging high-resolution glider datasets within a multi-platform and modelling study.

  15. Application of Monte Carlo Method for Evaluation of Uncertainties of ITS-90 by Standard Platinum Resistance Thermometer

    NASA Astrophysics Data System (ADS)

    Palenčár, Rudolf; Sopkuliak, Peter; Palenčár, Jakub; Ďuriš, Stanislav; Suroviak, Emil; Halaj, Martin

    2017-06-01

    Evaluation of the uncertainties of temperature measurement by a standard platinum resistance thermometer calibrated at the defining fixed points according to ITS-90 is a problem that can be solved in different ways. The paper presents a procedure based on the propagation of distributions using the Monte Carlo method. The procedure employs the generation of pseudo-random numbers for the input variables of resistances at the defining fixed points, supposing a multivariate Gaussian distribution for the input quantities. This allows taking into account the correlations among resistances at the defining fixed points. The assumption of a Gaussian probability density function is acceptable with respect to the several sources of uncertainty of the resistances. In the case of uncorrelated resistances at the defining fixed points, the method is applicable to any probability density function. Validation of the law of propagation of uncertainty using the Monte Carlo method is presented on the example of specific data for a 25 Ω standard platinum resistance thermometer in the temperature range from 0 to 660 °C. Using this example, we demonstrate the suitability of the method by validating its results.
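    The propagation-of-distributions procedure can be illustrated with a toy calculation. In the sketch below, all numbers are invented for illustration (not real calibration data): two correlated resistance readings are drawn from a multivariate Gaussian, pushed through a simple measurement model (an ITS-90-style resistance ratio W = R_t / R_tpw), and the Monte Carlo standard uncertainty is compared against the linear law of propagation of uncertainty.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy measurement model: the ITS-90 resistance ratio W = R_t / R_tpw.
mu = np.array([31.2500, 25.0000])          # R_t, R_tpw in ohms (illustrative)
u = np.array([2.0e-4, 1.5e-4])             # standard uncertainties (ohms)
r = 0.6                                     # assumed correlation coefficient
cov = np.array([[u[0]**2, r * u[0] * u[1]],
                [r * u[0] * u[1], u[1]**2]])

# Propagation of distributions: sample the joint input distribution and push
# each draw through the model.
R = rng.multivariate_normal(mu, cov, size=200_000)
W = R[:, 0] / R[:, 1]
u_mc = W.std(ddof=1)

# Linear law of propagation of uncertainty, for comparison:
# dW/dR_t = 1/R_tpw,  dW/dR_tpw = -R_t / R_tpw^2
c = np.array([1 / mu[1], -mu[0] / mu[1]**2])
u_lpu = np.sqrt(c @ cov @ c)

print(u_mc, u_lpu)   # the two estimates should agree closely here
```

Because this model is nearly linear over the sampled range, the two estimates agree; the value of the Monte Carlo approach is that it remains valid when the model is strongly non-linear or the inputs are non-Gaussian.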

  16. A candidate framework for PM2.5 source identification in highly industrialized urban-coastal areas

    NASA Astrophysics Data System (ADS)

    Mateus, Vinícius Lionel; Gioda, Adriana

    2017-09-01

    The variability of PM sources and composition imposes tremendous challenges on policy makers seeking to establish guidelines. In urban PM, sources associated with industrial processes are among the most important ones. In this study, a 5-year monitoring of PM2.5 samples was carried out in an industrial district. Their chemical composition was strategically determined in two campaigns in order to check the effectiveness of mitigation policies. Gaseous pollutants (NO2, SO2, and O3) were also monitored along with meteorological variables. The new method called the Conditional Bivariate Probability Function (CBPF) was successfully applied to allocate the observed concentrations of criteria pollutants (gaseous pollutants and PM2.5) to cells defined by wind direction and speed, which provided insights about ground-level and elevated pollution plumes. The CBPF findings were confirmed by Theil-Sen long-term trend estimations for the criteria pollutants. By means of CBPF, elevated pollution plumes in the range of 0.54-5.8 μg m⁻³ were detected coming from a direction associated with stacks. With high interpretability, the use of Conditional Inference Trees (CIT) provided both classification and regression of the speciated PM2.5 in the two campaigns. The combination of CIT and Random Forests (RF) points to NO₃⁻ and Ca²⁺ as important predictors for PM2.5, the latter mostly associated with non-sea-salt sources, given an nss-Ca²⁺ contribution of 96%.
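    A minimal CBPF computation can be sketched as follows. The code is an illustration on synthetic data, not the study's analysis: it bins wind observations into direction-speed cells and reports, per cell, the probability that the pollutant concentration meets or exceeds a chosen percentile threshold, so that cells pointing toward a source light up.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic hourly data: wind direction (deg), wind speed (m/s), PM2.5 (ug/m3).
n = 5000
wd = rng.uniform(0, 360, n)
ws = rng.gamma(2.0, 2.0, n)
# Hypothetical elevated plume: enhancement from ~125 deg at higher wind speeds.
pm = rng.gamma(3.0, 4.0, n) + 20.0 * np.exp(-((wd - 125) / 15.0) ** 2) * (ws > 4)

def cbpf(wd, ws, conc, threshold, dir_bins=36, ws_bins=8, ws_max=12.0):
    """CBPF(theta, v): fraction of samples in each wind direction-speed cell
    whose concentration meets or exceeds the threshold."""
    d = np.clip((wd % 360 / (360 / dir_bins)).astype(int), 0, dir_bins - 1)
    v = np.clip((ws / (ws_max / ws_bins)).astype(int), 0, ws_bins - 1)
    total = np.zeros((dir_bins, ws_bins))
    exceed = np.zeros((dir_bins, ws_bins))
    np.add.at(total, (d, v), 1)
    np.add.at(exceed, (d, v), conc >= threshold)
    with np.errstate(invalid="ignore"):
        return exceed / total   # NaN where a cell has no observations

p = cbpf(wd, ws, pm, np.percentile(pm, 90))
# The highest conditional probabilities should line up with the synthetic plume.
hot_sector = int(np.nanargmax(np.nanmean(p, axis=1)))
print(hot_sector * 10, "deg (left edge of the most probable sector)")
```

The direction-speed resolution and the 90th-percentile threshold are arbitrary here; in practice the speed axis is what separates ground-level sources (high probability at low speed) from elevated stack plumes (high probability at high speed), as exploited in the study.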

  17. X-ray Flaring Activity in HBL Source PKS 2155-304

    NASA Astrophysics Data System (ADS)

    Kapanadze, Bidzina

    2013-08-01

    We report an increasing X-ray flux through the 0.3-10 keV band in the high-energy-peaked BL Lacertae source PKS 2155-304 (z=0.117), which has been observed three times between 2013 July 25 and August 3 with the X-ray Telescope (XRT) onboard the Swift satellite. Using the data provided at the website http://www.swift.psu.edu/monitoring/ we have found that the object increased its 0.3-10 keV flux almost threefold, from 0.98±0.06 cts/s (July 25, ObsID=00030795114) to 2.85±0.08 cts/s in the observation performed July 31. The last pointing, performed on August 3 (ObsID 0008028002), shows an even higher flux of 3.08±0.05 cts/s. No sub-hour flux variability is detected at 99.9% confidence in any of the observations, which lasted 0.7-2.1 ks. On the basis of our recent study of long-term X-ray flux variability in this source (Kapanadze et al. 2013, submitted to the Monthly Notices of the Royal Astronomical Society) we suggest that a similar situation was generally an indicator of the onset of a longer-term flare with a duration of weeks to months. Therefore, further densely sampled observations with Swift-XRT and other X-ray instruments are highly recommended. Since X-ray flares in BL Lacertae sources are mostly followed by those in other spectral bands, we encourage intensive multiwavelength observations of PKS 2155-304.

  18. Influence of developmental stage, salts and food presence on various end points using Caenorhabditis elegans for aquatic toxicity testing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Donkin, S.G.; Williams, P.L.

    1995-12-01

    This study used a randomized block design to investigate the importance of several variables in using the free-living soil nematode, Caenorhabditis elegans, for aquatic toxicity testing. Concentration-response data were obtained on nematodes of various developmental stages exposed to four metals (Cd, Pb, Cu, and Hg) and a water-soluble organic toxicant, sodium pentachlorophenate (PCP), under conditions of varied solvent medium (with or without salts and with or without a bacterial food source). The end points measured were 24- and 96-h mortality (LC50 values), as well as development of larval stages to adulthood and evidence of reproduction. The results suggest that nematodes of various ages respond similarly to a given toxicant for all end points measured, although adults cultured from eggs appeared more sensitive than adults cultured from dauer larvae. The most important environmental variable in determining toxicity was the medium in which the tests were conducted. The presence of potassium and sodium salts in the medium significantly (p < 0.05) reduced the toxicity of many test samples. The presence of bacteria had little effect on 24-h tests with salts, but was important in 96-h survival and development. Based on sensitivity and ease of handling, adults cultured from eggs are recommended for both 24-h and 96-h tests.

  19. Camera system considerations for geomorphic applications of SfM photogrammetry

    USGS Publications Warehouse

    Mosbrucker, Adam; Major, Jon J.; Spicer, Kurt R.; Pitlick, John

    2017-01-01

    The availability of high-resolution, multi-temporal, remotely sensed topographic data is revolutionizing geomorphic analysis. Three-dimensional topographic point measurements acquired from structure-from-motion (SfM) photogrammetry have been shown to be highly accurate and cost-effective compared to laser-based alternatives in some environments. Use of consumer-grade digital cameras to generate terrain models and derivatives is becoming prevalent within the geomorphic community despite the details of these instruments being largely overlooked in current SfM literature. A practical discussion of camera system selection, configuration, and image acquisition is presented. The hypothesis that optimizing source imagery can increase digital terrain model (DTM) accuracy is tested by evaluating the accuracies of four SfM datasets, acquired over multiple years on a gravel-bed river floodplain, using independent ground check points, with the purpose of comparing morphological sediment budgets computed from SfM- and lidar-derived DTMs. Case study results are compared to existing SfM validation studies in an attempt to deconstruct the principal components of an SfM error budget. Greater information capacity of source imagery was found to increase pixel matching quality, which produced 8 times greater point density and 6 times greater accuracy. When propagated through volumetric change analysis, individual DTM accuracy (6–37 cm) was sufficient to detect moderate geomorphic change (order 100,000 m³) on an unvegetated fluvial surface; change detection determined from repeat lidar and SfM surveys differed by about 10%. Simple camera selection criteria increased accuracy by 64%; configuration settings or image post-processing techniques increased point density by 5–25% and decreased processing time by 10–30%. Regression analysis of 67 reviewed datasets revealed that the best explanatory variable to predict accuracy of SfM data is photographic scale. Despite the prevalent use of object distance ratios to describe scale, nominal ground sample distance is shown to be a superior metric, explaining 68% of the variability in mean absolute vertical error. This article is protected by copyright. All rights reserved.
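    Nominal ground sample distance is straightforward to compute from camera geometry. The sketch below uses hypothetical camera values (not from the study): GSD is the object distance multiplied by the sensor pixel pitch and divided by the focal length.

```python
# Nominal ground sample distance (GSD): the ground footprint of one pixel,
#   GSD = object_distance * pixel_pitch / focal_length
# All numbers below are illustrative (a hypothetical 24 MP APS-C camera).
def ground_sample_distance(distance_m, pixel_pitch_mm, focal_length_mm):
    return distance_m * pixel_pitch_mm / focal_length_mm  # metres per pixel

pixel_pitch = 23.5 / 6000     # sensor width (mm) / pixels across ≈ 0.0039 mm
gsd = ground_sample_distance(100.0, pixel_pitch, 24.0)  # 100 m range, 24 mm lens
print(round(gsd * 100, 2), "cm/pixel")   # → 1.63 cm/pixel
```

Halving the object distance or doubling the focal length halves the GSD, which is why photographic scale, rather than camera model alone, dominates the accuracy regression described above.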

  20. Evolution of air pollution source contributions over one decade, derived by PM10 and PM2.5 source apportionment in two metropolitan urban areas in Greece

    NASA Astrophysics Data System (ADS)

    Diapouli, E.; Manousakas, M.; Vratolis, S.; Vasilatou, V.; Maggos, Th; Saraga, D.; Grigoratos, Th; Argyropoulos, G.; Voutsa, D.; Samara, C.; Eleftheriadis, K.

    2017-09-01

    Metropolitan urban areas in Greece have been known to suffer from poor air quality, due to a variety of emission sources, topography and climatic conditions favouring the accumulation of pollution. While a number of control measures have been implemented since the 1990s, resulting in reductions of atmospheric pollution and changes in emission source contributions, the financial crisis which started in 2009 has significantly altered this picture. The present study is the first effort to assess the contribution of emission sources to PM10 and PM2.5 concentration levels and their long-term variability (over 5-10 years) in the two largest metropolitan urban areas in Greece (Athens and Thessaloniki). Intensive measurement campaigns were conducted during 2011-2012 at suburban, urban background and urban traffic sites in these two cities. In addition, available datasets from previous measurements in Athens and Thessaloniki were used in order to assess the long-term variability of concentrations and sources. Chemical composition analysis of the 2011-2012 samples showed that carbonaceous matter was the most abundant component for both PM size fractions. A significant increase of carbonaceous particle concentrations and of the OC/EC ratio during the cold period, especially at the residential urban background sites, pointed towards domestic heating, and more particularly wood (biomass) burning, as a significant source. PMF analysis further supported this finding. Biomass burning was the largest contributing source at the two urban background sites (with mean contributions for the two size fractions in the range of 24-46%). Secondary aerosol formation (sulphate, nitrate and organics) was also a major contributing source for both size fractions at the suburban and urban background sites. At the urban traffic site, vehicular traffic (exhaust and non-exhaust emissions) was the source with the highest contributions, accounting for 44% of PM10 and 37% of PM2.5, respectively.
The long-term variability of emission sources in the two cities (over 5-10 years), assessed through a harmonized application of the PMF technique on recent and past year data, clearly demonstrates the effective reduction in emissions during the last decade due to control measures and technological development; however, it also reflects the effects of the financial crisis in Greece during these years, which has led to decreased economic activities and the adoption of more polluting practices by the local population in an effort to reduce living costs.
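    The PMF technique applied above is, at its core, a non-negative factorization of the samples-by-species concentration matrix into source contributions and source profiles. The numpy sketch below uses plain Lee-Seung multiplicative updates on synthetic data; unlike EPA PMF it does not weight residuals by measurement uncertainty, and the two "source" profiles are invented for illustration.

```python
import numpy as np

def nmf(X, k, iters=500, seed=0):
    """Factor non-negative X (samples x species) into G @ F, with
    G >= 0 (source contributions) and F >= 0 (source profiles), via
    Lee-Seung multiplicative updates. A sketch of the PMF idea only:
    real PMF additionally weights residuals by measurement uncertainty."""
    rng = np.random.default_rng(seed)
    n, m = X.shape
    G = rng.random((n, k))
    F = rng.random((k, m))
    eps = 1e-12  # guards against division by zero
    for _ in range(iters):
        F *= (G.T @ X) / (G.T @ G @ F + eps)
        G *= (X @ F.T) / (G @ F @ F.T + eps)
    return G, F

# toy data: two invented source profiles mixed into 100 samples of 5 species
rng = np.random.default_rng(1)
true_F = np.array([[0.80, 0.10, 0.05, 0.03, 0.02],   # "traffic"-like profile
                   [0.05, 0.05, 0.20, 0.30, 0.40]])  # "biomass"-like profile
true_G = rng.random((100, 2))
X = true_G @ true_F

G, F = nmf(X, k=2)
err = np.linalg.norm(X - G @ F) / np.linalg.norm(X)  # relative reconstruction error
```

With exact rank-2 non-negative data the relative reconstruction error drops well below 10% after a few hundred updates.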

  1. Modeling Sea-Level Change using Errors-in-Variables Integrated Gaussian Processes

    NASA Astrophysics Data System (ADS)

    Cahill, Niamh; Parnell, Andrew; Kemp, Andrew; Horton, Benjamin

    2014-05-01

    We perform Bayesian inference on historical and late Holocene (last 2000 years) rates of sea-level change. The data that form the input to our model are tide-gauge measurements and proxy reconstructions from cores of coastal sediment. To accurately estimate rates of sea-level change, and to reliably compare tide-gauge compilations with proxy reconstructions, it is necessary to account for the uncertainties that characterize each dataset. Many previous studies used simple linear regression models (most commonly polynomial regression), resulting in overly precise rate estimates. The model we propose uses an integrated Gaussian process approach, in which a Gaussian process prior is placed on the rate of sea-level change and the data themselves are modeled as the integral of this rate process. The non-parametric Gaussian process model is well suited to modeling time series data. The advantage of using an integrated Gaussian process is that it allows direct estimation of the derivative of a one-dimensional curve; the derivative at a particular time point represents the rate of sea-level change at that time. The tide-gauge and proxy data are complicated by multiple sources of uncertainty, some of which arise as part of the data collection exercise. Most notably, the proxy reconstructions include temporal uncertainty from dating of the sediment core using techniques such as radiocarbon. The integrated Gaussian process model is therefore set in an errors-in-variables (EIV) framework to account for this temporal uncertainty. The data must also be corrected for land-level change, known as glacio-isostatic adjustment (GIA), in order to isolate the climate-related sea-level signal. The correction for GIA introduces covariance between individual age and sea-level observations into the model.
The proposed integrated Gaussian process model allows for the estimation of instantaneous rates of sea-level change and accounts for all available sources of uncertainty in tide-gauge and proxy-reconstruction data. Our response variable is sea level after correction for GIA. By embedding the integrated process in an errors-in-variables (EIV) framework, and removing the estimate of GIA, we can quantify rates with better estimates of uncertainty than previously possible. The model provides a flexible fit and enables us to estimate rates of change at any given time point, showing how rates have evolved from the past to the present day.
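    The core of the integrated Gaussian process can be sketched in a few lines: place a GP prior on the rate at grid times, model observed sea level as the cumulative integral of that rate, and read off the posterior mean rate in closed form (linear-Gaussian conjugacy). This toy version omits the EIV treatment of dating uncertainty and the GIA correction described above; all numbers are synthetic.

```python
import numpy as np

def sq_exp_kernel(t, ell=30.0, var=1.0):
    """Squared-exponential covariance over times t (one common GP choice)."""
    d = t[:, None] - t[None, :]
    return var * np.exp(-0.5 * (d / ell) ** 2)

t = np.linspace(0.0, 200.0, 101)   # years (synthetic grid)
dt = t[1] - t[0]
# A maps a rate vector to cumulative sea level (simple rectangle rule)
A = np.tril(np.ones((len(t), len(t)))) * dt

true_rate = 0.5 + 0.4 * np.sin(t / 30.0)  # mm/yr, invented for illustration
rng = np.random.default_rng(0)
y = A @ true_rate + 0.5 * rng.normal(size=len(t))  # noisy "sea level" record

K = sq_exp_kernel(t)               # GP prior covariance on the rate
sigma2 = 0.25                      # observation noise variance
# Posterior mean of the rate given the integrated observations:
post_rate = K @ A.T @ np.linalg.solve(A @ K @ A.T + sigma2 * np.eye(len(t)), y)
```

Because sea level is a linear map of the rate, the posterior over the rate stays Gaussian and the derivative (the rate itself) is estimated directly rather than by differencing a fitted curve.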

  2. Determinants of urban sprawl in European cities

    PubMed Central

    Oueslati, Walid; Alvanides, Seraphim; Garrod, Guy

    2015-01-01

    This paper provides empirical evidence that helps to answer several key questions relating to the extent of urban sprawl in Europe. Building on the monocentric city model, this study uses existing data sources to derive a set of panel data for 282 European cities at three time points (1990, 2000 and 2006). Two indices of urban sprawl are calculated that, respectively, reflect changes in artificial area and the levels of urban fragmentation for each city. These are supplemented by a set of data on various economic and geographical variables that might explain the variation of the two indices. Using a Hausman-Taylor estimator and random regressors to control for the possible correlation between explanatory variables and unobservable city-level effects, we find that the fundamental conclusions of the standard monocentric model are valid in the European context for both indices. Although the variables generated by the monocentric model explain a large part of the variation of artificial area, their explanatory power for modelling the fragmentation index is relatively low. PMID:26321770

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Romano, J.D.; Woan, G.

    Data from the Laser Interferometer Space Antenna (LISA) are expected to be dominated by frequency noise from its lasers. However, the noise from any one laser appears more than once in the data, and there are combinations of the data that are insensitive to this noise. These combinations, called time delay interferometry (TDI) variables, have received careful study and point the way to how LISA data analysis may be performed. Here we approach the problem from the direction of statistical inference, and show that these variables are a direct consequence of a principal component analysis of the problem. We present a formal analysis for a simple LISA model and show that there are eigenvectors of the noise covariance matrix that do not depend on laser frequency noise. Importantly, these orthogonal basis vectors correspond to linear combinations of TDI variables. As a result we show that the likelihood function for source parameters using LISA data can be based on TDI combinations of the data without loss of information.
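    The abstract's point can be illustrated with a toy model: if one laser's frequency noise enters several data streams through a known mixing vector, the noise covariance has a dominant eigenvector along that vector, and the remaining eigenvectors span laser-noise-free combinations (the analogue of TDI variables). The 3-channel mixing below is invented for illustration, not the actual LISA response.

```python
import numpy as np

rng = np.random.default_rng(0)
m = np.array([1.0, -1.0, 0.5])               # how one laser's noise enters 3 streams (invented)
laser = 100.0 * rng.normal(size=100_000)     # dominant laser frequency noise
secondary = rng.normal(size=(100_000, 3))    # much smaller instrumental noise
data = laser[:, None] * m + secondary        # simulated multi-channel data

C = np.cov(data.T)                           # 3x3 sample noise covariance
w, V = np.linalg.eigh(C)                     # eigenvalues ascending
quiet = V[:, :2]                             # two smallest: laser-insensitive combinations

# Projecting onto the "quiet" eigenvectors removes the laser noise,
# leaving only the small secondary noise -- the PCA analogue of TDI.
resid = data @ quiet
```

The largest-eigenvalue eigenvector aligns with the laser mixing vector, while the quiet combinations retain only order-unity secondary noise.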

  5. Large-scale variability of wind erosion mass flux rates at Owens Lake 1. Vertical profiles of horizontal mass fluxes of wind-eroded particles with diameter greater than 50 μm

    USGS Publications Warehouse

    Gillette, Dale A.; Fryrear, D.W.; Xiao, Jing Bing; Stockton, Paul; Ono, Duane; Helm, Paula J.; Gill, Thomas E; Ley, Trevor

    1997-01-01

    A field experiment at Owens (dry) Lake, California, tested whether and how the relative profiles of airborne horizontal mass fluxes for >50-μm wind-eroded particles changed with friction velocity. The horizontal mass flux at almost all measured heights increased in proportion to the cube of friction velocity above an apparent threshold friction velocity for all sediment tested, and increased with height except at one coarse-sand site, where the relative horizontal mass flux profile did not change with friction velocity. Size distributions for long-time-averaged horizontal mass flux samples showed a saltation layer from the surface to a height between 30 and 50 cm, above which suspended particles dominate. Measurements from a large dust source area on a line parallel to the wind showed that even though the saltation flux reached equilibrium ∼650 m downwind of the starting point of erosion, weakly suspended particles were still being injected into the atmosphere 1567 m downwind of the starting point; thus the saltating fraction of the total mass flux decreased after 650 m. The scale-length difference and the 70/30 ratio of suspended to saltation mass flux at the farthest downwind sampling site confirm that suspended particles are very important for mass budgets in large source areas and that saltation mass flux can be a variable fraction of total horizontal mass flux for soils with a substantial fraction of <100-μm particles.
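    The observed cubic scaling above a threshold can be written as a one-line parameterization. The functional form and constants below are illustrative stand-ins, not values fitted to the Owens Lake data.

```python
def horizontal_mass_flux(u_star, u_star_t=0.25, C=1.0):
    """Horizontal mass flux (arbitrary units) vs friction velocity u_star (m/s).

    Zero below the threshold friction velocity u_star_t, then growing as
    the cube of friction velocity -- one simple way to encode the scaling
    reported in the abstract. Threshold and constant are hypothetical.
    """
    if u_star <= u_star_t:
        return 0.0
    return C * (u_star ** 3 - u_star_t ** 3)
```

For example, doubling the friction velocity well above threshold increases the flux by roughly a factor of eight, which is the signature of the cubic law.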

  6. Steps toward identifying a biogeochemical signal in non-equilibrium methane clumped isotope measurements

    NASA Astrophysics Data System (ADS)

    Douglas, P. M.; Eiler, J. M.; Sessions, A. L.; Dawson, K.; Walter Anthony, K. M.; Smith, D. A.; Lloyd, M. K.; Yanay, E.

    2016-12-01

    Microbially produced methane is a globally important greenhouse gas, energy source, and biological substrate. Methane clumped isotope measurements have recently been developed as a new analytical tool for understanding the source of methane in different environments. When methane forms in isotopic equilibrium, clumped isotope values are determined by formation temperature, but in many cases microbial methane clumped isotope values deviate strongly from expected equilibrium values. Indeed, we observe a very wide range of clumped isotope values in microbial methane, likely strongly influenced by kinetic isotope effects, but thus far the biological and environmental parameters controlling this variability are not understood. We will present data from both culture experiments and natural environments to explore patterns of variability in non-equilibrium clumped isotope values on temporal and spatial scales. In methanogen batch cultures sampled at different time points along a growth curve, we observe significant variability in clumped isotope values, with values decreasing from early to late exponential growth and then increasing during stationary growth. This result is consistent with previous work suggesting that differences in the reversibility of methanogenesis related to metabolic rates control non-equilibrium clumped isotope values. Within single lakes in Alaska and Sweden we observe substantial variability in clumped isotope values, on the order of 5‰. Lower clumped isotope values are associated with larger 2H isotopic fractionation between water and methane, which is also consistent with a kinetic isotope effect determined by the reversibility of methanogenesis. Finally, we analyzed a time series of clumped isotope compositions of methane emitted from two seeps in an Alaskan lake over several months. Temporal variability in these seeps is on the order of 2‰, much less than the observed spatial variability within the lake.
Comparing carbon isotope fractionation between CO2 and CH4 with clumped isotope data suggests the temporal variability may result from changes in methane oxidation.

  7. Patient position alters attenuation effects in multipinhole cardiac SPECT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Timmins, Rachel; Ruddy, Terrence D.; Wells, R. Glenn, E-mail: gwells@ottawaheart.ca

    2015-03-15

    Purpose: Dedicated cardiac cameras offer improved sensitivity over conventional SPECT cameras. Sensitivity gains are obtained by large numbers of detectors and novel collimator arrangements, such as an array of multiple pinholes that focus on the heart. Pinholes lead to variable amounts of attenuation as a source is moved within the camera field of view. This study evaluated the effects of this variable attenuation on myocardial SPECT images. Methods: Computer simulations were performed for a set of nine point sources distributed in the left ventricular wall (LV). Sources were placed at the location of the heart in both an anthropomorphic and a water-cylinder computer phantom. Sources were translated in x, y, and z by up to 5 cm from the center. Projections were simulated with and without attenuation and the changes in attenuation were compared. A LV with an inferior wall defect was also simulated in both phantoms over the same range of positions. Real camera data were acquired on a Discovery NM530c camera (GE Healthcare, Haifa, Israel) for five min in list-mode using an anthropomorphic phantom (DataSpectrum, Durham, NC) with 100 MBq of Tc-99m in the LV. Images were taken over the same range of positions as the simulations and were compared based on the summed perfusion score (SPS), defect width, and apparent defect uptake for each position. Results: Point sources in the water phantom showed absolute changes in attenuation of ≤8% over the range of positions and relative changes of ≤5% compared to the apex. In the anthropomorphic computer simulations, the absolute change increased to 20%. The changes in relative attenuation caused a change in SPS of <1.5 for the water phantom but up to 4.2 in the anthropomorphic phantom. Changes were larger for axial than for transverse translations. These results were supported by SPS changes of up to six seen in the physical anthropomorphic phantom for axial translations. Defect width was also seen to increase significantly.
The position-dependent changes were removed with attenuation correction. Conclusions: Translation of a source relative to a multipinhole camera caused only small changes in homogeneous phantoms, with SPS changing by <1.5. Inhomogeneous attenuating media cause much larger changes when the source is translated. Changes in SPS of up to six were seen in an anthropomorphic phantom for axial translations. Attenuation correction removes the position-dependent changes in attenuation.
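    The position dependence described above follows from the Beer-Lambert law: photons traversing a path of length d through material with linear attenuation coefficient mu survive with probability exp(-mu d), so moving a source changes its path length and hence its attenuation. The coefficient below is an approximate textbook value for 140 keV Tc-99m photons in water, and the 5 cm geometry is a stand-in, not the multipinhole camera model.

```python
import numpy as np

MU_WATER = 0.15  # 1/cm, approximate linear attenuation at 140 keV (assumption)

def transmission(depth_cm, mu=MU_WATER):
    """Fraction of photons surviving a water path of depth_cm (Beer-Lambert)."""
    return np.exp(-mu * depth_cm)

# Moving a point source 5 cm deeper along one projection ray:
t1 = transmission(5.0)
t2 = transmission(10.0)
relative_change = (t1 - t2) / t1  # fractional loss of counts from the translation
```

Even this crude estimate shows why translations of a few cm matter: a 5 cm extra water path removes roughly half of the remaining photons along that ray.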

  8. A Causal, Data-driven Approach to Modeling the Kepler Data

    NASA Astrophysics Data System (ADS)

    Wang, Dun; Hogg, David W.; Foreman-Mackey, Daniel; Schölkopf, Bernhard

    2016-09-01

    Astronomical observations are affected by several kinds of noise, each with its own causal source; there is photon noise, stochastic source variability, and residuals coming from imperfect calibration of the detector or telescope. The precision of NASA Kepler photometry for exoplanet science—the most precise photometric measurements of stars ever made—appears to be limited by unknown or untracked variations in spacecraft pointing and temperature, and unmodeled stellar variability. Here, we present the causal pixel model (CPM) for Kepler data, a data-driven model intended to capture variability but preserve transit signals. The CPM works at the pixel level so that it can capture very fine-grained information about the variation of the spacecraft. The CPM models the systematic effects in the time series of a pixel using the pixels of many other stars and the assumption that any shared signal in these causally disconnected light curves is caused by instrumental effects. In addition, we use the target star’s future and past (autoregression). By appropriately separating, for each data point, the data into training and test sets, we ensure that information about any transit will be perfectly isolated from the model. The method has four tuning parameters—the number of predictor stars or pixels, the autoregressive window size, and two L2-regularization amplitudes for model components—which we set by cross-validation. We determine values for the tuning parameters that work well for most of the stars and apply the method to a corresponding set of target stars. We find that CPM can consistently produce low-noise light curves. In this paper, we demonstrate that pixel-level de-trending is possible while retaining transit signals, and we think that methods like CPM are generally applicable and might be useful for K2, TESS, etc., where the data are not clean postage stamps like Kepler.
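    The train/test separation at the heart of the CPM can be sketched with a ridge regression on synthetic data: the target pixel is predicted from other stars' pixels, the window containing the "transit" is held out of the fit, and subtracting the prediction removes the shared drift while leaving the transit intact. The drift model and all numbers below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
T, P = 1000, 50                      # time samples, predictor pixels (synthetic)
systematic = np.cumsum(rng.normal(size=T))            # shared "pointing drift"
# each predictor pixel sees the drift with its own coupling, plus noise
predictors = systematic[:, None] * rng.random(P) + 0.1 * rng.normal(size=(T, P))
target = 2.0 * systematic + 0.1 * rng.normal(size=T)  # target pixel
target[500:520] -= 5.0                                # a "transit" in the test window

# Train only outside the window being predicted, so the transit cannot
# leak into the fit (the CPM's key design point).
train = np.r_[0:400, 600:1000]
lam = 1e-2                                            # L2 regularization amplitude
Xtr = predictors[train]
w = np.linalg.solve(Xtr.T @ Xtr + lam * np.eye(P), Xtr.T @ target[train])

detrended = target - predictors @ w   # drift removed, transit preserved
```

Because the shared drift is present in the causally disconnected predictors but the transit is not, subtraction suppresses the former and passes the latter through.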

  9. Searching for I-band variability in stars in the M/L spectral transition region

    NASA Astrophysics Data System (ADS)

    Ramsay, Gavin; Hakala, Pasi; Doyle, J. Gerry

    2015-10-01

    We report on I-band photometric observations of 21 stars with spectral types between M8 and L4 made using the Isaac Newton Telescope. The total observation time with a cadence of <2.3 min was 58.5 h, with additional data taken at lower cadence. We test for photometric variability using the Kruskal-Wallis H-test and find that four sources (2MASS J10224821+5825453, 2MASS J07464256+2000321, 2MASS J16262034+3925190 and 2MASS J12464678+4027150) were significantly variable on at least one epoch. Three of these sources are reported as photometrically variable for the first time. If we include sources deemed marginally variable, the number of variable sources rises to 6 (29 per cent). No flares were detected from any source. The percentage of sources found to be variable is similar to that in previous studies. We summarize the mechanisms which have been put forward to explain the light curves of brown dwarfs.
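    The Kruskal-Wallis H-test used above compares photometry across observation blocks without assuming Gaussian noise. A minimal scipy example on synthetic magnitudes, in which one epoch's median is shifted, flags the source as variable at the 5% level; the magnitudes and epoch structure are invented.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# synthetic I-band magnitudes for one source over three epochs
epoch1 = 15.00 + 0.01 * rng.normal(size=80)
epoch2 = 15.05 + 0.01 * rng.normal(size=80)   # median shifted by 0.05 mag
epoch3 = 15.00 + 0.01 * rng.normal(size=80)

# Rank-based test of whether all epochs share the same distribution
H, p = stats.kruskal(epoch1, epoch2, epoch3)
is_variable = p < 0.05
```

A non-parametric test is a sensible choice here because photometric noise for faint red sources need not be Gaussian, and the H statistic depends only on ranks.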

  10. Identification of Geologic and Anthropogenic Sources of Phosphorus to Streams in California and Portions of Adjacent States, U.S.A., Using SPARROW Modeling

    NASA Astrophysics Data System (ADS)

    Domagalski, J. L.

    2013-12-01

    The SPARROW (Spatially Referenced Regressions On Watershed Attributes) model allows for the simulation of nutrient transport at ungauged catchments on a regional scale. The model was used to understand natural and anthropogenic factors affecting phosphorus transport in developed, undeveloped, and mixed watersheds. SPARROW is a statistical tool that allows for mass-balance calculation of constituent sources, transport, and aquatic decay, based upon a calibration against a subset of stream networks where concentrations and discharge have been measured. Calibration is accomplished using potential sources for a given year and may include fertilizer, geological background (based on bed-sediment samples aggregated with geochemical map units), point-source discharge, and land-use categories. NHDPlus version 2 was used to model the hydrologic system. Land-to-water transport variables tested were precipitation, permeability, soil type, tile drains, and irrigation. For this study area, point sources, cultivated land, and geological background are significant phosphorus sources to streams. Precipitation and the clay content of soil are significant land-to-water transport variables, and various stream sizes show significance with respect to aquatic decay. Specific rock types result in different levels of phosphorus loading and watershed yield. Some important geological sources are volcanic rocks (andesite and basalt), granodiorite, glacial deposits, and Mesozoic to Cenozoic marine deposits. Marine sediments vary in their phosphorus content but are responsible for some of the highest natural phosphorus yields, especially along the Central and Southern California coast. The Miocene Monterey Formation was found to be an especially important local source in southern California. In contrast, mixed metamorphic and igneous assemblages such as argillites, peridotite, and shales of the Trinity Mountains of northern California result in some of the lowest phosphorus yields.
The agriculturally productive Central Valley of California has a low amount of background phosphorus in spite of inputs from streams draining upland areas. Many years of intensive agriculture may be responsible for the decrease of soil phosphorus in that area. Watersheds with significant background sources of phosphorus and large amounts of cultivated land had some of the highest per-hectare yields. Seven different stream systems, important for water management or for describing transport processes, were investigated in detail for downstream changes in sources and loads. For example, the Klamath River (Oregon and California) has intensive agriculture and andesite-derived phosphorus in its upper reach. The proportion of agriculture-derived phosphorus decreases as the river flows into California before discharging to the ocean. The river flows through at least three different geological background source types, ranging from high to intermediate to very low phosphorus yield. Knowledge of the role of natural sources in developed watersheds is critical for developing nutrient management strategies, and these model results will have applicability for the establishment of realistic nutrient criteria.

  11. Cumulative biomedical risk and social cognition in the second year of life: prediction and moderation by responsive parenting.

    PubMed

    Wade, Mark; Madigan, Sheri; Akbari, Emis; Jenkins, Jennifer M

    2015-01-01

    At 18 months, children show marked variability in their social-cognitive skill development, and the preponderance of past research has focused on constitutional and contextual factors in explaining this variability. Extending this literature, the current study examined whether cumulative biomedical risk represents another source of variability in social cognition at 18 months. Further, we aimed to determine whether responsive parenting moderated the association between biomedical risk and social cognition. A prospective community birth cohort of 501 families was recruited at the time of the child's birth. Cumulative biomedical risk was measured as a count of 10 prenatal/birth complications. Families were followed up at 18 months, at which point social-cognitive data were collected on children's joint attention, empathy, cooperation, and self-recognition using previously validated tasks. Concurrently, responsive maternal behavior was assessed through observational coding of mother-child interactions. After controlling for covariates (e.g., age, gender, child language, socioeconomic variables), both cumulative biomedical risk and maternal responsivity significantly predicted social cognition at 18 months. Above and beyond these main effects, there was also a significant interaction between biomedical risk and maternal responsivity, such that higher biomedical risk was significantly associated with compromised social cognition at 18 months, but only in children who experienced low levels of responsive parenting. For those receiving comparatively high levels of responsive parenting, there was no apparent effect of biomedical risk on social cognition. This study shows that cumulative biomedical risk may be one source of inter-individual variability in social cognition at 18 months. However, positive postnatal experiences, particularly high levels of responsive parenting, may protect children against the deleterious effects of these risks on social cognition.
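    Statistically, the moderation result above corresponds to a risk-by-responsivity interaction term in a regression. The numpy OLS sketch below generates synthetic data with the same qualitative structure (risk lowers the outcome only when responsivity is low) and recovers a positive interaction coefficient; it illustrates the analysis strategy, not the study's actual model or data.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 501                                          # sample size matching the cohort
risk = rng.poisson(2, size=n).astype(float)      # count of prenatal/birth complications
responsivity = rng.normal(size=n)                # observed responsive-parenting score

# Synthetic generating process: risk harms the outcome only under
# low responsivity, mirroring the reported moderation pattern.
social_cog = (-0.3 * risk * (responsivity < 0)
              + 0.2 * responsivity
              + rng.normal(size=n))

# OLS with main effects and the interaction term
X = np.column_stack([np.ones(n), risk, responsivity, risk * responsivity])
beta, *_ = np.linalg.lstsq(X, social_cog, rcond=None)
# beta[3] > 0: higher responsivity weakens the negative effect of risk
```

In practice one would also test the interaction's significance and probe simple slopes at high and low responsivity, which is how "protective" effects like this are usually reported.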

  12. Long-term Variability of H2CO Masers in Star-forming Regions

    NASA Astrophysics Data System (ADS)

    Andreev, N.; Araya, E. D.; Hoffman, I. M.; Hofner, P.; Kurtz, S.; Linz, H.; Olmi, L.; Lorran-Costa, I.

    2017-10-01

    We present results of a multi-epoch monitoring program on the variability of 6 cm formaldehyde (H2CO) masers in the massive star-forming region NGC 7538 IRS 1 from 2008 to 2015, conducted with the Green Bank Telescope, the Westerbork Radio Telescope, and the Very Large Array. We found that the similar variability behaviors of the two formaldehyde maser velocity components in NGC 7538 IRS 1 (pointed out by Araya and collaborators in 2007) have continued. The possibility that the variability is caused by changes in the maser amplification path in regions with similar morphology and kinematics is discussed. We also observed 12.2 GHz methanol and 22.2 GHz water masers toward NGC 7538 IRS 1. The brightest maser components of the CH3OH and H2O species show a decrease in flux density as a function of time. The brightest H2CO maser component also shows a decrease in flux density and has an LSR velocity similar to those of the brightest H2O and 12.2 GHz CH3OH masers. The line parameters of radio recombination lines and of the 20.17 and 20.97 GHz CH3OH transitions in NGC 7538 IRS 1 are also reported. In addition, we observed five other 6 cm formaldehyde maser regions. We found no evidence of significant variability of the 6 cm masers in these regions with respect to previous observations, the only possible exception being the maser in G29.96-0.02. All six sources were also observed in the H2(13)CO isotopologue transition of the 6 cm H2CO line; H2(13)CO absorption was detected in five of the sources. Estimated column density ratios [H2(12)CO]/[H2(13)CO] are reported.

  13. Isotopic constraints on global atmospheric methane sources and sinks: a critical assessment of recent findings and new data

    NASA Astrophysics Data System (ADS)

    Schwietzke, S.; Sherwood, O.; Michel, S. E.; Bruhwiler, L.; Dlugokencky, E. J.; Tans, P. P.

    2017-12-01

    Methane isotopic data have increasingly been used in recent studies to help constrain global atmospheric methane sources and sinks. The added scientific contributions to this field include (i) careful comparisons and merging of atmospheric isotope measurement datasets to increase spatial coverage, (ii) in-depth analyses of observed isotopic spatial gradients and seasonal patterns, and (iii) improved datasets of isotopic source signatures. Different interpretations have been made regarding the utility of the isotopic data on the diagnosis of methane sources and sinks. Some studies have found isotopic evidence of a largely microbial source causing the renewed growth in global atmospheric methane since 2007, and underestimated global fossil fuel methane emissions compared to most previous studies. However, other studies have challenged these conclusions by pointing out substantial spatial variability in isotopic source signatures as well as open questions in atmospheric sinks and biomass burning trends. This presentation will review and contrast the main arguments and evidence for the different conclusions. The analysis will distinguish among the different research objectives including (i) global methane budget source attribution in steady-state, (ii) source attribution of recent global methane trends, and (iii) identifying specific methane sources in individual plumes during field campaigns. Additional comparisons of model experiments with atmospheric measurements and updates on isotopic source signature data will complement the analysis.

  14. Interferometry with flexible point source array for measuring complex freeform surface and its design algorithm

    NASA Astrophysics Data System (ADS)

    Li, Jia; Shen, Hua; Zhu, Rihong; Gao, Jinming; Sun, Yue; Wang, Jinsong; Li, Bo

    2018-06-01

    The precision of measurements of aspheric and freeform surfaces remains the primary factor restricting their manufacture and application. One effective means of measuring such surfaces involves using reference or probe beams with angle modulation, such as the tilted-wave interferometer (TWI). Measurement efficiency can be improved by obtaining the optimum point source array for each test piece before TWI measurements. To form a point source array based on the gradients of different surfaces under test, we established a mathematical model describing the relationship between the point source array and the test surface. However, the optimal point sources are irregularly distributed. In order to achieve a flexible point source array matched to the gradient of the test surface, a novel interference setup using a fiber array is proposed, in which every point source can be switched on and off independently. Simulations and actual measurements of two different surfaces are given in this paper to verify the mathematical model. Finally, we performed an experiment testing an off-axis ellipsoidal surface that proved the validity of the proposed interference system.

  15. Identification of sensitive parameters in the modeling of SVOC reemission processes from soil to atmosphere.

    PubMed

    Loizeau, Vincent; Ciffroy, Philippe; Roustan, Yelva; Musson-Genon, Luc

    2014-09-15

    Semi-volatile organic compounds (SVOCs) are subject to long-range atmospheric transport because of successive transport, deposition and reemission processes. Several experimental datasets available in the literature suggest that soil is a non-negligible contributor of SVOCs to the atmosphere. Coupling soil and atmosphere in integrated models and simulating reemission processes can therefore be essential for estimating atmospheric concentrations of several pollutants. However, the sources of uncertainty and variability are multiple (soil properties, meteorological conditions, chemical-specific parameters) and can significantly influence the determination of reemissions. In order to identify the key parameters in reemission modeling and their effect on global modeling uncertainty, we conducted a sensitivity analysis targeted on the 'reemission' output variable. Different parameters were tested, including soil properties, partition coefficients and meteorological conditions. We performed an EFAST sensitivity analysis for four chemicals (benzo-a-pyrene, hexachlorobenzene, PCB-28 and lindane) and different spatial scenarios (regional and continental scales). Partition coefficients between the air, solid and water phases are influential, depending on the precision of the data and the global behavior of the chemical. Reemissions were less sensitive to soil parameters (soil organic matter and water contents at field capacity and wilting point). A mapping of these parameters at a regional scale is sufficient to correctly estimate reemissions when compared to other sources of uncertainty. Copyright © 2014 Elsevier B.V. All rights reserved.
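    EFAST estimates first-order sensitivity indices Si = Var(E[Y|Xi])/Var(Y) via frequency-tagged sampling; the same quantity can be approximated crudely by binning each input and computing the variance of the conditional means. The toy "reemission" model below, in which a partition coefficient dominates and a soil parameter contributes weakly, mirrors the qualitative finding above but is not the actual SVOC model and does not implement EFAST's sampling scheme.

```python
import numpy as np

def first_order_index(x, y, bins=20):
    """Crude Si = Var(E[y|x]) / Var(y) via equal-count binning on x."""
    order = np.argsort(x)
    groups = np.array_split(y[order], bins)
    cond_means = np.array([g.mean() for g in groups])
    return cond_means.var() / y.var()

rng = np.random.default_rng(0)
n = 20_000
log_kaw = rng.normal(size=n)   # air-water partition coefficient (dominant input)
soil_om = rng.normal(size=n)   # soil organic matter (weak input)

# invented response: partitioning drives reemission, soil matters little
reemission = np.exp(0.8 * log_kaw) + 0.1 * soil_om

Si_k = first_order_index(log_kaw, reemission)
Si_om = first_order_index(soil_om, reemission)
```

The binned indices correctly rank the partition coefficient far above the soil parameter, which is the kind of conclusion the EFAST analysis in the abstract supports.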

  16. Signal to noise quantification of regional climate projections

    NASA Astrophysics Data System (ADS)

    Li, S.; Rupp, D. E.; Mote, P.

    2016-12-01

    One of the biggest challenges in interpreting climate model outputs for impacts studies and adaptation planning is understanding the sources of disagreement among models (which is often used, imperfectly, as a stand-in for system uncertainty). Internal variability is a primary source of uncertainty in climate projections, especially for precipitation, for which models disagree about even the sign of changes in large areas like the continental US. Taking advantage of a large initial-condition ensemble of regional climate simulations, this study quantifies the magnitude of changes forced by increasing greenhouse gas concentrations relative to internal variability. Results come from a large initial-condition ensemble of regional climate model simulations generated by weather@home, a citizen-science computing platform, in which the western United States climate was simulated for the recent past (1985-2014) and future (2030-2059) using a 25-km horizontal resolution regional climate model (HadRM3P) nested in a global atmospheric model (HadAM3P). We quantify grid-point-level signal-to-noise not just in the temperature and precipitation responses, but also in the energy and moisture flux terms related to those responses, to provide important insights regarding uncertainty in climate change projections at local and regional scales. These results will aid modelers in determining appropriate ensemble sizes for different climate variables and help users of climate model output interpret climate model projections.
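    The grid-point signal-to-noise computation reduces to dividing the ensemble-mean forced change by the spread attributable to internal variability. A numpy sketch on a synthetic initial-condition ensemble (the forced signal, ensemble size, and grid are invented):

```python
import numpy as np

rng = np.random.default_rng(0)
members, ny, nx = 100, 10, 10     # ensemble size and toy grid
forced = 1.5                      # imposed change between periods (synthetic)

# each member: internal variability only in the past, forced shift in the future
past = rng.normal(size=(members, ny, nx))
future = forced + rng.normal(size=(members, ny, nx))

# signal: ensemble-mean change at each grid point
change = future.mean(axis=0) - past.mean(axis=0)
# noise: member-to-member spread (internal variability) in both periods
noise = np.sqrt(future.var(axis=0, ddof=1) + past.var(axis=0, ddof=1))
snr = change / noise
```

Where the signal-to-noise ratio is well below one, the projected change at that grid point is indistinguishable from internal variability, which is exactly the situation the abstract describes for precipitation over large areas.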

  17. EVEREST: Pixel Level Decorrelation of K2 Light Curves

    NASA Astrophysics Data System (ADS)

    Luger, Rodrigo; Agol, Eric; Kruse, Ethan; Barnes, Rory; Becker, Andrew; Foreman-Mackey, Daniel; Deming, Drake

    2016-10-01

    We present EPIC Variability Extraction and Removal for Exoplanet Science Targets (EVEREST), an open-source pipeline for removing instrumental noise from K2 light curves. EVEREST employs a variant of pixel level decorrelation to remove systematics introduced by the spacecraft’s pointing error and a Gaussian process to capture astrophysical variability. We apply EVEREST to all K2 targets in campaigns 0-7, yielding light curves with precision comparable to that of the original Kepler mission for stars brighter than Kp ≈ 13, and within a factor of two of the Kepler precision for fainter targets. We perform cross-validation and transit injection and recovery tests to validate the pipeline, and compare our light curves to the other de-trended light curves available for download at the MAST High Level Science Products archive. We find that EVEREST achieves the highest average precision of any of these pipelines for unsaturated K2 stars. The improved precision of these light curves will aid in exoplanet detection and characterization, investigations of stellar variability, asteroseismology, and other photometric studies. The EVEREST pipeline can also easily be applied to future surveys, such as the TESS mission, to correct for instrumental systematics and enable the detection of low signal-to-noise transiting exoplanets. The EVEREST light curves and the source code used to generate them are freely available online.

  18. Spatial variability in levels of benzene, formaldehyde, and total benzene, toluene, ethylbenzene and xylenes in New York City: a land-use regression study.

    PubMed

    Kheirbek, Iyad; Johnson, Sarah; Ross, Zev; Pezeshki, Grant; Ito, Kazuhiko; Eisl, Holger; Matte, Thomas

    2012-07-31

    Hazardous air pollutant exposures are common in urban areas, contributing to increased risk of cancer and other adverse health outcomes. While recent analyses indicate that New York City residents experience significantly higher cancer risks attributable to hazardous air pollutant exposures than the United States as a whole, limited data exist to assess intra-urban variability in air toxics exposures. To assess intra-urban spatial variability in exposures to common hazardous air pollutants, street-level air sampling for volatile organic compounds and aldehydes was conducted at 70 sites throughout New York City during the spring of 2011. Land-use regression models were developed using a subset of 59 sites and validated against the remaining 11 sites to describe the relationship between concentrations of benzene, total BTEX (benzene, toluene, ethylbenzene, xylenes) and formaldehyde and indicators of local sources, adjusting for temporal variation. Total BTEX levels exhibited the most spatial variability, followed by benzene and formaldehyde (coefficients of variation of the temporally adjusted measurements: 0.57, 0.35, and 0.22, respectively). Total roadway length within 100 m, traffic signal density within 400 m of monitoring sites, and an indicator of temporal variation explained 65% of the total variability in benzene, while 70% of the total variability in BTEX was accounted for by traffic signal density within 450 m, density of permitted solvent-use industries within 500 m, and an indicator of temporal variation. Measures of temporal variation, traffic signal density within 400 m, road length within 100 m, and interior building area within 100 m (an indicator of heating fuel combustion) predicted 83% of the total variability in formaldehyde. The models built with the modeling subset predicted concentrations well, accounting for 62% to 68% of monitored values at validation sites. 
Traffic and point source emissions cause substantial variation in street-level exposures to common toxic volatile organic compounds in New York City. Land-use regression models were successfully developed for benzene, formaldehyde, and total BTEX using spatial indicators of on-road vehicle emissions and emissions from stationary sources. These estimates will improve the understanding of health effects of individual pollutants in complex urban pollutant mixtures and inform local air quality improvement efforts that reduce disparities in exposure.
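
    A land-use regression of this kind can be sketched as an ordinary least-squares fit of temporally adjusted concentrations on GIS-derived covariates. The site data, covariates and coefficients below are invented for illustration, not the study's:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Hypothetical site-level data: 59 modeling sites with GIS covariates
    # (roadway length within 100 m, traffic-signal density within 400 m)
    # and temporally adjusted benzene concentrations (ug/m^3).
    n = 59
    road_len = rng.uniform(0, 500, n)     # m of road within the 100 m buffer
    signals = rng.uniform(0, 10, n)       # signals per km^2 within 400 m
    benzene = 0.4 + 0.002 * road_len + 0.08 * signals + rng.normal(0, 0.15, n)

    # Ordinary least squares with an intercept column.
    X = np.column_stack([np.ones(n), road_len, signals])
    beta, *_ = np.linalg.lstsq(X, benzene, rcond=None)

    pred = X @ beta
    r2 = 1 - ((benzene - pred) ** 2).sum() / ((benzene - benzene.mean()) ** 2).sum()
    print(f"intercept={beta[0]:.2f}, R^2={r2:.2f}")
    ```

    In practice the fitted model is then applied to the covariate surfaces across the whole city, and validation consists of computing the same R^2 on the held-out sites.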

  19. Ground-Motion Variability for a Strike-Slip Earthquake from Broadband Ground-Motion Simulations

    NASA Astrophysics Data System (ADS)

    Iwaki, A.; Maeda, T.; Morikawa, N.; Fujiwara, H.

    2016-12-01

    One of the important issues in seismic hazard analysis is the evaluation of ground-motion variability due to epistemic and aleatory uncertainties in various aspects of ground-motion simulations. This study investigates the within-event ground-motion variability in broadband ground-motion simulations for strike-slip events. We conduct ground-motion simulations for a past event (the 2000 MW 6.6 Tottori earthquake) using a set of characterized source models (e.g. Irikura and Miyake, 2011) that accounts for aleatory variability. Broadband ground motion is computed by a hybrid approach that combines a 3D finite-difference method (> 1 s) and the stochastic Green's function method (< 1 s), using the 3D velocity model J-SHIS v2. We consider various locations of the asperities, defined as the regions of large slip and stress drop within the fault, and of the rupture nucleation point (hypocenter). Ground-motion records at 29 K-NET and KiK-net stations are used to validate our simulations. Comparing the simulated and observed ground motion, we find the performance of the simulations acceptable given that the source parameters are poorly constrained. In addition to the observation stations, we set 318 virtual receivers at 10-km spatial intervals for statistical analysis of the simulated ground motion; the maximum fault distance is 160 km. The standard deviation (SD) of the simulated 5%-damped acceleration response spectra (Sa) of the RotD50 component (Boore, 2010) is investigated at each receiver. The SD from 50 different patterns of asperity locations is generally smaller than 0.15 in log10 units (0.34 in natural log). It shows distance dependence at periods shorter than 1 s: SD increases as distance decreases. On the other hand, the SD from 39 different hypocenter locations is again smaller than 0.15 in log10 units, and shows azimuthal dependence at long periods: it increases as the rupture directivity parameter Xcosθ (Somerville et al., 1997) increases at periods longer than 1 s. The characteristics of ground-motion variability inferred from simulations can provide information on variability in simulation-based seismic hazard assessment for future earthquakes. We will further investigate the variability due to other source parameters: rupture velocity and short-period level.
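
    The variability metric itself is simple: a standard deviation of log Sa over the source-scenario realizations at one receiver, where the log10 and natural-log conventions differ only by a factor of ln(10). A small sketch with hypothetical Sa values:

    ```python
    from math import log, log10
    from statistics import pstdev

    # Hypothetical 5%-damped Sa values (cm/s^2) at one virtual receiver from
    # different asperity-location realizations of a characterized source model.
    sa = [310.0, 270.0, 350.0, 295.0, 330.0, 260.0, 305.0, 340.0]

    sd_log10 = pstdev([log10(x) for x in sa])
    sd_ln = pstdev([log(x) for x in sa])

    # The two conventions differ by the constant factor ln(10) ~ 2.303,
    # which is how 0.15 (log10) maps to 0.34 (natural log).
    print(f"SD = {sd_log10:.3f} log10 units = {sd_ln:.3f} ln units")
    ```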

  20. Coastal Seabed Mapping with Hyperspectral and Lidar data

    NASA Astrophysics Data System (ADS)

    Taramelli, A.; Valentini, E.; Filipponi, F.; Cappucci, S.

    2017-12-01

    A synoptic view of the coastal seascape and its dynamics requires the quantitative ability to dissect the different components of a complex seafloor, where a mixture of geo-biological facies determines geomorphological features and their coverage. The present study uses an analytical approach that takes advantage of a multidimensional model to integrate different data sources from airborne hyperspectral and LiDAR remote sensing and in situ measurements to detect anthropogenic features and ecological `tipping points' in coastal seafloors. The proposed approach generates coastal seabed maps using: 1) a multidimensional dataset to account for radiometric and morphological properties of the waters and the seafloor; 2) a field spectral library to assimilate the high environmental variability into the multidimensional model; 3) a final classification scheme to represent the spatial gradients in the seafloor. The spatial pattern of the response to anthropogenic forcing may be indistinguishable from patterns of natural variability. It is argued that this novel approach to defining tipping points following anthropogenic impacts could be most valuable in the management of natural resources and the economic development of coastal areas worldwide. Examples are reported from different sites in the Mediterranean Sea, from both Marine Protected Areas and unprotected areas.

  1. Changing Regulations of COD Pollution Load of Weihe River Watershed above TongGuan Section, China

    NASA Astrophysics Data System (ADS)

    Zhu, Lei; Liu, WanQing

    2018-02-01

    The TongGuan Section of the Weihe River Watershed is a provincial boundary section between Shaanxi Province and Henan Province, China. The Weihe River Watershed above the TongGuan Section is taken as the research object in this paper, and COD is chosen as the water quality parameter. According to the discharge characteristics of point-source and non-point-source pollution, a characteristic section load (CSLD) method is proposed, and the point and non-point source pollution loads of the Weihe River Watershed above the TongGuan Section are calculated for the rainy, normal and dry seasons of 2013. The results show that the monthly point-source pollution loads are discharged steadily, that the monthly non-point-source pollution loads change greatly, and that the proportion of the total COD load contributed by non-point sources decreases from the rainy through the normal to the dry season.
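
    The general idea of separating steady point-source loads from rainfall-driven non-point loads can be sketched with a simple baseline subtraction (the CSLD method's actual formulation is not reproduced here; this is a common simplification, and the monthly loads are invented, not TongGuan data):

    ```python
    # Monthly COD loads (t/month) at a hypothetical control section.
    # Point-source discharge is roughly steady, so the dry-month minimum
    # serves as a point-source baseline; the excess in wetter months is
    # attributed to non-point (rainfall-driven) sources.
    monthly_load = [820, 800, 850, 900, 1100, 1400, 1900, 2100, 1700, 1200, 950, 830]

    point = min(monthly_load)                    # steady point-source baseline
    nonpoint = [m - point for m in monthly_load]

    rainy = sum(nonpoint[5:9])                   # Jun-Sep
    dry = sum(nonpoint[0:3]) + sum(nonpoint[10:12])
    print(f"non-point load: rainy season {rainy} t, dry months {dry} t")
    ```

    The contrast between the two sums is the quantitative form of the abstract's conclusion: the non-point share of the total load is largest in the rainy season and smallest in the dry season.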

  2. XMM-Newton Observations of NGC 253: Resolving the Emission Components in the Disk and Nuclear Area

    NASA Technical Reports Server (NTRS)

    Pietsch, W.; Borozdin, K. N.; Branduardi-Raymont, G.; Cappi, M.; Ehle, M.; Ferrando, P.; Freyberg, M. J.; Kahn, S. M.; Ponman, T. J.; Ptak, A.

    2000-01-01

    We describe the first XMM-Newton observations of the starburst galaxy NGC 253. As known from previous X-ray observations, NGC 253 shows a mixture of extended (disk and halo) and point-source emission. The high XMM-Newton throughput allows for the first time a detailed investigation of the spatial, spectral and variability properties of these components simultaneously. We detect a bright X-ray transient approx. 70 arcsec SSW of the nucleus and show the spectrum and light curve of the brightest point source (approx. 30 arcsec S of the nucleus, most likely a black-hole X-ray binary, BHXRB). The unprecedented combination of RGS and EPIC also sheds new light on the emission of the complex nuclear region, the X-ray plume and the diffuse disk emission. In particular, EPIC images reveal that the limb-brightening of the plume is mostly seen in higher ionization emission lines, while in the lower ionization lines, and below 0.5 keV, the plume is more homogeneously structured, pointing to new interpretations of the make-up of the starburst-driven outflow. Assuming that type IIa supernova remnants (SNRs) are mostly responsible for the E > 4 keV emission, the detection with EPIC of the 6.7 keV line allows us to estimate a supernova rate within the nuclear starburst of 0.2 per year.

  3. GARLIC, A SHIELDING PROGRAM FOR GAMMA RADIATION FROM LINE- AND CYLINDER- SOURCES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Roos, M.

    1959-06-01

    GARLIC is a program for computing the gamma-ray flux or dose rate at a shielded isotropic point detector due to a line source or the line equivalent of a cylindrical source. The source strength distribution along the line must be either uniform or an arbitrary part of the positive half-cycle of a cosine function. The line source can be oriented arbitrarily with respect to the main shield and the detector, except that the detector must not be located on the line source or on its extension. The main shield is a homogeneous plane slab in which scattered radiation is accounted for by multiplying each point element of the line source by a point-source buildup factor inside the integral over the point elements. Between the main shield and the line source, additional shields can be introduced, which are either plane slabs parallel to the main shield or cylindrical rings coaxial with the line source. Scattered radiation in the additional shields can only be accounted for by constant buildup factors outside the integral. GARLIC-xyz is an extended version particularly suited for the frequently met problem of shielding a room containing a large number of line sources in different positions. The program computes the angles and linear dimensions of a problem for GARLIC when the positions of the detector point and the end points of the line source are given as points in an arbitrary rectangular coordinate system. As an example, isodose curves in water are presented for a monoenergetic cosine-distributed line source at several source energies and for an operating fuel element of the Swedish reactor R3.

  4. The Importance of Rotational Time-scales in Accretion Variability

    NASA Astrophysics Data System (ADS)

    Costigan, Gráinne; Vink, Jorick; Scholz, Aleks; Testi, Leonardo; Ray, Tom

    2013-07-01

    For the first few million years, one of the dominant sources of emission from a low-mass young stellar object is accretion. This process regulates the flow of material and angular momentum from the surroundings to the central object, and is thought to play an important role in setting the long-term stellar properties. Variability is a well-documented attribute of accretion, and has been observed on time-scales from days to years. However, where these variations come from is not clear. The current model for accretion is magnetospheric accretion, in which the stellar magnetic field truncates the disc, allowing matter to flow from the disc onto the surface of the star. This model allows variations in the accretion rate to arise from many different sources, such as the magnetic field, the circumstellar disc and the interaction of the different parts of the system. We have been studying unbiased samples of accretors in order to identify the dominant time-scales and typical magnitudes of variations; in this way different sources of variations can be excluded and any missing physics in these systems identified. Through our previous work with the Long-term Accretion Monitoring Program (LAMP), we found 10 accretors in the Cha I region whose variability is dominated by short-term variations of 2 weeks, the shortest time period between spectroscopic observations in a campaign spanning 15 months; this rules out large-scale processes in the disk as the origin of this variability. On the basis of this study we have gone further, studying the accretion signature H-alpha over time-scales of minutes and days in a set of Herbig Ae and T Tauri stars. Using the same methods as in LAMP, we found the dominant time-scale of variations to be days. Both samples point towards the rotation period of these objects as an important time-scale for accretion variations, allowing us to indicate the most likely sources of these variations.

  5. Occurrence of dissolved solids, nutrients, atrazine, and fecal coliform bacteria during low flow in the Cheney Reservoir watershed, south-central Kansas, 1996

    USGS Publications Warehouse

    Christensen, V.G.; Pope, L.M.

    1997-01-01

    A network of 34 stream sampling sites was established in the 1,005-square-mile Cheney Reservoir watershed, south-central Kansas, to evaluate spatial variability in concentrations of selected water-quality constituents during low flow. Land use in the Cheney Reservoir watershed is almost entirely agricultural, consisting of pasture and cropland. Cheney Reservoir provides 40 to 60 percent of the water needs for the city of Wichita, Kansas. Sampling sites were selected to determine the relative contribution of point and nonpoint sources of water-quality constituents to streams in the watershed and to identify areas of potential water-quality concern. Water-quality constituents of interest included dissolved solids and major ions, nitrogen and phosphorus nutrients, atrazine, and fecal coliform bacteria. Water from the 34 sampling sites was sampled once in June and once in September 1996 during Phase I of a two-phase study to evaluate water-quality constituent concentrations and loading characteristics in selected subbasins within the watershed and into and out of Cheney Reservoir. Information summarized in this report pertains to Phase I and was used in the selection of six long-term monitoring sites for Phase II of the study. The average low-flow constituent concentrations in water collected during Phase I from all sampling sites were 671 milligrams per liter for dissolved solids, 0.09 milligram per liter for dissolved ammonia as nitrogen, 0.85 milligram per liter for dissolved nitrite plus nitrate as nitrogen, 0.19 milligram per liter for total phosphorus, 0.20 microgram per liter for dissolved atrazine, and 543 colonies per 100 milliliters of water for fecal coliform bacteria. Generally, these constituents were of nonpoint-source origin and, with the exception of dissolved solids, probably were related to agricultural activities. 
Dissolved solids probably occur naturally as the result of the dissolution of rocks and ancient marine sediments containing large salt deposits. Nutrients also may have resulted from point-source discharges from wastewater-treatment plants. An examination of water-quality characteristics during low flow in the Cheney Reservoir watershed provided insight into the spatial variability of water-quality constituents and allowed for between-site comparisons under stable-flow conditions; identified areas of the watershed that may be of particular water-quality concern; provided a preliminary evaluation of contributions from point and nonpoint sources of contamination; and identified areas of the watershed where long-term monitoring may be appropriate to quantify perceived water-quality problems.

  6. An X-ray Investigation of the NGC 346 Field in the SMC (2): The Field Population

    NASA Technical Reports Server (NTRS)

    Naze, Y.; Hartwell, J. M.; Stevens, I. R.; Manfroid, J.; Marchenko, S.; Corcoran, M. F.; Moffat, A. F. J.; Skalkowski, G.

    2003-01-01

    We present results from a Chandra observation of the NGC 346 cluster, which is the ionizing source of N66, the most luminous HII region and the largest star formation region in the SMC. In the first part of this investigation, we analysed the X-ray properties of the cluster itself and the remarkable star HD 5980. But the field contains additional objects of interest. In total, 79 X-ray point sources were detected in the Chandra observation: this is more than five times the number of sources detected by previous X-ray surveys. We investigate here their characteristics in detail. The sources possess rather high hardness ratios, and their cumulative luminosity function is steeper than that for the rest of the SMC at higher luminosities. Their absorption columns suggest that most of the sources belong to NGC 346. Using new UBVRI imaging with the ESO 2.2m telescope, we also discovered possible counterparts for 36 of these X-ray sources and estimated a B spectral type for a large number of these counterparts. This tends to suggest that most of the X-ray sources in the field are in fact X-ray binaries. Finally, some objects show X-ray and/or optical variability, warranting further monitoring.

  7. The second ROSAT All-Sky Survey source catalogue: the deepest X-ray All-Sky Survey before eROSITA

    NASA Astrophysics Data System (ADS)

    Boller, T.; Freyberg, M.; Truemper, J.

    2014-07-01

    We present the second ROSAT all-sky survey source catalogue (RASS2; Boller, Freyberg & Truemper 2014, submitted). The RASS2 is an extension of the ROSAT Bright Source Catalogue (BSC) and the ROSAT Faint Source Catalogue (FSC); the total number of sources in the second RASS catalogue is 124,489. The extensions include (i) the supply of new user data products, i.e., X-ray images, X-ray spectra, and X-ray light curves, (ii) a visual screening of each individual detection, and (iii) an improved detection algorithm compared to the SASS II processing. This results in a maximally reliable and complete catalogue of point sources detected during the ROSAT survey observations. We discuss for the first time the intra-day timing and spectral properties of the second RASS catalogue. We find new highly variable sources and discuss their timing properties. Power-law fits have been applied, which allow us to determine X-ray fluxes, X-ray absorbing columns, and X-ray photon indices. We give access to the second RASS catalogue and the associated data products via a web interface to allow the community to perform further scientific exploration. The RASS2 catalogue provides the deepest X-ray all-sky survey before eROSITA data become available.

  8. Determinants of Wealth Fluctuation: Changes in Hard-To-Measure Economic Variables in a Panel Study

    PubMed Central

    Pfeffer, Fabian T.; Griffin, Jamie

    2017-01-01

    Measuring fluctuation in families’ economic conditions is the raison d’être of household panel studies. Accordingly, a particularly challenging critique is that extreme fluctuation in measured economic characteristics might indicate compounding measurement error rather than actual changes in families’ economic wellbeing. In this article, we address this claim by moving beyond the assumption that particularly large fluctuation in economic conditions might be too large to be realistic. Instead, we examine predictors of large fluctuation, capturing sources related to actual socio-economic changes as well as potential sources of measurement error. Using the Panel Study of Income Dynamics, we study between-wave changes in a dimension of economic wellbeing that is especially hard to measure, namely, net worth as an indicator of total family wealth. Our results demonstrate that even very large between-wave changes in net worth can be attributed to actual socio-economic and demographic processes. We do, however, also identify a potential source of measurement error that contributes to large wealth fluctuation, namely, the treatment of incomplete information, presenting a pervasive challenge for any longitudinal survey that includes questions on economic assets. Our results point to ways for improving wealth variables both in the data collection process (e.g., by measuring active savings) and in data processing (e.g., by improving imputation algorithms). PMID:28316752

  9. GLAST and Ground-Based Gamma-Ray Astronomy

    NASA Technical Reports Server (NTRS)

    McEnery, Julie

    2008-01-01

    The launch of the Gamma-ray Large Area Space Telescope together with the advent of a new generation of ground-based gamma-ray detectors such as VERITAS, HESS, MAGIC and CANGAROO, will usher in a new era of high-energy gamma-ray astrophysics. GLAST and the ground based gamma-ray observatories will provide highly complementary capabilities for spectral, temporal and spatial studies of high energy gamma-ray sources. Joint observations will cover a huge energy range, from 20 MeV to over 20 TeV. The LAT will survey the entire sky every three hours, allowing it both to perform uniform, long-term monitoring of variable sources and to detect flaring sources promptly. Both functions complement the high-sensitivity pointed observations provided by ground-based detectors. Finally, the large field of view of GLAST will allow a study of gamma-ray emission on large angular scales and identify interesting regions of the sky for deeper studies at higher energies. In this poster, we will discuss the science returns that might result from joint GLAST/ground-based gamma-ray observations and illustrate them with detailed source simulations.

  10. The California Baseline Methane Survey

    NASA Astrophysics Data System (ADS)

    Duren, R. M.; Thorpe, A. K.; Hopkins, F. M.; Rafiq, T.; Bue, B. D.; Prasad, K.; Mccubbin, I.; Miller, C. E.

    2017-12-01

    The California Baseline Methane Survey is the first systematic, statewide assessment of methane point source emissions. The objectives are to reduce uncertainty in the state's methane budget and to identify emission mitigation priorities for state and local agencies, utilities and facility owners. The project combines remote sensing of large areas with airborne imaging spectroscopy and spatially resolved bottom-up data sets to detect, quantify and attribute emissions from diverse sectors including agriculture, waste management, oil and gas production and the natural gas supply chain. Phase 1 of the project surveyed nearly 180,000 individual facilities and infrastructure components across California in 2016 - achieving completeness rates ranging from 20% to 100% per emission sector at < 5 meters spatial resolution. Additionally, intensive studies of key areas and sectors were performed to assess source persistence and variability at time scales ranging from minutes to months. Phase 2 of the project continues with additional data collection in Spring and Fall 2017. We describe the survey design and the measurement, modeling and analysis methods. We present initial findings regarding the spatial, temporal and sectoral distribution of methane point source emissions in California and their estimated contribution to the state's total methane budget. We provide case studies and lessons learned about key sectors, including examples where super-emitters were identified and mitigated. We summarize challenges and recommendations for future methane research, inventories and mitigation guidance within and beyond California.

  11. Bayesian data fusion for spatial prediction of categorical variables in environmental sciences

    NASA Astrophysics Data System (ADS)

    Gengler, Sarah; Bogaert, Patrick

    2014-12-01

    First developed to predict continuous variables, Bayesian Maximum Entropy (BME) has become a complete framework for space-time prediction since it was extended to predict categorical variables and mixed random fields. The method proposes solutions for combining several sources of data whatever the nature of the information. However, the various attempts made to adapt the BME methodology to categorical variables and mixed random fields faced some limitations, such as a high computational burden. The main objective of this paper is to overcome this limitation by generalizing the Bayesian Data Fusion (BDF) theoretical framework to categorical variables, which is in effect a simplification of the BME method through the convenient conditional independence hypothesis. The BDF methodology for categorical variables is first described and then applied to a practical case study: the estimation of soil drainage classes using a soil map and point observations in the sandy area of Flanders around the city of Mechelen (Belgium). The BDF approach is compared to BME along with more classical approaches, such as Indicator CoKriging (ICK) and logistic regression. Estimators are compared using various indicators, namely the Percentage of Correctly Classified locations (PCC) and the Average Highest Probability (AHP). Although the BDF methodology for categorical variables is a simplification of the BME approach, both methods lead to similar results and have strong advantages over ICK and logistic regression.
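
    The two validation indicators can be sketched as follows, using their common definitions in geostatistical validation (hypothetical class probabilities, not the paper's data): PCC is the fraction of locations where the most probable predicted class matches the observed class, and AHP is the mean of the highest predicted probability.

    ```python
    def pcc_ahp(probs, observed):
        """Percentage of Correctly Classified locations (PCC) and
        Average Highest Probability (AHP) for categorical predictions.

        probs: per-location dicts mapping class label -> predicted probability
        observed: per-location true class labels
        """
        best = [max(p, key=p.get) for p in probs]
        pcc = sum(b == o for b, o in zip(best, observed)) / len(observed)
        ahp = sum(max(p.values()) for p in probs) / len(probs)
        return pcc, ahp

    # Hypothetical drainage-class predictions at four validation locations.
    probs = [
        {"poor": 0.7, "moderate": 0.2, "well": 0.1},
        {"poor": 0.1, "moderate": 0.6, "well": 0.3},
        {"poor": 0.2, "moderate": 0.5, "well": 0.3},
        {"poor": 0.1, "moderate": 0.3, "well": 0.6},
    ]
    observed = ["poor", "moderate", "well", "well"]

    pcc, ahp = pcc_ahp(probs, observed)
    print(f"PCC = {pcc:.2f}, AHP = {ahp:.2f}")
    ```

    PCC measures accuracy of the hard classification, while AHP measures how confident the probabilistic predictions are, which is why the two are reported together.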

  12. Surface-Water Nutrient Conditions and Sources in the United States Pacific Northwest1

    PubMed Central

    Wise, Daniel R; Johnson, Henry M

    2011-01-01

    Abstract The SPAtially Referenced Regressions On Watershed attributes (SPARROW) model was used to perform an assessment of surface-water nutrient conditions and to identify important nutrient sources in watersheds of the Pacific Northwest region of the United States (U.S.) for the year 2002. Our models included variables representing nutrient sources as well as landscape characteristics that affect nutrient delivery to streams. Annual nutrient yields were higher in watersheds on the wetter, west side of the Cascade Range compared to watersheds on the drier, east side. High nutrient enrichment (relative to the U.S. Environmental Protection Agency's recommended nutrient criteria) was estimated in watersheds throughout the region. Forest land was generally the largest source of total nitrogen stream load and geologic material was generally the largest source of total phosphorus stream load generated within the 12,039 modeled watersheds. These results reflected the prevalence of these two natural sources and the low input from other nutrient sources across the region. However, the combined input from agriculture, point sources, and developed land, rather than natural nutrient sources, was responsible for most of the nutrient load discharged from many of the largest watersheds. Our results provided an understanding of the regional patterns in surface-water nutrient conditions and should be useful to environmental managers in future water-quality planning efforts. PMID:22457584

  13. 40 CFR 51.35 - How can my state equalize the emission inventory effort from year to year?

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... approach: (1) Each year, collect and report data for all Type A (large) point sources (this is required for all Type A point sources). (2) Each year, collect data for one-third of your sources that are not Type... save 3 years of data and then report all emissions from the sources that are not Type A point sources...

  14. 40 CFR 51.35 - How can my state equalize the emission inventory effort from year to year?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... approach: (1) Each year, collect and report data for all Type A (large) point sources (this is required for all Type A point sources). (2) Each year, collect data for one-third of your sources that are not Type... save 3 years of data and then report all emissions from the sources that are not Type A point sources...

  15. 40 CFR 51.35 - How can my state equalize the emission inventory effort from year to year?

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... approach: (1) Each year, collect and report data for all Type A (large) point sources (this is required for all Type A point sources). (2) Each year, collect data for one-third of your sources that are not Type... save 3 years of data and then report all emissions from the sources that are not Type A point sources...

  16. XMM-Newton studies of the supernova remnant G350.0-2.0

    NASA Astrophysics Data System (ADS)

    Karpova, A.; Shternin, P.; Zyuzin, D.; Danilenko, A.; Shibanov, Yu.

    2016-11-01

    We report the results of XMM-Newton observations of the Galactic mixed-morphology supernova remnant G350.0-2.0. Diffuse thermal X-ray emission fills the north-western part of the remnant, surrounded by radio shell-like structures. We did not detect any X-ray counterpart of the latter structures, but found several bright blobs within the diffuse emission. The X-ray spectrum of most of the remnant can be described by a collisionally ionized plasma model (VAPEC) with solar abundances and a temperature of ≈0.8 keV. The solar abundances of the plasma indicate that the X-ray emission comes from shocked interstellar material. An overabundance of Fe was found in some of the bright blobs. We also analysed the brightest point-like X-ray source, 1RXS J172653.4-382157, projected on the extended emission. Its spectrum is well described by the two-temperature optically thin thermal plasma model MEKAL, typical of cataclysmic variable stars. The cataclysmic variable interpretation is supported by the presence of a faint (g ≈ 21) optical source with a non-stellar spectral energy distribution at the X-ray position of 1RXS J172653.4-382157; it was detected with the XMM-Newton optical/UV monitor in the U filter and was also found in archival Hα and optical/near-infrared broad-band sky survey images. On the other hand, the X-ray spectrum is also described by a power law plus thermal component model typical of a rotation-powered pulsar, so the pulsar interpretation of the source cannot be excluded. For this source, we derived an upper limit on the pulsed fraction of 27 per cent.

  17. Integrating motion, illumination, and structure in video sequences with applications in illumination-invariant tracking.

    PubMed

    Xu, Yilei; Roy-Chowdhury, Amit K

    2007-05-01

    In this paper, we present a theory for combining the effects of motion, illumination, 3D structure, albedo, and camera parameters in a sequence of images obtained by a perspective camera. We show that the set of all Lambertian reflectance functions of a moving object, at any position, illuminated by arbitrarily distant light sources, lies "close" to a bilinear subspace consisting of nine illumination variables and six motion variables. This result implies that, given an arbitrary video sequence, it is possible to recover the 3D structure, motion, and illumination conditions simultaneously using the bilinear subspace formulation. The derivation builds upon existing work on linear subspace representations of reflectance by generalizing it to moving objects. Lighting can change slowly or suddenly, locally or globally, and can originate from a combination of point and extended sources. We experimentally compare the results of our theory with ground truth data and also provide results on real data by using video sequences of a 3D face and the entire human body with various combinations of motion and illumination directions. We also show results of our theory in estimating 3D motion and illumination model parameters from a video sequence.

  18. Visualizing medium and biodistribution in complex cell culture bioreactors using in vivo imaging.

    PubMed

    Ratcliffe, E; Thomas, R J; Stacey, A J

    2014-01-01

    There is a dearth of technology and methods to aid process characterization, control and scale-up of complex culture platforms that provide niche micro-environments for some stem cell-based products. We have demonstrated a novel use of 3D in vivo imaging systems to visualize medium flow and cell distribution within a complex culture platform (a hollow fiber bioreactor) to aid characterization of potential spatial heterogeneity and identify potential routes of bioreactor failure or sources of variability. This can then aid process characterization and control of such systems with a view to scale-up. Two potential sources of variation were observed with multiple bioreactors repeatedly imaged using two different imaging systems: shortcutting of medium between adjacent inlet and outlet ports, with the potential to create medium gradients within the bioreactor, and localization of bioluminescent murine 4T1-luc2 cells upon inoculation, with the potential to create variable seeding densities at different points within the cell growth chamber. The ability of the imaging technique to identify these key operational bioreactor characteristics demonstrates an emerging technique for troubleshooting and engineering optimization of bioreactor performance. © 2013 American Institute of Chemical Engineers.

  19. Improvement of submerged culture conditions to produce colorants by Penicillium purpurogenum

    PubMed Central

    Santos-Ebinuma, Valéria Carvalho; Roberto, Inês Conceição; Teixeira, Maria Francisca Simas; Pessoa, Adalberto

    2014-01-01

    Safety issues related to the employment of synthetic colorants in different industrial segments have increased the interest in the production of colorants from natural sources, such as microorganisms. Improved cultivation technologies have allowed the use of microorganisms as an alternative source of natural colorants. The objective of this work was to evaluate, employing statistical tools, the influence of several factors on natural colorant production by Penicillium purpurogenum DPUA 1275, a strain recently isolated from the Amazon Forest. To this purpose, the following variables were studied through two fractional factorial designs, one full factorial design and a central composite design: orbital stirring speed, pH, temperature, sucrose and yeast extract concentrations, and incubation time. The regression analysis pointed out that sucrose and yeast extract concentrations were the variables with the greatest influence on colorant production. Under the best conditions (yeast extract concentration around 10 g/L and sucrose concentration of 50 g/L), increases of 10, 33 and 23% in yellow, orange and red colorant absorbance, respectively, were achieved. These results show that P. purpurogenum is an alternative colorant producer and that the production of these biocompounds can be improved employing statistical tools. PMID:25242965
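
    As a rough illustration of the screening-and-optimization workflow described above, the sketch below generates coded run levels for a two-factor central composite design. The axial distance, level coding, and the mapping back to sucrose and yeast extract concentrations are generic, hypothetical choices, not the authors' actual design.

```python
import itertools

def central_composite(n_factors=2, alpha=1.414, n_center=3):
    """Coded levels for a two-level full factorial augmented with
    axial (star) points and replicated center points."""
    factorial = [list(p) for p in itertools.product([-1.0, 1.0], repeat=n_factors)]
    axial = []
    for i in range(n_factors):
        for a in (-alpha, alpha):
            point = [0.0] * n_factors
            point[i] = a
            axial.append(point)
    center = [[0.0] * n_factors for _ in range(n_center)]
    return factorial + axial + center

def decode(point, centers, half_ranges):
    """Map coded levels back to real units (e.g. g/L)."""
    return [c + x * h for x, c, h in zip(point, centers, half_ranges)]

design = central_composite()
print(len(design))  # 2^2 factorial + 4 axial + 3 center = 11 runs
# hypothetical units: sucrose centered at 50 g/L, yeast extract at 10 g/L
print(decode([1.0, -1.0], centers=[50.0, 10.0], half_ranges=[20.0, 5.0]))
```

    A quadratic response surface fitted over such a design is what locates the best operating region, here reported as roughly 50 g/L sucrose and 10 g/L yeast extract.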

  20. Evidence from the Soudan 1 experiment for underground muons associated with Cygnus X-3

    NASA Technical Reports Server (NTRS)

    Ayres, D. S. E.

    1986-01-01

    The Soudan 1 experiment has yielded evidence for an average underground muon flux of approximately 7 × 10^-11 cm^-2 s^-1 which points back to the X-ray binary Cygnus X-3, and which exhibits the 4.8 h periodicity observed for other radiation from this source. Underground muon events which seem to be associated with Cygnus X-3 also show evidence for longer time variability of the flux. Such underground muons cannot be explained by any conventional models of the propagation and interaction of cosmic rays.

  1. Exposure Assessment for Atmospheric Ultrafine Particles (UFPs) and Implications in Epidemiologic Research

    PubMed Central

    Sioutas, Constantinos; Delfino, Ralph J.; Singh, Manisha

    2005-01-01

    Epidemiologic research has shown increases in adverse cardiovascular and respiratory outcomes in relation to mass concentrations of particulate matter (PM) ≤2.5 or ≤10 μm in diameter (PM2.5, PM10, respectively). In a companion article [Delfino RJ, Sioutas C, Malik S. 2005. Environ Health Perspect 113(8):934–946], we discuss epidemiologic evidence pointing to underlying components linked to fossil fuel combustion. The causal components driving the PM associations remain to be identified, but emerging evidence on particle size and chemistry has led to some clues. There is sufficient reason to believe that ultrafine particles < 0.1 μm (UFPs) are important because, when compared with larger particles, they have orders of magnitude higher particle number concentrations and surface areas, and larger concentrations of adsorbed or condensed toxic air pollutants (oxidant gases, organic compounds, transition metals) per unit mass. This is supported by evidence of significantly higher in vitro redox activity by UFPs than by larger PM. Although further epidemiologic research is needed, exposure assessment issues for UFPs are complex and need to be considered before undertaking investigations of UFP health effects. These issues include high spatial variability, indoor sources, variable infiltration of UFPs from a variety of outside sources, and meteorologic factors leading to high seasonal variability in concentration and composition, including volatility. To address these issues, investigators need to develop as well as validate the analytic technologies required to characterize the physical/chemical nature of UFPs in various environments. In the present review, we provide a detailed discussion of key characteristics of UFPs, their sources and formation mechanisms, and methodologic approaches to assessing population exposures. PMID:16079062

  2. Carbon and nitrogen stoichiometry across stream ecosystems

    NASA Astrophysics Data System (ADS)

    Wymore, A.; Kaushal, S.; McDowell, W. H.; Kortelainen, P.; Bernhardt, E. S.; Johnes, P.; Dodds, W. K.; Johnson, S.; Brookshire, J.; Spencer, R.; Rodriguez-Cardona, B.; Helton, A. M.; Barnes, R.; Argerich, A.; Haq, S.; Sullivan, P. L.; López-Lloreda, C.; Coble, A. A.; Daley, M.

    2017-12-01

    Anthropogenic activities are altering carbon and nitrogen concentrations in surface waters globally. The stoichiometry of carbon and nitrogen regulates important watershed biogeochemical cycles; however, controls on carbon and nitrogen ratios in aquatic environments are poorly understood. Here we use a multi-biome, global dataset (tropics to Arctic) of stream water chemistry to assess relationships between dissolved organic carbon (DOC) and nitrate, ammonium and dissolved organic nitrogen (DON), providing a new conceptual framework to consider interactions between DOC and the multiple forms of dissolved nitrogen. We found that across streams the total dissolved nitrogen (TDN) pool contains very little ammonium, and that as DOC concentrations increase the TDN pool shifts from nitrate-dominated to DON-dominated. This suggests that in high-DOC systems, DON serves as the primary source of nitrogen. At the global scale, DOC and DON are positively correlated (r2 = 0.67) and the average C: N ratio of dissolved organic matter (molar ratio of DOC: DON) across our data set is approximately 31. At the biome and smaller regional scale the relationship between DOC and DON is highly variable (r2 = 0.07 - 0.56), with the strongest relationships found in streams draining the mixed temperate forests of the northeastern United States. DOC: DON relationships also display spatial and temporal variability, including latitudinal and seasonal trends and interactions with land use. DOC: DON ratios correlated positively with gradients of energy versus nutrient limitation, pointing to the ecological role (energy source versus nutrient source) that DON plays within stream ecosystems. Contrary to previous findings, we found consistently weak relationships between DON and nitrate, which may reflect DON's duality as an energy or nutrient source. 
Collectively these analyses demonstrate how gradients of DOC drive compositional changes in the TDN pool and reveal a high degree of variability in the C: N ratio (3-100) of stream water dissolved organic matter.
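
    The molar C:N ratio quoted above (DOC:DON ≈ 31) is obtained from mass concentrations by converting each to moles; a minimal sketch with illustrative concentrations:

```python
def molar_c_to_n(doc_mg_c_per_l, don_mg_n_per_l):
    """Molar C:N ratio of dissolved organic matter, given DOC as
    mg C/L and DON as mg N/L."""
    ATOMIC_MASS_C, ATOMIC_MASS_N = 12.011, 14.007
    return (doc_mg_c_per_l / ATOMIC_MASS_C) / (don_mg_n_per_l / ATOMIC_MASS_N)

# e.g. 10 mg C/L DOC with 0.38 mg N/L DON gives a molar ratio near 31
print(round(molar_c_to_n(10.0, 0.38), 1))
```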

  3. Selected nutrients and pesticides in streams of the eastern Iowa basins, 1970-95

    USGS Publications Warehouse

    Schnoebelen, Douglas J.; Becher, Kent D.; Bobier, Matthew W.; Wilton, Thomas

    1999-01-01

     The statistical analysis of the nutrient data typically indicated a strong positive correlation of nitrate with streamflow. Total phosphorus concentrations showed greater variability with streamflow than nitrate, perhaps reflecting the greater potential for transport of phosphorus on sediment rather than in the dissolved phase, as with nitrate. Ammonia and ammonia plus organic nitrogen showed either no correlation or a weak positive correlation with streamflow. Seasonal variations and the relations of nutrients and pesticides to streamflow generally corresponded with nonpoint-source loadings, although possible point sources for nutrients were indicated by the data at selected monitoring sites. Statistical trend tests for concentrations and loads were computed for nitrate, ammonia, and total phosphorus. Trend analysis indicated decreases in ammonia and total phosphorus concentrations at several sites and increases in nitrate concentrations at other sites in the study unit.

  4. Determination of optical properties in heterogeneous turbid media using a cylindrical diffusing fiber

    NASA Astrophysics Data System (ADS)

    Dimofte, Andreea; Finlay, Jarod C.; Liang, Xing; Zhu, Timothy C.

    2012-10-01

    For interstitial photodynamic therapy (PDT), cylindrical diffusing fibers (CDFs) are often used to deliver light. This study examines the feasibility and accuracy of using CDFs to characterize the absorption (μa) and reduced scattering (μ's) coefficients of heterogeneous turbid media. Measurements were performed in tissue-simulating phantoms with μa between 0.1 and 1 cm^-1 and μ's between 3 and 10 cm^-1, with CDFs 2 to 4 cm in length. Optical properties were determined by fitting the measured light fluence rate profiles at a fixed distance from the CDF axis using a heterogeneous kernel model in which the cylindrical diffusing fiber is treated as a series of point sources. The resulting optical properties were compared with independent measurements using a point-source method. In a homogeneous medium, we are able to determine the absorption coefficient μa using a value of μ's determined a priori (uniform fit) or μ's obtained by fitting (variable fit), with standard (maximum) deviations of 6% (18%) and 18% (44%), respectively. However, the CDF method is found to be insensitive to variations in μ's, thus requiring a complementary method, such as using a point source, for determination of μ's. The error in determining μa decreases in very heterogeneous turbid media because of the local absorption extremes. The data acquisition time for obtaining the one-dimensional optical property distribution is less than 8 s. This method can result in dramatically improved accuracy of light fluence rate calculations for CDFs in prostate PDT in vivo when the same model and geometry are used for forward calculations with the extrapolated tissue optical properties.
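
    A kernel model of the kind described can be sketched by summing infinite-medium diffusion point-source kernels along the fiber axis. The kernel form and uniform segment weighting below are standard diffusion-approximation assumptions, not necessarily the calibrated kernel used in the study.

```python
import math

def point_source_fluence(r, mu_a, mu_sp, power=1.0):
    """Diffusion-approximation fluence rate at distance r (cm) from an
    isotropic point source in an infinite homogeneous turbid medium,
    with absorption mu_a and reduced scattering mu_sp (1/cm)."""
    mu_eff = math.sqrt(3.0 * mu_a * (mu_a + mu_sp))
    return 3.0 * mu_sp * power / (4.0 * math.pi * r) * math.exp(-mu_eff * r)

def cdf_fluence(x, d, length, mu_a, mu_sp, total_power=1.0, n_seg=100):
    """Fluence rate at axial position x and radial distance d from a
    cylindrical diffusing fiber modeled as n_seg equally weighted
    point sources spaced along the fiber axis."""
    seg_power = total_power / n_seg
    total = 0.0
    for i in range(n_seg):
        x_seg = (i + 0.5) * length / n_seg  # segment center along the fiber
        r = math.hypot(x - x_seg, d)
        total += point_source_fluence(r, mu_a, mu_sp, seg_power)
    return total

# profile 0.5 cm from a 2 cm diffuser in a phantom-like medium
profile = [cdf_fluence(x, 0.5, 2.0, mu_a=0.5, mu_sp=5.0) for x in (0.0, 1.0, 2.0)]
print(profile)  # symmetric, peaking at the fiber midpoint
```

    Fitting μa then amounts to adjusting the kernel parameters until the modeled profile matches the measured profile at the fixed distance d.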

  5. Energy storage connection system

    DOEpatents

    Benedict, Eric L.; Borland, Nicholas P.; Dale, Magdelena; Freeman, Belvin; Kite, Kim A.; Petter, Jeffrey K.; Taylor, Brendan F.

    2012-07-03

    A power system for connecting a variable voltage power source, such as a power controller, with a plurality of energy storage devices, at least two of which have a different initial voltage than the output voltage of the variable voltage power source. The power system includes a controller that increases the output voltage of the variable voltage power source. When such output voltage is substantially equal to the initial voltage of a first one of the energy storage devices, the controller sends a signal that causes a switch to connect the variable voltage power source with the first one of the energy storage devices. The controller then causes the output voltage of the variable voltage power source to continue increasing. When the output voltage is substantially equal to the initial voltage of a second one of the energy storage devices, the controller sends a signal that causes a switch to connect the variable voltage power source with the second one of the energy storage devices.
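
    The staged connection described above can be sketched as a simple control loop; the voltage step, tolerance, and function shape are illustrative choices, not taken from the patent.

```python
def connect_sequence(storage_voltages, v_step=1.0, tolerance=0.5):
    """Ramp the variable-voltage source upward and close each device's
    switch when the output voltage is substantially equal to that
    device's initial voltage. With tolerance >= v_step / 2, no device
    is skipped. Returns (device_index, connection_voltage) tuples in
    connection order."""
    pending = sorted(enumerate(storage_voltages), key=lambda iv: iv[1])
    connected = []
    v_out = 0.0
    while pending:
        v_out += v_step  # controller increases the source output voltage
        while pending and abs(v_out - pending[0][1]) <= tolerance:
            idx, _v_init = pending.pop(0)
            connected.append((idx, v_out))  # signal the switch to close
    return connected

# two devices with different initial voltages connect one after the other
print(connect_sequence([48.0, 12.0]))  # [(1, 12.0), (0, 48.0)]
```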

  6. Non-domestic phosphorus release in rivers during low-flow: Mechanisms and implications for sources identification

    NASA Astrophysics Data System (ADS)

    Dupas, Rémi; Tittel, Jörg; Jordan, Phil; Musolff, Andreas; Rode, Michael

    2018-05-01

    A common assumption in phosphorus (P) load apportionment studies is that P loads in rivers consist of flow independent point source emissions (mainly from domestic and industrial origins) and flow dependent diffuse source emissions (mainly from agricultural origin). Hence, rivers dominated by point sources will exhibit highest P concentration during low-flow, when flow dilution capacity is minimal, whereas rivers dominated by diffuse sources will exhibit highest P concentration during high-flow, when land-to-river hydrological connectivity is maximal. Here, we show that Soluble Reactive P (SRP) concentrations in three forested catchments free of point sources exhibited seasonal maxima during the summer low-flow period, i.e. a pattern expected in point source dominated areas. A load apportionment model (LAM) is used to show how the point source contribution may have been overestimated in previous studies because of a biogeochemical process mimicking a point source signal. Almost twenty-two years (March 1995-September 2016) of monthly monitoring data of SRP, dissolved iron (Fe) and nitrate-N (NO3) were used to investigate the underlying mechanisms: SRP and Fe exhibited similar seasonal patterns and opposite to that of NO3. We hypothesise that Fe oxyhydroxide reductive dissolution might be the cause of SRP release during the summer period, and that NO3 might act as a redox buffer, controlling the seasonality of SRP release. We conclude that LAMs may overestimate the contribution of P point sources, especially during the summer low-flow period, when eutrophication risk is maximal.

  7. Number density distribution of near-infrared sources on a sub-degree scale in the Galactic center: Comparison with the Fe XXV Kα line at 6.7 keV

    NASA Astrophysics Data System (ADS)

    Yasui, Kazuki; Nishiyama, Shogo; Yoshikawa, Tatsuhito; Nagatomo, Schun; Uchiyama, Hideki; Tsuru, Takeshi Go; Koyama, Katsuji; Tamura, Motohide; Kwon, Jungmi; Sugitani, Koji; Schödel, Rainer; Nagata, Tetsuya

    2015-12-01

    The stellar distribution derived from an H- and KS-band survey of the central region of our Galaxy is compared with the Fe XXV Kα (6.7 keV) line intensity observed with the Suzaku satellite. The survey is for the galactic coordinates |l| ≲ 3.0° and |b| ≲ 1.0° (equivalent to 0.8 kpc × 0.3 kpc for R⊙ = 8 kpc), and the number-density distribution N(KS,0; l, b) of stars is derived by using the extinction-corrected magnitude KS,0 = 10.5. This is deep enough to probe the old red-giant population and in turn to estimate the (l, b) distribution of faint X-ray point sources such as coronally active binaries and cataclysmic variables. In the Galactic plane (b = 0°), N(10.5; l, b) increases in the direction of the Galactic center as |l|^(-0.30±0.03) in the range -0.1° ≥ l ≥ -0.7°, but this increase is significantly slower than the increase (|l|^(-0.44±0.02)) of the Fe XXV Kα line intensity. If normalized with the ratios in the outer region 1.5° ≤ |l| ≤ 2.8°, where faint X-ray point sources are argued to dominate the diffuse Galactic X-ray ridge emission, the excess of the Fe XXV Kα line intensity over the stellar number density is at least a factor of two at |l| = 0.1°. This indicates that a significant part of the Galactic-center diffuse emission arises from a truly diffuse optically thin thermal plasma, and not from an unresolved collection of faint X-ray point sources related to the old stellar population.

  8. Calculation and analysis of the non-point source pollution in the upstream watershed of the Panjiakou Reservoir, People's Republic of China

    NASA Astrophysics Data System (ADS)

    Zhang, S.; Tang, L.

    2007-05-01

    Panjiakou Reservoir is an important drinking water resource in the Haihe River Basin, Hebei Province, People's Republic of China. The upstream watershed area is about 35,000 square kilometers. Recently, water pollution in the reservoir has become more serious owing to non-point source pollution as well as point source pollution in the upstream watershed. To effectively manage the reservoir and watershed and develop a plan to reduce pollutant loads, the loadings of non-point and point source pollution and their distribution across the upstream watershed must be fully understood. The SWAT model is used to simulate the production and transport of non-point source pollutants in the upstream watershed of the Panjiakou Reservoir. The loadings of non-point source pollutants are calculated for different hydrologic years and the spatial and temporal characteristics of non-point source pollution are studied. The stream network and the topographic characteristics of the stream network and sub-basins are derived from the DEM with ArcGIS software. The soil and land use data are reclassified and a soil physical properties database file is created for the model. The SWAT model was calibrated with observed data from several hydrologic monitoring stations in the study area. The calibration results show that the model performs fairly well. The calibrated model was then used to calculate the loadings of non-point source pollutants for a wet year, a normal year and a dry year. The temporal and spatial distributions of flow, sediment and non-point source pollution were analyzed from the simulated results. The calculated results differ dramatically among hydrologic years: the loading of non-point source pollution is largest in the wet year and smallest in the dry year, since non-point source pollutants are mainly transported by runoff. Within a year, the pollution loading is mainly produced in the flood season. 
Because SWAT is a distributed model, model output can be viewed as it varies across the basin, so critical areas and reaches can be identified in the study area. According to the simulation results, different land uses yield different results, and fertilization in the rainy season has an important impact on non-point source pollution. The limitations of the SWAT model are also discussed, and measures for the control and prevention of non-point source pollution in the Panjiakou Reservoir watershed are presented based on the analysis of the model results.

  9. Active control on high-order coherence and statistic characterization on random phase fluctuation of two classical point sources.

    PubMed

    Hong, Peilong; Li, Liming; Liu, Jianji; Zhang, Guoquan

    2016-03-29

    Young's double-slit or two-beam interference is of fundamental importance for understanding various interference effects, in which the stationary phase difference between two beams plays the key role in first-order coherence. In contrast, in high-order optical coherence it is the statistical behavior of the optical phase that plays the key role. In this article, by employing a fundamental interfering configuration with two classical point sources, we showed that the high-order optical coherence between two classical point sources can be actively designed by controlling the statistical behavior of the relative phase difference between the two point sources. Synchronous-position Nth-order subwavelength interference with an effective wavelength of λ/M was demonstrated, in which λ is the wavelength of the point sources and M is an integer not larger than N. Interestingly, we found that the synchronous-position Nth-order interference fringe fingerprints the statistical trace of the random phase fluctuation of the two classical point sources; therefore, it provides an effective way to characterize the statistical properties of phase fluctuation for incoherent light sources.

  10. To Grid or Not to Grid… Precipitation Data and Hydrological Modeling in the Khangai Mountain Region of Mongolia

    NASA Astrophysics Data System (ADS)

    Venable, N. B. H.; Fassnacht, S. R.; Adyabadam, G.

    2014-12-01

    Precipitation data in semi-arid and mountainous regions is often spatially and temporally sparse, yet it is a key variable needed to drive hydrological models. Gridded precipitation datasets provide a spatially and temporally coherent alternative to the use of point-based station data, but in the case of Mongolia, may not be constructed from all data available from government data sources, or may only be available at coarse resolutions. To examine the uncertainty associated with the use of gridded and/or point precipitation data, monthly water balance models of three river basins across forest steppe (the Khoid Tamir River at Ikhtamir), steppe (the Baidrag River at Bayanburd), and desert steppe (the Tuin River at Bogd) ecozones in the Khangai Mountain Region of Mongolia were compared. The models were forced over a 10-year period from 2001-2010, with gridded temperature and precipitation data at a 0.5 x 0.5 degree resolution. These results were compared to modeling using an interpolated hybrid of the gridded data and additional point data recently gathered from government sources, and with point data from the meteorological station nearest the streamflow gage of choice. Goodness-of-fit measures including the Nash-Sutcliffe Efficiency statistic, the percent bias, and the RMSE-observations standard deviation ratio were used to assess model performance. The results were mixed, with smaller differences between the two gridded products as compared to the differences between gridded products and station data. The largest differences in precipitation inputs and modeled runoff amounts occurred between the two gridded datasets and station data in the desert steppe (Tuin), and the smallest differences occurred in the forest steppe (Khoid Tamir) and steppe (Baidrag). Mean differences between water balance model results are generally smaller than mean differences in the initial input data over the period of record. 
Seasonally, larger differences in gridded versus station-based precipitation products and modeled outputs occur in summer in the desert-steppe, and in spring in the forest steppe. Choice of precipitation data source in terms of gridded or point-based data directly affects model outcomes with greater uncertainty noted on a seasonal basis across ecozones of the Khangai.
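
    The goodness-of-fit measures named above have standard forms; a minimal sketch (using a common sign convention for percent bias, which may differ from the authors'):

```python
import math

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 is perfect; 0 means no better than
    predicting the mean of the observations."""
    mean_obs = sum(obs) / len(obs)
    err = sum((o - s) ** 2 for o, s in zip(obs, sim))
    var = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - err / var

def pbias(obs, sim):
    """Percent bias; positive values indicate underestimation under the
    convention PBIAS = 100 * sum(obs - sim) / sum(obs)."""
    return 100.0 * sum(o - s for o, s in zip(obs, sim)) / sum(obs)

def rsr(obs, sim):
    """RMSE-observations standard deviation ratio."""
    mean_obs = sum(obs) / len(obs)
    rmse = math.sqrt(sum((o - s) ** 2 for o, s in zip(obs, sim)) / len(obs))
    std_obs = math.sqrt(sum((o - mean_obs) ** 2 for o in obs) / len(obs))
    return rmse / std_obs

obs = [2.0, 4.0, 6.0, 8.0]  # observed monthly runoff (illustrative units)
sim = [2.5, 3.5, 6.5, 7.5]  # modeled runoff
print(nse(obs, sim), pbias(obs, sim), rsr(obs, sim))
```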

  11. Hydrologic control of dissolved organic matter concentration and quality in a semiarid artificially drained agricultural catchment

    NASA Astrophysics Data System (ADS)

    Bellmore, Rebecca A.; Harrison, John A.; Needoba, Joseph A.; Brooks, Erin S.; Kent Keller, C.

    2015-10-01

    Agricultural practices have altered watershed-scale dissolved organic matter (DOM) dynamics, including in-stream concentration, biodegradability, and total catchment export. However, mechanisms responsible for these changes are not clear, and field-scale processes are rarely directly linked to the magnitude and quality of DOM that is transported to surface water. In a small (12 ha) agricultural catchment in eastern Washington State, we tested the hypothesis that hydrologic connectivity in a catchment is the dominant control over the concentration and quality of DOM exported to surface water via artificial subsurface drainage. Concentrations of dissolved organic carbon (DOC) and humic-like components of DOM decreased while the Fluorescence Index and Freshness Index increased with depth through the soil profile. In drain discharge, these characteristics were significantly correlated with drain flow across seasons and years, with drain DOM resembling deep sources during low-flow and shallow sources during high flow, suggesting that DOM from shallow sources bypasses removal processes when hydrologic connectivity in the catchment is greatest. Assuming changes in streamflow projected for the Palouse River (which contains the study catchment) under the A1B climate scenario (rapid growth, dependence on fossil fuel, and renewable energy sources) apply to the study catchment, we project greater interannual variability in annual DOC export in the future, with significant increases in the driest years. This study highlights the variability in DOM inputs from agricultural soil to surface water on daily to interannual time scales, pointing to the need for a more nuanced understanding of agricultural impacts on DOM dynamics in surface water.

  12. HESS J1844-030: A New Gamma-Ray Binary?

    NASA Astrophysics Data System (ADS)

    McCall, Hannah; Errando, Manel

    2018-01-01

    Gamma-ray binaries consist of a massive main-sequence star orbiting a neutron star or black hole, a configuration that generates bright gamma-ray emission. Only six of these systems have been discovered. Here we report on a candidate stellar-binary system associated with the unidentified gamma-ray source HESS J1844-030, whose detection was revealed in the H.E.S.S. galactic plane survey. Analysis of 60 ks of archival Chandra data and over 100 ks of XMM-Newton data reveals a spatially associated X-ray counterpart to this TeV-emitting source (E > 10^12 eV), CXO J1845-031. The X-ray spectra derived from these exposures yield column density absorption in the range nH = (0.4 - 0.7) × 10^22 cm^-2, which is below the total galactic value for that part of the sky, indicating that the source is galactic. The flux from CXO J1845-031 increases by a factor of up to 2.5 on a 60 day timescale, providing solid evidence for flux variability at a confidence level exceeding 7 standard deviations. The point-like nature of the source, the flux variability of the X-ray counterpart, and the low column density absorption are all indicative of a binary system. Once confirmed, HESS J1844-030 would represent only the seventh known gamma-ray binary, providing valuable data to advance our understanding of the physics of pulsars and stellar winds and testing high-energy astrophysical processes at timescales not present in other classes of objects.

  13. Quantitative modeling of the accuracy in registering preoperative patient-specific anatomic models into left atrial cardiac ablation procedures

    PubMed Central

    Rettmann, Maryam E.; Holmes, David R.; Kwartowitz, David M.; Gunawan, Mia; Johnson, Susan B.; Camp, Jon J.; Cameron, Bruce M.; Dalegrave, Charles; Kolasa, Mark W.; Packer, Douglas L.; Robb, Richard A.

    2014-01-01

    Purpose: In cardiac ablation therapy, accurate anatomic guidance is necessary to create effective tissue lesions for elimination of left atrial fibrillation. While fluoroscopy, ultrasound, and electroanatomic maps are important guidance tools, they lack information regarding detailed patient anatomy which can be obtained from high resolution imaging techniques. For this reason, there has been significant effort in incorporating detailed, patient-specific models generated from preoperative imaging datasets into the procedure. Both clinical and animal studies have investigated registration and targeting accuracy when using preoperative models; however, the effect of various error sources on registration accuracy has not been quantitatively evaluated. Methods: Data from phantom, canine, and patient studies are used to model and evaluate registration accuracy. In the phantom studies, data are collected using a magnetically tracked catheter on a static phantom model. Monte Carlo simulation studies were run to evaluate both baseline errors as well as the effect of different sources of error that would be present in a dynamic in vivo setting. Error is simulated by varying the variance parameters on the landmark fiducial, physical target, and surface point locations in the phantom simulation studies. In vivo validation studies were undertaken in six canines in which metal clips were placed in the left atrium to serve as ground truth points. A small clinical evaluation was completed in three patients. Landmark-based and combined landmark and surface-based registration algorithms were evaluated in all studies. In the phantom and canine studies, both target registration error and point-to-surface error are used to assess accuracy. In the patient studies, no ground truth is available and registration accuracy is quantified using point-to-surface error only. 
Results: The phantom simulation studies demonstrated that combined landmark and surface-based registration improved landmark-only registration provided the noise in the surface points is not excessively high. Increased variability on the landmark fiducials resulted in increased registration errors; however, refinement of the initial landmark registration by the surface-based algorithm can compensate for small initial misalignments. The surface-based registration algorithm is quite robust to noise on the surface points and continues to improve landmark registration even at high levels of noise on the surface points. Both the canine and patient studies also demonstrate that combined landmark and surface registration has lower errors than landmark registration alone. Conclusions: In this work, we describe a model for evaluating the impact of noise variability on the input parameters of a registration algorithm in the context of cardiac ablation therapy. The model can be used to predict both registration error as well as assess which inputs have the largest effect on registration accuracy. PMID:24506630
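
    As a sketch of the landmark-based registration step evaluated above, rigid point-set alignment is commonly computed with the Kabsch algorithm, and target registration error as the mean distance between mapped targets and their ground-truth positions. The coordinates below are hypothetical; the study's actual algorithms and error model may differ.

```python
import numpy as np

def landmark_register(fixed, moving):
    """Rigid (rotation + translation) registration via the Kabsch
    algorithm: returns R, t such that R @ moving_i + t ~ fixed_i."""
    fixed, moving = np.asarray(fixed, float), np.asarray(moving, float)
    fc, mc = fixed.mean(axis=0), moving.mean(axis=0)
    H = (moving - mc).T @ (fixed - fc)      # cross-covariance of centered points
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = fc - R @ mc
    return R, t

def target_registration_error(R, t, targets_fixed, targets_moving):
    """Mean mapped-to-truth distance, e.g. at metal-clip target points."""
    mapped = (R @ np.asarray(targets_moving, float).T).T + t
    return float(np.mean(np.linalg.norm(mapped - targets_fixed, axis=1)))

# hypothetical landmarks (mm) under a known rotation and translation
rng = np.random.default_rng(0)
pts = rng.uniform(0.0, 50.0, size=(6, 3))
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
t_true = np.array([5.0, -2.0, 1.0])
moved = (R_true @ pts.T).T + t_true
R, t = landmark_register(moved, pts)
print(target_registration_error(R, t, moved, pts))  # ~0 for noise-free data
```

    Adding Gaussian noise to `pts` before registering reproduces, in miniature, the variance-sweep idea of the Monte Carlo studies.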

  14. Quantitative modeling of the accuracy in registering preoperative patient-specific anatomic models into left atrial cardiac ablation procedures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rettmann, Maryam E., E-mail: rettmann.maryam@mayo.edu; Holmes, David R.; Camp, Jon J.

    2014-02-15

    Purpose: In cardiac ablation therapy, accurate anatomic guidance is necessary to create effective tissue lesions for elimination of left atrial fibrillation. While fluoroscopy, ultrasound, and electroanatomic maps are important guidance tools, they lack information regarding detailed patient anatomy which can be obtained from high resolution imaging techniques. For this reason, there has been significant effort in incorporating detailed, patient-specific models generated from preoperative imaging datasets into the procedure. Both clinical and animal studies have investigated registration and targeting accuracy when using preoperative models; however, the effect of various error sources on registration accuracy has not been quantitatively evaluated. Methods: Data from phantom, canine, and patient studies are used to model and evaluate registration accuracy. In the phantom studies, data are collected using a magnetically tracked catheter on a static phantom model. Monte Carlo simulation studies were run to evaluate both baseline errors as well as the effect of different sources of error that would be present in a dynamic in vivo setting. Error is simulated by varying the variance parameters on the landmark fiducial, physical target, and surface point locations in the phantom simulation studies. In vivo validation studies were undertaken in six canines in which metal clips were placed in the left atrium to serve as ground truth points. A small clinical evaluation was completed in three patients. Landmark-based and combined landmark and surface-based registration algorithms were evaluated in all studies. In the phantom and canine studies, both target registration error and point-to-surface error are used to assess accuracy. In the patient studies, no ground truth is available and registration accuracy is quantified using point-to-surface error only. 
Results: The phantom simulation studies demonstrated that combined landmark and surface-based registration improved upon landmark-only registration provided the noise in the surface points is not excessively high. Increased variability on the landmark fiducials resulted in increased registration errors; however, refinement of the initial landmark registration by the surface-based algorithm can compensate for small initial misalignments. The surface-based registration algorithm is quite robust to noise on the surface points and continues to improve landmark registration even at high levels of noise on the surface points. Both the canine and patient studies also demonstrate that combined landmark and surface registration has lower errors than landmark registration alone. Conclusions: In this work, we describe a model for evaluating the impact of noise variability on the input parameters of a registration algorithm in the context of cardiac ablation therapy. The model can be used to predict registration error as well as to assess which inputs have the largest effect on registration accuracy.
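The landmark-based stage of such a registration pipeline is commonly a rigid point-set alignment; a minimal sketch using the standard Kabsch algorithm is shown below. The function name, synthetic landmarks, and error metric are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def landmark_register(src, dst):
    """Rigid (rotation + translation) alignment of paired 3-D landmarks,
    minimizing least-squares error via the Kabsch algorithm."""
    src_mean, dst_mean = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_mean).T @ (dst - dst_mean)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst_mean - R @ src_mean
    return R, t

# synthetic check: recover a known rotation and translation
rng = np.random.default_rng(0)
pts = rng.normal(size=(6, 3))
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
moved = pts @ R_true.T + np.array([1.0, -2.0, 0.5])
R_est, t_est = landmark_register(pts, moved)
# mean target registration error (TRE) over the landmarks
tre = np.linalg.norm(pts @ R_est.T + t_est - moved, axis=1).mean()
```

A surface-based refinement stage (e.g., an ICP-style algorithm) would then start from this landmark estimate, which is how small initial misalignments get compensated.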

  15. How to COAAD Images. I. Optimal Source Detection and Photometry of Point Sources Using Ensembles of Images

    NASA Astrophysics Data System (ADS)

    Zackay, Barak; Ofek, Eran O.

    2017-02-01

Stacks of digital astronomical images are combined in order to increase image depth. The variable seeing conditions, sky background, and transparency of ground-based observations make the coaddition process nontrivial. We present image coaddition methods that maximize the signal-to-noise ratio (S/N) and are optimized for source detection and flux measurement. We show that for these purposes, the best way to combine images is to apply a matched filter to each image using its own point-spread function (PSF) and only then to sum the images with the appropriate weights. Methods that either match the filter after coaddition or perform PSF homogenization prior to coaddition will result in loss of sensitivity. We argue that our method provides an increase of between a few and 25% in the survey speed of deep ground-based imaging surveys compared with weighted coaddition techniques. We demonstrate this claim using simulated data as well as data from the Palomar Transient Factory data release 2. We present a variant of this coaddition method, which is optimal for PSF or aperture photometry. We also provide an analytic formula for calculating the S/N for PSF photometry on single or multiple observations. In the next paper in this series, we present a method for image coaddition in the limit of background-dominated noise, which is optimal for any statistical test or measurement on the constant-in-time image (e.g., source detection, shape or flux measurement, or star-galaxy separation), making the original data redundant. We provide an implementation of these algorithms in MATLAB.
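The core recipe, match-filtering each image with its own PSF and then summing with inverse-variance weights, can be sketched as follows. The weighting scheme and toy data are simplifying assumptions; the paper derives the exact optimal weights.

```python
import numpy as np
from scipy.signal import fftconvolve

def coadd_matched(images, psfs, variances):
    """Cross-correlate each image with its own PSF, then sum with
    inverse-variance weights (sketch of S/N-optimal coaddition)."""
    stack = np.zeros_like(images[0], dtype=float)
    for img, psf, var in zip(images, psfs, variances):
        matched = fftconvolve(img, psf[::-1, ::-1], mode="same")  # correlation
        stack += matched / var
    return stack

# toy demonstration: one point source observed under two different seeings
y, x = np.mgrid[-15:16, -15:16]
psf_narrow = np.exp(-(x**2 + y**2) / 2.0); psf_narrow /= psf_narrow.sum()
psf_wide = np.exp(-(x**2 + y**2) / 8.0);   psf_wide /= psf_wide.sum()
truth = np.zeros((31, 31)); truth[15, 15] = 1.0
obs1 = fftconvolve(truth, psf_narrow, mode="same")
obs2 = fftconvolve(truth, psf_wide, mode="same")
coadd = coadd_matched([obs1, obs2], [psf_narrow, psf_wide], [1.0, 4.0])
```

Because each image is filtered with its own PSF before summation, a sharp image is not degraded to the worst seeing, which is where the sensitivity gain over PSF-homogenized coaddition comes from.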

  16. Noise Removal on Ocean Scalars by Means of Singularity-Based Fusion

    NASA Astrophysics Data System (ADS)

Umbert, M.; Turiel, A.; Hoareau, N.; Ballabrera, J.; Martinez, J.; Guimbard, S.; Font, J.

    2013-12-01

Thanks to new remote sensing platforms such as SMOS and Aquarius, we now have access to synoptic maps of Sea Surface Salinity (SSS) at global scale. Both missions require a non-negligible amount of development in order to meet pre-launch requirements on the quality of the retrieved variables. Development efforts have so far concentrated mainly on improving the accuracy of the acquired signals from the radiometric point of view, which is a point-wise characteristic; that is, the quality of each point in the snapshot or swath is considered separately. However, some spatial redundancy (i.e., spatial correlation) is implicit in geophysical signals, and particularly in SSS. This redundancy has been known since the beginning of the remote sensing age: eddies and fronts are visually evident in images of different variables, including Sea Surface Temperature (SST), Sea Surface Height (SSH), Ocean Color (OC), Synthetic Aperture Radar (SAR) and Brightness Temperature (BT) at different bands. An assessment of the quality of SSS products accounting for this kind of spatial redundancy would be very interesting. So far, the structure of those correlations has been evidenced using correlation functions, but correlation functions vary from one variable to another; additionally, they are not characteristic of individual points of the image but of a given, large enough area. The introduction of singularity analysis for remote sensing maps of the ocean has shown that the correspondence among different scalars can be rigorously stated in terms of the correspondence of the values of their associated singularity exponents. The singularity exponent of a scalar at a given point is a unitless measure of the degree of regularity or irregularity of that function at that point. Hence, singularity exponents can be directly compared regardless of the physical meaning of the variable from which they were derived.
Using singularity analysis we can assess the quality of any scalar, as singularity exponents align in fronts following the streamlines of the flow, while noise breaks up the coherence of singularity fronts. The analysis of the output of numerical models shows that, up to numerical accuracy, the singularity exponents of different scalars take the same values at every point. Taking the correspondence of the singularity exponents into account, it can be proved that two scalars having the same singularity exponents have a relation of functional dependence (a matrix identity involving their gradients). That functional relation can be approximated by a local linear regression under some hypotheses, which simplifies and speeds up the calculations and leads to a simple algorithm to reduce noise on a given ocean scalar using another, higher-quality variable as template. This simple algorithm has been applied to SMOS data with a considerable quality gain. As a template, high-level SST maps from different sources have been used, while SMOS L2 and L3 SSS maps, and even brightness temperature maps, play the role of the noisy data to be corrected. In all instances the noise level is reduced by at least a factor of two. This quality gain opens the use of SMOS data for new applications, including the instant identification of ocean fronts, rain lenses, hurricane tracks, etc.
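The local-linear-regression step can be sketched as follows: for each pixel, the noisy scalar is regressed on the higher-quality template over a small window and replaced by the fitted value. Window size, names, and the synthetic fields are illustrative assumptions, not the operational SMOS algorithm.

```python
import numpy as np

def fuse_with_template(noisy, template, half=2):
    """Denoise `noisy` using `template` via a per-pixel local linear fit."""
    out = np.empty_like(noisy, dtype=float)
    nrow, ncol = noisy.shape
    for i in range(nrow):
        for j in range(ncol):
            i0, i1 = max(0, i - half), min(nrow, i + half + 1)
            j0, j1 = max(0, j - half), min(ncol, j + half + 1)
            t = template[i0:i1, j0:j1].ravel()
            s = noisy[i0:i1, j0:j1].ravel()
            a, b = np.polyfit(t, s, 1)           # local fit: s ~ a*t + b
            out[i, j] = a * template[i, j] + b   # keep the fitted value
    return out

# synthetic check: the fused field should be closer to the clean one
rng = np.random.default_rng(1)
template = np.add.outer(np.linspace(0, 1, 20), np.linspace(0, 1, 20))
clean = 2.0 * template + 1.0                     # exact functional dependence
noisy = clean + 0.3 * rng.normal(size=clean.shape)
fused = fuse_with_template(noisy, template)
```

The fit averages the noise over each window while preserving the structure carried by the template, which is the mechanism behind the quoted factor-of-two noise reduction.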

  17. Source apportionment of speciated PM2.5 and non-parametric regressions of PM2.5 and PM(coarse) mass concentrations from Denver and Greeley, Colorado, and construction and evaluation of dichotomous filter samplers

    NASA Astrophysics Data System (ADS)

    Piedrahita, Ricardo A.

The Denver Aerosol Sources and Health study (DASH) was a long-term study of the relationship between the variability in fine particulate mass and chemical constituents (PM2.5, particulate matter less than 2.5 μm) and adverse health effects such as cardio-respiratory illnesses and mortality. Daily filter samples were chemically analyzed for multiple species. We present findings based on 2.8 years of DASH data, from 2003 to 2005. Multilinear Engine 2 (ME-2), a receptor-based source apportionment model, was applied to the data to estimate source contributions to PM2.5 mass concentrations. This study relied on two different ME-2 models: (1) a 2-way model that closely reflects PMF-2; and (2) an enhanced model that used additional temporal and meteorological factors. The Coarse Rural Urban Sources and Health study (CRUSH) is a long-term study of the relationship between the variability in coarse particulate mass (PMcoarse, particulate matter between 2.5 and 10 μm) and adverse health effects such as cardio-respiratory illnesses, pre-term births, and mortality. Hourly mass concentrations of PMcoarse and fine particulate matter (PM2.5) are measured using tapered element oscillating microbalances (TEOMs) with Filter Dynamics Measurement Systems (FDMS), at two rural and two urban sites. We present findings based on nine months of mass concentration data, including temporal trends and non-parametric regression (NPR) results, which were used to characterize the wind speed and wind direction relationships that might point to sources. As part of CRUSH, a 1-year coarse- and fine-mode particulate matter filter sampling network will allow us to characterize the chemical composition of the particulate matter collected and perform spatial comparisons. This work describes the construction and validation testing of four dichotomous filter samplers for this purpose.
The use of dichotomous splitters with an approximate 2.5 μm cut point, coupled with a 10 μm cut-diameter inlet head, allows us to collect the separated size fractions that the collocated TEOMs collect continuously. Chemical analysis of the filters will include inorganic ions, organic compounds, EC, OC, and biological analyses. Side-by-side testing showed the cut diameters were in agreement with each other, and with a well-characterized virtual impactor lent to the group by the University of Southern California. Error propagation was performed and uncertainty results were similar to the observed standard deviations.

  18. Variability in Spatially and Temporally Resolved Emissions and Hydrocarbon Source Fingerprints for Oil and Gas Sources in Shale Gas Production Regions.

    PubMed

    Allen, David T; Cardoso-Saldaña, Felipe J; Kimura, Yosuke

    2017-10-17

A gridded inventory for emissions of methane, ethane, propane, and butanes from oil and gas sources in the Barnett Shale production region has been developed. This inventory extends previous spatially resolved inventories of emissions by characterizing the overall variability in emission magnitudes and the composition of emissions at an hourly time resolution. The inventory is divided into continuous and intermittent emission sources. Sources are defined as continuous if hourly averaged emissions are greater than zero in every hour; otherwise, they are classified as intermittent. In the Barnett Shale, intermittent sources accounted for 14-30% of the mean emissions for methane and 10-34% for ethane, leading to spatial and temporal variability in the location of hourly emissions. The combined variability due to intermittent sources and variability in emission factors can lead to wide confidence intervals in the magnitude and composition of time- and location-specific emission inventories; therefore, including temporal and spatial variability in emission inventories is important when reconciling inventories and observations. Comparisons of individual aircraft measurement flights conducted in the Barnett Shale region versus the estimated emission rates for each flight from the emission inventory indicate agreement within the expected variability of the emission inventory for all flights for methane and for all but one flight for ethane.

  19. Discovery of Periodic Dips in the Brightest Hard X-Ray Source of M31 with EXTraS

    NASA Astrophysics Data System (ADS)

    Marelli, Martino; Tiengo, Andrea; De Luca, Andrea; Salvetti, David; Saronni, Luca; Sidoli, Lara; Paizis, Adamantia; Salvaterra, Ruben; Belfiore, Andrea; Israel, Gianluca; Haberl, Frank; D’Agostino, Daniele

    2017-12-01

We performed a search for eclipsing and dipping sources in the archive of the EXTraS project—a systematic characterization of the temporal behavior of XMM-Newton point sources. We discovered dips in the X-ray light curve of 3XMM J004232.1+411314, which has been recently associated with the hard X-ray source dominating the emission of M31. A systematic analysis of XMM-Newton observations revealed 13 dips in 40 observations (total exposure time of ∼0.8 Ms). Among them, four observations show two dips, separated by ∼4.01 hr. Dip depths and durations are variable. The dips occur only during low-luminosity states (L_{0.2-12} < 1 × 10^38 erg s^-1), while the source reaches L_{0.2-12} ∼ 2.8 × 10^38 erg s^-1. We propose that this system is a new dipping low-mass X-ray binary in M31 seen at high inclination (60°–80°); the observed dipping periodicity is the orbital period of the system. A blue HST source within the Chandra error circle is the most likely optical counterpart of the accretion disk. The high luminosity of the system makes it the most luminous (non-ULX) dipper known to date.

  20. A Deep Chandra ACIS Survey of M51

    NASA Astrophysics Data System (ADS)

    Kuntz, K. D.; Long, Knox S.; Kilgard, Roy E.

    2016-08-01

    We have obtained a deep X-ray image of the nearby galaxy M51 using Chandra. Here we present the catalog of X-ray sources detected in these observations and provide an overview of the properties of the point-source population. We find 298 sources within the D 25 radii of NGC 5194/5, of which 20% are variable, a dozen are classical transients, and another half dozen are transient-like sources. The typical number of active ultraluminous X-ray sources in any given observation is ˜5, and only two of those sources persist in an ultraluminous state over the 12 yr of observations. Given reasonable assumptions about the supernova remnant population, the luminosity function is well described by a power law with an index between 1.55 and 1.7, only slightly shallower than that found for populations dominated by high-mass X-ray binaries (HMXBs), which suggests that the binary population in NGC 5194 is also dominated by HMXBs. The luminosity function of NGC 5195 is more consistent with a low-mass X-ray binary dominated population. Based on observations made with NASA's Chandra X-ray Observatory, which is operated by the Smithsonian Astrophysical Observatory under contract #NAS83060, and the data were obtained through program GO1-12115.
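A luminosity-function slope like the one quoted above is often estimated by maximum likelihood; below is a hedged sketch for a pure power law above a completeness limit. The estimator and synthetic data are illustrative, not the survey's actual fitting procedure.

```python
import numpy as np

def powerlaw_index_ml(lum, lmin):
    """Maximum-likelihood estimate of alpha for dN/dL ~ L^-alpha, L >= lmin."""
    lum = np.asarray(lum, dtype=float)
    return 1.0 + lum.size / np.sum(np.log(lum / lmin))

# synthetic check via inverse-CDF sampling of a pure power law
rng = np.random.default_rng(3)
alpha_true, lmin = 1.6, 1.0
u = rng.random(20_000)
lum = lmin * (1.0 - u) ** (-1.0 / (alpha_true - 1.0))  # inverse-CDF draw
alpha_hat = powerlaw_index_ml(lum, lmin)
```

In practice, the fit would also account for incompleteness near the detection limit and for contaminating populations such as supernova remnants, as the abstract notes.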

  1. Simulating variable source problems via post processing of individual particle tallies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bleuel, D.L.; Donahue, R.J.; Ludewigt, B.A.

    2000-10-20

Monte Carlo is an extremely powerful method of simulating complex, three dimensional environments without excessive problem simplification. However, it is often time consuming to simulate models in which the source can be highly varied. Similarly difficult are optimization studies involving sources in which many input parameters are variable, such as particle energy, angle, and spatial distribution. Such studies are often approached using brute force methods or intelligent guesswork. One field in which these problems are often encountered is accelerator-driven Boron Neutron Capture Therapy (BNCT) for the treatment of cancers. Solving the reverse problem of determining the best neutron source for optimal BNCT treatment can be accomplished by separating the time-consuming particle-tracking process of a full Monte Carlo simulation from the calculation of the source weighting factors, which is typically performed at the beginning of a Monte Carlo simulation. By post-processing these weighting factors on a recorded file of individual particle tally information, the effect of changing source variables can be realized in a matter of seconds, instead of requiring hours or days for additional complete simulations. By intelligent source biasing, any number of different source distributions can be calculated quickly from a single Monte Carlo simulation. The source description can be treated as variable and the effect of changing multiple interdependent source variables on the problem's solution can be determined. Though the focus of this study is on BNCT applications, this procedure may be applicable to any problem that involves a variable source.
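The post-processing idea can be sketched as follows: each recorded history carries its tally contribution and the source coordinate it was born with (here just an energy bin), so evaluating a new source spectrum only requires re-weighting the recorded contributions. The names and toy data are illustrative assumptions, not the authors' file format.

```python
import numpy as np

def retally(contributions, source_bins, new_weights, sim_weights):
    """Re-estimate a tally under a new source spectrum by re-weighting the
    recorded per-history contributions (no transport re-run needed)."""
    c = np.asarray(contributions, dtype=float)
    ratio = np.asarray(new_weights, float) / np.asarray(sim_weights, float)
    return float(np.sum(c * ratio[np.asarray(source_bins)]) / c.size)

# toy check: two source energy bins, originally sampled with equal probability;
# the new source emits only from bin 0 (per-history scores: bin 0 -> 1, bin 1 -> 2)
est = retally([1.0, 1.0, 2.0, 2.0], [0, 0, 1, 1],
              new_weights=[1.0, 0.0], sim_weights=[0.5, 0.5])
```

Because the ratio of new to simulated source probabilities is just a multiplicative weight per history, sweeping over many candidate source spectra costs seconds rather than repeated transport runs.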

  2. Complex earthquake rupture and local tsunamis

    USGS Publications Warehouse

    Geist, E.L.

    2002-01-01

In contrast to far-field tsunami amplitudes that are fairly well predicted by the seismic moment of subduction zone earthquakes, there exists significant variation in the scaling of local tsunami amplitude with respect to seismic moment. From a global catalog of tsunami runup observations this variability is greatest for the most frequently occurring tsunamigenic subduction zone earthquakes in the magnitude range of 7 < Mw < 8.5. Variability in local tsunami runup scaling can be ascribed to tsunami source parameters that are independent of seismic moment: variations in the water depth in the source region, the combination of higher slip and lower shear modulus at shallow depth, and rupture complexity in the form of heterogeneous slip distribution patterns. The focus of this study is on the effect that rupture complexity has on the local tsunami wave field. A wide range of slip distribution patterns are generated using a stochastic, self-affine source model that is consistent with the falloff of far-field seismic displacement spectra at high frequencies. The synthetic slip distributions generated by the stochastic source model are discretized and the vertical displacement fields from point source elastic dislocation expressions are superimposed to compute the coseismic vertical displacement field. For shallow subduction zone earthquakes it is demonstrated that self-affine irregularities of the slip distribution result in significant variations in local tsunami amplitude. The effects of rupture complexity are less pronounced for earthquakes at greater depth or along faults with steep dip angles. For a test region along the Pacific coast of central Mexico, peak nearshore tsunami amplitude is calculated for a large number (N = 100) of synthetic slip distribution patterns, all with identical seismic moment (Mw = 8.1).
Analysis of the results indicates that for earthquakes of a fixed location, geometry, and seismic moment, peak nearshore tsunami amplitude can vary by a factor of 3 or more. These results indicate that there is substantially more variation in the local tsunami wave field derived from the inherent complexity of subduction zone earthquakes than predicted by a simple elastic dislocation model. Probabilistic methods that take into account variability in earthquake rupture processes are likely to yield more accurate assessments of tsunami hazards.
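A stochastic, self-affine slip distribution of the kind described can be sketched by imposing a power-law Fourier amplitude spectrum with random phases. The 1-D profile, spectral decay, and normalization below are illustrative assumptions, not the paper's 2-D source model.

```python
import numpy as np

def self_affine_slip(n=256, decay=2.0, seed=0):
    """1-D random slip profile with power spectrum ~ k^-decay (self-affine)."""
    rng = np.random.default_rng(seed)
    k = np.fft.rfftfreq(n)
    amp = np.zeros_like(k)
    amp[1:] = k[1:] ** (-decay / 2.0)        # amplitude = sqrt(power)
    phase = rng.uniform(0.0, 2.0 * np.pi, size=k.size)
    slip = np.fft.irfft(amp * np.exp(1j * phase), n=n)
    slip -= slip.min()                       # keep slip non-negative
    return slip

slip = self_affine_slip()
# normalize mean slip to a target value; for fixed fault area and rigidity,
# seismic moment is proportional to mean slip, so all realizations share Mw
slip *= 1.0 / slip.mean()
```

Repeating this with different random seeds yields an ensemble of slip patterns with identical moment but different roughness, which is the ingredient behind the quoted factor-of-3 spread in peak nearshore amplitude.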

  3. Spherically symmetric Einstein-aether perfect fluid models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Coley, Alan A.; Latta, Joey; Leon, Genly

We investigate spherically symmetric cosmological models in Einstein-aether theory with a tilted (non-comoving) perfect fluid source. We use a 1+3 frame formalism and adopt the comoving aether gauge to derive the evolution equations, which form a well-posed system of first order partial differential equations in two variables. We then introduce normalized variables. The formalism is particularly well-suited for numerical computations and the study of the qualitative properties of the models, which are also solutions of Horava gravity. We study the local stability of the equilibrium points of the resulting dynamical system corresponding to physically realistic inhomogeneous cosmological models and astrophysical objects with values for the parameters which are consistent with current constraints. In particular, we consider dust models in (β−) normalized variables and derive a reduced (closed) evolution system, and we obtain the general evolution equations for the spatially homogeneous Kantowski-Sachs models using appropriate bounded normalized variables. We then analyse these models, with special emphasis on the future asymptotic behaviour for different values of the parameters. Finally, we investigate static models for a mixture of a (necessarily non-tilted) perfect fluid with a barotropic equation of state and a scalar field.

  4. Variable allelopathy among phytoplankton reflected in red tide metabolome.

    PubMed

    Poulin, Remington X; Poulson-Ellestad, Kelsey L; Roy, Jessie S; Kubanek, Julia

    2018-01-01

Harmful algae are known to utilize allelopathy, the release of compounds that inhibit competitors, as a form of interference competition. Competitor responses to allelopathy are species-specific and the allelopathic potency of producing algae is variable. In the current study, the biological variability in allelopathic potency was mapped to the underlying chemical variation in the exuded metabolomes of five genetic strains of the red tide dinoflagellate Karenia brevis using ¹H nuclear magnetic resonance (NMR) spectroscopy. The impacts of K. brevis allelopathy on growth of a model competitor, Asterionellopsis glacialis, ranged from strongly inhibitory to negligible to strongly stimulatory. Unique metabolomes of K. brevis were visualized as chemical fingerprints, suggesting three distinct metabolic modalities - allelopathic, non-allelopathic, and stimulatory - with each modality distinguished from the others by different concentrations of several metabolites. Allelopathic K. brevis was characterized by enhanced concentrations of fatty acid-derived lipids and aromatic or other polyunsaturated compounds, relative to less allelopathic K. brevis. These findings point to a previously untapped source of information in the study of allelopathy: the chemical variability of phytoplankton, which has been underutilized in the study of bloom dynamics and plankton chemical ecology.

  5. PubMed

    La Torre, Giuseppe; Verrengia, Giovanna; Saulle, Rossella; Kheiraoui, Flavia; Mannocci, Alice

    2017-06-28

To identify the determinants of the regional differences in work injuries and mortality rates in Italy. Several linear regression models were built assessing the association between regional differences in work mortality and injury rates (as dependent variables) and socio-demographic factors (occupation and population) and variables describing alcohol consumption, mean age, and availability of health care (as independent variables). Data sources are ISTAT, INAIL, the Health for All database, and the national report Osservasalute. The analysis was carried out using data from all the Italian Regions. The mean work mortality rate for the period 2006-2014 was 7.73 (SD 1.85) per 100,000 workers, while the injury rate was 4503.1 (SD 1413.5) per 100,000 workers. Socio-demographic variables and the health care variable (CT availability) were inversely associated with mortality rates; for the work injury rates, significant associations with alcohol consumption were found, while gross domestic product and CT availability were inversely associated. The study pointed out the extreme heterogeneity between different geographical areas in the field of work injury, due to different socio-demographic and economic factors. In the future, health surveillance and work injury and mortality rates could be improved in areas at high risk.

  6. Ghost imaging with bucket detection and point detection

    NASA Astrophysics Data System (ADS)

    Zhang, De-Jian; Yin, Rao; Wang, Tong-Biao; Liao, Qing-Hua; Li, Hong-Guo; Liao, Qinghong; Liu, Jiang-Tao

    2018-04-01

We experimentally investigate ghost imaging with bucket detection and point detection in which three types of illuminating sources are applied: (a) a pseudo-thermal light source; (b) an amplitude-modulated true thermal light source; (c) an amplitude-modulated laser source. Experimental results show that the quality of ghost images reconstructed with true thermal light or a laser beam is insensitive to the use of a bucket or point detector; however, the quality of ghost images reconstructed with pseudo-thermal light in the bucket-detector case is better than that in the point-detector case. Our theoretical analysis shows that this is due to the first-order transverse coherence of the illuminating source.
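The bucket-detection scheme can be illustrated with a computational ghost-imaging toy model: the image is recovered from the intensity correlation between known illumination patterns and the single-pixel bucket signal. The random patterns and object below are assumptions for illustration only, not the experimental setup.

```python
import numpy as np

rng = np.random.default_rng(42)
obj = np.zeros((8, 8)); obj[2:6, 3:5] = 1.0      # transmissive test object
patterns = rng.random((4000, 8, 8))              # known illumination patterns
bucket = (patterns * obj).sum(axis=(1, 2))       # bucket detector: one value/shot

# second-order correlation reconstruction: <I*B> - <I>*<B>
ghost = (patterns * bucket[:, None, None]).mean(axis=0) \
        - patterns.mean(axis=0) * bucket.mean()
```

Only pattern pixels that overlap the object are correlated with the bucket signal, so the covariance map recovers the object's shape without a spatially resolving detector.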

  7. Distinguishing dark matter from unresolved point sources in the Inner Galaxy with photon statistics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, Samuel K.; Lisanti, Mariangela; Safdi, Benjamin R., E-mail: samuelkl@princeton.edu, E-mail: mlisanti@princeton.edu, E-mail: bsafdi@princeton.edu

    2015-05-01

Data from the Fermi Large Area Telescope suggests that there is an extended excess of GeV gamma-ray photons in the Inner Galaxy. Identifying potential astrophysical sources that contribute to this excess is an important step in verifying whether the signal originates from annihilating dark matter. In this paper, we focus on the potential contribution of unresolved point sources, such as millisecond pulsars (MSPs). We propose that the statistics of the photons—in particular, the flux probability density function (PDF) of the photon counts below the point-source detection threshold—can potentially distinguish between the dark-matter and point-source interpretations. We calculate the flux PDF via the method of generating functions for these two models of the excess. Working in the framework of Bayesian model comparison, we then demonstrate that the flux PDF can potentially provide evidence for an unresolved MSP-like point-source population.
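The intuition behind the flux-PDF test can be sketched numerically: smooth (dark-matter-like) emission yields Poisson pixel counts, while rare, bright unresolved sources with the same mean flux produce a broader count distribution. The rates below are arbitrary toy values, not the paper's generating-function calculation.

```python
import numpy as np

rng = np.random.default_rng(7)
npix, mean_counts, f_src = 100_000, 2.0, 0.2

# smooth (dark-matter-like) emission: Poisson counts in every pixel
smooth = rng.poisson(mean_counts, size=npix)

# rare, bright unresolved sources with the same mean flux: wider count PDF
n_src = rng.poisson(f_src, size=npix)            # sources per pixel
ps = rng.poisson((mean_counts / f_src) * n_src)  # photons per pixel
```

Both maps have the same mean, so a measurement of total flux cannot separate them; the excess variance (and higher moments) of the point-source map is exactly the information the flux PDF exploits.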

  8. Strong ground motion simulation of the 2016 Kumamoto earthquake of April 16 using multiple point sources

    NASA Astrophysics Data System (ADS)

    Nagasaka, Yosuke; Nozu, Atsushi

    2017-02-01

The pseudo point-source model approximates the rupture process on faults with multiple point sources for simulating strong ground motions. A simulation with this point-source model is conducted by combining a simple source spectrum following the omega-square model with a path spectrum, an empirical site amplification factor, and phase characteristics. Realistic waveforms can be synthesized using the empirical site amplification factor and phase models even though the source model is simple. The Kumamoto earthquake occurred on April 16, 2016, with M_JMA 7.3. Many strong motions were recorded at stations around the source region. Some records were considered to be affected by the rupture directivity effect. This earthquake was suitable for investigating the applicability of the pseudo point-source model, the current version of which does not consider the rupture directivity effect. Three subevents (point sources) were located on the fault plane, and the parameters of the simulation were determined. The simulated results were compared with the observed records at K-NET and KiK-net stations. It was found that the synthetic Fourier spectra and velocity waveforms generally explained the characteristics of the observed records, except for underestimation in the low frequency range. Troughs in the observed Fourier spectra were also well reproduced by placing multiple subevents near the hypocenter. The underestimation is presumably due to the following two reasons. The first is that the pseudo point-source model targets subevents that generate strong ground motions and does not consider the shallow large slip. The second is that the current version of the pseudo point-source model does not consider the rupture directivity effect. Consequently, strong pulses were not sufficiently reproduced at stations northeast of Subevent 3 such as KMM004, where the effect of rupture directivity was significant, while the amplitude was well reproduced at most of the other stations.
This result indicates the necessity of improving the pseudo point-source model, for example by introducing an azimuth-dependent corner frequency, so that it can incorporate the effect of rupture directivity.
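The simple source spectrum referred to above, the omega-square model, is flat at a level proportional to seismic moment M0 below the corner frequency fc and falls off as f^-2 above it. A minimal sketch (with arbitrary toy units):

```python
import numpy as np

def omega_square(f, m0, fc):
    """Omega-square amplitude spectrum: ~M0 below fc, ~f^-2 fall-off above."""
    return m0 / (1.0 + (f / fc) ** 2)

f = np.logspace(-2, 1, 200)           # frequency axis (arbitrary units)
spec = omega_square(f, m0=1.0, fc=0.5)
```

In the pseudo point-source model, each subevent would get its own M0 and fc; an azimuth-dependent fc is one way the abstract suggests to fold in rupture directivity.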

  9. STATISTICS OF GAMMA-RAY POINT SOURCES BELOW THE FERMI DETECTION LIMIT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Malyshev, Dmitry; Hogg, David W., E-mail: dm137@nyu.edu

    2011-09-10

An analytic relation between the statistics of photons in pixels and the number counts of multi-photon point sources is used to constrain the distribution of gamma-ray point sources below the Fermi detection limit at energies above 1 GeV and at latitudes below and above 30 deg. The derived source-count distribution is consistent with the distribution found by the Fermi Collaboration based on the first Fermi point-source catalog. In particular, we find that the contribution of resolved and unresolved active galactic nuclei (AGNs) to the total gamma-ray flux is below 20%-25%. In the best-fit model, the AGN-like point-source fraction is 17% ± 2%. Using the fact that the Galactic emission varies across the sky while the extragalactic diffuse emission is isotropic, we put a lower limit of 51% on Galactic diffuse emission and an upper limit of 32% on the contribution from extragalactic weak sources, such as star-forming galaxies. Possible systematic uncertainties are discussed.

  10. MODELING PHOTOCHEMISTRY AND AEROSOL FORMATION IN POINT SOURCE PLUMES WITH THE CMAQ PLUME-IN-GRID

    EPA Science Inventory

    Emissions of nitrogen oxides and sulfur oxides from the tall stacks of major point sources are important precursors of a variety of photochemical oxidants and secondary aerosol species. Plumes released from point sources exhibit rather limited dimensions and their growth is gradu...

  11. SAIP2014, the 59th Annual Conference of the South African Institute of Physics

    NASA Astrophysics Data System (ADS)

    Engelbrecht, Chris; Karataglidis, Steven

    2015-04-01

The International Celestial Reference Frame (ICRF) was adopted by the International Astronomical Union (IAU) in 1997. The current standard, the ICRF-2, is based on Very Long Baseline Interferometric (VLBI) radio observations of positions of 3414 extragalactic radio reference sources. The angular resolution achieved by the VLBI technique is on a scale of milliarcseconds to sub-milliarcseconds and defines the ICRF with the highest accuracy available at present. An ideal reference source used for celestial reference frame work should be unresolved or point-like on these scales. However, extragalactic radio sources, such as those that define and maintain the ICRF, can exhibit spatially extended structures on sub-milliarcsecond scales that may vary both in time and frequency. This variability can introduce a significant error in the VLBI measurements, thereby degrading the accuracy of the estimated source position. Reference source density in the Southern celestial hemisphere is also poor compared to the Northern hemisphere, mainly due to the limited number of radio telescopes in the south. In order to define the ICRF with the highest accuracy, observational efforts are required to find more compact sources and to monitor their structural evolution. In this paper we show that astrometric VLBI sessions can be used to obtain source structure information, and we present preliminary imaging results for the source J1427-4206 at 2.3 and 8.4 GHz, which show that the source is compact and suitable as a reference source.

  12. X-ray Point Source Populations in Spiral and Elliptical Galaxies

    NASA Astrophysics Data System (ADS)

    Colbert, E.; Heckman, T.; Weaver, K.; Ptak, A.; Strickland, D.

    2001-12-01

In the years of the Einstein and ASCA satellites, it was known that the total hard X-ray luminosity from non-AGN galaxies was fairly well correlated with the total blue luminosity. However, the origin of this hard component was not well understood. Some possibilities that were considered included X-ray binaries, extended upscattered far-infrared light via the inverse-Compton process, extended hot 10^7 K gas (especially in elliptical galaxies), or even an active nucleus. Now, for the first time, we know from Chandra images that a significant amount of the total hard X-ray emission comes from individual X-ray point sources. We present here spatial and spectral analyses of Chandra data for X-ray point sources in a sample of ~40 galaxies, including both spiral galaxies (starbursts and non-starbursts) and elliptical galaxies. We discuss the relationship between the X-ray point source population and the properties of the host galaxies. We show that the slopes of the point-source X-ray luminosity functions differ for different host galaxy types and discuss possible reasons why. We also present detailed X-ray spectral analyses of several of the most luminous X-ray point sources (i.e., IXOs, a.k.a. ULXs), and discuss various scenarios for the origin of the X-ray point sources.

  13. Three-parameter optical studies in Scottish coastal waters

    NASA Astrophysics Data System (ADS)

    McKee, David; Cunningham, Alex; Jones, Ken

    1997-02-01

    A new submersible optical instrument has been constructed which allows chlorophyll fluorescence, attenuation and wide-angle scattering measurements to be made simultaneously at the same point in a body of water. The instrument uses a single xenon flashlamp as the light source, and incorporates its own power supply and microprocessor-based data logging system. It has been cross-calibrated against commercial single-parameter instruments using a range of non-algal particles and phytoplankton cultures. The equipment has been deployed at sea in the Firth of Clyde and Loch Linnhe, where it has been used to study seasonal variability in optical water column structure. Results will be presented to illustrate how ambiguity in the interpretation of measurements of a single optical parameter can be alleviated by measuring several parameters simultaneously. Comparative studies of differences in winter and spring relationships between optical variables have also been carried out.

  14. Spatial correlation analysis of seismic noise for STAR X-ray infrastructure design

    NASA Astrophysics Data System (ADS)

    D'Alessandro, Antonino; Agostino, Raffaele; Festa, Lorenzo; Gervasi, Anna; Guerra, Ignazio; Palmer, Dennis T.; Serafini, Luca

    2014-05-01

    The Italian PON MaTeRiA project is focused on the creation of a research infrastructure open to users based on an innovative and evolutionary X-ray source. This source, named STAR (Southern Europe TBS for Applied Research), exploits the Thomson back scattering (TBS) of laser radiation by fast-electron beams. Its main performance figures are: X-ray photon flux of 10^9-10^10 ph/s, angular divergence variable between 2 and 10 mrad, X-ray energy continuously variable between 8 keV and 150 keV, bandwidth ΔE/E variable between 1 and 10%, and a ps time-resolved structure. In order to achieve this performance, bunches of electrons produced by a photo-injector are accelerated to relativistic velocities by a linear accelerator section. The electron beam, a few hundred micrometers wide, is driven by magnetic fields to the interaction point along a 15 m transport line, where it is focused into a 10 micrometer-wide area. In the same area, the laser beam is focused after being transported along a 12 m structure. Ground vibrations could greatly affect the collision probability, and thus the emittance, by deviating the paths of the beams during their travel in the STAR source. Therefore, the study program to measure ground vibrations at the STAR site can be used for site characterization in relation to accelerator design. Environmental and facility noise may affect X-ray operation especially if the predominant wavelengths in the microtremor wavefield are much smaller than the size of the linear accelerator. For much greater wavelengths, all the accelerator parts move in phase, so even large displacements cannot generate any significant effect. On the other hand, for wavelengths equal to or less than half the accelerator size, several parts could move in phase opposition, so even small displacements could affect its proper functioning.
It is therefore important to characterize the microtremor wavefield in both the frequency and wavelength domains. For this reason, we performed measurements of seismic noise in order to characterize the environmental noise at the site where the X-ray accelerator will be built. For the characterization of the site, we carried out several passive seismic monitoring experiments at different times of the day and in different weather conditions. We recorded microtremor using an array of broadband 3C seismic sensors arranged along the linear accelerator. For each measurement point, we determined the displacement, velocity and acceleration spectrograms and the power spectral density of both horizontal and vertical components. We also determined the microtremor horizontal-to-vertical spectral ratio as a function of azimuth to identify the main ground vibration direction and to detect any site or building resonance frequencies. We applied a rotation matrix to transform the North-South and East-West signal components into transverse and radial components with respect to the direction of the linear accelerator. Subsequently, for each pair of seismic stations we determined the coherence function to analyze the spatial correlation of the seismic noise. These analyses have allowed us to exhaustively characterize the seismic noise of the study area, in terms of both power and space-time variability, in both frequency and wavelength.
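
    The pairwise coherence analysis described above can be sketched as follows; this is a minimal illustration with synthetic two-station records (the sampling rate, signal frequency and noise levels are assumptions, not values from the study):

```python
import numpy as np
from scipy.signal import coherence

fs = 100.0  # sampling rate in Hz (assumed)
t = np.arange(0, 60, 1 / fs)
rng = np.random.default_rng(0)

# Two synthetic station records sharing a 2 Hz component plus independent noise
common = np.sin(2 * np.pi * 2.0 * t)
sta_a = common + 0.5 * rng.standard_normal(t.size)
sta_b = common + 0.5 * rng.standard_normal(t.size)

# Magnitude-squared coherence between the station pair (Welch averaging)
f, Cxy = coherence(sta_a, sta_b, fs=fs, nperseg=1024)
peak_freq = f[np.argmax(Cxy)]  # sits at the shared 2 Hz component
```

    In a real deployment the ground motion coherent across a station pair (long-wavelength microtremor) shows up exactly this way, as a coherence peak at the shared frequency.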

  15. A simple tool to predict admission at the time of triage.

    PubMed

    Cameron, Allan; Rodgers, Kenneth; Ireland, Alastair; Jamdar, Ravi; McKay, Gerard A

    2015-03-01

    To create and validate a simple clinical score to estimate the probability of admission at the time of triage. This was a multicentre, retrospective, cross-sectional study of triage records for all unscheduled adult attendances in North Glasgow over 2 years. Clinical variables that had significant associations with admission on logistic regression were entered into a mixed-effects multiple logistic model. This provided weightings for the score, which was then simplified and tested on a separate validation group by receiver operating characteristic (ROC) analysis and goodness-of-fit tests. 215 231 presentations were used for model derivation and 107 615 for validation. Variables in the final model showing clinically and statistically significant associations with admission were: triage category, age, National Early Warning Score (NEWS), arrival by ambulance, referral source and admission within the last year. The resulting 6-variable score showed excellent admission/discharge discrimination (area under ROC curve 0.8774, 95% CI 0.8752 to 0.8796). Higher scores also predicted early returns for those who were discharged: the odds of subsequent admission within 28 days doubled for every 7-point increase (log odds=+0.0933 per point, p<0.0001). This simple, 6-variable score accurately estimates the probability of admission purely from triage information. Most patients could accurately be assigned to 'admission likely', 'admission unlikely', 'admission very unlikely' etc., by setting appropriate cut-offs. This could have uses in patient streaming, bed management and decision support. It also has the potential to control for demographics when comparing performance over time or between departments. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.
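
    A hypothetical sketch of how such a six-variable triage point score might be assembled; the integer weights below are invented for illustration and are not the published weightings derived from the mixed-effects logistic model:

```python
# Illustrative weights only; the real score's weightings come from the paper's model.
WEIGHTS = {
    "triage_category": {1: 5, 2: 4, 3: 2, 4: 1, 5: 0},  # category 1 = most urgent
    "age_over_65": 2,
    "news_high": 3,
    "arrived_by_ambulance": 2,
    "gp_referral": 2,
    "admitted_last_year": 1,
}

def triage_score(category, age_over_65, news_high, ambulance, gp_referral, prior_admission):
    """Sum illustrative risk points over the six triage-time variables."""
    score = WEIGHTS["triage_category"][category]
    score += WEIGHTS["age_over_65"] if age_over_65 else 0
    score += WEIGHTS["news_high"] if news_high else 0
    score += WEIGHTS["arrived_by_ambulance"] if ambulance else 0
    score += WEIGHTS["gp_referral"] if gp_referral else 0
    score += WEIGHTS["admitted_last_year"] if prior_admission else 0
    return score

high = triage_score(1, True, True, True, True, True)    # worst case under these weights
low = triage_score(5, False, False, False, False, False)
```

    Cut-offs on such a score then partition patients into the 'admission likely' / 'admission unlikely' bands mentioned in the abstract.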

  16. Determination of efficiency of an aged HPGe detector for gaseous sources by self absorption correction and point source methods

    NASA Astrophysics Data System (ADS)

    Sarangapani, R.; Jose, M. T.; Srinivasan, T. K.; Venkatraman, B.

    2017-07-01

    Methods for the determination of the efficiency of an aged high-purity germanium (HPGe) detector for gaseous sources are presented in this paper. X-ray radiography of the detector has been performed to obtain the detector dimensions for computational purposes. The dead layer thickness of the HPGe detector has been ascertained from experiments and Monte Carlo computations. Experimental work with standard point and liquid sources in several cylindrical geometries has been undertaken to obtain the energy-dependent efficiency. Monte Carlo simulations have been performed for computing efficiencies for point, liquid and gaseous sources. Self-absorption correction factors have been obtained using mathematical equations for volume sources and MCNP simulations. Self-absorption correction and point source methods have been used to estimate the efficiency for gaseous sources. The efficiencies determined from the present work have been used to estimate the activity of a cover gas sample from a fast reactor.
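
    For a slab volume source, the self-absorption correction has a standard closed form, (1 - e^(-μt))/(μt); a small sketch of how it rescales a point-source efficiency (the attenuation coefficient and thickness below are placeholders, not values from the paper, and the real analysis folds in MCNP geometry factors as well):

```python
import math

def self_absorption_factor(mu, thickness):
    """Slab-source self-absorption correction (1 - exp(-mu*t)) / (mu*t):
    the ratio of attenuated to unattenuated emission leaving the source."""
    x = mu * thickness
    if x == 0.0:
        return 1.0  # no attenuation limit
    return (1.0 - math.exp(-x)) / x

def gas_efficiency(point_efficiency, mu, thickness):
    """Scale a measured point-source efficiency by the self-absorption factor
    (illustrative combination; geometry corrections are omitted)."""
    return point_efficiency * self_absorption_factor(mu, thickness)
```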

  17. An Experimental Investigation of Transverse Tension Fatigue Characterization of IM6/3501-6 Composite Materials Using a Three-Point Bend Test

    NASA Technical Reports Server (NTRS)

    Peck, Ann W.

    1998-01-01

    As composites are introduced into more complex structures with out-of-plane loadings, a better understanding is needed of the out-of-plane, matrix-dominated failure mechanisms. This work investigates the transverse tension fatigue characteristics of IM6/3501-6 composite materials. To test the 90 degree laminae, a three-point bend test was chosen, potentially minimizing the handling and gripping issues associated with tension tests. A finite element analysis of a particular specimen configuration was performed to investigate the influence of specimen size on the stress distribution for a three-point bend test. Static testing of 50 specimens of 9 differently sized configurations produced a mean transverse tensile strength of 61.3 MPa (8.0 ksi). The smallest configuration (10.2 mm wide, span-to-thickness ratio of 3) consistently exhibited transverse tensile failures. A volume scale effect was difficult to discern due to the large scatter of the data. Static testing of 10 different specimens taken from a second panel produced a mean transverse tensile strength of 82.7 MPa (12.0 ksi). Weibull parameterization of the data was possible, but due to variability in raw material and/or manufacturing, more replicates are needed for greater confidence. Three-point flex fatigue testing of the smallest configuration was performed on 59 specimens at various levels of the mean static transverse tensile strength using an R ratio of 0.1 and a frequency of 20 Hz. A great deal of scatter was seen in the data. The majority of specimens failed near the center loading roller. To determine whether the scatter in the fatigue data is due to variability in raw material and/or the manufacturing process, additional testing should be performed on panels manufactured from different sources.

  18. The Hubble Catalog of Variables

    NASA Astrophysics Data System (ADS)

    Gavras, P.; Bonanos, A. Z.; Bellas-Velidis, I.; Charmandaris, V.; Georgantopoulos, I.; Hatzidimitriou, D.; Kakaletris, G.; Karampelas, A.; Laskaris, N.; Lennon, D. J.; Moretti, M. I.; Pouliasis, E.; Sokolovsky, K.; Spetsieri, Z. T.; Tsinganos, K.; Whitmore, B. C.; Yang, M.

    2017-06-01

    The Hubble Catalog of Variables (HCV) is a 3-year ESA-funded project that aims to develop a set of algorithms to identify variables among the sources included in the Hubble Source Catalog (HSC) and produce the HCV. We will process all HSC sources with more than a predefined number of measurements in a single filter/instrument combination and compute a range of lightcurve features to determine the variability status of each source. At the end of the project, the first release of the Hubble Catalog of Variables will be made available at the Mikulski Archive for Space Telescopes (MAST) and the ESA Science Archives. The variability detection pipeline will be implemented at the Space Telescope Science Institute (STScI) so that updated versions of the HCV may be created following future releases of the HSC.
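
    One common lightcurve feature for this kind of variability screening is the reduced chi-square against a constant-brightness model; a sketch with synthetic photometry (the HCV's actual feature set and thresholds are not specified here, and the data below are invented):

```python
import numpy as np

def reduced_chi2(mag, err):
    """Reduced chi-square of a lightcurve against a constant (weighted-mean)
    model; values well above 1 flag candidate variables."""
    w = 1.0 / err**2
    mean = np.sum(w * mag) / np.sum(w)
    chi2 = np.sum(((mag - mean) / err) ** 2)
    return chi2 / (mag.size - 1)

rng = np.random.default_rng(1)
err = np.full(50, 0.05)  # assumed photometric errors in mag
constant = 20.0 + rng.normal(0, 0.05, 50)                              # non-variable
variable = 20.0 + 0.5 * np.sin(np.linspace(0, 6, 50)) + rng.normal(0, 0.05, 50)
```

    A constant source gives a reduced chi-square near 1, while the 0.5 mag sinusoid pushes it far above any plausible threshold.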

  19. Discrimination between diffuse and point sources of arsenic at Zimapán, Hidalgo state, Mexico.

    PubMed

    Sracek, Ondra; Armienta, María Aurora; Rodríguez, Ramiro; Villaseñor, Guadalupe

    2010-01-01

    There are two principal sources of arsenic in Zimapán. Point sources are linked to mining and smelting activities and especially to mine tailings. Diffuse sources are not well defined and are linked to regional flow systems in carbonate rocks. Both sources are caused by the oxidation of arsenic-rich sulfidic mineralization. Point sources are characterized by Ca-SO4-HCO3 type ground water and relatively enriched values of δD, δ18O, and δ34S(SO4). Diffuse sources are characterized by Ca-Na-HCO3 type ground water and more depleted values of δD, δ18O, and δ34S(SO4). Values of δD and δ18O indicate a similar altitude of recharge for both arsenic sources and a stronger impact of evaporation for point sources in mine tailings. The two sources also have different values of δ34S(SO4), presumably due to different types of mineralization or isotopic zonality in the deposits. In Principal Component Analysis (PCA), principal component 1 (PC1), which describes the impact of sulfide oxidation and its neutralization by the dissolution of carbonates, has higher values in samples from point sources. In spite of similar concentrations of As in ground water affected by diffuse and point sources (mean values 0.21 mg L⁻¹ and 0.31 mg L⁻¹, respectively, over the years 2003 to 2008), the diffuse sources have more impact on the health of the population in Zimapán. This is caused by the extraction of ground water from wells tapping the regional flow system. In contrast, wells located in the proximity of mine tailings are generally not used for water supply.
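
    The PCA step can be illustrated with synthetic hydrochemical data in which two variables co-vary, as SO4 and Ca would under sulfide oxidation neutralized by carbonate dissolution; all values below are invented, and this is a generic eigendecomposition sketch rather than the study's analysis:

```python
import numpy as np

def principal_components(X):
    """PCA via eigendecomposition of the correlation matrix of
    standardized variables; returns eigenvalues and PC scores."""
    Z = (X - X.mean(axis=0)) / X.std(axis=0)
    corr = np.cov(Z, rowvar=False)
    evals, evecs = np.linalg.eigh(corr)
    order = np.argsort(evals)[::-1]          # sort components by variance explained
    evals, evecs = evals[order], evecs[:, order]
    return evals, Z @ evecs                  # scores = projections onto the PCs

# Synthetic samples: SO4 and Ca rising together, Cl independent (placeholder units)
rng = np.random.default_rng(2)
so4 = rng.normal(100, 20, 200)
ca = so4 * 0.8 + rng.normal(0, 5, 200)
cl = rng.normal(30, 10, 200)
evals, scores = principal_components(np.column_stack([so4, ca, cl]))
```

    PC1 captures the correlated SO4-Ca pair, so samples driven by sulfide oxidation plus carbonate buffering score high on it, which is the pattern the abstract reports for the point sources.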

  20. Interplanetary Scintillation studies with the Murchison Wide-field Array III: Comparison of source counts and densities for radio sources and their sub-arcsecond components at 162 MHz

    NASA Astrophysics Data System (ADS)

    Chhetri, R.; Ekers, R. D.; Morgan, J.; Macquart, J.-P.; Franzen, T. M. O.

    2018-06-01

    We use Murchison Widefield Array observations of interplanetary scintillation (IPS) to determine the source counts of point (<0.3 arcsecond extent) sources and of all sources with some subarcsecond structure, at 162 MHz. We have developed the methodology to derive these counts directly from the IPS observables, while taking into account changes in sensitivity across the survey area. The counts of sources with compact structure follow the behaviour of the dominant source population above ˜3 Jy, but below this they show Euclidean behaviour. We compare our counts to those predicted by simulations and find good agreement for our counts of sources with compact structure, but significant disagreement for point source counts. Using low radio frequency SEDs from the GLEAM survey, we classify point sources as compact steep-spectrum (CSS), flat spectrum, or peaked. If we consider the CSS sources to be the more evolved counterparts of the peaked sources, the two categories combined comprise approximately 80% of the point source population. We calculate densities of potential calibrators brighter than 0.4 Jy at low frequencies and find 0.2 sources per square degree for point sources, rising to 0.7 sources per square degree if sources with more complex arcsecond structure are included. We extrapolate to estimate 4.6 sources per square degree at 0.04 Jy. We find that a peaked spectrum is an excellent predictor of compactness at low frequencies, increasing the number of good calibrators by a factor of three compared to the usual flat spectrum criterion.

  1. The Chandra Source Catalog

    NASA Astrophysics Data System (ADS)

    Evans, Ian; Primini, Francis A.; Glotfelty, Kenny J.; Anderson, Craig S.; Bonaventura, Nina R.; Chen, Judy C.; Davis, John E.; Doe, Stephen M.; Evans, Janet D.; Fabbiano, Giuseppina; Galle, Elizabeth C.; Gibbs, Danny G., II; Grier, John D.; Hain, Roger; Hall, Diane M.; Harbo, Peter N.; He, Xiang Qun (Helen); Houck, John C.; Karovska, Margarita; Kashyap, Vinay L.; Lauer, Jennifer; McCollough, Michael L.; McDowell, Jonathan C.; Miller, Joseph B.; Mitschang, Arik W.; Morgan, Douglas L.; Mossman, Amy E.; Nichols, Joy S.; Nowak, Michael A.; Plummer, David A.; Refsdal, Brian L.; Rots, Arnold H.; Siemiginowska, Aneta L.; Sundheim, Beth A.; Tibbetts, Michael S.; van Stone, David W.; Winkelman, Sherry L.; Zografou, Panagoula

    2009-09-01

    The first release of the Chandra Source Catalog (CSC) was published in 2009 March, and includes information about 94,676 X-ray sources detected in a subset of public ACIS imaging observations from roughly the first eight years of the Chandra mission. This release of the catalog includes point and compact sources with observed spatial extents ≲30 arcsec. The CSC is a general purpose virtual X-ray astrophysics facility that provides access to a carefully selected set of generally useful quantities for individual X-ray sources, and is designed to satisfy the needs of a broad-based group of scientists, including those who may be less familiar with astronomical data analysis in the X-ray regime. The catalog (1) provides access to the best estimates of the X-ray source properties for detected sources, with good scientific fidelity, and directly supports medium-sophistication scientific analysis using the individual source data; (2) facilitates analysis of a wide range of statistical properties for classes of X-ray sources; (3) provides efficient access to calibrated observational data and ancillary data products for individual X-ray sources, so that users can perform detailed further analysis using existing tools; and (4) includes real X-ray sources detected with flux significance greater than a predefined threshold, while maintaining the number of spurious sources at an acceptable level. For each detected X-ray source, the CSC provides commonly tabulated quantities, including source position, extent, multi-band fluxes, hardness ratios, and variability statistics, derived from the observations in which the source is detected. In addition to these traditional catalog elements, for each X-ray source the CSC includes an extensive set of file-based data products that can be manipulated interactively, including source images, event lists, light curves, and spectra from each observation in which a source is detected.

  2. Prognostic grouping of metastatic prostate cancer using conventional pretreatment prognostic factors.

    PubMed

    Mikkola, Arto; Aro, Jussi; Rannikko, Sakari; Ruutu, Mirja

    2009-01-01

    To develop three prognostic groups for disease-specific mortality based on the binary-classified pretreatment variables age, haemoglobin concentration (Hb), erythrocyte sedimentation rate (ESR), alkaline phosphatase (ALP), prostate-specific antigen (PSA), plasma testosterone and estradiol level in hormonally treated patients with metastatic prostate cancer (PCa). The present study comprised 200 Finnprostate 6 study patients, but data on all variables were not known for every patient. The patients were divided into three prognostic risk groups (Rgs) using the prognostically best set of pretreatment variables. The best set was found by backward stepwise selection, and the effect of every excluded variable on the binary classification cut-off points of the remaining variables was checked and corrected when needed. The best group of variables was ALP, PSA, ESR and age. All data were known for 142 patients. Patients were given one risk point each for ALP > 180 U/l (normal value 60-275 U/l), PSA > 35 microg/l, ESR > 80 mm/h and age < 60 years. Three risk groups were formed: Rg-a (0-1 risk points), Rg-b (2 risk points) and Rg-c (3-4 risk points). The risk of death from PCa increased statistically significantly with advancing prognostic group. Patients with metastatic PCa can be divided into three statistically significantly different prognostic risk groups for PCa-specific mortality by using the binary-classified pretreatment variables ALP, PSA, ESR and age.
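
    The published cut-offs translate directly into a small scoring routine; a sketch following the Rg-a/Rg-b/Rg-c definitions quoted in the abstract (function names are ours):

```python
def risk_points(alp, psa, esr, age):
    """One point each for ALP > 180 U/l, PSA > 35 microg/l, ESR > 80 mm/h
    and age < 60 years, per the abstract's binary cut-offs."""
    return sum([alp > 180, psa > 35, esr > 80, age < 60])

def risk_group(points):
    """Map 0-4 risk points to the three prognostic groups."""
    if points <= 1:
        return "Rg-a"   # 0-1 points
    if points == 2:
        return "Rg-b"   # 2 points
    return "Rg-c"       # 3-4 points
```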

  3. Quasar microlensing models with constraints on the Quasar light curves

    NASA Astrophysics Data System (ADS)

    Tie, S. S.; Kochanek, C. S.

    2018-01-01

    Quasar microlensing analyses implicitly generate a model of the variability of the source quasar. The implied source variability may be unrealistic, yet its likelihood is generally not evaluated. We used the damped random walk (DRW) model for quasar variability to evaluate the likelihood of the source variability and applied the revised algorithm to a microlensing analysis of the lensed quasar RX J1131-1231. We compared estimates of the size of the quasar disc and the average stellar mass of the lens galaxy with and without applying the DRW likelihoods for the source variability model and found no significant effect on the estimated physical parameters. The most likely explanation is that unrealistic source light-curve models are generally associated with poor microlensing fits that already make a negligible contribution to the probability distributions of the derived parameters.
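
    The DRW model assigns a lightcurve the Gaussian likelihood implied by the covariance σ² exp(-|Δt|/τ); a direct O(N³) sketch with simulated data (the σ, τ and error values are illustrative, and fast recursive evaluations exist that this sketch does not use):

```python
import numpy as np

def drw_loglike(t, y, yerr, sigma, tau):
    """Gaussian log-likelihood of a lightcurve under a damped random walk:
    covariance sigma^2 * exp(-|dt|/tau) plus diagonal measurement noise."""
    dt = np.abs(t[:, None] - t[None, :])
    cov = sigma**2 * np.exp(-dt / tau) + np.diag(yerr**2)
    resid = y - y.mean()
    _, logdet = np.linalg.slogdet(cov)
    alpha = np.linalg.solve(cov, resid)
    return -0.5 * (resid @ alpha + logdet + t.size * np.log(2 * np.pi))

# Simulate a lightcurve from the same covariance (sigma=0.2 mag, tau=20 d assumed)
rng = np.random.default_rng(3)
t = np.sort(rng.uniform(0, 100, 40))
true_cov = 0.2**2 * np.exp(-np.abs(t[:, None] - t[None, :]) / 20.0)
yerr = np.full(40, 0.02)
y = rng.multivariate_normal(np.zeros(40), true_cov + np.diag(yerr**2))
```

    Evaluated on the simulated data, the likelihood favours the generating parameters over grossly wrong ones, which is exactly the penalty the analysis applies to unrealistic implied source lightcurves.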

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aartsen, M. G.; Abraham, K.; Ackermann, M.

    Observation of a point source of astrophysical neutrinos would be a “smoking gun” signature of a cosmic-ray accelerator. While IceCube has recently discovered a diffuse flux of astrophysical neutrinos, no localized point source has been observed. Previous IceCube searches for point sources in the southern sky were restricted by either an energy threshold above a few hundred TeV or poor neutrino angular resolution. Here we present a search for southern sky point sources with greatly improved sensitivities to neutrinos with energies below 100 TeV. By selecting charged-current ν_μ interactions inside the detector, we reduce the atmospheric background while retaining efficiency for astrophysical neutrino-induced events reconstructed with sub-degree angular resolution. The new event sample covers three years of detector data and leads to a factor of 10 improvement in sensitivity to point sources emitting below 100 TeV in the southern sky. No statistically significant evidence of point sources was found, and upper limits are set on neutrino emission from individual sources. A posteriori analysis of the highest-energy (∼100 TeV) starting event in the sample found that this event alone represents a 2.8σ deviation from the hypothesis that the data consist only of atmospheric background.

  5. Variable pressure power cycle and control system

    DOEpatents

    Goldsberry, Fred L.

    1984-11-27

    A variable pressure power cycle and control system that is adjustable to a variable heat source is disclosed. The power cycle adjusts itself to the heat source so that a minimal temperature difference is maintained between the heat source fluid and the power cycle working fluid, thereby substantially matching the thermodynamic envelope of the power cycle to the thermodynamic envelope of the heat source. Adjustments are made by sensing the inlet temperature of the heat source fluid and then setting a superheated vapor temperature and pressure to achieve a minimum temperature difference between the heat source fluid and the working fluid.
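
    The control logic described amounts to tracking the sensed heat-source inlet temperature with a fixed approach; a toy sketch (the approach value and the floor are assumptions, and the patent's scheduling of pressure from the vapor temperature is omitted):

```python
APPROACH_K = 10.0   # assumed minimum heat-source/working-fluid temperature difference
FLOOR_K = 373.0     # assumed lowest useful superheated-vapor setpoint

def vapor_setpoint(heat_source_inlet_k):
    """Superheated-vapor temperature setpoint: the sensed heat-source inlet
    temperature minus the approach, clamped at a floor (illustrative values)."""
    return max(heat_source_inlet_k - APPROACH_K, FLOOR_K)
```

    Re-evaluating this setpoint as the inlet temperature drifts is what keeps the power cycle's thermodynamic envelope matched to that of a variable heat source.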

  6. Development of episodic and autobiographical memory: The importance of remembering forgetting

    PubMed Central

    Bauer, Patricia J.

    2015-01-01

    Some memories of the events of our lives have a long shelf-life—they remain accessible to recollection even after long delays. Yet many of our other experiences are forgotten, sometimes very soon after they take place. In spite of the prevalence of forgetting, theories of the development of episodic and autobiographical memory largely ignore it as a potential source of variance in explaining age-related variability in long-term recall. They focus instead on what may be viewed as positive developmental changes, that is, changes that result in improvements in the quality of the memory representations that are formed. The purpose of this review is to highlight the role of forgetting as an important variable in understanding the development of episodic and autobiographical memory. Forgetting processes are implicated as a source of variability in long-term recall due to the protracted course of development of the neural substrate responsible for the transformation of fleeting experiences into memory traces that can be integrated into long-term stores and retrieved at later points in time. It is logical to assume that while the substrate is developing, neural processing is relatively inefficient and ineffective, resulting in loss of information from memory (i.e., forgetting). For this reason, a focus on developmental increases in the quality of representations of past events and experiences will tell only part of the story of how memory develops. A more complete account is afforded when we also consider changes in forgetting. PMID:26644633

  7. Silica exposure during construction activities: statistical modeling of task-based measurements from the literature.

    PubMed

    Sauvé, Jean-François; Beaudry, Charles; Bégin, Denis; Dion, Chantal; Gérin, Michel; Lavoué, Jérôme

    2013-05-01

    Many construction activities can put workers at risk of breathing silica-containing dusts, and there is an important body of literature documenting exposure levels using a task-based strategy. In this study, statistical modeling was used to analyze a data set containing 1466 task-based, personal respirable crystalline silica (RCS) measurements gathered from 46 sources to estimate exposure levels during construction tasks and the effects of determinants of exposure. Monte-Carlo simulation was used to recreate individual exposures from summary parameters, and the statistical modeling involved multimodel inference with Tobit models containing combinations of the following exposure variables: sampling year, sampling duration, construction sector, project type, workspace, ventilation, and controls. Exposure levels by task were predicted based on the median reported duration by activity, the year 1998, absence of source control methods, and an equal distribution of the other determinants of exposure. The model containing all the variables explained 60% of the variability and was identified as the best approximating model. Of the 27 tasks contained in the data set, abrasive blasting, masonry chipping, scabbling concrete, tuck pointing, and tunnel boring had estimated geometric means above 0.1 mg m⁻³ based on the exposure scenario developed. Water-fed tools and local exhaust ventilation were associated with reductions of 71% and 69% in exposure levels compared with no controls, respectively. The predictive model developed can be used to estimate RCS concentrations for many construction activities in a wide range of circumstances.
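
    The Monte-Carlo step of recreating individual exposures from reported summary parameters can be sketched by sampling a lognormal with the published geometric mean and geometric standard deviation; the GM/GSD values below are placeholders, not figures from the study:

```python
import numpy as np

def recreate_exposures(gm, gsd, n, rng):
    """Draw n individual exposures from the lognormal implied by a reported
    geometric mean (GM) and geometric standard deviation (GSD)."""
    return rng.lognormal(mean=np.log(gm), sigma=np.log(gsd), size=n)

rng = np.random.default_rng(4)
samples = recreate_exposures(gm=0.1, gsd=2.5, n=5000, rng=rng)  # placeholder GM/GSD
gm_hat = np.exp(np.log(samples).mean())  # recovered geometric mean, in mg/m^3
```

    Pooling such recreated values across sources lets censored-regression (Tobit) models be fitted as if the raw measurements were available.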

  8. Quality-by-Design approach to monitor the operation of a batch bioreactor in an industrial avian vaccine manufacturing process.

    PubMed

    Largoni, Martina; Facco, Pierantonio; Bernini, Donatella; Bezzo, Fabrizio; Barolo, Massimiliano

    2015-10-10

    Monitoring batch bioreactors is a complex task, because several sources of variability can affect a running batch and impact the final product quality. Additionally, the product quality itself may not be measurable on line, but requires sampling and lab analysis taking several days to complete. In this study we show that, by using appropriate process analytical technology tools, the operation of an industrial batch bioreactor used in avian vaccine manufacturing can be effectively monitored as the batch progresses. Multivariate statistical models are built from historical databases of batches already completed, and they are used to enable real-time identification of the variability sources, to reliably predict the final product quality, and to improve process understanding, paving the way to a reduction of final product rejections, as well as of the product cycle time. It is also shown that the product quality "builds up" mainly during the first half of a batch, suggesting on the one hand that reducing the variability during this period is crucial, and on the other hand that the batch length can possibly be shortened. Overall, the study demonstrates that, by using a Quality-by-Design approach centered on the appropriate use of mathematical modeling, quality can indeed be built "by design" into the final product, whereas end-point product testing can progressively lose its importance in product manufacturing. Copyright © 2015 Elsevier B.V. All rights reserved.

  9. The XMM deep survey in the CDF-S. X. X-ray variability of bright sources

    NASA Astrophysics Data System (ADS)

    Falocco, S.; Paolillo, M.; Comastri, A.; Carrera, F. J.; Ranalli, P.; Iwasawa, K.; Georgantopoulos, I.; Vignali, C.; Gilli, R.

    2017-12-01

    Aims: We aim to study the variability properties of bright hard-X-ray-selected active galactic nuclei (AGN) in the redshift range 0.3 to 1.6 detected in the Chandra Deep Field South by a long (∼3 Ms) XMM observation (XMM-CDFS). Methods: Taking advantage of the good count statistics in the XMM-CDFS, we search for flux and spectral variability using hardness ratio (HR) techniques. We also investigate the variability of different spectral components (photon index of the power law, column density of the local absorber, and reflection intensity). The spectra were merged into six epochs (defined as adjacent observations) and into high and low flux states to understand whether the flux transitions are accompanied by spectral changes. Results: The flux variability is significant in all the sources investigated. The HRs are in general not as variable as the fluxes, in line with previous results on deep fields. Only one source displays a variable HR, anti-correlated with the flux (source 337). The spectral analysis in the available epochs confirms the steeper-when-brighter trend, consistent with Comptonisation models, only in this source at the 99% confidence level. Finding this trend in one out of seven unabsorbed sources is consistent, within the statistical limits, with the 15% of unabsorbed AGN showing it in previous deep surveys. No significant variability in the column densities, nor in the Compton reflection component, has been detected across the epochs considered. The high and low states display in general different normalisations but consistent spectral properties. Conclusions: X-ray flux fluctuations are ubiquitous in AGN, though in some cases the data quality does not allow for their detection. In general, the significant flux variations are not associated with spectral variability: the photon index and column densities are not significantly variable in nine out of the ten AGN over long timescales (from three to six and a half years). 
Photon index variability is found only in one source (which is steeper when brighter) out of seven unabsorbed AGN. The percentage of spectrally variable objects is consistent, within the limited statistics of sources studied here, with previous deep samples.
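
    The hardness ratio used for this kind of spectral-variability screening is conventionally HR = (H - S)/(H + S) over hard- and soft-band counts; a minimal sketch (band definitions vary between studies and are not fixed here):

```python
import numpy as np

def hardness_ratio(hard_counts, soft_counts):
    """HR = (H - S) / (H + S): ranges from -1 (all soft) to +1 (all hard).
    A 'steeper when brighter' source shows HR anti-correlated with flux."""
    h = np.asarray(hard_counts, dtype=float)
    s = np.asarray(soft_counts, dtype=float)
    return (h - s) / (h + s)
```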

  10. Optical polarization of high-energy BL Lacertae objects

    NASA Astrophysics Data System (ADS)

    Hovatta, T.; Lindfors, E.; Blinov, D.; Pavlidou, V.; Nilsson, K.; Kiehlmann, S.; Angelakis, E.; Fallah Ramazani, V.; Liodakis, I.; Myserlis, I.; Panopoulou, G. V.; Pursimo, T.

    2016-12-01

    Context. We investigate the optical polarization properties of high-energy BL Lac objects using data from the RoboPol blazar monitoring program and the Nordic Optical Telescope. Aims: We wish to understand if there are differences between the BL Lac objects that have been detected with the current-generation TeV instruments and those objects that have not yet been detected. Methods: We used a maximum-likelihood method to investigate the optical polarization fraction and its variability in these sources. In order to study the polarization position angle variability, we calculated the time derivative of the electric vector position angle (EVPA) change. We also studied the spread in the Stokes Q/I-U/I plane and rotations in the polarization plane. Results: The mean polarization fraction of the TeV-detected BL Lacs is 5%, while the non-TeV sources show a higher mean polarization fraction of 7%. This difference in polarization fraction disappears when the dilution by the unpolarized light of the host galaxy is accounted for. The TeV sources show somewhat lower fractional polarization variability amplitudes than the non-TeV sources. Also the fraction of sources with a smaller spread in the Q/I-U/I plane and a clumped distribution of points away from the origin, possibly indicating a preferred polarization angle, is larger in the TeV than in the non-TeV sources. These differences between TeV and non-TeV samples seem to arise from differences between intermediate and high spectral peaking sources instead of the TeV detection. When the EVPA variations are studied, the rate of EVPA change is similar in both samples. We detect significant EVPA rotations in both TeV and non-TeV sources, showing that rotations can occur in high spectral peaking BL Lac objects when the monitoring cadence is dense enough. Our simulations show that we cannot exclude a random walk origin for these rotations. 
Conclusions: These results indicate that there are no intrinsic differences in the polarization properties of the TeV-detected and non-TeV-detected high-energy BL Lac objects. This suggests that the polarization properties are not directly related to the TeV-detection, but instead the TeV loudness is connected to the general flaring activity, redshift, and the synchrotron peak location. The polarization curve data are only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (http://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/596/A78
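    The rate-of-EVPA-change measurement described in this record requires first resolving the inherent n×180° ambiguity of the electric vector position angle, since a time derivative is only meaningful on an unwrapped curve. Below is a minimal sketch of the common unwrapping convention (shift each point by a multiple of 180° to minimize the jump from its predecessor); the function names are illustrative, not the RoboPol team's actual code:

```python
import numpy as np

def unwrap_evpa(evpa_deg):
    """Resolve the n*180 deg ambiguity of the electric vector position
    angle by shifting each point by a multiple of 180 deg so that the
    jump from the previous point is minimized."""
    evpa = np.asarray(evpa_deg, dtype=float).copy()
    for i in range(1, len(evpa)):
        diff = evpa[i] - evpa[i - 1]
        evpa[i] -= 180.0 * np.round(diff / 180.0)
    return evpa

def evpa_rate(t_days, evpa_deg):
    """Median absolute rate of EVPA change (deg/day) on the unwrapped curve."""
    evpa = unwrap_evpa(evpa_deg)
    return np.median(np.abs(np.diff(evpa) / np.diff(t_days)))
```

    With dense monitoring cadence the per-step jumps stay well below 90°, which is exactly why cadence matters for detecting rotations: once consecutive jumps approach 90°, the ambiguity resolution becomes unreliable.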

  11. DISCRIMINATION OF NATURAL AND NON-POINT SOURCE EFFECTS FROM ANTHROPOGENIC EFFECTS AS REFLECTED IN BENTHIC STATE IN THREE ESTUARIES IN NEW ENGLAND

    EPA Science Inventory

    In order to protect estuarine resources, managers must be able to discern the effects of natural conditions and non-point source effects, and separate them from multiple anthropogenic point source effects. Our approach was to evaluate benthic community assemblages, riverine nitro...

  12. 40 CFR 409.13 - Effluent limitations guidelines representing the degree of effluent reduction attainable by the...

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS SUGAR PROCESSING POINT SOURCE CATEGORY Beet Sugar Processing... a point source where the sugar beet processing capacity of the point source does not exceed 1090 kkg... results, in whole or in part, from barometric condensing operations and any other beet sugar processing...

  13. 40 CFR 409.13 - Effluent limitations guidelines representing the degree of effluent reduction attainable by the...

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS SUGAR PROCESSING POINT SOURCE CATEGORY Beet Sugar Processing... a point source where the sugar beet processing capacity of the point source does not exceed 1090 kkg... results, in whole or in part, from barometric condensing operations and any other beet sugar processing...

  14. 40 CFR 409.13 - Effluent limitations guidelines representing the degree of effluent reduction attainable by the...

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS SUGAR PROCESSING POINT SOURCE CATEGORY Beet Sugar Processing... a point source where the sugar beet processing capacity of the point source does not exceed 1090 kkg... results, in whole or in part, from barometric condensing operations and any other beet sugar processing...

  15. 40 CFR 409.13 - Effluent limitations guidelines representing the degree of effluent reduction attainable by the...

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS SUGAR PROCESSING POINT SOURCE CATEGORY Beet Sugar Processing... a point source where the sugar beet processing capacity of the point source does not exceed 1090 kkg... results, in whole or in part, from barometric condensing operations and any other beet sugar processing...

  16. 40 CFR 409.13 - Effluent limitations guidelines representing the degree of effluent reduction attainable by the...

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS SUGAR PROCESSING POINT SOURCE CATEGORY Beet Sugar Processing... a point source where the sugar beet processing capacity of the point source does not exceed 1090 kkg... results, in whole or in part, from barometric condensing operations and any other beet sugar processing...

  17. Sloan Digital Sky Survey IV: Mapping the Milky Way, nearby galaxies, and the distant universe

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blanton, Michael R.; Bershady, Matthew A.; Abolfathi, Bela

    Here, we describe the Sloan Digital Sky Survey IV (SDSS-IV), a project encompassing three major spectroscopic programs. The Apache Point Observatory Galactic Evolution Experiment 2 (APOGEE-2) is observing hundreds of thousands of Milky Way stars at high resolution and high signal-to-noise ratios in the near-infrared. The Mapping Nearby Galaxies at Apache Point Observatory (MaNGA) survey is obtaining spatially resolved spectroscopy for thousands of nearby galaxies (median z ∼ 0.03). The extended Baryon Oscillation Spectroscopic Survey (eBOSS) is mapping the galaxy, quasar, and neutral gas distributions between z ∼ 0.6 and 3.5 to constrain cosmology using baryon acoustic oscillations, redshift space distortions, and the shape of the power spectrum. Within eBOSS, we are conducting two major subprograms: the SPectroscopic IDentification of eROSITA Sources (SPIDERS), investigating X-ray AGNs and galaxies in X-ray clusters, and the Time Domain Spectroscopic Survey (TDSS), obtaining spectra of variable sources. All programs use the 2.5 m Sloan Foundation Telescope at the Apache Point Observatory; observations there began in Summer 2014. APOGEE-2 also operates a second near-infrared spectrograph at the 2.5 m du Pont Telescope at Las Campanas Observatory, with observations beginning in early 2017. Observations at both facilities are scheduled to continue through 2020. In keeping with previous SDSS policy, SDSS-IV provides regularly scheduled public data releases; the first one, Data Release 13, was made available in 2016 July.

  18. Sloan Digital Sky Survey IV: Mapping the Milky Way, nearby galaxies, and the distant universe

    DOE PAGES

    Blanton, Michael R.; Bershady, Matthew A.; Abolfathi, Bela; ...

    2017-06-29

    Here, we describe the Sloan Digital Sky Survey IV (SDSS-IV), a project encompassing three major spectroscopic programs. The Apache Point Observatory Galactic Evolution Experiment 2 (APOGEE-2) is observing hundreds of thousands of Milky Way stars at high resolution and high signal-to-noise ratios in the near-infrared. The Mapping Nearby Galaxies at Apache Point Observatory (MaNGA) survey is obtaining spatially resolved spectroscopy for thousands of nearby galaxies (median z ∼ 0.03). The extended Baryon Oscillation Spectroscopic Survey (eBOSS) is mapping the galaxy, quasar, and neutral gas distributions between z ∼ 0.6 and 3.5 to constrain cosmology using baryon acoustic oscillations, redshift space distortions, and the shape of the power spectrum. Within eBOSS, we are conducting two major subprograms: the SPectroscopic IDentification of eROSITA Sources (SPIDERS), investigating X-ray AGNs and galaxies in X-ray clusters, and the Time Domain Spectroscopic Survey (TDSS), obtaining spectra of variable sources. All programs use the 2.5 m Sloan Foundation Telescope at the Apache Point Observatory; observations there began in Summer 2014. APOGEE-2 also operates a second near-infrared spectrograph at the 2.5 m du Pont Telescope at Las Campanas Observatory, with observations beginning in early 2017. Observations at both facilities are scheduled to continue through 2020. In keeping with previous SDSS policy, SDSS-IV provides regularly scheduled public data releases; the first one, Data Release 13, was made available in 2016 July.

  19. Sloan Digital Sky Survey IV: Mapping the Milky Way, Nearby Galaxies, and the Distant Universe

    NASA Astrophysics Data System (ADS)

    Blanton, Michael R.; Bershady, Matthew A.; Abolfathi, Bela; Albareti, Franco D.; Allende Prieto, Carlos; Almeida, Andres; Alonso-García, Javier; Anders, Friedrich; Anderson, Scott F.; Andrews, Brett; Aquino-Ortíz, Erik; Aragón-Salamanca, Alfonso; Argudo-Fernández, Maria; Armengaud, Eric; Aubourg, Eric; Avila-Reese, Vladimir; Badenes, Carles; Bailey, Stephen; Barger, Kathleen A.; Barrera-Ballesteros, Jorge; Bartosz, Curtis; Bates, Dominic; Baumgarten, Falk; Bautista, Julian; Beaton, Rachael; Beers, Timothy C.; Belfiore, Francesco; Bender, Chad F.; Berlind, Andreas A.; Bernardi, Mariangela; Beutler, Florian; Bird, Jonathan C.; Bizyaev, Dmitry; Blanc, Guillermo A.; Blomqvist, Michael; Bolton, Adam S.; Boquien, Médéric; Borissova, Jura; van den Bosch, Remco; Bovy, Jo; Brandt, William N.; Brinkmann, Jonathan; Brownstein, Joel R.; Bundy, Kevin; Burgasser, Adam J.; Burtin, Etienne; Busca, Nicolás G.; Cappellari, Michele; Delgado Carigi, Maria Leticia; Carlberg, Joleen K.; Carnero Rosell, Aurelio; Carrera, Ricardo; Chanover, Nancy J.; Cherinka, Brian; Cheung, Edmond; Gómez Maqueo Chew, Yilen; Chiappini, Cristina; Doohyun Choi, Peter; Chojnowski, Drew; Chuang, Chia-Hsun; Chung, Haeun; Cirolini, Rafael Fernando; Clerc, Nicolas; Cohen, Roger E.; Comparat, Johan; da Costa, Luiz; Cousinou, Marie-Claude; Covey, Kevin; Crane, Jeffrey D.; Croft, Rupert A. C.; Cruz-Gonzalez, Irene; Garrido Cuadra, Daniel; Cunha, Katia; Damke, Guillermo J.; Darling, Jeremy; Davies, Roger; Dawson, Kyle; de la Macorra, Axel; Dell'Agli, Flavia; De Lee, Nathan; Delubac, Timothée; Di Mille, Francesco; Diamond-Stanic, Aleks; Cano-Díaz, Mariana; Donor, John; Downes, Juan José; Drory, Niv; du Mas des Bourboux, Hélion; Duckworth, Christopher J.; Dwelly, Tom; Dyer, Jamie; Ebelke, Garrett; Eigenbrot, Arthur D.; Eisenstein, Daniel J.; Emsellem, Eric; Eracleous, Mike; Escoffier, Stephanie; Evans, Michael L.; Fan, Xiaohui; Fernández-Alvar, Emma; Fernandez-Trincado, J. 
G.; Feuillet, Diane K.; Finoguenov, Alexis; Fleming, Scott W.; Font-Ribera, Andreu; Fredrickson, Alexander; Freischlad, Gordon; Frinchaboy, Peter M.; Fuentes, Carla E.; Galbany, Lluís; Garcia-Dias, R.; García-Hernández, D. A.; Gaulme, Patrick; Geisler, Doug; Gelfand, Joseph D.; Gil-Marín, Héctor; Gillespie, Bruce A.; Goddard, Daniel; Gonzalez-Perez, Violeta; Grabowski, Kathleen; Green, Paul J.; Grier, Catherine J.; Gunn, James E.; Guo, Hong; Guy, Julien; Hagen, Alex; Hahn, ChangHoon; Hall, Matthew; Harding, Paul; Hasselquist, Sten; Hawley, Suzanne L.; Hearty, Fred; Gonzalez Hernández, Jonay I.; Ho, Shirley; Hogg, David W.; Holley-Bockelmann, Kelly; Holtzman, Jon A.; Holzer, Parker H.; Huehnerhoff, Joseph; Hutchinson, Timothy A.; Hwang, Ho Seong; Ibarra-Medel, Héctor J.; da Silva Ilha, Gabriele; Ivans, Inese I.; Ivory, KeShawn; Jackson, Kelly; Jensen, Trey W.; Johnson, Jennifer A.; Jones, Amy; Jönsson, Henrik; Jullo, Eric; Kamble, Vikrant; Kinemuchi, Karen; Kirkby, David; Kitaura, Francisco-Shu; Klaene, Mark; Knapp, Gillian R.; Kneib, Jean-Paul; Kollmeier, Juna A.; Lacerna, Ivan; Lane, Richard R.; Lang, Dustin; Law, David R.; Lazarz, Daniel; Lee, Youngbae; Le Goff, Jean-Marc; Liang, Fu-Heng; Li, Cheng; Li, Hongyu; Lian, Jianhui; Lima, Marcos; Lin, Lihwai; Lin, Yen-Ting; Bertran de Lis, Sara; Liu, Chao; de Icaza Lizaola, Miguel Angel C.; Long, Dan; Lucatello, Sara; Lundgren, Britt; MacDonald, Nicholas K.; Deconto Machado, Alice; MacLeod, Chelsea L.; Mahadevan, Suvrath; Geimba Maia, Marcio Antonio; Maiolino, Roberto; Majewski, Steven R.; Malanushenko, Elena; Malanushenko, Viktor; Manchado, Arturo; Mao, Shude; Maraston, Claudia; Marques-Chaves, Rui; Masseron, Thomas; Masters, Karen L.; McBride, Cameron K.; McDermid, Richard M.; McGrath, Brianne; McGreer, Ian D.; Medina Peña, Nicolás; Melendez, Matthew; Merloni, Andrea; Merrifield, Michael R.; Meszaros, Szabolcs; Meza, Andres; Minchev, Ivan; Minniti, Dante; Miyaji, Takamitsu; More, Surhud; Mulchaey, John; 
Müller-Sánchez, Francisco; Muna, Demitri; Munoz, Ricardo R.; Myers, Adam D.; Nair, Preethi; Nandra, Kirpal; Correa do Nascimento, Janaina; Negrete, Alenka; Ness, Melissa; Newman, Jeffrey A.; Nichol, Robert C.; Nidever, David L.; Nitschelm, Christian; Ntelis, Pierros; O'Connell, Julia E.; Oelkers, Ryan J.; Oravetz, Audrey; Oravetz, Daniel; Pace, Zach; Padilla, Nelson; Palanque-Delabrouille, Nathalie; Alonso Palicio, Pedro; Pan, Kaike; Parejko, John K.; Parikh, Taniya; Pâris, Isabelle; Park, Changbom; Patten, Alim Y.; Peirani, Sebastien; Pellejero-Ibanez, Marcos; Penny, Samantha; Percival, Will J.; Perez-Fournon, Ismael; Petitjean, Patrick; Pieri, Matthew M.; Pinsonneault, Marc; Pisani, Alice; Poleski, Radosław; Prada, Francisco; Prakash, Abhishek; Queiroz, Anna Bárbara de Andrade; Raddick, M. Jordan; Raichoor, Anand; Barboza Rembold, Sandro; Richstein, Hannah; Riffel, Rogemar A.; Riffel, Rogério; Rix, Hans-Walter; Robin, Annie C.; Rockosi, Constance M.; Rodríguez-Torres, Sergio; Roman-Lopes, A.; Román-Zúñiga, Carlos; Rosado, Margarita; Ross, Ashley J.; Rossi, Graziano; Ruan, John; Ruggeri, Rossana; Rykoff, Eli S.; Salazar-Albornoz, Salvador; Salvato, Mara; Sánchez, Ariel G.; Aguado, D. S.; Sánchez-Gallego, José R.; Santana, Felipe A.; Santiago, Basílio Xavier; Sayres, Conor; Schiavon, Ricardo P.; da Silva Schimoia, Jaderson; Schlafly, Edward F.; Schlegel, David J.; Schneider, Donald P.; Schultheis, Mathias; Schuster, William J.; Schwope, Axel; Seo, Hee-Jong; Shao, Zhengyi; Shen, Shiyin; Shetrone, Matthew; Shull, Michael; Simon, Joshua D.; Skinner, Danielle; Skrutskie, M. 
F.; Slosar, Anže; Smith, Verne V.; Sobeck, Jennifer S.; Sobreira, Flavia; Somers, Garrett; Souto, Diogo; Stark, David V.; Stassun, Keivan; Stauffer, Fritz; Steinmetz, Matthias; Storchi-Bergmann, Thaisa; Streblyanska, Alina; Stringfellow, Guy S.; Suárez, Genaro; Sun, Jing; Suzuki, Nao; Szigeti, Laszlo; Taghizadeh-Popp, Manuchehr; Tang, Baitian; Tao, Charling; Tayar, Jamie; Tembe, Mita; Teske, Johanna; Thakar, Aniruddha R.; Thomas, Daniel; Thompson, Benjamin A.; Tinker, Jeremy L.; Tissera, Patricia; Tojeiro, Rita; Hernandez Toledo, Hector; de la Torre, Sylvain; Tremonti, Christy; Troup, Nicholas W.; Valenzuela, Octavio; Martinez Valpuesta, Inma; Vargas-González, Jaime; Vargas-Magaña, Mariana; Vazquez, Jose Alberto; Villanova, Sandro; Vivek, M.; Vogt, Nicole; Wake, David; Walterbos, Rene; Wang, Yuting; Weaver, Benjamin Alan; Weijmans, Anne-Marie; Weinberg, David H.; Westfall, Kyle B.; Whelan, David G.; Wild, Vivienne; Wilson, John; Wood-Vasey, W. M.; Wylezalek, Dominika; Xiao, Ting; Yan, Renbin; Yang, Meng; Ybarra, Jason E.; Yèche, Christophe; Zakamska, Nadia; Zamora, Olga; Zarrouk, Pauline; Zasowski, Gail; Zhang, Kai; Zhao, Gong-Bo; Zheng, Zheng; Zheng, Zheng; Zhou, Xu; Zhou, Zhi-Min; Zhu, Guangtun B.; Zoccali, Manuela; Zou, Hu

    2017-07-01

    We describe the Sloan Digital Sky Survey IV (SDSS-IV), a project encompassing three major spectroscopic programs. The Apache Point Observatory Galactic Evolution Experiment 2 (APOGEE-2) is observing hundreds of thousands of Milky Way stars at high resolution and high signal-to-noise ratios in the near-infrared. The Mapping Nearby Galaxies at Apache Point Observatory (MaNGA) survey is obtaining spatially resolved spectroscopy for thousands of nearby galaxies (median z ∼ 0.03). The extended Baryon Oscillation Spectroscopic Survey (eBOSS) is mapping the galaxy, quasar, and neutral gas distributions between z ∼ 0.6 and 3.5 to constrain cosmology using baryon acoustic oscillations, redshift space distortions, and the shape of the power spectrum. Within eBOSS, we are conducting two major subprograms: the SPectroscopic IDentification of eROSITA Sources (SPIDERS), investigating X-ray AGNs and galaxies in X-ray clusters, and the Time Domain Spectroscopic Survey (TDSS), obtaining spectra of variable sources. All programs use the 2.5 m Sloan Foundation Telescope at the Apache Point Observatory; observations there began in Summer 2014. APOGEE-2 also operates a second near-infrared spectrograph at the 2.5 m du Pont Telescope at Las Campanas Observatory, with observations beginning in early 2017. Observations at both facilities are scheduled to continue through 2020. In keeping with previous SDSS policy, SDSS-IV provides regularly scheduled public data releases; the first one, Data Release 13, was made available in 2016 July.

  20. Occurrence of Surface Water Contaminations: An Overview

    NASA Astrophysics Data System (ADS)

    Shahabudin, M. M.; Musa, S.

    2018-04-01

    Water is a part of our lives and is needed by all organisms. Over time, growing human demand has degraded water quality. Surface water is contaminated in various ways, through both point sources and non-point sources. Point sources are discharges that can be traced to a distinct origin, such as drains or a factory, whereas non-point pollution always arrives as a mixture of pollutant elements. This paper reviews the occurrence of such contamination and the effects observed around us. Pollutants of natural or anthropogenic origin, such as nutrients, pathogens, and chemical elements, contribute to contamination. Most effects of contaminated surface water fall on public health as well as on the environment.

  1. Extreme Ultraviolet Explorer observations of the magnetic cataclysmic variable RE 1938-461

    NASA Technical Reports Server (NTRS)

    Warren, John K.; Vallerga, John V.; Mauche, Christopher W.; Mukai, Koji; Siegmund, Oswald H. W.

    1993-01-01

    The magnetic cataclysmic variable RE 1938-461 was observed by the Extreme Ultraviolet Explorer (EUVE) Deep Survey instrument on 1992 July 8-9 during in-orbit calibration. It was detected in the Lexan/boron (65-190 A) band, with a quiescent count rate of 0.0062 +/- 0.0017/s, and was not detected in the aluminum/carbon (160-360 A) band. The Lexan/boron count rate is lower than the corresponding ROSAT wide-field camera Lexan/boron count rate. This is consistent with the fact that the source was in a low state during an optical observation performed just after the EUVE observation, whereas it was in an optical high state during the ROSAT observation. The quiescent count rates are consistent with a virtual cessation of accretion. Two transient events lasting about 1 hr occurred during the Lexan/boron pointing, the second at a count rate of 0.050 +/- 0.006/s. This appears to be the first detection of an EUV transient during the low state of a magnetic cataclysmic variable. We propose two possible explanations for the transient events.

  2. Reproducibility and variability of quantitative magnetic resonance imaging markers in cerebral small vessel disease

    PubMed Central

    De Guio, François; Jouvent, Eric; Biessels, Geert Jan; Black, Sandra E; Brayne, Carol; Chen, Christopher; Cordonnier, Charlotte; De Leeuw, Frank-Eric; Dichgans, Martin; Doubal, Fergus; Duering, Marco; Dufouil, Carole; Duzel, Emrah; Fazekas, Franz; Hachinski, Vladimir; Ikram, M Arfan; Linn, Jennifer; Matthews, Paul M; Mazoyer, Bernard; Mok, Vincent; Norrving, Bo; O’Brien, John T; Pantoni, Leonardo; Ropele, Stefan; Sachdev, Perminder; Schmidt, Reinhold; Seshadri, Sudha; Smith, Eric E; Sposato, Luciano A; Stephan, Blossom; Swartz, Richard H; Tzourio, Christophe; van Buchem, Mark; van der Lugt, Aad; van Oostenbrugge, Robert; Vernooij, Meike W; Viswanathan, Anand; Werring, David; Wollenweber, Frank; Wardlaw, Joanna M

    2016-01-01

    Brain imaging is essential for the diagnosis and characterization of cerebral small vessel disease. Several magnetic resonance imaging markers have therefore emerged, providing new information on the diagnosis, progression, and mechanisms of small vessel disease. Yet, the reproducibility of these small vessel disease markers has received little attention despite being widely used in cross-sectional and longitudinal studies. This review focuses on the main small vessel disease-related markers on magnetic resonance imaging including: white matter hyperintensities, lacunes, dilated perivascular spaces, microbleeds, and brain volume. The aim is to summarize, for each marker, what is currently known about: (1) its reproducibility in studies with a scan–rescan procedure either in single or multicenter settings; (2) the acquisition-related sources of variability; and, (3) the techniques used to minimize this variability. Based on the results, we discuss technical and other challenges that need to be overcome in order for these markers to be reliably used as outcome measures in future clinical trials. We also highlight the key points that need to be considered when designing multicenter magnetic resonance imaging studies of small vessel disease. PMID:27170700

  3. Dipole sources of the human alpha rhythm.

    PubMed

    Rodin, E A; Rodin, M J

    1995-01-01

    Dipole sources were investigated in 22 normal subjects with a variety of strategies available through the BESA program. When all the data were summed, one regional source, located near the midline in the basal portions of the occipital lobe, explained 92% of the variance. Two regional sources, initially constrained for symmetry but subsequently freed from constraint, were also placed in the occipital regions near the midline and reduced the residual variance to 4%. Pooled data, however, obscure the marked individual differences, especially in regard to lateralization. In the individual case the major source was also always in one occipital area, but its location, especially its degree of separation from the midline, depended upon alpha distribution and the strategy used in the workup of the data. The orientation of the major component of the regional sources was usually in the posterior-anterior direction, fairly parallel to the midline, while the other one pointed to the upper convexity. Because of the considerable variability of the alpha rhythm across subjects, and even within the same individual, a model which requires symmetry constraints is not optimal for all instances, even when the constraints are lifted thereafter. The study demonstrated the feasibility of distinguishing predominantly mesial sources from those which are bihemispheric with more lateral origins, but several different models may have to be used to reach the most realistic conclusions.

  4. 2011 Radioactive Materials Usage Survey for Unmonitored Point Sources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sturgeon, Richard W.

    This report provides the results of the 2011 Radioactive Materials Usage Survey for Unmonitored Point Sources (RMUS), which was updated by the Environmental Protection (ENV) Division's Environmental Stewardship (ES) at Los Alamos National Laboratory (LANL). ES classifies LANL emission sources into one of four Tiers, based on the potential effective dose equivalent (PEDE) calculated for each point source. Detailed descriptions of these tiers are provided in Section 3. The usage survey is conducted annually; in odd-numbered years the survey addresses all monitored and unmonitored point sources and in even-numbered years it addresses all Tier III and various selected other sources. This graded approach was designed to ensure that the appropriate emphasis is placed on point sources that have higher potential emissions to the environment. For calendar year (CY) 2011, ES has divided the usage survey into two distinct reports, one covering the monitored point sources (to be completed later this year) and this report covering all unmonitored point sources. This usage survey includes the following release points: (1) all unmonitored sources identified in the 2010 usage survey, (2) any new release points identified through the new project review (NPR) process, and (3) other release points as designated by the Rad-NESHAP Team Leader. Data for all unmonitored point sources at LANL is stored in the survey files at ES. LANL uses this survey data to help demonstrate compliance with Clean Air Act radioactive air emissions regulations (40 CFR 61, Subpart H). The remainder of this introduction provides a brief description of the information contained in each section. Section 2 of this report describes the methods that were employed for gathering usage survey data and for calculating usage, emissions, and dose for these point sources. It also references the appropriate ES procedures for further information. 
Section 3 describes the RMUS and explains how the survey results are organized. The RMUS Interview Form with the attached RMUS Process Form(s) provides the radioactive materials survey data by technical area (TA) and building number. The survey data for each release point includes information such as: exhaust stack identification number, room number, radioactive material source type (i.e., potential source or future potential source of air emissions), radionuclide, usage (in curies) and usage basis, physical state (gas, liquid, particulate, solid, or custom), release fraction (from Appendix D to 40 CFR 61, Subpart H), and process descriptions. In addition, the interview form also calculates emissions (in curies), lists mrem/Ci factors, calculates PEDEs, and states the location of the critical receptor for that release point. [The critical receptor is the maximum exposed off-site member of the public, specific to each individual facility.] Each of these data fields is described in this section. The Tier classification of release points, which was first introduced with the 1999 usage survey, is also described in detail in this section. Section 4 includes a brief discussion of the dose estimate methodology, and includes a discussion of several release points of particular interest in the CY 2011 usage survey report. It also includes a table of the calculated PEDEs for each release point at its critical receptor. Section 5 describes ES's approach to Quality Assurance (QA) for the usage survey. Satisfactory completion of the survey requires that team members responsible for Rad-NESHAP (National Emissions Standard for Hazardous Air Pollutants) compliance accurately collect and process several types of information, including radioactive materials usage data, process information, and supporting information. 
They must also perform and document the QA reviews outlined in Section 5.2.6 (Process Verification and Peer Review) of ES-RN, 'Quality Assurance Project Plan for the Rad-NESHAP Compliance Project' to verify that all information is complete and correct.

  5. Normal aging reduces motor synergies in manual pointing.

    PubMed

    Verrel, Julius; Lövdén, Martin; Lindenberger, Ulman

    2012-01-01

    Depending upon its organization, movement variability may reflect poor or flexible control of a motor task. We studied adult age-related differences in the structure of postural variability in manual pointing using the uncontrolled manifold (UCM) method. Participants from 2 age groups (younger: 20-30 years; older: 70-80 years; 12 subjects per group) completed a total of 120 pointing trials to 2 different targets presented according to 3 schedules: blocked, alternating, and random. The age groups were similar with respect to basic kinematic variables, end point precision, as well as the accuracy of the biomechanical forward model of the arm. Following the uncontrolled manifold approach, goal-equivalent and nongoal-equivalent components of postural variability (goal-equivalent variability [GEV] and nongoal-equivalent variability [NGEV]) were determined for 5 time points of the movements (start, 10%, 50%, 90%, and end) and used to define a synergy index reflecting the flexibility/stability aspect of motor synergies. Toward the end of the movement, younger adults showed higher synergy indexes than older adults. Effects of target schedule were not reliable. We conclude that normal aging alters the organization of common multidegree-of-freedom movements, with older adults making less flexible use of motor abundance than younger adults. Copyright © 2012 Elsevier Inc. All rights reserved.
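    The GEV/NGEV decomposition in the uncontrolled manifold approach rests on projecting joint-space deviations onto the null space of the task Jacobian (directions that leave the pointing goal unchanged) and onto its orthogonal complement. A minimal numerical sketch under simplifying assumptions (a linearized model; function names and the exact synergy-index normalization are illustrative, not the authors' code):

```python
import numpy as np

def ucm_decompose(deviations, jacobian):
    """Split per-trial joint-space deviations into variance per dimension
    lying in the Jacobian's null space (goal-equivalent, GEV) and in its
    orthogonal complement (non-goal-equivalent, NGEV)."""
    # Orthonormal basis of the null space = goal-equivalent directions.
    _, s, vt = np.linalg.svd(jacobian)
    rank = int(np.sum(s > 1e-10))
    null_basis = vt[rank:].T                 # n_joints x (n_joints - rank)
    dev = np.asarray(deviations, dtype=float)  # n_trials x n_joints
    par = dev @ null_basis @ null_basis.T    # projection onto the UCM
    perp = dev - par                         # component affecting the task
    d_ucm = dev.shape[1] - rank
    gev = np.sum(par ** 2) / (d_ucm * len(dev))
    ngev = np.sum(perp ** 2) / (rank * len(dev))
    return gev, ngev

def synergy_index(gev, ngev, d_ucm, d_orth):
    """Normalized GEV-NGEV difference: positive values indicate variance is
    preferentially channelled into goal-equivalent directions."""
    v_tot = (gev * d_ucm + ngev * d_orth) / (d_ucm + d_orth)
    return (gev - ngev) / v_tot
```

    A higher index toward movement end, as reported for the younger group, means more of the postural variability is "flexible" (task-irrelevant) rather than degrading end point precision.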

  6. The Chandra Source Catalog 2.0: Data Processing Pipelines

    NASA Astrophysics Data System (ADS)

    Miller, Joseph; Allen, Christopher E.; Budynkiewicz, Jamie A.; Gibbs, Danny G., II; Paxson, Charles; Chen, Judy C.; Anderson, Craig S.; Burke, Douglas; Civano, Francesca Maria; D'Abrusco, Raffaele; Doe, Stephen M.; Evans, Ian N.; Evans, Janet D.; Fabbiano, Giuseppina; Glotfelty, Kenny J.; Graessle, Dale E.; Grier, John D.; Hain, Roger; Hall, Diane M.; Harbo, Peter N.; Houck, John C.; Lauer, Jennifer L.; Laurino, Omar; Lee, Nicholas P.; Martínez-Galarza, Juan Rafael; McCollough, Michael L.; McDowell, Jonathan C.; McLaughlin, Warren; Morgan, Douglas L.; Mossman, Amy E.; Nguyen, Dan T.; Nichols, Joy S.; Nowak, Michael A.; Plummer, David A.; Primini, Francis Anthony; Rots, Arnold H.; Siemiginowska, Aneta; Sundheim, Beth A.; Tibbetts, Michael; Van Stone, David W.; Zografou, Panagoula

    2018-01-01

    With the construction of the Second Chandra Source Catalog (CSC2.0) came new requirements and new techniques to create a software system that can process 10,000 observations and identify nearly 320,000 point and compact X-ray sources. A new series of processing pipelines has been developed to allow deeper, more complete exploration of the Chandra observations. In CSC1.0 there were 4 general pipelines, whereas in CSC2.0 there are 20 data processing pipelines organized into 3 distinct phases of operation: detection, master matching, and source property characterization. With CSC2.0, observations within one arcminute of each other are stacked before searching for sources. The detection phase of processing combines the data, adjusts for shifts in fine astrometry, detects sources, and assesses the likelihood that sources are real. During the master source phase, detections across stacks of observations are analyzed for coverage of the same source to produce a master source. Finally, in the source property phase, each source is characterized with aperture photometry, spectrometry, variability, and other properties at the observation, stack, and master levels over several energy bands. We present how these pipelines were constructed and the challenges we faced in processing data ranging from virtually no counts to millions of counts, how the pipelines were tuned to work optimally on a computational cluster, and how we ensured the data produced were correct through various quality assurance steps. This work has been supported by NASA under contract NAS 8-03060 to the Smithsonian Astrophysical Observatory for operation of the Chandra X-ray Center.
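    The stacking step mentioned in this record, grouping observations whose pointings lie within one arcminute of each other, can be illustrated with a simple greedy grouping by angular separation. This is only a sketch of the idea; the actual CSC2.0 pipeline logic is more elaborate, and the function names are ours:

```python
import math

def ang_sep_arcmin(ra1, dec1, ra2, dec2):
    """Angular separation in arcminutes between two sky positions (degrees)."""
    r = math.radians
    cos_d = (math.sin(r(dec1)) * math.sin(r(dec2)) +
             math.cos(r(dec1)) * math.cos(r(dec2)) * math.cos(r(ra1 - ra2)))
    return math.degrees(math.acos(min(1.0, max(-1.0, cos_d)))) * 60.0

def stack_observations(pointings, radius_arcmin=1.0):
    """Greedy grouping: assign each (ra, dec) pointing to the first stack
    whose seed lies within the radius, otherwise start a new stack."""
    stacks = []
    for p in pointings:
        for s in stacks:
            if ang_sep_arcmin(p[0], p[1], s[0][0], s[0][1]) <= radius_arcmin:
                s.append(p)
                break
        else:
            stacks.append([p])
    return stacks
```

    Stacking before detection is what allows sources "with virtually no counts" in a single observation to accumulate enough signal across the stack to be detected.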

  7. YSOVAR: MID-INFRARED VARIABILITY OF YOUNG STELLAR OBJECTS AND THEIR DISKS IN THE CLUSTER IRAS 20050+2720

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Poppenhaeger, K.; Wolk, S. J.; Hora, J. L.

    2015-10-15

    We present a time-variability study of young stellar objects (YSOs) in the cluster IRAS 20050+2720, performed at 3.6 and 4.5 μm with the Spitzer Space Telescope; this study is part of the Young Stellar Object VARiability (YSOVAR) project. We have collected light curves for 181 cluster members over 60 days. We find a high variability fraction among embedded cluster members of ca. 70%, whereas young stars without a detectable disk display variability less often (in ca. 50% of the cases) and with lower amplitudes. We detect periodic variability for 33 sources with periods primarily in the range of 2–6 days. Practically all embedded periodic sources display additional variability on top of their periodicity. Furthermore, we analyze the slopes of the tracks that our sources span in the color–magnitude diagram (CMD). We find that sources with long variability time scales tend to display CMD slopes that are at least partially influenced by accretion processes, while sources with short variability timescales tend to display extinction-dominated slopes. We find a tentative trend of X-ray detected cluster members to vary on longer timescales than the X-ray undetected members.

  8. A new dust source map of Central Asia derived from MODIS Terra/Aqua data using dust enhancement techniques

    NASA Astrophysics Data System (ADS)

    Nobakht, Mohamad; Shahgedanova, Maria; White, Kevin

    2017-04-01

    Central Asian deserts are a significant source of dust in the middle latitudes, where economic activity and health of millions of people are affected by dust storms. Detailed knowledge of sources of dust, controls over their activity, seasonality and atmospheric pathways are of crucial importance but to date, these data are limited. This paper presents a detailed database of sources of dust emissions in Central Asia, from western China to the Caspian Sea, obtained from the analysis of the Moderate Resolution Imaging Spectroradiometer (MODIS) data between 2003 and 2012. A dust enhancement algorithm was employed to obtain two composite images per day at 1 km resolution from MODIS Terra/Aqua acquisitions, from which dust point sources (DPS) were detected by visual analysis and recorded in a database together with meteorological variables at each DPS location. Spatial analysis of DPS has revealed several active source regions, including some which were not widely discussed in literature before (e.g. Northern Afghanistan sources, Betpak-Dala region in western Kazakhstan). Investigation of land surface characteristics and meteorological conditions at each source region revealed mechanisms for the formation of dust sources, including post-fire wind erosion (e.g. Lake Balkhash basin) and rapid desertification (e.g. the Aral Sea). Different seasonal patterns of dust emissions were observed as well as inter-annual trends. The most notable feature was an increase in dust activity in the Aral Kum.
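    The abstract does not specify the dust enhancement algorithm used, but a common ingredient of such MODIS-based techniques is the 11-12 μm brightness-temperature difference, which is depressed (often negative) over airborne mineral dust, unlike over water or ice cloud. A hedged sketch of that idea only; the thresholds and function name are illustrative assumptions, not the authors' method:

```python
import numpy as np

def dust_enhancement(bt11, bt12, ref_vis, btd_thresh=0.0, vis_thresh=0.25):
    """Flag likely dust pixels: negative 11-12 um brightness-temperature
    difference combined with modest visible reflectance (bright water/ice
    cloud is rejected by the reflectance test). Inputs are per-pixel arrays;
    brightness temperatures in kelvin, reflectance dimensionless."""
    btd = np.asarray(bt11) - np.asarray(bt12)
    return (btd < btd_thresh) & (np.asarray(ref_vis) < vis_thresh)
```

    In practice such a mask would be one layer of a multi-band composite, with the flagged pixels then inspected visually to log dust point sources (DPS) as described above.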

  9. Cardiorespiratory interactions in humans and animals: Rhythms for life.

    PubMed

    Elstad, Maja; O'Callaghan, Erin L; Smith, Alexander J; Ben-Tal, A; Ramchandra, Rohit

    2018-03-09

    The cardiorespiratory system exhibits oscillations from a range of sources. One of the most studied oscillations is heart rate variability, which is thought to be beneficial and can serve as an index of a healthy cardiovascular system. Heart rate variability is dampened in many diseases, including depression, autoimmune diseases, hypertension and heart failure. Thus, understanding the interactions that lead to heart rate variability, and its physiological role, could help with the prevention, diagnosis and treatment of cardiovascular diseases. In this review we consider three types of cardiorespiratory interactions: Respiratory Sinus Arrhythmia - variability in heart rate at the frequency of breathing; Cardioventilatory Coupling - synchronization between the heart beat and the onset of inspiration; and Respiratory Stroke Volume Synchronization - a constant phase difference between the right and the left stroke volumes over one respiratory cycle. While the exact physiological role of these oscillations continues to be debated, the redundancies in the mechanisms responsible for their generation and their strong evolutionary conservation point to the importance of cardiorespiratory interactions. The putative mechanisms driving cardiorespiratory oscillations, as well as the physiological significance of these oscillations, are reviewed. We suggest that cardiorespiratory interactions have the capacity both to dampen the variability in systemic blood flow and to improve the efficiency of work done by the heart while maintaining physiological levels of arterial CO2. Given that reduction in variability is a prognostic indicator of disease, we argue that restoration of this variability via pharmaceutical or device-based approaches may be beneficial in prolonging life.

  10. Organic Matter Remineralization Predominates Phosphorus Cycling in the Mid-Bay Sediments in the Chesapeake Bay

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Joshi, Sunendra R.; Kukkadapu, Ravi K.; Burdige, David J.

    2015-05-19

    The Chesapeake Bay, the largest and most productive estuary in the US, suffers from varying degrees of water quality problems fueled by both point and non-point nutrient sources. Restoration of the bay is complicated by the multitude of nutrient sources, their variable inputs and hydrological conditions, and complex interacting factors including climate forcing. These complexities not only restrict the formulation of effective restoration plans but also open up debates on accountability for nutrient loading. A detailed understanding of sediment phosphorus (P) dynamics enables one to identify the exchange of dissolved constituents across the sediment-water interface and helps to better constrain the mechanisms and processes controlling the coupling between the sediments and the overlying waters. Here we used phosphate oxygen isotope ratios (δ18Op) in concert with sediment chemistry, XRD, and Mössbauer spectroscopy on sediment retrieved from an organic-rich, sulfidic site in the meso-haline portion of the mid-bay to identify the sources and pathways of sedimentary P cycling and to infer potential feedback effects on bottom-water hypoxia and surface-water eutrophication. Isotope data indicate that the regeneration of inorganic P from organic matter degradation (remineralization) is the predominant, if not sole, pathway for authigenic P precipitation in the mid-bay sediments. We interpret that the excess inorganic P generated by remineralization should have overwhelmed any bottom-water and/or pore-water P derived from other sources or biogeochemical processes and exceeded saturation with respect to authigenic P precipitation. This is the first study to identify the predominance of the remineralization pathway over the remobilization (coupled Fe-P cycling) pathway in the Chesapeake Bay. Therefore, these results are expected to have significant implications for the current understanding of P cycling and benthic-pelagic coupling in the bay, particularly regarding the source and pathway of P that sustains hypoxia and supports phytoplankton growth in the surface water.

  11. The near-infrared counterpart of a variable galactic plane radio source

    NASA Technical Reports Server (NTRS)

    Margon, Bruce; Phillips, Andrew C.; Ciardullo, Robin; Jacoby, George H.

    1992-01-01

    A near-infrared counterpart to the highly variable, unresolved galactic plane radio source GT 0116+622 is identified. This source is of particular interest, as it has previously been suggested to be the counterpart of the gamma-ray source Cas gamma-1. The present NIR and red images detect a faint, spatially extended (3 arcsec FWHM), very red object coincident with the radio position. There is complex spatial structure, which may be due in part to an unrelated superposed foreground object. Observations on multiple nights show no evidence for flux variability, despite the high-amplitude variability on a timescale of days reported for the radio source. The data are consistent with an interpretation of GT 0116+622 as an unusually variable, obscured active galaxy at a distance of several hundred megaparsecs, although more exotic, and in particular galactic, interpretations cannot yet be ruled out. If the object is extragalactic, the previously suggested identification with the gamma-ray source would seem unlikely.

  12. FRB as products of accretion disc funnels

    NASA Astrophysics Data System (ADS)

    Katz, J. I.

    2017-10-01

    The repeating FRB 121102, the only fast radio burst (FRB) with an accurately determined position, is associated with a variable persistent radio source. I suggest that FRBs originate in the accretion disc funnels of black holes. Narrowly collimated radiation is emitted along the wandering instantaneous angular momentum axis of accreted matter. This emission is observed as a fast radio burst when it sweeps across the direction to the observer. In this model, in contrast to neutron star (pulsar, RRAT or SGR) models, repeating FRBs do not have underlying periodicity and are co-located with persistent radio sources resulting from their off-axis emission. The model is analogous, on smaller spatial scales and at lower masses, lower accretion rates and shorter temporal scales, to an active galactic nucleus (AGN), with FRBs corresponding to blazars, in which the jets point towards us. The small inferred black hole masses imply that FRBs are not associated with galactic nuclei.

  13. Study of the Correlations and the MAXI Hardness Ratio between the Anomalous and Normal Low States of LMC X-3

    NASA Astrophysics Data System (ADS)

    Torpin, Trevor; Boyd, Patricia T.; Smale, Alan P.

    2015-01-01

    The bright, unusual black-hole X-ray binary LMC X-3 has been monitored virtually continuously by the Japanese MAXI X-ray All-Sky Monitor aboard the International Space Station (Matsuoka, et al., PASJ, 2009) from August 2009 to the present. Comparison with RXTE PCA and ASM light curves during the ~2.33-year period of overlap demonstrates that, despite slight differences in energy-band boundaries, both the ASM and MAXI faithfully reproduce the characteristics of the high-amplitude, nonperiodic long-term variability, on the order of 100-300 days, clearly seen in the more sensitive PCA monitoring. The mechanism for this variability at a timescale many times longer than the 1.7-day orbital period is still unknown. Models to explain the long-term variability invoke mechanisms such as changes in mass transfer rate and/or a precessing warped accretion disk. Observations of LMC X-3 have not definitively determined whether wind accretion or Roche-lobe overflow is the driver of the long-term variability. Recent MAXI monitoring of LMC X-3 includes excellent coverage of a rare anomalous low state (ALS), in which the X-ray source cannot be distinguished from the background, as well as several normal low states, in which the source count rate passes smoothly through a low yet detectable value. Pointed Swift XRT and UVOT observations also sample this ALS and one normal low state well. We combine these data sets to study the correlations between the wavelength regimes observed during the ALS versus the normal low state. We also examine the behavior of the X-ray hardness ratios using XRT and MAXI monitoring data during the ALS versus the normal low state.

  14. Evaluation of energy savings potential of variable refrigerant flow (VRF) from variable air volume (VAV) in the U.S. climate locations

    DOE PAGES

    Kim, Dongsu; Cox, Sam J.; Cho, Heejin; ...

    2017-05-22

    Variable refrigerant flow (VRF) systems are known for their high energy performance and thus can improve energy efficiency in both residential and commercial buildings. The energy savings potential of this system has been demonstrated in several studies by comparing the system performance with conventional HVAC systems such as rooftop variable air volume systems (RTU-VAV) and central chiller and boiler systems. This paper evaluates the performance of VRF and RTU-VAV systems in a simulation environment using widely-accepted whole building energy modeling software, EnergyPlus. A medium office prototype building model, developed by the U.S. Department of Energy (DOE), is used to assess the performance of VRF and RTU-VAV systems. Each system is placed in 16 different locations, representing all U.S. climate zones, to evaluate the performance variations. Both models are compliant with the minimum energy code requirements prescribed in ASHRAE standard 90.1-2010 — energy standard for buildings except low-rise residential buildings. Finally, a comparison study between the simulation results of VRF and RTU-VAV models is made to demonstrate the energy savings potential of VRF systems. The simulation results show that the VRF systems would save around 15–42% in HVAC site energy use and 18–33% in source energy use compared to the RTU-VAV systems. In addition, calculated results for annual HVAC cost savings point out that hot and mild climates show higher percentage cost savings for the VRF systems than cold climates, mainly due to the differences in electricity and gas use for heating sources.

  16. Hydrological parameter estimations from a conservative tracer test with variable-density effects at the Boise Hydrogeophysical Research Site

    NASA Astrophysics Data System (ADS)

    Dafflon, B.; Barrash, W.; Cardiff, M.; Johnson, T. C.

    2011-12-01

    Reliable predictions of groundwater flow and solute transport require an estimation of the detailed distribution of the parameters (e.g., hydraulic conductivity, effective porosity) controlling these processes. However, such parameters are difficult to estimate because of the inaccessibility and complexity of the subsurface. In this regard, developments in parameter estimation techniques and investigations of field experiments are still challenging and necessary to improve our understanding and the prediction of hydrological processes. Here we analyze a conservative tracer test conducted at the Boise Hydrogeophysical Research Site in 2001 in a heterogeneous unconfined fluvial aquifer. Some relevant characteristics of this test include: variable-density (sinking) effects because of the injection concentration of the bromide tracer, the relatively small size of the experiment, and the availability of various sources of geophysical and hydrological information. The information contained in this experiment is evaluated through several parameter estimation approaches, including a grid-search-based strategy, stochastic simulation of hydrological property distributions, and deterministic inversion using regularization and pilot-point techniques. This allows us to investigate hydraulic conductivity and effective porosity distributions and to compare the effects of assumptions from several methods and parameterizations. Our results provide new insights into the understanding of variable-density transport processes and the hydrological relevance of incorporating various sources of information in parameter estimation approaches. In particular, the variable-density effect and the effective porosity distribution, as well as their coupling with the hydraulic conductivity structure, are seen to be significant in the transport process. The results also show that assumed prior information can strongly influence the estimated distributions of hydrological properties.
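The grid-search-based strategy mentioned above can be sketched as an exhaustive forward-model evaluation over a parameter grid. The forward model, parameter names (hydraulic conductivity K and effective porosity phi) and grid values below are illustrative placeholders, not the study's actual configuration:

```python
def grid_search(observed, simulate, k_values, phi_values):
    """Evaluate a forward model over a K/porosity grid and return the
    parameter pair with the smallest sum-of-squares misfit to the data."""
    best_misfit, best_pair = float("inf"), None
    for k in k_values:
        for phi in phi_values:
            predicted = simulate(k, phi)
            misfit = sum((o - p) ** 2 for o, p in zip(observed, predicted))
            if misfit < best_misfit:
                best_misfit, best_pair = misfit, (k, phi)
    return best_pair
```

In practice the forward model would be a flow-and-transport simulation and the misfit would include breakthrough curves at multiple wells; the exhaustive loop shown here is only feasible for low-dimensional parameterizations, which is why the study also uses stochastic and regularized deterministic approaches.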

  17. Near-infrared Variability of Obscured and Unobscured X-Ray-selected AGNs in the COSMOS Field

    NASA Astrophysics Data System (ADS)

    Sánchez, P.; Lira, P.; Cartier, R.; Pérez, V.; Miranda, N.; Yovaniniz, C.; Arévalo, P.; Milvang-Jensen, B.; Fynbo, J.; Dunlop, J.; Coppi, P.; Marchesi, S.

    2017-11-01

    We present our statistical study of near-infrared (NIR) variability of X-ray-selected active galactic nuclei (AGNs) in the COSMOS field, using UltraVISTA data. This is the largest sample of AGN light curves in YJHKs bands, making it possible to have a global description of the nature of AGNs for a large range of redshifts and for different levels of obscuration. To characterize the variability properties of the sources, we computed the structure function. Our results show that there is an anticorrelation between the structure function A parameter (variability amplitude) and the wavelength of emission and a weak anticorrelation between A and the bolometric luminosity. We find that broad-line (BL) AGNs have a considerably larger fraction of variable sources than narrow-line (NL) AGNs and that they have different distributions of the A parameter. We find evidence that suggests that most of the low-luminosity variable NL sources correspond to BL AGNs, where the host galaxy could be damping the variability signal. For high-luminosity variable NL sources, we propose that they can be examples of “true type II” AGNs or BL AGNs with limited spectral coverage, which results in missing the BL emission. We also find that the fraction of variable sources classified as unobscured in the X-ray is smaller than the fraction of variable sources unobscured in the optical range. We present evidence that this is related to the differences in the origin of the obscuration in the optical and X-ray regimes.
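The structure function used above to quantify variability amplitude has several estimators in the literature; a minimal RMS-difference version is sketched below. The exact estimator and the timescale at which the A parameter is read off are not specified here, so treat this as one common convention rather than the paper's definition:

```python
import math

def structure_function(times, mags, lag, tol=0.25):
    """RMS magnitude difference between all observation pairs separated by
    approximately `lag` days; larger values mean stronger variability at
    that timescale."""
    diffs = [(m2 - m1) ** 2
             for i, (t1, m1) in enumerate(zip(times, mags))
             for t2, m2 in zip(times[i + 1:], mags[i + 1:])
             if abs((t2 - t1) - lag) <= tol]
    return math.sqrt(sum(diffs) / len(diffs)) if diffs else 0.0
```

Evaluating this over a grid of lags and fitting a power law SF(τ) = A τ^γ yields an amplitude parameter of the kind correlated against wavelength and bolometric luminosity in the study.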

  18. Multimodal Imaging and Lighting Bias Correction for Improved μPAD-based Water Quality Monitoring via Smartphones

    NASA Astrophysics Data System (ADS)

    McCracken, Katherine E.; Angus, Scott V.; Reynolds, Kelly A.; Yoon, Jeong-Yeol

    2016-06-01

    Smartphone image-based sensing of microfluidic paper analytical devices (μPADs) offers low-cost and mobile evaluation of water quality. However, consistent quantification is a challenge due to variable environmental, paper, and lighting conditions, especially across large multi-target μPADs. Compensations must be made for variations between images to achieve reproducible results without a separate lighting enclosure. We thus developed a simple method using triple-reference-point normalization and a fast-Fourier-transform (FFT)-based pre-processing scheme to quantify consistent reflected light intensity signals under variable lighting and channel conditions. This technique was evaluated using various light sources, lighting angles, imaging backgrounds, and imaging heights. Further testing evaluated its handling of absorbance, quenching, and relative scattering intensity measurements from assays detecting four water contaminants - Cr(VI), total chlorine, caffeine, and E. coli K12 - at similar wavelengths using the green channel of RGB images. Between assays, this algorithm reduced error from μPAD surface inconsistencies and cross-image lighting gradients. Although the algorithm could not completely remove the anomalies arising from point shadows within channels or some non-uniform background reflections, it still afforded order-of-magnitude quantification and stable assay specificity under these conditions, offering one route toward improving smartphone quantification of μPAD assays for in-field water quality monitoring.
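The core idea behind reference-point normalization is to anchor each raw channel intensity to on-device dark and bright reference regions so that a global lighting change cancels out. The two-point version below is a simplified sketch of that idea (the paper's method uses three reference points and an FFT pre-processing step not reproduced here; the intensity values are hypothetical):

```python
def normalize(sample, black_ref, white_ref):
    """Map a raw green-channel intensity onto a 0-1 scale anchored by dark
    and bright reference regions imaged in the same frame. A uniform
    lighting gain multiplies all three readings equally and cancels."""
    return (sample - black_ref) / (white_ref - black_ref)
```

Because numerator and denominator scale together, a dimmer or brighter light source shifts all three raw readings but leaves the normalized value unchanged, which is the property that removes cross-image lighting gradients.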

  19. Estimation of Groundwater Radon in North Carolina Using Land Use Regression and Bayesian Maximum Entropy.

    PubMed

    Messier, Kyle P; Campbell, Ted; Bradley, Philip J; Serre, Marc L

    2015-08-18

    Radon (222Rn) is a naturally occurring chemically inert, colorless, and odorless radioactive gas produced from the decay of uranium (238U), which is ubiquitous in rocks and soils worldwide. Exposure to 222Rn, primarily via inhalation, is likely the second leading cause of lung cancer after cigarette smoking; however, exposure through untreated groundwater contributes to both inhalation and ingestion routes. A land use regression (LUR) model for groundwater 222Rn with anisotropic geological and 238U-based explanatory variables is developed, which helps elucidate the factors contributing to elevated 222Rn across North Carolina. The LUR is also integrated into the Bayesian Maximum Entropy (BME) geostatistical framework to increase accuracy and produce a point-level LUR-BME model of groundwater 222Rn across North Carolina, including prediction uncertainty. The LUR-BME model of groundwater 222Rn results in a leave-one-out cross-validation r2 of 0.46 (Pearson correlation coefficient = 0.68), effectively predicting within the spatial covariance range. Modeled 222Rn concentrations show variability among intrusive felsic geological formations, likely due to average bedrock 238U defined on the basis of overlying stream-sediment 238U concentrations, which are widely distributed, consistently analyzed point-source data.
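The leave-one-out cross-validation r² quoted above can be sketched for a simple one-predictor regression: refit the model with each observation withheld, predict the withheld value, and score the pooled predictions. This is a generic illustration of the validation metric, not the LUR-BME model itself, and the data below are synthetic:

```python
def fit_line(xs, ys):
    """Ordinary least-squares intercept and slope for one predictor."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    b = sxy / sxx
    return my - b * mx, b

def loocv_r2(xs, ys):
    """Leave-one-out cross-validated r^2: each point is predicted by a
    model fit to all the other points."""
    preds = []
    for i in range(len(xs)):
        a, b = fit_line(xs[:i] + xs[i + 1:], ys[:i] + ys[i + 1:])
        preds.append(a + b * xs[i])
    my = sum(ys) / len(ys)
    ss_res = sum((y - p) ** 2 for y, p in zip(ys, preds))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return 1 - ss_res / ss_tot
```

Because each prediction is made without the point being scored, this metric penalizes overfitting in a way an in-sample r² does not.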

  20. Regularized linearization for quantum nonlinear optical cavities: application to degenerate optical parametric oscillators.

    PubMed

    Navarrete-Benlloch, Carlos; Roldán, Eugenio; Chang, Yue; Shi, Tao

    2014-10-06

    Nonlinear optical cavities are crucial both in classical and quantum optics; in particular, nowadays optical parametric oscillators are one of the most versatile and tunable sources of coherent light, as well as the sources of the highest quality quantum-correlated light in the continuous variable regime. Being nonlinear systems, they can be driven through critical points in which a solution ceases to exist in favour of a new one, and it is close to these points where quantum correlations are the strongest. The simplest description of such systems consists in writing the quantum fields as the classical part plus some quantum fluctuations, linearizing then the dynamical equations with respect to the latter; however, such an approach breaks down close to critical points, where it provides unphysical predictions such as infinite photon numbers. On the other hand, techniques going beyond the simple linear description become too complicated especially regarding the evaluation of two-time correlators, which are of major importance to compute observables outside the cavity. In this article we provide a regularized linear description of nonlinear cavities, that is, a linearization procedure yielding physical results, taking the degenerate optical parametric oscillator as the guiding example. The method, which we call self-consistent linearization, is shown to be equivalent to a general Gaussian ansatz for the state of the system, and we compare its predictions with those obtained with available exact (or quasi-exact) methods. Apart from its operational value, we believe that our work is valuable also from a fundamental point of view, especially in connection to the question of how far linearized or Gaussian theories can be pushed to describe nonlinear dissipative systems which have access to non-Gaussian states.
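The linearization procedure that the article regularizes is, in its standard form, the expansion of each cavity field operator around its classical steady state (standard textbook notation, not this article's specific ansatz):

```latex
\hat a(t) = \alpha + \delta\hat a(t), \qquad |\alpha|^2 \gg \langle \delta\hat a^\dagger \delta\hat a \rangle ,
```

where α solves the classical equations and the dynamics are truncated at first order in the fluctuation δâ. At a critical point the fluctuations predicted this way diverge, ⟨δâ†δâ⟩ → ∞, violating the smallness assumption; this is the unphysical behaviour that the self-consistent Gaussian ansatz described in the abstract is designed to remove.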

  1. Feature Geo Analytics and Big Data Processing: Hybrid Approaches for Earth Science and Real-Time Decision Support

    NASA Astrophysics Data System (ADS)

    Wright, D. J.; Raad, M.; Hoel, E.; Park, M.; Mollenkopf, A.; Trujillo, R.

    2016-12-01

    Introduced is a new approach for processing spatiotemporal big data by leveraging distributed analytics and storage. A suite of temporally-aware analysis tools summarizes data nearby or within variable windows, aggregates points (e.g., for various sensor observations or vessel positions), reconstructs time-enabled points into tracks (e.g., for mapping and visualizing storm tracks), joins features (e.g., to find associations between features based on attributes, spatial relationships, temporal relationships, or all three simultaneously), calculates point densities, finds hot spots (e.g., in species distributions), and creates space-time slices and cubes (e.g., in microweather applications with temperature, humidity, and pressure, or within human mobility studies). These "feature geo analytics" tools run in both batch and streaming spatial analysis mode as distributed computations across a cluster of servers on typical "big" data sets, where static data exist in traditional geospatial formats (e.g., shapefile) locally on a disk or file share, attached as static spatiotemporal big data stores, or streamed in near-real-time. In other words, the approach registers large datasets or data stores with ArcGIS Server, then distributes analysis across a cluster of machines for parallel processing. Several brief use cases will be highlighted based on a 16-node server cluster with 14 GB of RAM per node, allowing, for example, the buffering of over 8 million points or thousands of polygons in 1 minute. The approach is "hybrid" in that ArcGIS Server integrates open-source big data frameworks such as Apache Hadoop and Apache Spark on the cluster in order to run the analytics. In addition, the user may devise and connect custom open-source interfaces and tools developed in Python or Python Notebooks; the common denominator being the familiar REST API.
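The "reconstruct time-enabled points into tracks" step can be sketched in a few lines: group raw observations by entity identifier and order each group by timestamp. The tuple layout below is a hypothetical simplification; the production tools described above do this as a distributed computation over far larger inputs:

```python
from collections import defaultdict

def build_tracks(observations):
    """observations: iterable of (entity_id, timestamp, x, y) tuples in any
    order. Returns {entity_id: [(t, x, y), ...]} with each track sorted by
    time -- the core of reconstructing time-enabled points into tracks."""
    tracks = defaultdict(list)
    for eid, t, x, y in observations:
        tracks[eid].append((t, x, y))
    for pts in tracks.values():
        pts.sort()  # tuples sort by timestamp first
    return dict(tracks)
```

In a distributed setting the group-by becomes a shuffle keyed on entity id, so each track can be assembled and sorted independently on its own worker.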

  2. Distribution patterns of mercury in Lakes and Rivers of northeastern North America

    USGS Publications Warehouse

    Dennis, Ian F.; Clair, Thomas A.; Driscoll, Charles T.; Kamman, Neil; Chalmers, Ann T.; Shanley, Jamie; Norton, Stephen A.; Kahl, Steve

    2005-01-01

    We assembled 831 data points for total mercury (Hgt) and 277 overlapping points for methyl mercury (CH3Hg+) in surface waters from Massachusetts, USA to the Island of Newfoundland, Canada from State, Provincial, and Federal government databases. These geographically indexed values were used to determine: (a) whether large-scale spatial distribution patterns existed and (b) whether there were significant relationships between the two main forms of aquatic Hg as well as with total organic carbon (TOC), a well-known complexing agent for metals. We analyzed the catchments where samples were collected using a Geographical Information System (GIS) approach, calculating catchment sizes, mean slope, and mean wetness index. Our results show two main spatial distribution patterns. We detected loci of high Hgt values near the urbanized regions of Boston MA and Portland ME. However, except for one unexplained exception, the highest Hgt and CH3Hg+ concentrations were located in regions far from obvious point sources. These correlated with topographically flat (and thus wet) areas that we relate to wetland abundance. We show that aquatic Hgt and CH3Hg+ concentrations are generally well correlated with TOC and with each other. Over the region, CH3Hg+ concentrations are typically approximately 15% of Hgt. There is an exception in the Boston region, where CH3Hg+ is low compared to the high Hgt values. This is probably due to the proximity of point sources of inorganic Hg and a lack of wetlands. We also attempted to predict Hg concentrations in water with statistical models using catchment features as variables. We were only able to produce statistically significant predictive models for some regions, due to the lack of suitable digital information and because data ranges in some regions were too narrow for meaningful regression analyses.
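The Hgt-TOC and Hgt-CH3Hg+ relationships reported above rest on the Pearson correlation coefficient; a stdlib sketch is below (the sample values in the usage are invented for illustration, not from the 831-point dataset):

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / math.sqrt(sxx * syy)
```

Values near +1 indicate the tight positive coupling the authors report between Hg species and TOC, while values near 0 would indicate no linear relationship.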

  3. Temporal and Spatial Variability in the Partitioning and Flux of Riverine Iron Delivered to the Gulf of Alaska

    NASA Astrophysics Data System (ADS)

    Schroth, A. W.; Crusius, J.; Kroeger, K. D.; Hoyer, I. R.; Osburn, C. L.

    2010-12-01

    Iron (Fe) is a micronutrient that is thought to limit phytoplankton productivity in offshore waters of the Gulf of Alaska (GoA). However, it has been proposed that in coastal regions where offshore, Fe-limited, nitrate-rich waters mix with relatively Fe-rich river plumes, productive ecosystems and fisheries result. Indeed, an observed northward increase in phytoplankton biomass along the Pacific coast of North America has been attributed to higher input of riverine Fe to coastal waters, suggesting that many of the coastal ecosystems of the North Pacific rely heavily on this input of Fe as a nutrient source. Based on our studies of the Copper River (the largest point source of freshwater to the GoA) and its tributaries, it is clear that riverine Fe delivered to the GoA is primarily derived from fine glacial flour generated by glacial weathering, which imparts a unique partitioning of Fe species and Fe size fractionation in coastal river plumes. Furthermore, the distribution of Fe species and size fractionation exhibits significant seasonal and spatial variability depending on the source of iron within the watershed, which varies from glacial mechanical weathering of bedrock to internal chemical processing in portions of watersheds with forest and wetland land covers. These findings are relevant to our understanding of the GoA biogeochemical system as it exists today and can help to predict how the system may evolve as glaciers within the GoA watershed continue to recede.

  4. The impacts of renewable energy policies on renewable energy sources for electricity generating capacity

    NASA Astrophysics Data System (ADS)

    Koo, Bryan Bonsuk

    Electricity generation from non-hydro renewable sources has increased rapidly in the last decade. For example, Renewable Energy Sources for Electricity (RES-E) generating capacity in the U.S. almost doubled over the three years from 2009 to 2012. Multiple papers point out that RES-E policies implemented by state governments play a crucial role in increasing RES-E generation or capacity. This study examines the effects of state RES-E policies on state RES-E generating capacity, using a fixed effects model. The research employs panel data from the 50 states and the District of Columbia for the period 1990 to 2011, and uses a two-stage approach to control for endogeneity embedded in the policies adopted by state governments, and a Prais-Winsten estimator to correct for autocorrelation in the panel data. The analysis finds that Renewable Portfolio Standards (RPS) and Net-metering are significantly and positively associated with RES-E generating capacity, but neither Public Benefit Funds nor the Mandatory Green Power Option has a statistically significant relation to RES-E generating capacity. Results of the two-stage model are quite different from models that do not employ predicted policy variables. Analysis using non-predicted variables finds that RPS and Net-metering policy are statistically insignificant and negatively associated with RES-E generating capacity. On the other hand, Green Energy Purchasing policy is insignificant in the two-stage model, but significant in the model without predicted values.
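The fixed effects model described above rests on the "within" transformation: subtracting each state's own time-average from both outcome and regressors so that time-invariant state characteristics drop out. A minimal sketch with a hypothetical panel layout (the variable names and values are illustrative, and the two-stage and Prais-Winsten steps are not reproduced):

```python
def demean_panel(panel):
    """panel: {state: [(year, y, x), ...]}. Subtract each state's mean of y
    and of x from that state's rows -- the 'within' transformation
    underlying a fixed effects estimator."""
    out = {}
    for state, rows in panel.items():
        my = sum(y for _, y, _ in rows) / len(rows)
        mx = sum(x for _, _, x in rows) / len(rows)
        out[state] = [(year, y - my, x - mx) for year, y, x in rows]
    return out
```

Running ordinary least squares on the demeaned data is numerically equivalent to including a dummy variable for every state, which is how unobserved state-level heterogeneity is held fixed.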

  5. Point and Condensed Hα Sources in the Interior of M33

    NASA Astrophysics Data System (ADS)

    Moody, J. Ward; Hintz, Eric G.; Roming, Peter; Joner, Michael D.; Bucklein, Brian

    2017-01-01

    A variety of interesting objects, such as Wolf-Rayet stars, tight OB associations, planetary nebulae, and X-ray binaries, can be discovered as point or condensed sources in Hα surveys. How these objects are distributed through a galaxy sheds light on the galaxy's star formation rate and history, mass distribution, and dynamics. The nearby galaxy M33 is an excellent place to study the distribution of Hα-bright point sources in a flocculent spiral galaxy. We have reprocessed an archived WIYN continuum-subtracted Hα image of the inner 6.5' of M33 and, employing both eye and machine searches, have tabulated sources with a flux greater than 1 × 10^-15 erg cm^-2 s^-1. We have identified 152 unresolved point sources and 122 marginally resolved condensed sources, 38 of which have not been previously cataloged. We present a map of these sources and discuss their probable identifications.

  6. Understanding Hydrological Processes in Variable Source Areas in the Glaciated Northeastern US Watersheds under Variable Climate Conditions

    NASA Astrophysics Data System (ADS)

    Steenhuis, T. S.; Azzaino, Z.; Hoang, L.; Pacenka, S.; Worqlul, A. W.; Mukundan, R.; Stoof, C.; Owens, E. M.; Richards, B. K.

    2017-12-01

    The New York City source watersheds in the Catskill Mountains' humid, temperate climate has long-term hydrological and water quality monitoring data It is one of the few catchments where implementation of source and landscape management practices has led to decreased phosphorus concentration in the receiving surface waters. One of the reasons is that landscape measures correctly targeted the saturated variable source runoff areas (VSA) in the valley bottoms as the location where most of the runoff and other nonpoint pollutants originated. Measures targeting these areas were instrumental in lowering phosphorus concentration. Further improvements in water quality can be made based on a better understanding of the flow processes and water table fluctuations in the VSA. For that reason, we instrumented a self-contained upland variable source watershed with a landscape characteristic of a soil underlain by glacial till at shallow depth similar to the Catskill watersheds. In this presentation, we will discuss our experimental findings and present a mathematical model. Variable source areas have a small slope making gravity the driving force for the flow, greatly simplifying the simulation of the flow processes. The experimental data and the model simulations agreed for both outflow and water table fluctuations. We found that while the flows to the outlet were similar throughout the year, the discharge of the VSA varies greatly. This was due to transpiration by the plants which became active when soil temperatures were above 10oC. We found that shortly after the temperature increased above 10oC the baseflow stopped and only surface runoff occurred when rainstorms exceeded the storage capacity of the soil in at least a portion of the variable source area. 
Since plant growth in the variable source area was a major variable determining baseflow behavior, future changes in temperature - by affecting the duration of the growing season - will affect baseflow and the related transport of nutrients and other chemicals many times more strongly than the small temperature-related increases in potential evaporation rate. This in turn will directly change water availability and pollutant transport in the many surface source watersheds with variable source area hydrology.
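The saturation-excess behaviour described above lends itself to a very simple daily water-balance sketch. The toy model below is our illustration, not the authors' model: the storage capacity, baseflow coefficient, and transpiration rate are assumed values chosen only to reproduce the qualitative behaviour (baseflow ceases above 10°C; runoff occurs only when storage is exceeded).

```python
# Minimal daily water-balance sketch of saturation-excess runoff in a
# variable source area (VSA). All parameters are illustrative assumptions,
# not values from the study.

def vsa_step(storage, rain, temp_c, capacity=50.0, k_base=0.05, et_warm=3.0):
    """Advance soil storage (mm) one day; return (storage, runoff, baseflow)."""
    et = et_warm if temp_c > 10.0 else 0.0       # plants transpire above 10 C
    storage = max(storage - et, 0.0)
    storage += rain
    runoff = max(storage - capacity, 0.0)         # saturation excess only
    storage -= runoff
    baseflow = k_base * storage if temp_c <= 10.0 else 0.0  # baseflow stops in growing season
    storage -= baseflow
    return storage, runoff, baseflow
```

On a cold, wet day the bucket spills and also drains as baseflow; on a warm, dry day transpiration draws storage down and baseflow is zero, matching the seasonal switch the abstract describes.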

  7. A guide to differences between stochastic point-source and stochastic finite-fault simulations

    USGS Publications Warehouse

    Atkinson, G.M.; Assatourians, K.; Boore, D.M.; Campbell, K.; Motazedian, D.

    2009-01-01

    Why do stochastic point-source and finite-fault simulation models not agree on the predicted ground motions for moderate earthquakes at large distances? This question was posed by Ken Campbell, who attempted to reproduce the Atkinson and Boore (2006) ground-motion prediction equations for eastern North America using the stochastic point-source program SMSIM (Boore, 2005) in place of the finite-source stochastic program EXSIM (Motazedian and Atkinson, 2005) that was used by Atkinson and Boore (2006) in their model. His comparisons suggested that a higher stress drop is needed in the context of SMSIM to produce an average match, at larger distances, with the model predictions of Atkinson and Boore (2006) based on EXSIM; this is so even for moderate magnitudes, which should be well-represented by a point-source model. Why? The answer to this question is rooted in significant differences between point-source and finite-source stochastic simulation methodologies, specifically as implemented in SMSIM (Boore, 2005) and EXSIM (Motazedian and Atkinson, 2005) to date. Point-source and finite-fault methodologies differ in general in several important ways: (1) the geometry of the source; (2) the definition and application of duration; and (3) the normalization of finite-source subsource summations. Furthermore, the specific implementation of the methods may differ in their details. The purpose of this article is to provide a brief overview of these differences, their origins, and implications. This sets the stage for a more detailed companion article, "Comparing Stochastic Point-Source and Finite-Source Ground-Motion Simulations: SMSIM and EXSIM," in which Boore (2009) provides modifications and improvements in the implementations of both programs that narrow the gap and result in closer agreement. 
These issues are important because both SMSIM and EXSIM have been widely used in the development of ground-motion prediction equations and in modeling the parameters that control observed ground motions.
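The point-source stochastic method referenced above builds ground motions from an omega-squared source spectrum whose corner frequency depends on stress drop, which is why a higher stress drop raises the predicted high-frequency motions. A hedged sketch of that spectrum follows, using the standard Brune-model constants (M0 in dyne-cm, stress drop in bars, shear velocity in km/s); this is an illustration of the textbook model, not the actual SMSIM implementation:

```python
import math

# Omega-squared (Brune) point-source spectrum sketch, standard constants.

def corner_frequency(m0_dyne_cm, stress_bars, beta_km_s=3.5):
    """Brune corner frequency (Hz)."""
    return 4.9e6 * beta_km_s * (stress_bars / m0_dyne_cm) ** (1.0 / 3.0)

def brune_accel_spectrum(f, m0_dyne_cm, stress_bars, beta_km_s=3.5):
    """Shape of the far-field acceleration source spectrum (unnormalized)."""
    fc = corner_frequency(m0_dyne_cm, stress_bars, beta_km_s)
    return (2 * math.pi * f) ** 2 * m0_dyne_cm / (1.0 + (f / fc) ** 2)
```

Doubling the stress drop raises the corner frequency by 2^(1/3) and hence enriches the high-frequency plateau, consistent with the observation that SMSIM needs a higher stress drop to match EXSIM at high frequencies.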

  8. X-ray Point Source Populations in Spiral and Elliptical Galaxies

    NASA Astrophysics Data System (ADS)

    Colbert, E.; Heckman, T.; Weaver, K.; Strickland, D.

    2002-01-01

The hard-X-ray luminosity of non-active galaxies has been known to correlate fairly well with total blue luminosity since the days of the Einstein satellite. However, the origin of this hard component was not well understood. Possibilities considered included X-ray binaries, extended far-infrared light upscattered via the inverse-Compton process, extended hot 10^7 K gas (especially in elliptical galaxies), or even an active nucleus. Chandra images of normal, elliptical and starburst galaxies now show that a significant amount of the total hard X-ray emission comes from individual point sources. We present here spatial and spectral analyses of the point sources in a small sample of Chandra observations of starburst galaxies, and compare with Chandra point-source analyses of comparison galaxies (elliptical, Seyfert and normal galaxies). We discuss possible relationships between the number and total hard luminosity of the X-ray point sources and various measures of the galaxy star formation rate, and discuss possible interpretations of the numerous compact sources that are observed.

  9. Vector image method for the derivation of elastostatic solutions for point sources in a plane layered medium. Part 1: Derivation and simple examples

    NASA Technical Reports Server (NTRS)

    Fares, Nabil; Li, Victor C.

    1986-01-01

    An image method algorithm is presented for the derivation of elastostatic solutions for point sources in bonded halfspaces assuming the infinite space point source is known. Specific cases were worked out and shown to coincide with well known solutions in the literature.

  10. 40 CFR 414.100 - Applicability; description of the subcategory of direct discharge point sources that do not use...

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... subcategory of direct discharge point sources that do not use end-of-pipe biological treatment. 414.100... AND STANDARDS ORGANIC CHEMICALS, PLASTICS, AND SYNTHETIC FIBERS Direct Discharge Point Sources That Do Not Use End-of-Pipe Biological Treatment § 414.100 Applicability; description of the subcategory of...

  11. Automatic Classification of Time-variable X-Ray Sources

    NASA Astrophysics Data System (ADS)

    Lo, Kitty K.; Farrell, Sean; Murphy, Tara; Gaensler, B. M.

    2014-05-01

To maximize the discovery potential of future synoptic surveys, especially in the field of transient science, it will be necessary to use automatic classification to identify some of the astronomical sources. The data mining technique of supervised classification is suitable for this problem. Here, we present a supervised learning method to automatically classify variable X-ray sources in the Second XMM-Newton Serendipitous Source Catalog (2XMMi-DR2). Random Forest is our classifier of choice since it is one of the most accurate learning algorithms available. Our training set consists of 873 variable sources, and their features are derived from time series, spectra, and other multi-wavelength contextual information. The 10-fold cross-validation accuracy on the training data is ~97% for a seven-class data set. We applied the trained classification model to 411 unknown variable 2XMM sources to produce a probabilistically classified catalog. Using the classification margin and the Random Forest derived outlier measure, we identified 12 anomalous sources, of which 2XMM J180658.7-500250 appears to be the most unusual source in the sample. Its X-ray spectrum is suggestive of an ultraluminous X-ray source but its variability makes it highly unusual. Machine-learned classification and anomaly detection will facilitate scientific discoveries in the era of all-sky surveys.
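The workflow above (features, a trained classifier, a k-fold cross-validation accuracy) can be sketched in a dependency-free way. In this illustration a trivial 1-nearest-neighbour classifier stands in for Random Forest, and the two-class synthetic data are our own, not the 2XMMi-DR2 features:

```python
# Sketch of k-fold cross-validation accuracy. A 1-nearest-neighbour
# classifier substitutes for Random Forest to keep the example
# dependency-free; data are synthetic.

def one_nn_predict(train, query):
    """Label of the training point closest to `query` (squared Euclidean)."""
    return min(train, key=lambda xy: sum((a - b) ** 2 for a, b in zip(xy[0], query)))[1]

def kfold_accuracy(data, k=10):
    """Mean held-out accuracy over k contiguous folds."""
    n = len(data)
    correct = 0
    for i in range(k):
        lo, hi = i * n // k, (i + 1) * n // k
        test, train = data[lo:hi], data[:lo] + data[hi:]
        correct += sum(one_nn_predict(train, x) == y for x, y in test)
    return correct / n

# Two well-separated synthetic classes
data = [((i, i), "A") for i in range(10)] + [((i + 100, i), "B") for i in range(10)]
```

On this separable toy data the cross-validated accuracy is 1.0; on real, overlapping feature distributions it drops below 1 and becomes the figure of merit reported in the paper.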

  12. Better Assessment Science Integrating Point and Non-point Sources (BASINS)

    EPA Pesticide Factsheets

    Better Assessment Science Integrating Point and Nonpoint Sources (BASINS) is a multipurpose environmental analysis system designed to help regional, state, and local agencies perform watershed- and water quality-based studies.

  13. Statistical Analysis of Tsunami Variability

    NASA Astrophysics Data System (ADS)

    Zolezzi, Francesca; Del Giudice, Tania; Traverso, Chiara; Valfrè, Giulio; Poggi, Pamela; Parker, Eric J.

    2010-05-01

    The purpose of this paper was to investigate statistical variability of seismically generated tsunami impact. The specific goal of the work was to evaluate the variability in tsunami wave run-up due to uncertainty in fault rupture parameters (source effects) and to the effects of local bathymetry at an individual location (site effects). This knowledge is critical to development of methodologies for probabilistic tsunami hazard assessment. Two types of variability were considered: • Inter-event; • Intra-event. Generally, inter-event variability refers to the differences of tsunami run-up at a given location for a number of different earthquake events. The focus of the current study was to evaluate the variability of tsunami run-up at a given point for a given magnitude earthquake. In this case, the variability is expected to arise from lack of knowledge regarding the specific details of the fault rupture "source" parameters. As sufficient field observations are not available to resolve this question, numerical modelling was used to generate run-up data. A scenario magnitude 8 earthquake in the Hellenic Arc was modelled. This is similar to the event thought to have caused the infamous 1303 tsunami. The tsunami wave run-up was computed at 4020 locations along the Egyptian coast between longitudes 28.7° E and 33.8° E. Specific source parameters (e.g. fault rupture length and displacement) were varied, and the effects on wave height were determined. A Monte Carlo approach considering the statistical distribution of the underlying parameters was used to evaluate the variability in wave height at locations along the coast. The results were evaluated in terms of the coefficient of variation of the simulated wave run-up (standard deviation divided by mean value) for each location. The coefficient of variation along the coast was between 0.14 and 3.11, with an average value of 0.67. The variation was higher in areas of irregular coast. 
This level of variability is similar to that seen in the ground-motion attenuation correlations used for seismic hazard assessment. The second issue was intra-event variability, the differences in tsunami wave run-up along a section of coast during a single event. Intra-event variability was investigated directly from field observations. The tsunami events used in the statistical evaluation were selected on the basis of the completeness and reliability of the available data. Tsunamis considered for the analysis included the recent and well-surveyed tsunami of Boxing Day 2004 (Great Indian Ocean Tsunami), Java 2006, Okushiri 1993, Kocaeli 1999, Messina 1908 and a case study of several historic events in Hawaii. Basic statistical analysis was performed on the field observations from these tsunamis. For events with very wide survey regions, the run-up heights were grouped in order to maintain a homogeneous distance from the source. Where more than one survey was available for a given event, the original datasets were kept separate to avoid combining non-homogeneous data. The observed run-up measurements were used to evaluate the minimum, maximum, average, standard deviation and coefficient of variation for each data set. The minimum coefficient of variation was 0.12, measured for the 2004 Boxing Day tsunami at Nias Island (7 data points), while the maximum was 0.98 for the Okushiri 1993 event (93 data points). The average coefficient of variation is of the order of 0.45.
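The summary statistic used throughout both analyses is the coefficient of variation, the standard deviation divided by the mean. As a concrete sketch (the run-up values below are illustrative, not survey data):

```python
import math

# Coefficient of variation (population standard deviation / mean), the
# statistic used to summarize run-up variability at each location.

def coeff_of_variation(values):
    mean = sum(values) / len(values)
    var = sum((v - mean) ** 2 for v in values) / len(values)
    return math.sqrt(var) / mean

runups_m = [1.2, 0.8, 2.5, 1.0, 1.6]  # illustrative run-up heights (m)
```

For these five values the coefficient of variation is about 0.42, i.e. in the middle of the 0.14-3.11 range reported along the simulated Egyptian coast.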

  14. Magnitudes, nature, and effects of point and nonpoint discharges in the Chattahoochee River basin, Atlanta to West Point Dam, Georgia

    USGS Publications Warehouse

    Stamer, J.K.; Cherry, R.N.; Faye, R.E.; Kleckner, R.L.

    1978-01-01

On an average annual basis and during the storm period of March 12-15, 1976, nonpoint-source loads for most constituents were larger than point-source loads at the Whitesburg station, located on the Chattahoochee River about 40 miles downstream from Atlanta, GA. Most of the nonpoint-source constituent loads in the Atlanta to Whitesburg reach were from urban areas. Average annual point-source discharges accounted for about 50 percent of the dissolved nitrogen, total nitrogen, and total phosphorus loads and about 70 percent of the dissolved phosphorus loads at Whitesburg. During a low-flow period, June 1-2, 1977, five municipal point sources contributed 63 percent of the ultimate biochemical oxygen demand and 97 percent of the ammonium nitrogen loads at the Franklin station, at the upstream end of West Point Lake. Dissolved-oxygen concentrations of 4.1 to 5.0 milligrams per liter occurred in a 22-mile reach of the river downstream from Atlanta, due about equally to nitrogenous and carbonaceous oxygen demands. The heat load from two thermoelectric powerplants caused a decrease in dissolved-oxygen concentration of about 0.2 milligrams per liter. Phytoplankton concentrations in West Point Lake, about 70 miles downstream from Atlanta, could exceed three million cells per milliliter during extended low-flow periods in the summer with present point-source phosphorus loads. (Woodard-USGS)

  15. Water Quality Interaction with Alkaline Phosphatase in the Ganga River: Implications for River Health.

    PubMed

    Yadav, Amita; Pandey, Jitendra

    2017-07-01

Carbon, nitrogen and phosphorus inputs through atmospheric deposition, surface runoff and point sources were measured in the Ganga River along a gradient of increasing human pressure. Productivity variables (chlorophyll a, gross primary productivity, biogenic silica and autotrophic index) and heterotrophy variables (respiration, substrate-induced respiration, biological oxygen demand and fluorescein diacetate hydrolysis) showed positive relationships with these inputs. Alkaline phosphatase (AP), however, showed the opposite trend. Because AP is negatively influenced by available P, and eutrophy generates a feedback on P fertilization, the study implies that alkaline phosphatase can be used as a high-quality criterion for assessing river health.

  16. An inverter/controller subsystem optimized for photovoltaic applications

    NASA Technical Reports Server (NTRS)

    Pickrell, R. L.; Osullivan, G.; Merrill, W. C.

    1978-01-01

Conversion of solar array dc power to ac power stimulated the specification, design, and simulation testing of an inverter/controller subsystem tailored to the photovoltaic power source characteristics. Optimization of the inverter/controller design is discussed as part of an overall photovoltaic power system designed for maximum energy extraction from the solar array. The special design requirements for the inverter/controller include: a power system controller (PSC) to control continuously the solar array operating point at the maximum power level based on variable solar insolation and cell temperatures; and an inverter designed for high efficiency at rated load and low losses at light loadings to conserve energy.
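Continuously holding the array at its maximum power point, as the PSC does, is commonly implemented with a perturb-and-observe loop: nudge the operating voltage, keep the perturbation direction if output power rose, reverse it otherwise. The sketch below uses an invented toy power-voltage curve, not a real array characteristic:

```python
# Perturb-and-observe maximum power point tracking sketch.

def pv_power(v):
    """Toy power-voltage curve with a single maximum near v = 30."""
    return max(v * (60.0 - v), 0.0) / 9.0

def track_mpp(v=10.0, step=1.0, iters=100):
    """Walk the operating voltage toward the maximum power point."""
    p_prev = pv_power(v)
    direction = 1.0
    for _ in range(iters):
        v += direction * step
        p = pv_power(v)
        if p < p_prev:            # power fell: reverse the perturbation
            direction = -direction
        p_prev = p
    return v
```

Starting well below the optimum, the loop climbs to the maximum power point and then oscillates within one step of it, which is the characteristic steady-state behaviour of perturb-and-observe trackers.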

  17. Unidentified point sources in the IRAS minisurvey

    NASA Technical Reports Server (NTRS)

    Houck, J. R.; Soifer, B. T.; Neugebauer, G.; Beichman, C. A.; Aumann, H. H.; Clegg, P. E.; Gillett, F. C.; Habing, H. J.; Hauser, M. G.; Low, F. J.

    1984-01-01

    Nine bright, point-like 60 micron sources have been selected from the sample of 8709 sources in the IRAS minisurvey. These sources have no counterparts in a variety of catalogs of nonstellar objects. Four objects have no visible counterparts, while five have faint stellar objects visible in the error ellipse. These sources do not resemble objects previously known to be bright infrared sources.

  18. Modeling deep brain stimulation: point source approximation versus realistic representation of the electrode

    NASA Astrophysics Data System (ADS)

    Zhang, Tianhe C.; Grill, Warren M.

    2010-12-01

    Deep brain stimulation (DBS) has emerged as an effective treatment for movement disorders; however, the fundamental mechanisms by which DBS works are not well understood. Computational models of DBS can provide insights into these fundamental mechanisms and typically require two steps: calculation of the electrical potentials generated by DBS and, subsequently, determination of the effects of the extracellular potentials on neurons. The objective of this study was to assess the validity of using a point source electrode to approximate the DBS electrode when calculating the thresholds and spatial distribution of activation of a surrounding population of model neurons in response to monopolar DBS. Extracellular potentials in a homogenous isotropic volume conductor were calculated using either a point current source or a geometrically accurate finite element model of the Medtronic DBS 3389 lead. These extracellular potentials were coupled to populations of model axons, and thresholds and spatial distributions were determined for different electrode geometries and axon orientations. Median threshold differences between DBS and point source electrodes for individual axons varied between -20.5% and 9.5% across all orientations, monopolar polarities and electrode geometries utilizing the DBS 3389 electrode. Differences in the percentage of axons activated at a given amplitude by the point source electrode and the DBS electrode were between -9.0% and 12.6% across all monopolar configurations tested. The differences in activation between the DBS and point source electrodes occurred primarily in regions close to conductor-insulator interfaces and around the insulating tip of the DBS electrode. The robustness of the point source approximation in modeling several special cases—tissue anisotropy, a long active electrode and bipolar stimulation—was also examined. 
Under the conditions considered, the point source was shown to be a valid approximation for predicting excitation of populations of neurons in response to DBS.
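The point-source approximation evaluated above has a closed form in a homogeneous isotropic volume conductor: V = I / (4πσr). A short sketch follows; the conductivity used is a typical illustrative value, not the study's setting:

```python
import math

# Extracellular potential of a monopolar point current source in a
# homogeneous isotropic volume conductor: V = I / (4 * pi * sigma * r).
# sigma = 0.2 S/m is an illustrative tissue conductivity.

def point_source_potential(i_amp, r_m, sigma_s_per_m=0.2):
    """Potential (V) at distance r_m (m) from a point source of i_amp (A)."""
    return i_amp / (4.0 * math.pi * sigma_s_per_m * r_m)
```

The 1/r fall-off is exact for the point source; the finite electrode deviates from it mainly near the conductor-insulator interfaces, which is where the paper found the largest threshold differences.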

  19. Perspectives on Ultraluminous X-ray sources after the discovery of Ultraluminous Pulsars

    NASA Astrophysics Data System (ADS)

    Zampieri, L.; Ambrosi, E.; Fiore, A.; Pintore, F.; Turolla, R.; Israel, GL.; Stella, L.; Casella, P.; Papitto, A.; Rodriguez Castillo, G. A.; De Luca, A.; Tiengo, A.; Belfiore, A.; Esposito, P.; Marelli, M.; Novara, G.; Salvaterra, R.; Salvetti, D.; Mereghetti, S.; Wolter, A.

    2017-10-01

Ultraluminous X-ray sources (ULXs) are observationally defined as non-nuclear extragalactic X-ray point sources with inferred (isotropic) luminosity exceeding the Eddington limit for a ˜ 10 M_{⊙} compact object. While in the past few years considerable evidence (and a general consensus) has accumulated in favour of the existence of black hole (BH) remnants in ULXs, the recent discovery of three Ultraluminous X-ray Pulsars has unexpectedly revealed what is likely to be a significant population of neutron star (NS) ULXs. These findings challenge more than ever our present understanding of these sources, their accretion mechanisms and histories, and their formation pathways. After reviewing some of these intriguing observational facts, we will summarize perspective studies that we are carrying out to model the multiwavelength variability and broadband spectra of ULXs, including the contribution of an accretion column for NS systems. We derive the luminosity emitted by the latter assuming that a multipolar component dominates the magnetic field close to the NS. The focus is on comparing the simulated multiwavelength emission properties of stellar-mass/massive BHs to those of NS systems, and on confronting the model predictions with the available observations of Pulsar ULXs.

  20. Monitoring the variability of active galactic nuclei from a space-based platform

    NASA Technical Reports Server (NTRS)

    Peterson, Bradley M.; Atwood, Bruce; Byard, Paul L.

    1994-01-01

Detailed monitoring of AGN's with FRESIP can provide well-sampled light curves for a large number of AGN's. Such data are completely unprecedented in this field, and will provide powerful new constraints on the origin of the UV/optical continuum in AGN's. The FRESIP baseline design will allow 1 percent photometry on sources brighter than V approximately equals 19.6 mag, and we estimate that over 300 sources can be studied. We point out that digitization effects will have a significant negative impact on the faint limit, and the number of detectable sources will decrease dramatically if a fixed gain setting (estimated to be nominally 25 e(-) per ADU) is used for all read-outs. We note that the primary limitation to studying AGN's is background (sky and read-out noise) rather than source/background contrast, and can be reduced with a focused telescope and with longer integrations. While we believe that it may be possible to achieve the AGN-monitoring science goals with a more compact and much less expensive telescope, the proposed FRESIP satellite affords an excellent opportunity to attain the required data at essentially zero cost as a secondary goal of a more complex mission.

  1. Multiband super-resolution imaging of graded-index photonic crystal flat lens

    NASA Astrophysics Data System (ADS)

    Xie, Jianlan; Wang, Junzhong; Ge, Rui; Yan, Bei; Liu, Exian; Tan, Wei; Liu, Jianjun

    2018-05-01

Multiband super-resolution imaging of a point source is achieved by a graded-index photonic crystal flat lens. Calculations over six bands of a common photonic crystal (CPC) constructed with scatterers of different refractive indices show that super-resolution imaging of a point source can be realized through different physical mechanisms in three different bands. In the first band, imaging of the point source is based on the far-field condition of spherical waves; in the second band, it is based on a negative effective refractive index and exhibits higher imaging quality than that of the CPC. In the fifth band, imaging of the point source is mainly based on negative refraction of anisotropic equi-frequency surfaces. This method of employing different physical mechanisms to achieve multiband super-resolution imaging of a point source is of broad interest for the field of imaging.

  2. Long Term Temporal and Spectral Evolution of Point Sources in Nearby Elliptical Galaxies

    NASA Astrophysics Data System (ADS)

    Durmus, D.; Guver, T.; Hudaverdi, M.; Sert, H.; Balman, Solen

    2016-06-01

    We present the results of an archival study of all the point sources detected in the lines of sight of the elliptical galaxies NGC 4472, NGC 4552, NGC 4649, M32, Maffei 1, NGC 3379, IC 1101, M87, NGC 4477, NGC 4621, and NGC 5128, with both the Chandra and XMM-Newton observatories. Specifically, we studied the temporal and spectral evolution of these point sources over the course of the observations of the galaxies, mostly covering the 2000 - 2015 period. In this poster we present the first results of this study, which allows us to further constrain the X-ray source population in nearby elliptical galaxies and also better understand the nature of individual point sources.

  3. Very Luminous X-ray Point Sources in Starburst Galaxies

    NASA Astrophysics Data System (ADS)

    Colbert, E.; Heckman, T.; Ptak, A.; Weaver, K. A.; Strickland, D.

Extranuclear X-ray point sources in external galaxies with luminosities above 10^39 erg/s are quite common in elliptical, disk and dwarf galaxies, with an average of ~0.5 sources per galaxy. These objects may be a new class of object, perhaps accreting intermediate-mass black holes, or beamed stellar-mass black hole binaries. Starburst galaxies tend to have a larger number of these intermediate-luminosity X-ray objects (IXOs), as well as a large number of lower-luminosity (10^37 - 10^39 erg/s) point sources. These point sources dominate the total hard X-ray emission in starburst galaxies. We present a review of both types of objects and discuss possible schemes for their formation.

  4. Observed ground-motion variabilities and implication for source properties

    NASA Astrophysics Data System (ADS)

    Cotton, F.; Bora, S. S.; Bindi, D.; Specht, S.; Drouet, S.; Derras, B.; Pina-Valdes, J.

    2016-12-01

One of the key challenges of seismology is to calibrate and analyse the physical factors that control earthquake and ground-motion variabilities. Within the framework of empirical ground-motion prediction equation (GMPE) development, ground-motion residuals (differences between recorded ground motions and the values predicted by a GMPE) are computed. The exponential growth of seismological near-field records and modern regression algorithms allow these residuals to be decomposed into between-event and within-event components. The between-event term quantifies the residual effects of the source (e.g. stress drop) that are not captured by magnitude, the only source parameter of the model. Between-event residuals provide a new and rather robust way to analyse the physical factors that control earthquake source properties and the associated variabilities. We will first show the correlation between classical stress drops and between-event residuals. We will also explain why between-event residuals may be a more robust way (compared to classical stress-drop analysis) to analyse earthquake source properties. We will then calibrate between-event variabilities using recent high-quality global accelerometric datasets (NGA-West 2, RESORCE) and datasets from recent earthquake sequences (L'Aquila, Iquique, Kumamoto). The obtained between-event variabilities will be used to evaluate the variability of earthquake stress drops, and also the variability of source properties that cannot be explained by classical Brune stress-drop variations. We will finally use the between-event residual analysis to discuss regional variations of source properties, differences between aftershocks and mainshocks, and potential magnitude dependencies of source characteristics.
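The residual decomposition described above can be sketched simply: an event's between-event term is the mean residual over its records, and the within-event residuals are the remainder. A plain per-event mean stands in here for the mixed-effects regression actually used in GMPE work, and the residual values are synthetic:

```python
# Split ground-motion residuals into between-event and within-event parts.
# A per-event mean substitutes for a mixed-effects regression; data synthetic.

def decompose_residuals(records):
    """records: list of (event_id, residual). Returns (between, within)."""
    events = {}
    for ev, r in records:
        events.setdefault(ev, []).append(r)
    between = {ev: sum(rs) / len(rs) for ev, rs in events.items()}
    within = [(ev, r - between[ev]) for ev, r in records]
    return between, within
```

By construction, the within-event residuals of each event sum to zero; the between-event terms are the event-specific offsets one would then correlate with stress drop.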

  5. Uncertainty in hydrological signatures

    NASA Astrophysics Data System (ADS)

    McMillan, Hilary; Westerberg, Ida

    2015-04-01

Information that summarises the hydrological behaviour or flow regime of a catchment is essential for comparing the responses of different catchments, for understanding catchment organisation and similarity, and for many other modelling and water-management applications. Such information, derived as index values from observed data, is known as hydrological signatures; examples include descriptors of high flows (e.g. mean annual flood), low flows (e.g. mean annual low flow, recession shape), flow variability, the flow duration curve, and the runoff ratio. Because hydrological signatures are calculated from observed data such as rainfall and flow records, they are affected by uncertainty in those data. Subjective choices in the method used to calculate the signatures create a further source of uncertainty. Uncertainties in the signatures may affect our ability to compare different locations, to detect changes, or to compare future water-resource management scenarios. The aim of this study was to contribute to the hydrological community's awareness and knowledge of data uncertainty in hydrological signatures, including typical sources, magnitudes, and methods for its assessment. We proposed a generally applicable method to calculate these uncertainties based on Monte Carlo sampling and demonstrated it for a variety of commonly used signatures. The study was carried out for two data-rich catchments, the 50 km² Mahurangi catchment in New Zealand and the 135 km² Brue catchment in the UK. For rainfall data, the uncertainty sources included point measurement uncertainty, the number of gauges used in calculating the catchment spatial average, and uncertainties relating to lack of quality control. For flow data, the uncertainty sources included uncertainties in stage/discharge measurement and in the approximation of the true stage-discharge relation by a rating curve.
The resulting uncertainties were compared across the different signatures and catchments, to quantify uncertainty magnitude and bias, and to test how uncertainty depended on the density of the raingauge network and flow gauging station characteristics. The uncertainties were sometimes large (i.e. typical intervals of ±10-40% relative uncertainty) and highly variable between signatures. Uncertainty in the mean discharge was around ±10% for both catchments, while signatures describing the flow variability had much higher uncertainties in the Mahurangi where there was a fast rainfall-runoff response and greater high-flow rating uncertainty. Event and total runoff ratios had uncertainties from ±10% to ±15% depending on the number of rain gauges used; precipitation uncertainty was related to interpolation rather than point uncertainty. Uncertainty distributions in these signatures were skewed, and meant that differences in signature values between these catchments were often not significant. We hope that this study encourages others to use signatures in a way that is robust to data uncertainty.
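A Monte Carlo scheme of the kind proposed can be sketched for one signature, the runoff ratio, under a purely systematic rating-curve multiplier. The error bound, sample count, and data below are illustrative assumptions, not the paper's uncertainty models:

```python
import random

# Monte Carlo propagation sketch: perturb the flow series within an assumed
# rating uncertainty, recompute the signature, summarize the spread.

def runoff_ratio(flow, rain):
    """Total flow divided by total rainfall (same units)."""
    return sum(flow) / sum(rain)

def mc_signature(flow, rain, n=2000, err=0.15, seed=1):
    """Approximate 95% interval of the runoff ratio under a +/-err multiplier."""
    rng = random.Random(seed)
    samples = []
    for _ in range(n):
        m = rng.uniform(1 - err, 1 + err)   # systematic rating multiplier
        samples.append(runoff_ratio([q * m for q in flow], rain))
    samples.sort()
    return samples[int(0.025 * n)], samples[int(0.975 * n)]
```

With a ±15% flow multiplier the interval spans roughly ±15% of the nominal signature value, of the same order as the ±10-40% relative uncertainties reported in the study.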

  6. Combined model of intrinsic and extrinsic variability for computational network design with application to synthetic biology.

    PubMed

    Toni, Tina; Tidor, Bruce

    2013-01-01

Biological systems are inherently variable, with their dynamics influenced by intrinsic and extrinsic sources. These systems are often only partially characterized, with large uncertainties about specific sources of extrinsic variability and biochemical properties. Moreover, it is not yet well understood how different sources of variability combine and affect biological systems in concert. To successfully design biomedical therapies or synthetic circuits with robust performance, it is crucial to account for uncertainty and effects of variability. Here we introduce an efficient modeling and simulation framework to study systems that are simultaneously subject to multiple sources of variability, and apply it to make design decisions on small genetic networks that serve as basic design elements of synthetic circuits. Specifically, the framework was used to explore the effect of transcriptional and post-transcriptional autoregulation on fluctuations in protein expression in simple genetic networks. We found that autoregulation could either suppress or increase the output variability, depending on specific noise sources and network parameters. We showed that transcriptional autoregulation was more successful than post-transcriptional in suppressing variability across a wide range of intrinsic and extrinsic magnitudes and sources. We derived the following design principles to guide the design of circuits that best suppress variability: (i) high protein cooperativity and low miRNA cooperativity, (ii) imperfect complementarity between miRNA and mRNA was preferred to perfect complementarity, and (iii) correlated expression of mRNA and miRNA - for example, on the same transcript - was best for suppression of protein variability.
Results further showed that correlations in kinetic parameters between cells affected the ability to suppress variability, and that variability in transient states did not necessarily follow the same principles as variability in the steady state. Our model and findings provide a general framework to guide design principles in synthetic biology.

  7. Combined Model of Intrinsic and Extrinsic Variability for Computational Network Design with Application to Synthetic Biology

    PubMed Central

    Toni, Tina; Tidor, Bruce

    2013-01-01

Biological systems are inherently variable, with their dynamics influenced by intrinsic and extrinsic sources. These systems are often only partially characterized, with large uncertainties about specific sources of extrinsic variability and biochemical properties. Moreover, it is not yet well understood how different sources of variability combine and affect biological systems in concert. To successfully design biomedical therapies or synthetic circuits with robust performance, it is crucial to account for uncertainty and effects of variability. Here we introduce an efficient modeling and simulation framework to study systems that are simultaneously subject to multiple sources of variability, and apply it to make design decisions on small genetic networks that serve as basic design elements of synthetic circuits. Specifically, the framework was used to explore the effect of transcriptional and post-transcriptional autoregulation on fluctuations in protein expression in simple genetic networks. We found that autoregulation could either suppress or increase the output variability, depending on specific noise sources and network parameters. We showed that transcriptional autoregulation was more successful than post-transcriptional in suppressing variability across a wide range of intrinsic and extrinsic magnitudes and sources. We derived the following design principles to guide the design of circuits that best suppress variability: (i) high protein cooperativity and low miRNA cooperativity, (ii) imperfect complementarity between miRNA and mRNA was preferred to perfect complementarity, and (iii) correlated expression of mRNA and miRNA – for example, on the same transcript – was best for suppression of protein variability.
Results further showed that correlations in kinetic parameters between cells affected the ability to suppress variability, and that variability in transient states did not necessarily follow the same principles as variability in the steady state. Our model and findings provide a general framework to guide design principles in synthetic biology. PMID:23555205

  8. The resolution of point sources of light as analyzed by quantum detection theory

    NASA Technical Reports Server (NTRS)

    Helstrom, C. W.

    1972-01-01

    The resolvability of point sources of incoherent light is analyzed by quantum detection theory in terms of two hypothesis-testing problems. In the first, the observer must decide whether there are two sources of equal radiant power at given locations, or whether there is only one source of twice the power located midway between them. In the second problem, either one, but not both, of two point sources is radiating, and the observer must decide which it is. The decisions are based on optimum processing of the electromagnetic field at the aperture of an optical instrument. In both problems the density operators of the field under the two hypotheses do not commute. The error probabilities, determined as functions of the separation of the points and the mean number of received photons, characterize the ultimate resolvability of the sources.

  9. AUTOCLASSIFICATION OF THE VARIABLE 3XMM SOURCES USING THE RANDOM FOREST MACHINE LEARNING ALGORITHM

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Farrell, Sean A.; Murphy, Tara; Lo, Kitty K., E-mail: s.farrell@physics.usyd.edu.au

    In the current era of large surveys and massive data sets, autoclassification of astrophysical sources using intelligent algorithms is becoming increasingly important. In this paper we present the catalog of variable sources in the Third XMM-Newton Serendipitous Source catalog (3XMM) autoclassified using the Random Forest machine learning algorithm. We used a sample of manually classified variable sources from the second data release of the XMM-Newton catalogs (2XMMi-DR2) to train the classifier, obtaining an accuracy of ∼92%. We also evaluated the effectiveness of identifying spurious detections using a sample of spurious sources, achieving an accuracy of ∼95%. Manual investigation of a random sample of classified sources confirmed these accuracy levels and showed that the Random Forest machine learning algorithm is highly effective at automatically classifying 3XMM sources. Here we present the catalog of classified 3XMM variable sources. We also present three previously unidentified unusual sources that were flagged as outlier sources by the algorithm: a new candidate supergiant fast X-ray transient, a 400 s X-ray pulsar, and an eclipsing 5 hr binary system coincident with a known Cepheid.
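    The train-then-score workflow the abstract describes (fit a Random Forest on labeled 2XMMi-DR2 sources, then measure accuracy on held-out data) can be sketched as follows. The features and labels here are synthetic placeholders, not the actual 3XMM timing/spectral features:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000
# Synthetic stand-ins for per-source features (e.g. variability and hardness measures)
X = rng.normal(size=(n, 4))
# Toy class labels depending on two of the features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)  # held-out accuracy, analogous to the quoted ~92%
```

    The same fitted forest can flag outliers by inspecting low-confidence predictions from `predict_proba`, which is one simple analogue of the outlier-flagging step mentioned in the abstract.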

  10. A NEW METHOD FOR FINDING POINT SOURCES IN HIGH-ENERGY NEUTRINO DATA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fang, Ke; Miller, M. Coleman

    The IceCube collaboration has reported the first detection of high-energy astrophysical neutrinos, including ∼50 high-energy starting events, but no individual sources have been identified. It is therefore important to develop the most sensitive and efficient possible algorithms to identify the point sources of these neutrinos. The most popular current method works by exploring a dense grid of possible directions to individual sources, and identifying the single direction with the maximum probability of having produced multiple detected neutrinos. This method has numerous strengths, but it is computationally intensive and because it focuses on the single best location for a point source, additional point sources are not included in the evidence. We propose a new maximum likelihood method that uses the angular separations between all pairs of neutrinos in the data. Unlike existing autocorrelation methods for this type of analysis, which also use angular separations between neutrino pairs, our method incorporates information about the point-spread function and can identify individual point sources. We find that if the angular resolution is a few degrees or better, then this approach reduces both false positive and false negative errors compared to the current method, and is also more computationally efficient up to, potentially, hundreds of thousands of detected neutrinos.
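    The core quantity in the proposed method, the angular separation between every pair of detected neutrinos, is straightforward to compute from unit vectors on the sphere. A minimal sketch (equatorial coordinates in radians; the full likelihood would additionally weight each pair using the point-spread function):

```python
import numpy as np

def pairwise_angular_separations(ra, dec):
    """Angular separation (radians) between all pairs of sky directions."""
    # Convert (RA, Dec) to Cartesian unit vectors
    x = np.cos(dec) * np.cos(ra)
    y = np.cos(dec) * np.sin(ra)
    z = np.sin(dec)
    v = np.stack([x, y, z], axis=1)
    cosang = np.clip(v @ v.T, -1.0, 1.0)   # dot products, clipped for arccos
    i, j = np.triu_indices(len(ra), k=1)   # all unordered pairs
    return np.arccos(cosang[i, j])

# Toy events: two 90 degrees apart on the equator, plus one coincident with the first
ra = np.array([0.0, np.pi / 2, 0.0])
dec = np.array([0.0, 0.0, 0.0])
seps = pairwise_angular_separations(ra, dec)
```

    For n events this yields n(n-1)/2 separations, which is what makes a pair-based statistic cheap compared with scanning a dense grid of candidate source directions.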

  11. Effects of pointing compared with naming and observing during encoding on item and source memory in young and older adults.

    PubMed

    Ouwehand, Kim; van Gog, Tamara; Paas, Fred

    2016-10-01

    Research showed that source memory functioning declines with ageing. Evidence suggests that encoding visual stimuli with manual pointing in addition to visual observation can have a positive effect on spatial memory compared with visual observation only. The present study investigated whether pointing at picture locations during encoding would lead to better spatial source memory than naming (Experiment 1) and visual observation only (Experiment 2) in young and older adults. Experiment 3 investigated whether response modality during the test phase would influence spatial source memory performance. Experiments 1 and 2 supported the hypothesis that pointing during encoding led to better source memory for picture locations than naming or observation only. Young adults outperformed older adults on the source memory but not the item memory task in both Experiments 1 and 2. In Experiments 1 and 2, participants manually responded in the test phase. Experiment 3 showed that if participants had to verbally respond in the test phase, the positive effect of pointing compared with naming during encoding disappeared. The results suggest that pointing at picture locations during encoding can enhance spatial source memory in both young and older adults, but only if the response modality is congruent in the test phase.

  12. X-ray Binaries in the Central Region of M31

    NASA Astrophysics Data System (ADS)

    Trudolyubov, Sergey P.; Priedhorsky, W. C.; Cordova, F. A.

    2006-09-01

    We present the results of a systematic survey of X-ray sources in the central region of M31 using data from XMM-Newton observations. The spectral properties and variability of 124 bright X-ray sources were studied in detail. We found that more than 80% of the sources observed in two or more observations show significant variability on time scales of days to years. At least 50% of the sources in our sample are spectrally variable. The fraction of variable sources in our survey is much higher than previously reported from the Chandra survey of M31, and is remarkably close to the fraction of variable sources found in the M31 globular cluster X-ray source population. We present the spectral distribution of M31 X-ray sources, based on spectral fitting with a power law model. The distribution of the spectral photon index has two main peaks at 1.8 and 2.3, and shows clear evolution with source luminosity. Based on the similarity of the properties of M31 X-ray sources and their Galactic counterparts, we expect most of the X-ray sources in our sample to be accreting binary systems with neutron star and black hole primaries. Combining the results of X-ray analysis (X-ray spectra, hardness-luminosity diagrams and variability) with available data at other wavelengths, we explore the possibility of distinguishing between bright neutron star and black hole binary systems, and identify 7% and 25% of the sources in our sample as probable black hole and neutron star candidates, respectively. Finally, we compare the M31 X-ray source population to the source populations of normal galaxies of different morphological types. Support for this work was provided through NASA Grant NAG5-12390. Part of this work was done during the summer workshop "Revealing Black Holes" at the Aspen Center for Physics; S. T. is grateful to the Center for their hospitality.

  13. Seasonal and Spatial Variability of Anthropogenic and Natural Factors Influencing Groundwater Quality Based on Source Apportionment

    PubMed Central

    Guo, Xueru; Zuo, Rui; Meng, Li; Wang, Jinsheng; Teng, Yanguo; Liu, Xin; Chen, Minhua

    2018-01-01

    Globally, groundwater resources are deteriorating under rapid social development. Thus, there is an urgent need to assess the combined impacts of natural and enhanced anthropogenic sources on groundwater chemistry. The aim of this study was to identify seasonal characteristics and spatial variations in anthropogenic and natural effects, to improve the understanding of major hydrogeochemical processes based on source apportionment. Thirty-four groundwater points located in a riverside groundwater resource area in northeast China were sampled during the wet and dry seasons in 2015. Using principal component analysis and factor analysis, 4 principal components (PCs) were extracted from 16 groundwater parameters. Three of the PCs were water-rock interaction (PC1), geogenic Fe and Mn (PC2), and agricultural pollution (PC3). A remarkable difference (PC4) was organic pollution originating from negative anthropogenic effects during the wet season, and geogenic F enrichment during the dry season. Groundwater exploitation resulted in a dramatic depression cone with a higher hydraulic gradient around the water source area. It not only intensified dissolution of calcite, dolomite, gypsum, Fe, Mn and fluorine minerals, but also induced more surface water recharge for the water source area. The spatial distribution of the PCs also suggested the center of the study area was extremely vulnerable to contamination by Fe, Mn, COD, and F−. PMID:29415516
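    The extraction step described above, pulling a few principal components out of standardized hydrochemical parameters, can be sketched like this. The data are synthetic stand-ins for the study's 34 samples and 16 parameters, not the actual measurements:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
n_samples, n_params = 34, 16
# Synthetic data driven by four underlying "sources" plus small measurement noise
latent = rng.normal(size=(n_samples, 4))
loadings = rng.normal(size=(4, n_params))
data = latent @ loadings + 0.1 * rng.normal(size=(n_samples, n_params))

# Standardize parameters (different units/scales), then extract 4 PCs
X = StandardScaler().fit_transform(data)
pca = PCA(n_components=4)
scores = pca.fit_transform(X)            # per-sample PC scores, mappable in a GIS
explained = pca.explained_variance_ratio_.sum()
```

    In a study like this, the component loadings (`pca.components_`) are then interpreted against known chemistry, e.g. a component loading heavily on Ca, Mg, and HCO3 would be read as water-rock interaction.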

  14. Processing Uav and LIDAR Point Clouds in Grass GIS

    NASA Astrophysics Data System (ADS)

    Petras, V.; Petrasova, A.; Jeziorska, J.; Mitasova, H.

    2016-06-01

    Today's methods of acquiring Earth surface data, namely lidar and unmanned aerial vehicle (UAV) imagery, non-selectively collect or generate large amounts of points. Point clouds from different sources vary in their properties such as number of returns, density, or quality. We present a set of tools with applications for different types of point clouds obtained by a lidar scanner, the structure from motion (SfM) technique, and a low-cost 3D scanner. To take advantage of the vertical structure of multiple return lidar point clouds, we demonstrate tools to process them using 3D raster techniques which allow, for example, the development of custom vegetation classification methods. Dense point clouds obtained from UAV imagery, often containing redundant points, can be decimated using various techniques before further processing. We implemented and compared several decimation techniques in regard to their performance and the final digital surface model (DSM). Finally, we will describe the processing of a point cloud from a low-cost 3D scanner, namely Microsoft Kinect, and its application for interaction with physical models. All the presented tools are open source and integrated in GRASS GIS, a multi-purpose open source GIS with remote sensing capabilities. The tools integrate with other open source projects, specifically the Point Data Abstraction Library (PDAL), the Point Cloud Library (PCL), and the OpenKinect libfreenect2 library, to benefit from the open source point cloud ecosystem. The implementation in GRASS GIS ensures long-term maintenance and reproducibility not only by the scientific community but also by the original authors themselves.
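    One of the simplest decimation strategies of the kind compared in the paper is grid-based thinning: keep a single point per planimetric cell. This is an illustrative sketch, not the GRASS GIS implementation (the corresponding GRASS module, v.decimate, offers several strategies):

```python
import numpy as np

def grid_decimate(points, cell):
    """Keep the first point that falls in each XY grid cell."""
    keys = np.floor(points[:, :2] / cell).astype(int)  # XY cell index per point
    _, idx = np.unique(keys, axis=0, return_index=True)
    return points[np.sort(idx)]

pts = np.array([[0.1, 0.1, 5.0],
                [0.2, 0.3, 6.0],   # falls in the same 1x1 cell as the first point
                [1.5, 0.2, 4.0],
                [1.7, 1.8, 7.0]])
thin = grid_decimate(pts, cell=1.0)  # 3 points survive out of 4
```

    Choosing the cell size trades point-cloud size against DSM detail, which is exactly the performance-versus-surface-quality comparison the abstract describes.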

  15. Polychlorinated Biphenyl (PCB) Bioaccumulation in Fish: A Look at Michigan's Upper Peninsula

    NASA Astrophysics Data System (ADS)

    Sokol, E. C.; Urban, N. R.; Perlinger, J. A.; Khan, T.; Friedman, C. L.

    2014-12-01

    Fish consumption is an important economic, social and cultural component of Michigan's Upper Peninsula, where safe fish consumption is threatened due to polychlorinated biphenyl (PCB) contamination. Despite its predominantly rural nature, the Upper Peninsula has a history of industrial PCB use. PCB congener concentrations in fish vary 50-fold in Upper Peninsula lakes. Several factors may contribute to this high variability including local point sources, unique watershed and lake characteristics, and food web structure. It was hypothesized that the variability in congener distributions could be used to identify factors controlling concentrations in fish, and then to use those factors to predict PCB contamination in fish from lakes that had not been monitored. Watershed and lake characteristics were acquired from several databases for 16 lakes sampled in the State's fish contaminant survey. Species congener distributions were compared using Principal Component Analysis (PCA) to distinguish between lakes with local vs. regional, atmospheric sources; six lakes were predicted to have local sources and half of those have confirmed local PCB use. For lakes without local PCB sources, PCA indicated that lake size was the primary factor influencing PCB concentrations. The EPA's bioaccumulation model, BASS, was used to predict PCB contamination in lakes without local sources as a function of food web characteristics. The model was used to evaluate the hypothesis that deep, oligotrophic lakes have longer food webs and higher PCB concentrations in top predator fish. Based on these findings, we will develop a mechanistic watershed-lake model to predict PCB concentrations in fish as a function of atmospheric PCB concentrations, lake size, and trophic state. Future atmospheric concentrations, predicted by modeling potential primary and secondary emission scenarios, will be used to predict the time horizon for safe fish consumption.

  16. Predicting Phosphorus Dynamics Across Physiographic Regions Using a Mixed Hortonian Non-Hortonian Hydrology Model

    NASA Astrophysics Data System (ADS)

    Collick, A.; Easton, Z. M.; Auerbach, D.; Buchanan, B.; Kleinman, P. J. A.; Fuka, D.

    2017-12-01

    Predicting phosphorus (P) loss from agricultural watersheds depends on accurate representation of the hydrological and chemical processes governing P mobility and transport. In complex landscapes, P predictions are complicated by a broad range of soils with and without restrictive layers, a wide variety of agricultural management, and variable hydrological drivers. The Soil and Water Assessment Tool (SWAT) is a watershed model commonly used to predict runoff and non-point source pollution transport, but it is typically applied with only Hortonian (traditional SWAT) or non-Hortonian (SWAT-VSA) initializations. Many shallow soils underlain by a restricting layer commonly generate saturation excess runoff from variable source areas (VSA), which is well represented in a re-conceptualized version, SWAT-VSA. However, many watersheds exhibit traits of both infiltration excess and saturation excess hydrology internally, based on the hydrologic distance from the stream, the distribution of soils across the landscape, and the characteristics of restricting layers. The objective of this research is to provide an initial look at integrating distributed predictive capabilities that consider both Hortonian and non-Hortonian solutions simultaneously within a single SWAT-VSA initialization. We compare results from all three conceptual watershed initializations against measured surface runoff and stream P loads, and highlight the model's ability to drive sub-field management of P. All three initializations predict discharge similarly well (daily Nash-Sutcliffe Efficiencies above 0.5), but the new conceptual SWAT-VSA initialization performed best in predicting P export from the watershed, while also identifying critical source areas, those areas generating large runoff and P losses at the sub-field level.
These results support the use of mixed Hortonian non-Hortonian SWAT-VSA initializations in predicting watershed-scale P losses and identifying critical source areas of P loss in landscapes with VSA hydrology.

  17. Airborne pollutant concentrations and health risks in selected Apulia region (IT) areas: preliminary results from the Jonico-Salentino project

    NASA Astrophysics Data System (ADS)

    Buccolieri, Riccardo; Genga, Alessandra; De Donno, Antonella; Siciliano, Tiziana; Siciliano, Maria; Serio, Francesca; Grassi, Tiziana; Rispoli, Gennaro; Cavaiola, Mattia; Lionello, Piero

    2017-04-01

    The Jonico-Salentino project (PJS) is a multidisciplinary study funded by the Apulia Region (Det. N. 188_RU - 10/11/2015) aiming to assess the health risk of people living in the cities of Lecce, Brindisi and Taranto. Citizens are exposed to emissions from industrial sources, biomass burning, vehicular, naval and air traffic, as well as from natural radioactive sources (radon). In this context, this work presents some preliminary results obtained by the Unit of the University of Salento (Lecce) during an experimental campaign carried out in the study areas. The campaign is devoted to (i) sampling particulate matter (PM), (ii) measuring micro-meteorological variables and (iii) evaluating exposure levels of residents to the main pollutants. Specifically, PM is sampled using a low volume sampler, while meteorological variables (wind speed components and direction, temperature, relative humidity, precipitation and global solar radiation) are measured by advanced instrumentation such as ultrasonic anemometers, which allow for the estimation of turbulent fluxes. The early effects of exposure to air pollutants are evaluated by the frequency of micronuclei (a biomarker of DNA damage) in exfoliated buccal cells collected using a soft-bristled toothbrush from the oral mucosa of primary school children enrolled in the study. PM concentration data collected during the campaign are characterised from a chemical and morphological point of view; the analysis of different groups of particles allows the identification of different natural and anthropogenic emission sources. This is done in conjunction with the investigation of the influence of local meteorology to elucidate the contribution of specific types of sources on final concentration levels. Finally, all data are used to assess the health risk of people living in the study areas as a consequence of exposure to airborne pollutants.

  18. Multiwavelength search and studies of active galaxies and quasars

    NASA Astrophysics Data System (ADS)

    Mickaelian, Areg M.

    2017-12-01

    The Byurakan Astrophysical Observatory (BAO) has always been one of the centres for surveys and studies of active galaxies. Here we review our search and studies of active galaxies during the last 30 years using various wavelength ranges, as well as some recent related works. These projects since the late 1980s were focused on multiwavelength search and studies of AGN and starbursts (SB). 1103 blue stellar objects (BSOs) were selected on the basis of their UV excess using Markarian Survey (First Byurakan Survey, FBS) plates and the criteria Markarian used for the galaxies. Among many blue stars, QSOs and Seyfert galaxies were found by follow-up observations. 1577 IRAS point sources were optically identified using FBS low-dispersion spectra and many AGN, SB and high-luminosity IR galaxies (LIRG/ULIRG) were discovered. 32 extremely high IR/optical flux ratio galaxies were studied with Spitzer. 2791 ROSAT FSC sources were optically identified using Hamburg Quasar Survey (HQS) low-dispersion spectra and many AGN were discovered by follow-up observations. Fine analysis of emission line spectra was carried out using spectral line decomposition software to establish true profiles and calculate physical parameters for the emitting regions, as well as to study the spectral variability of these objects. X-ray and radio selection criteria were used to find new AGN and variable objects for further studies. We have estimated the AGN content of X-ray sources as 52.9%. We have also combined the IRAS PSC and FSC catalogs and compiled their extragalactic sample, which allowed us to estimate the AGN content among IR sources as 23.7%. The multiwavelength approach allowed revealing many new AGN and SB and obtaining a number of interesting relations using their observational characteristics and physical properties.

  19. Incentive Analysis for Clean Water Act Reauthorization: Point Source/Nonpoint Source Trading for Nutrient Discharge Reductions (1992)

    EPA Pesticide Factsheets

    Paper focuses on trading schemes in which regulated point sources are allowed to avoid upgrading their pollution control technology to meet water quality-based effluent limits if they pay for equivalent (or greater) reductions in nonpoint source pollution.

  20. Microbial Source Module (MSM): Documenting the Science and Software for Discovery, Evaluation, and Integration

    EPA Science Inventory

    The Microbial Source Module (MSM) estimates microbial loading rates to land surfaces from non-point sources, and to streams from point sources for each subwatershed within a watershed. A subwatershed, the smallest modeling unit, represents the common basis for information consume...

  1. A Chandra Observation of the Ultraluminous Infrared Galaxy IRAS 19254-7245 (THE SUPERANTENNAE): X-Ray Emission From the Compton-Thick Active Galactic Nucleus and the Diffuse Starburst

    NASA Technical Reports Server (NTRS)

    Jia, Jianjun; Ptak, Andrew Francis; Heckman, Timothy M.; Braito, Valantina; Reeves, James

    2012-01-01

    We present a Chandra observation of IRAS 19254-7245, a nearby ultraluminous infrared galaxy also known as the Superantennae. The high spatial resolution of Chandra allows us to disentangle for the first time the diffuse starburst (SB) emission from the embedded Compton-thick active galactic nucleus (AGN) in the southern nucleus. No AGN activity is detected in the northern nucleus. The 2-10 keV spectrum of the AGN emission is fitted by a flat power law (Γ = 1.3) and an He-like Fe Kα line with equivalent width 1.5 keV, consistent with previous observations. The Fe Kα line profile could be resolved as a blend of a neutral 6.4 keV line and an ionized 6.7 keV (He-like) or 6.9 keV (H-like) line. Variability of the neutral line is detected compared with the previous XMM-Newton and Suzaku observations, demonstrating the compact size of the iron line emission. The spectrum of the galaxy-scale extended emission, excluding the AGN and other bright point sources, is fitted with a thermal component with a best-fit kT of 0.8 keV. The 2-10 keV luminosity of the extended emission is about one order of magnitude lower than that of the AGN. The basic physical and structural properties of the extended emission are fully consistent with a galactic wind being driven by the SB. A candidate ultraluminous X-ray source is detected 8″ south of the southern nucleus. The 0.3-10 keV luminosity of this off-nuclear point source is 6 × 10⁴⁰ erg s⁻¹ if the emission is isotropic and the source is associated with the Superantennae.

  2. Improving source identification of Atlanta aerosol using temperature resolved carbon fractions in positive matrix factorization

    NASA Astrophysics Data System (ADS)

    Kim, Eugene; Hopke, Philip K.; Edgerton, Eric S.

    Daily integrated PM 2.5 (particulate matter ⩽2.5 μm in aerodynamic diameter) composition data including eight individual carbon fractions collected at the Jefferson Street monitoring site in Atlanta were analyzed with positive matrix factorization (PMF). Particulate carbon was analyzed using the thermal optical reflectance method, which divides carbon into four organic carbon (OC) fractions, pyrolyzed organic carbon (OP), and three elemental carbon (EC) fractions. A total of 529 samples and 28 variables were measured between August 1998 and August 2000. PMF identified 11 sources in this study: sulfate-rich secondary aerosol I (50%), on-road diesel emissions (11%), nitrate-rich secondary aerosol (9%), wood smoke (7%), gasoline vehicle (6%), sulfate-rich secondary aerosol II (6%), metal processing (3%), airborne soil (3%), railroad traffic (3%), cement kiln/carbon-rich (2%), and bus maintenance facility/highway traffic (2%). Differences from previous studies using only the traditional OC and EC data (J. Air Waste Manag. Assoc. 53(2003a)731; Atmos. Environ. (2003b)) include four traffic-related combustion sources (gasoline vehicle, on-road diesel, railroad, and bus maintenance facility) containing carbon fractions whose abundances differed between the various sources. This study indicates that temperature resolved fractional carbon data can be utilized to enhance source apportionment studies, especially with respect to the separation of diesel emissions from gasoline vehicle sources. Conditional probability functions using surface wind data and identified source contributions aided the identification of local point sources.
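    PMF factorizes the samples-by-species concentration matrix into non-negative source contributions and source profiles, weighting residuals by measurement uncertainty. A rough sketch of the unweighted analogue using scikit-learn's NMF, on synthetic data with the study's dimensions (529 samples, 28 species, 11 sources); real PMF tools (e.g. EPA PMF) additionally handle uncertainties and rotational ambiguity:

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(2)
# Synthetic concentrations mixed from 11 non-negative sources
G_true = rng.random((529, 11))   # source contributions per sample
F_true = rng.random((11, 28))    # source profiles per species
X = G_true @ F_true

model = NMF(n_components=11, init="nndsvda", max_iter=500, random_state=0)
G = model.fit_transform(X)       # estimated contributions
F = model.components_            # estimated profiles
recon_err = np.linalg.norm(X - G @ F) / np.linalg.norm(X)
```

    The profiles in `F` are what get interpreted physically, e.g. a factor rich in specific EC fractions being labeled as diesel emissions.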

  3. 40 CFR 414.90 - Applicability; description of the subcategory of direct discharge point sources that use end-of...

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... ORGANIC CHEMICALS, PLASTICS, AND SYNTHETIC FIBERS Direct Discharge Point Sources That Use End-of-Pipe... subcategory of direct discharge point sources that use end-of-pipe biological treatment. 414.90 Section 414.90... that use end-of-pipe biological treatment. The provisions of this subpart are applicable to the process...

  4. 40 CFR Table 4 to Part 455 - BAT and NSPS Effluent Limitations for Priority Pollutants for Direct Discharge Point Sources That...

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 29 2010-07-01 2010-07-01 false BAT and NSPS Effluent Limitations for Priority Pollutants for Direct Discharge Point Sources That use End-of-Pipe Biological Treatment 4 Table 4... Limitations for Priority Pollutants for Direct Discharge Point Sources That use End-of-Pipe Biological...

  5. Multi-rate, real time image compression for images dominated by point sources

    NASA Technical Reports Server (NTRS)

    Huber, A. Kris; Budge, Scott E.; Harris, Richard W.

    1993-01-01

    An image compression system recently developed for compression of digital images dominated by point sources is presented. Encoding consists of minimum-mean removal, vector quantization, adaptive threshold truncation, and modified Huffman encoding. Simulations are presented showing that the peaks corresponding to point sources can be transmitted losslessly for low signal-to-noise ratios (SNR) and high point source densities while maintaining a reduced output bit rate. Encoding and decoding hardware has been built and tested which processes 552,960 12-bit pixels per second at compression rates of 10:1 and 4:1. Simulation results are presented for the 10:1 case only.

  6. Parameter uncertainty and variability in evaluative fate and exposure models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hertwich, E.G.; McKone, T.E.; Pease, W.S.

    The human toxicity potential, a weighting scheme used to evaluate toxic emissions for life cycle assessment and toxics release inventories, is based on potential dose calculations and toxicity factors. This paper evaluates the variance in potential dose calculations that can be attributed to the uncertainty in chemical-specific input parameters as well as the variability in exposure factors and landscape parameters. A knowledge of the uncertainty allows us to assess the robustness of a decision based on the toxicity potential; a knowledge of the sources of uncertainty allows one to focus resources if the uncertainty is to be reduced. The potential dose of 236 chemicals was assessed. The chemicals were grouped by dominant exposure route, and a Monte Carlo analysis was conducted for one representative chemical in each group. The variance is typically one to two orders of magnitude. For comparison, the point estimates in potential dose for 236 chemicals span ten orders of magnitude. Most of the variance in the potential dose is due to chemical-specific input parameters, especially half-lives, although exposure factors such as fish intake and the source of drinking water can be important for chemicals whose dominant exposure is through indirect routes. Landscape characteristics are generally of minor importance.
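    The Monte Carlo analysis described, propagating parameter uncertainty through a potential-dose calculation and reporting the spread in orders of magnitude, can be sketched as follows. The toy model and the lognormal input distributions are illustrative assumptions, not those of the paper:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 100_000

# Hypothetical uncertain inputs: half-life dominates, intake rate contributes less
half_life = rng.lognormal(mean=np.log(30.0), sigma=1.0, size=n)  # days
intake = rng.lognormal(mean=np.log(2.0), sigma=0.3, size=n)      # L/day
emission = 1.0                                                    # unit emission

# Toy potential-dose model: dose proportional to persistence times intake
dose = emission * half_life * intake

# Spread of the 95% interval, expressed in orders of magnitude
spread = np.log10(np.percentile(dose, 97.5) / np.percentile(dose, 2.5))
```

    With these assumed sigmas the 95% interval spans roughly 1.8 orders of magnitude, consistent with the "one to two orders of magnitude" variance the abstract reports; varying one input at a time while fixing the others is the standard way to attribute that variance to individual parameters.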

  7. Variability of ICA decomposition may impact EEG signals when used to remove eyeblink artifacts

    PubMed Central

    PONTIFEX, MATTHEW B.; GWIZDALA, KATHRYN L.; PARKS, ANDREW C.; BILLINGER, MARTIN; BRUNNER, CLEMENS

    2017-01-01

    Despite the growing use of independent component analysis (ICA) algorithms for isolating and removing eyeblink-related activity from EEG data, we have limited understanding of how variability associated with ICA uncertainty may be influencing the reconstructed EEG signal after removing the eyeblink artifact components. To characterize the magnitude of this ICA uncertainty and to understand the extent to which it may influence findings within ERP and EEG investigations, ICA decompositions of EEG data from 32 college-aged young adults were repeated 30 times for three popular ICA algorithms. Following each decomposition, eyeblink components were identified and removed. The remaining components were back-projected, and the resulting clean EEG data were further used to analyze ERPs. Findings revealed that ICA uncertainty results in variation in P3 amplitude as well as variation across all EEG sampling points, but differs across ICA algorithms as a function of the spatial location of the EEG channel. This investigation highlights the potential of ICA uncertainty to introduce additional sources of variance when the data are back-projected without artifact components. Careful selection of ICA algorithms and parameters can reduce the extent to which ICA uncertainty may introduce an additional source of variance within ERP/EEG studies. PMID:28026876
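    The repeat-and-compare procedure described above can be illustrated on toy data: run ICA several times with different random initializations and check how well components from different runs line up. A minimal sketch with scikit-learn's FastICA (one of several ICA algorithms; this is not the EEG pipeline itself):

```python
import numpy as np
from sklearn.decomposition import FastICA

# Two toy sources (a sinusoid and a square wave) mixed into two channels
t = np.linspace(0, 1, 2000)
S = np.c_[np.sin(40 * t), np.sign(np.sin(13 * t))]
A = np.array([[1.0, 0.5], [0.5, 1.0]])  # mixing matrix
X = S @ A.T

# Repeat the decomposition with different random initializations
recovered = [
    FastICA(n_components=2, random_state=seed, max_iter=1000).fit_transform(X)
    for seed in range(5)
]

# Components can come back in a different order and sign across runs;
# match runs 0 and 1 by maximum absolute correlation
c = np.abs(np.corrcoef(recovered[0].T, recovered[1].T)[:2, 2:])
```

    On clean toy mixtures the runs agree almost perfectly; on real EEG, the run-to-run disagreement that remains after this matching step is exactly the ICA uncertainty the study quantifies.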

  8. State Firearm Laws and Interstate Transfer of Guns in the USA, 2006-2016.

    PubMed

    Collins, Tessa; Greenberg, Rachael; Siegel, Michael; Xuan, Ziming; Rothman, Emily F; Cronin, Shea W; Hemenway, David

    2018-06-01

    In a cross-sectional, panel study, we examined the relationship between state firearm laws and the extent of interstate transfer of guns, as measured by the percentage of crime guns recovered in a state and traced to an in-state source (as opposed to guns recovered in a state and traced to an out-of-state source). We used 2006-2016 data on state firearm laws obtained from a search of selected state statutes and 2006-2016 crime gun trace data from the Bureau of Alcohol, Tobacco, Firearms, and Explosives. We examined the relationship between state firearm laws and interstate transfer of guns using annual data from all 50 states during the period 2006-2016 and employing a two-way fixed effects model. The primary outcome variable was the percentage of crime guns recovered in a state that could be traced to an original point of purchase within that state as opposed to another state. The main exposure variables were eight specific state firearm laws pertaining to dealer licensing, sales restrictions, background checks, registration, prohibitors for firearm purchase, and straw purchase of guns. Four laws were independently associated with a significantly lower percentage of in-state guns: a waiting period for handgun purchase, permits required for firearm purchase, prohibition of firearm possession by people convicted of a violent misdemeanor, and a requirement for relinquishment of firearms when a person becomes disqualified from owning them. States with a higher number of gun laws had a lower percentage of crime guns traced to in-state dealers; each increase of one in the total number of laws was associated with a decrease of 1.6 percentage points in the proportion of recovered guns that were traced to an in-state as opposed to an out-of-state source. Based on an examination of the movement patterns of guns across states, the overall observed pattern of gun flow was out of states with weak gun laws and into states with strong gun laws.
These findings indicate that certain state firearm laws are associated with a lower percentage of recovered crime guns being traced to an in-state source, suggesting reduced access to guns in states with those laws.
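    The two-way fixed effects design described above can be sketched with the standard within transformation, which for a balanced panel removes state and year means before estimating the slope. The data below are synthetic and every value (including the true effect of -1.6, chosen to mirror the figure in the text) is illustrative, not the study's data:

```python
import numpy as np

rng = np.random.default_rng(0)
n_states, n_years = 50, 11           # 50 states, annual data 2006-2016
beta = -1.6                          # true effect per additional law (illustrative)

states = np.repeat(np.arange(n_states), n_years)
years = np.tile(np.arange(n_years), n_states)
laws = rng.integers(0, 9, size=n_states * n_years).astype(float)   # 0-8 laws in force
y = (beta * laws
     + np.repeat(rng.normal(0.0, 5.0, n_states), n_years)   # state fixed effects
     + np.tile(rng.normal(0.0, 5.0, n_years), n_states)     # year fixed effects
     + rng.normal(0.0, 1.0, n_states * n_years))            # idiosyncratic noise

def within_transform(v):
    """Two-way within transformation for a balanced panel:
    subtract state and year means, add back the grand mean."""
    state_mean = np.array([v[states == s].mean() for s in range(n_states)])
    year_mean = np.array([v[years == t].mean() for t in range(n_years)])
    return v - state_mean[states] - year_mean[years] + v.mean()

y_w, x_w = within_transform(y), within_transform(laws)
beta_hat = (x_w @ y_w) / (x_w @ x_w)
print(round(float(beta_hat), 2))  # close to the true beta of -1.6
```

The double demeaning removes any additive state and year effects exactly, so the slope is identified from within-state, within-year variation only.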

  9. [Spatial heterogeneity and classified control of agricultural non-point source pollution in Huaihe River Basin].

    PubMed

    Zhou, Liang; Xu, Jian-Gang; Sun, Dong-Qi; Ni, Tian-Hua

    2013-02-01

    Agricultural non-point source pollution plays an important role in river deterioration; identifying key source areas and concentrating control efforts on them is therefore the most effective approach to non-point source pollution control. This study adopts an inventory method to analyze four kinds of pollution sources and their emission intensities for chemical oxygen demand (COD), total nitrogen (TN), and total phosphorus (TP) in 173 counties (cities, districts) in the Huaihe River Basin. The four pollution sources are livestock breeding, rural life, farmland cultivation, and aquaculture. The paper mainly addresses the identification of non-point pollution sensitivity areas, key pollution sources, and their spatial distribution characteristics through clustering, sensitivity evaluation, and spatial analysis, carried out with a geographic information system (GIS) and SPSS. The results show that the COD, TN and TP emissions of agricultural non-point sources in the Huaihe River Basin in 2009 were 206.74 x 10(4) t, 66.49 x 10(4) t, and 8.74 x 10(4) t respectively; the emission intensities were 7.69, 2.47, and 0.32 t.hm-2; and the proportions of COD, TN, and TP emissions were 73%, 24%, and 3%. The major pollution sources of COD, TN and TP were livestock breeding and rural life. The sensitivity areas and priority pollution control areas for non-point source pollution within the basin are some sub-basins of the upper branches of the Huaihe River, such as the Shahe, Yinghe, Beiru, Jialu and Qingyi Rivers; livestock breeding is the key pollution source in the priority control areas. Finally, the paper concludes that rural life has the highest pollution contribution rate, while comprehensive pollution is a type that is hard to control.
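    The reported emission proportions can be checked directly from the stated totals (values copied from the abstract; the rounding is mine):

```python
# COD, TN, TP agricultural non-point source emissions, Huaihe River Basin, 2009
# (units: 1e4 t, values as given in the abstract)
emissions = {"COD": 206.74, "TN": 66.49, "TP": 8.74}
total = sum(emissions.values())
shares = {k: round(100 * v / total) for k, v in emissions.items()}
print(shares)  # {'COD': 73, 'TN': 24, 'TP': 3} -- matches the quoted proportions
```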

  10. Identification of Active Galactic Nuclei through HST optical variability in the GOODS South field

    NASA Astrophysics Data System (ADS)

    Pouliasis, Ektoras; Georgantopoulos; Bonanos, A.; HCV Team

    2016-08-01

    This work aims to identify AGN in the GOODS South deep field through optical variability. This method can easily identify low-luminosity AGN. In particular, we use images in the z-band obtained from the Hubble Space Telescope with the ACS/WFC camera over 5 epochs separated by ~45 days. Aperture photometry has been performed using SExtractor to extract the lightcurves. Several variability indices, such as the median absolute deviation, excess variance, and sigma, were applied to automatically identify the variable sources. After removing artifacts, stars and supernovae from the variability-selected sample and keeping only those sources with a known photometric or spectroscopic redshift, the optical variability was compared to variability at other wavelengths (X-rays, mid-IR, radio). This multi-wavelength study provides important constraints on the structure and the properties of the AGN and their relation to their hosts. This work is part of the validation of the Hubble Catalog of Variables (HCV) project, which has been launched at the National Observatory of Athens by ESA, and aims to identify all sources (pointlike and extended) showing variability, based on the Hubble Source Catalog (HSC, Whitmore et al. 2015). The HSC version 1 was released in February 2015 and includes 80 million sources imaged with the WFPC2, ACS/WFC, WFC3/UVIS and WFC3/IR cameras.
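    Two of the indices named here, the median absolute deviation and the normalized excess variance, can be sketched as follows. The toy 5-epoch lightcurves and the exact normalization are assumptions for illustration, not the HCV team's implementation:

```python
import numpy as np

def median_absolute_deviation(flux):
    """MAD of a lightcurve: a robust spread estimate, insensitive to single-epoch outliers."""
    return np.median(np.abs(flux - np.median(flux)))

def excess_variance(flux, flux_err):
    """Normalized excess variance: variance in excess of measurement noise,
    scaled by the squared mean flux."""
    return (np.var(flux) - np.mean(flux_err ** 2)) / np.mean(flux) ** 2

# toy 5-epoch lightcurves (hypothetical fluxes, ~45-day cadence as in the text)
steady = np.array([10.0, 10.1, 9.9, 10.0, 10.0])
variable = np.array([10.0, 14.0, 8.0, 13.0, 7.0])
err = np.full(5, 0.1)

# the variable source stands out on both indices
assert excess_variance(variable, err) > excess_variance(steady, err)
```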

  11. Gamma-ray Monitoring of Active Galactic Nuclei with HAWC

    NASA Astrophysics Data System (ADS)

    Lauer, Robert; HAWC Collaboration

    2016-03-01

    Active Galactic Nuclei (AGN) are extra-galactic sources that can exhibit extreme flux variability over a wide range of wavelengths. TeV gamma rays have been observed from about 60 AGN and can help to diagnose emission models and to study cosmic features like extra-galactic background light or inter-galactic magnetic fields. The High Altitude Water Cherenkov (HAWC) observatory is a new extensive air shower array that can complement the pointed TeV observations of imaging air Cherenkov telescopes. HAWC is optimized for studying gamma rays with energies between 100 GeV and 100 TeV and has an instantaneous field of view of ~2 sr and a duty cycle >95%, which allow us to scan 2/3 of the sky every day. By performing unbiased monitoring of TeV emission from AGN over most of the northern and part of the southern sky, HAWC can provide crucial information and trigger follow-up observations in collaboration with pointed TeV instruments. Furthermore, HAWC coverage of AGN is complementary to that provided by the Fermi satellite at lower energies. In this contribution, we will present HAWC flux light curves of TeV gamma rays from various sources, notably the bright AGN Markarian 421 and Markarian 501, and highlight recent results from multi-wavelength and multi-instrument studies.

  12. Botulinum Toxin for the Treatment of Myofascial Pain Syndromes Involving the Neck and Back: A Review from a Clinical Perspective

    PubMed Central

    Climent, José M.; Fenollosa, Pedro; Martin-del-Rosario, Francisco

    2013-01-01

    Introduction. Botulinum toxin inhibits acetylcholine (ACh) release and probably blocks some nociceptive neurotransmitters. It has been suggested that the development of myofascial trigger points (MTrP) is related to an excess release of ACh to increase the number of sensitized nociceptors. Although the use of botulinum toxin to treat myofascial pain syndrome (MPS) has been investigated in many clinical trials, the results are contradictory. The objective of this paper is to identify sources of variability that could explain these differences in the results. Material and Methods. We performed a content analysis of the clinical trials and systematic reviews of MPS. Results and Discussion. Sources of differences in studies were found in the diagnostic and selection criteria, the muscles injected, the injection technique, the number of trigger points injected, the dosage of botulinum toxin used, treatments for control group, outcome measures, and duration of followup. The contradictory results regarding the efficacy of botulinum toxin A in MPS associated with neck and back pain do not allow this treatment to be recommended or rejected. There is evidence that botulinum toxin could be useful in specific myofascial regions such as piriformis syndrome. It could also be useful in patients with refractory MPS that has not responded to other myofascial injection therapies. PMID:23533477

  13. Natural selection and self-organization in complex adaptive systems.

    PubMed

    Di Bernardo, Mirko

    2010-01-01

    The central theme of this work is self-organization, interpreted both from the point of view of theoretical biology and from a philosophical point of view. By analysing, on the one hand, what are now considered--not only in physics--some of the most important discoveries, namely complex systems and deterministic chaos, and, on the other hand, the new frontiers of systems biology, this work highlights how large open thermodynamic systems can spontaneously settle into an orderly regime. Such systems can represent the natural source of the order required for stable self-organization, for homoeostasis and for hereditary variation. The order emerging in enormous, randomly interconnected networks of binary variables is almost certainly only the precursor of similar orders emerging in all varieties of complex systems. Hence this work, by finding new foundations for the order pervading the living world, advances the daring hypothesis that Darwinian natural selection is not the only source of order in the biosphere. By examining the passage from Prigogine's theory of dissipative structures to the contemporary theory of biological complexity, the article highlights the development of a coherent and continuous line of research that aims to identify the general principles underlying the mysterious self-organization characterizing the complexity of life.

  14. 3DFEMWATER: A three-dimensional finite element model of water flow through saturated-unsaturated media

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yeh, G.T.

    1987-08-01

    The 3DFEMWATER model is designed to treat heterogeneous and anisotropic media consisting of as many geologic formations as desired, consider both distributed and point sources/sinks that are spatially and temporally dependent, accept the prescribed initial conditions or obtain them by simulating a steady state version of the system under consideration, deal with a transient head distributed over the Dirichlet boundary, handle time-dependent fluxes due to pressure gradient varying along the Neumann boundary, treat time-dependent total fluxes distributed over the Cauchy boundary, automatically determine variable boundary conditions of evaporation, infiltration, or seepage on the soil-air interface, include the off-diagonal hydraulic conductivity components in the modified Richards equation for dealing with cases when the coordinate system does not coincide with the principal directions of the hydraulic conductivity tensor, give three options for estimating the nonlinear matrix, include two options (successive subregion block iterations and successive point iterations) for solving the linearized matrix equations, automatically reset time step size when boundary conditions or sources/sinks change abruptly, and check the mass balance computation over the entire region for every time step. The model is verified with analytical solutions or other numerical models for three examples.

  15. A method to analyze "source-sink" structure of non-point source pollution based on remote sensing technology.

    PubMed

    Jiang, Mengzhen; Chen, Haiying; Chen, Qinghui

    2013-11-01

    With the purpose of providing a scientific basis for environmental planning for non-point source pollution prevention and control, and improving pollution regulation efficiency, this paper established the Grid Landscape Contrast Index based on the Location-weighted Landscape Contrast Index according to the "source-sink" theory. The spatial distribution of non-point source pollution in the Jiulongjiang Estuary could be worked out by utilizing high-resolution remote sensing images. The results showed that the area of the "source" of nitrogen and phosphorus in the Jiulongjiang Estuary was 534.42 km(2) in 2008, and the "sink" was 172.06 km(2). The "source" of non-point source pollution was distributed mainly over Xiamen island, most of Haicang, the east of Jiaomei and the river banks of Gangwei and Shima; the "sink" was distributed over the southwest of Xiamen island and the west of Shima. Generally speaking, the intensity of the "source" weakens as the distance from the sea boundary increases, while the "sink" gets stronger. Copyright © 2013 Elsevier Ltd. All rights reserved.

  16. An empirical model to predict road dust emissions based on pavement and traffic characteristics.

    PubMed

    Padoan, Elio; Ajmone-Marsan, Franco; Querol, Xavier; Amato, Fulvio

    2018-06-01

    The relative impact of non-exhaust sources (i.e. road dust, tire wear, road wear and brake wear particles) on urban air quality is increasing. Among them, road dust resuspension generally has the highest impact on PM concentrations, but its spatio-temporal variability has rarely been studied and modeled. Some recent studies attempted to observe and describe the time variability but, as it is driven by traffic and meteorology, uncertainty remains on the seasonality of emissions. The knowledge gap on spatial variability is much wider, as several factors have been pointed out as responsible for road dust build-up: pavement characteristics, traffic intensity and speed, fleet composition, proximity to traffic lights, but also the presence of external sources. However, no parameterization is available as a function of these variables. We investigated mobile road dust smaller than 10 μm (MF10) in two cities with different climatic and traffic conditions (Barcelona and Turin), to explore MF10 seasonal variability and the relationship between MF10 and site characteristics (pavement macrotexture, traffic intensity and proximity to braking zones). Moreover, we provide the first estimates of emission factors in the Po Valley in both summer and winter conditions. Our results showed a good inverse relationship between MF10 and macrotexture, traffic intensity and distance from the nearest braking zone. We also found a clear seasonal effect on road dust emissions, with higher emission in summer, likely due to the lower pavement moisture. These results allowed the building of a simple empirical model, predicting maximal dust loadings and, consequently, emission potential, based on the aforementioned data. This model will need to be scaled for meteorological effects, using methods accounting for weather and pavement moisture.
    This can significantly improve bottom-up emission inventories for spatial allocation of emissions and air quality management, to select those roads with higher emissions for mitigation measures. Copyright © 2017 Elsevier Ltd. All rights reserved.
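    An empirical model of this kind can be sketched as a log-linear regression of mobile dust load on the three site variables. Everything below (predictor ranges, coefficients, noise level) is synthetic and hypothetical, intended only to illustrate the fitting step, not to reproduce the paper's model:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
macrotexture = rng.uniform(0.3, 1.5, n)    # pavement mean profile depth, mm (hypothetical range)
traffic = rng.uniform(2e3, 4e4, n)         # vehicles per day
dist_braking = rng.uniform(5.0, 500.0, n)  # metres to the nearest braking zone

# assumed inverse relationships: MF10 load falls as each predictor rises (log-log form)
true_coefs = np.array([3.0, -1.2, -0.4, -0.3])   # intercept + three slopes, all hypothetical
X = np.column_stack([np.ones(n), np.log(macrotexture), np.log(traffic), np.log(dist_braking)])
log_mf10 = X @ true_coefs + rng.normal(0.0, 0.05, n)

# ordinary least squares recovers the coefficients from the synthetic survey
coefs, *_ = np.linalg.lstsq(X, log_mf10, rcond=None)
```

With real measurements, the fitted coefficients would give the "maximal dust loading" for a road segment from its macrotexture, traffic count, and distance to the nearest braking zone.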

  17. Constraining the Enceladus plume using numerical simulation and Cassini data

    NASA Astrophysics Data System (ADS)

    Yeoh, Seng Keat; Li, Zheng; Goldstein, David B.; Varghese, Philip L.; Levin, Deborah A.; Trafton, Laurence M.

    2017-01-01

    Since its discovery, the Enceladus plume has been subjected to intense study due to the major effects that it has on the Saturnian system and the window that it provides into the interior of Enceladus. However, several questions remain and we attempt to answer some of them in this work. In particular, we aim to constrain the H2O production rate from the plume, evaluate the relative importance of the jets and the distributed sources along the Tiger Stripes, and make inferences about the source of the plume by accurately modeling the plume and constraining the model using the Cassini INMS and UVIS data. This is an extension of a previous work (Yeoh, S.K., et al. [2015] Icarus, 253, 205-222) in which we only modeled the collisional part of the Enceladus plume and studied its important physical processes. In this work, we propagate the plume farther into space where the flow has become free-molecular and the Cassini INMS and UVIS data were sampled. Then, we fit this part of the plume to the INMS H2O density distributions sampled along the E3, E5 and E7 trajectories and also compare some of the fit results with the UVIS measurements of the plume optical depth collected during the solar occultation observation on 18 May 2010. We consider several vent conditions and source configurations for the plume. By constraining our model using the INMS and UVIS data, we estimate H2O production rates of several hundred kg/s: 400-500 kg/s during the E3 and E7 flybys and ∼900 kg/s during the E5 flyby. These values agree with other estimates and are consistent with the observed temporal variability of the plume over the orbital period of Enceladus (Hedman, M.M., et al. [2013] Nature, 500, 182-184). In addition, we determine that one of the Tiger Stripes, Cairo, exhibits a local temporal variability consistent with the observed overall temporal variability of the plume.
We also find that the distributed sources along the Tiger Stripes are likely dominant while the jets provide a lesser contribution. Moreover, our best-fit solutions for the plume are sensitive to the vent conditions chosen. The spreading angle of the jet produced is the main difference among the vent conditions and thus it appears to be an important parameter in fitting to these INMS data sets. In general, we find that narrow jets produce better fits, suggesting high Mach numbers (> 5) at the vents. This is supported by certain narrow features believed to be jets in both the INMS and UVIS data sets. This tends to rule out sublimation from the surface but points to a deep underground source for the plume. However, the underground source can be either sublimation from an icy reservoir or evaporation from a liquid reservoir. A high Mach number at the vent also suggests subsurface channels with large variations in width and not fairly straight channels so that the gas can undergo sufficient expansion. Additionally, the broad spreading angles inferred for the μm-sized grains (Ingersoll, A.P. and Ewald, S.P. [2011] Icarus, 216, 492-506; Postberg, F., et al. [2011] Nature, 474, 620-622) cannot be due to spreading by the gas above the surface alone. Some other mechanism(s) must also be responsible, perhaps occurring below the surface, which further points to an underground source for the plume.

  18. Variable Magnification With Kirkpatrick-Baez Optics for Synchrotron X-Ray Microscopy

    PubMed Central

    Jach, Terrence; Bakulin, Alex S.; Durbin, Stephen M.; Pedulla, Joseph; Macrander, Albert

    2006-01-01

    We describe the distinction between the operation of a short focal length x-ray microscope forming a real image with a laboratory source (convergent illumination) and with a highly collimated intense beam from a synchrotron light source (Köhler illumination). We demonstrate the distinction with a Kirkpatrick-Baez microscope consisting of short focal length multilayer mirrors operating at an energy of 8 keV. In addition to realizing improvements in the resolution of the optics, the synchrotron radiation microscope is not limited to the usual single magnification at a fixed image plane. Higher magnification images are produced by projection in the limit of geometrical optics with a collimated beam. However, in distinction to the common method of placing the sample behind the optical source of a diverging beam, we describe the situation in which the sample is located in the collimated beam before the optical element. The ultimate limits of this magnification result from diffraction by the specimen and are determined by the sample position relative to the focal point of the optic. We present criteria by which the diffraction is minimized. PMID:27274930

  19. Valuation of medical resource units collected in health economic studies.

    PubMed

    Copley-Merriman, C; Lair, T J

    1994-01-01

    This paper reviews the issues that are critical for the valuation of medical resources in the context of health economic studies. There are several points to consider when undertaking the valuation of medical resources. The perspective of the analysis should be established before determining the valuation process. Future costs should be discounted to present values, and time and effort spent in assigning a monetary value to a medical resource should be proportional to its importance in the analysis. Prices vary considerably based on location of the service and the severity of the illness episode. Because of the wide variability in pricing data, sensitivity analysis is an important component of validation of study results. A variety of data sources have been applied to the valuation of medical resources. Several types of data are reviewed in this paper, including claims data, national survey data, administrative data, and marketing research data. Valuation of medical resources collected in clinical trials is complex because of the lack of standardization of the data sources. A national pricing data source for health economic valuation would greatly facilitate study analysis and make comparisons between results more meaningful.
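    The discounting of future costs to present values mentioned above is a one-line calculation; the cost, rate, and horizon below are hypothetical:

```python
def present_value(future_cost, annual_rate, years):
    """Discount a cost incurred `years` from now to today's value at a constant annual rate."""
    return future_cost / (1.0 + annual_rate) ** years

# a hypothetical $1000 cost incurred 5 years out, discounted at 3% per year
pv = present_value(1000.0, 0.03, 5)
print(round(pv, 2))  # 862.61
```

The choice of discount rate is itself a candidate for the sensitivity analysis the paper recommends.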

  20. Near-field Mercury Deposition During Summertime Precipitation Events: the Impact of Coal Fired Utilities

    NASA Astrophysics Data System (ADS)

    Christianson, E. M.; Keeler, G. J.; Landis, M. S.

    2008-12-01

    Mercury (Hg) is a bioaccumulative neurotoxin that has been shown to enter water bodies, and consequently the food chain, via atmospheric deposition to the earth's surface. Anthropogenic emissions of the pollutant play a significant role in contributing to the atmospheric pool of Hg, but the near-field impact of point sources on surface deposition has been poorly defined to date. An intensive study during July-September 2006 established eight networked precipitation collection sites in northeastern Ohio, U.S.A., located at varying proximities to coal combustion facilities to evaluate the spatial scale of Hg wet deposition concentration enhancement around the sources. It was found that an average of 42% of the Hg wet deposited at sites in the immediate vicinity (<1 km) of coal-fired utilities could be attributed to the adjacent source. Several meteorological variables were shown to account for the degree to which Hg concentration in precipitation was enhanced. A detailed meteorological analysis of the individual precipitation events as well as overall implications of local deposition gradients will be discussed.

  1. Supersonic propulsion simulation by incorporating component models in the large perturbation inlet (LAPIN) computer code

    NASA Technical Reports Server (NTRS)

    Cole, Gary L.; Richard, Jacques C.

    1991-01-01

    An approach to simulating the internal flows of supersonic propulsion systems is presented. The approach is based on a fairly simple modification of the Large Perturbation Inlet (LAPIN) computer code. LAPIN uses a quasi-one dimensional, inviscid, unsteady formulation of the continuity, momentum, and energy equations. The equations are solved using a shock capturing, finite difference algorithm. The original code, developed for simulating supersonic inlets, includes engineering models of unstart/restart, bleed, bypass, and variable duct geometry, by means of source terms in the equations. The source terms also provide a mechanism for incorporating, with the inlet, propulsion system components such as compressor stages, combustors, and turbine stages. This requires each component to be distributed axially over a number of grid points. Because of the distributed nature of such components, this representation should be more accurate than a lumped parameter model. Components can be modeled by performance map(s), which in turn are used to compute the source terms. The general approach is described. Then, simulation of a compressor/fan stage is discussed to show the approach in detail.

  2. The calculating study of the moisture transfer influence at the temperature field in a porous wet medium with internal heat sources

    NASA Astrophysics Data System (ADS)

    Kuzevanov, V. S.; Garyaev, A. B.; Zakozhurnikova, G. S.; Zakozhurnikov, S. S.

    2017-11-01

    A porous wet medium with solid and gaseous components and with distributed or localized heat sources was considered. The temperature regimes during heating were studied for various initial moisture contents of the material. A mathematical model was developed for the investigated wet porous multicomponent medium with internal heat sources, taking into account heat transfer by conduction with variable thermal parameters and porosity, heat transfer by radiation, chemical reactions, drying and moistening of solids, heat and mass transfer of volatile products of chemical reactions by filtration flows, and moisture transfer. A numerical algorithm and a computer program implementing the proposed mathematical model were created, allowing study of the heating dynamics under local or distributed heat release, in particular the impact of moisture transfer in the medium on the temperature field. Temperature histories were obtained at different points of the medium for different initial moisture contents. Conclusions were drawn about the possibility of controlling the heating regimes of a solid porous body through the initial moisture distribution.

  3. Maintaining activity engagement: individual differences in the process of self-regulating motivation.

    PubMed

    Sansone, Carol; Thoman, Dustin B

    2006-12-01

    Typically, models of self-regulation include motivation in terms of goals. Motivation is proposed to differ among individuals as a consequence of the goals they hold as well as how much they value those goals and expect to attain them. We suggest that goal-defined motivation is only one source of motivation critical for sustained engagement. A second source is the motivation that arises from the degree of interest experienced in the process of goal pursuit. Our model integrates both sources of motivation within the goal-striving process and suggests that individuals may actively monitor and regulate them. Conceptualizing motivation in terms of a self-regulatory process provides an organizing framework for understanding how individuals might differ in whether they experience interest while working toward goals, whether they persist without interest, and whether and how they try to create interest. We first present the self-regulation of motivation model and then review research illustrating how the consideration of individual differences at different points in the process allows a better understanding of variability in people's choices, efforts, and persistence over time.

  4. X-RAY VARIABILITY AND HARDNESS OF ESO 243-49 HLX-1: CLEAR EVIDENCE FOR SPECTRAL STATE TRANSITIONS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Servillat, Mathieu; Farrell, Sean A.; Lin Dacheng

    2011-12-10

    The ultraluminous X-ray (ULX) source ESO 243-49 HLX-1, which reaches a maximum luminosity of 10^42 erg s^-1 (0.2-10 keV), currently provides the strongest evidence for the existence of intermediate-mass black holes (IMBHs). To study the spectral variability of the source, we conduct an ongoing monitoring campaign with the Swift X-ray Telescope (XRT), which now spans more than two years. We found that HLX-1 showed two fast rise and exponential decay type outbursts in the Swift XRT light curve with increases in the count rate of a factor ~40, separated by 375 ± 13 days. We obtained new XMM-Newton and Chandra dedicated pointings that were triggered at the lowest and highest luminosities, respectively. From spectral fitting, the unabsorbed luminosities ranged from 1.9 × 10^40 to 1.25 × 10^42 erg s^-1. We confirm here the detection of spectral state transitions from HLX-1 reminiscent of Galactic black hole binaries (GBHBs): at high luminosities, the X-ray spectrum showed a thermal state dominated by a disk component with temperatures of 0.26 keV at most, and at low luminosities the spectrum is dominated by a hard power law with a photon index in the range 1.4-2.1, consistent with a hard state. The source was also observed in a state consistent with the steep power-law state, with a photon index of ~3.5. In the thermal state, the luminosity of the disk component appears to scale with the fourth power of the inner disk temperature, which supports the presence of an optically thick, geometrically thin accretion disk. The low fractional variability (rms of 9% ± 9%) in this state also suggests the presence of a dominant disk. The spectral changes and long-term variability of the source cannot be explained by variations of the beaming angle and are not consistent with the source being in a super-Eddington accretion state as is proposed for most ULX sources with lower luminosities.
    All this indicates that HLX-1 is an unusual ULX as it is similar to GBHBs, which have non-beamed and sub-Eddington emission, but with luminosities three orders of magnitude higher. In this picture, a lower limit on the mass of the black hole of >9000 M_Sun can be derived, and the relatively low disk temperature in the thermal state also suggests the presence of an IMBH of a few 10^3 M_Sun.
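    The quoted mass lower limit is consistent with a simple Eddington argument (L_Edd ≈ 1.26 × 10^38 erg/s per solar mass for hydrogen accretion); the paper's own derivation may include additional factors, so this is only a plausibility check:

```python
L_EDD_PER_MSUN = 1.26e38   # erg/s per solar mass, Eddington limit for hydrogen accretion
L_PEAK = 1.25e42           # erg/s, peak unabsorbed luminosity quoted in the abstract

# if the peak emission is at or below the Eddington limit, the mass must satisfy:
m_min = L_PEAK / L_EDD_PER_MSUN
print(f"M > {m_min:.0f} solar masses")  # on the order of the quoted >9000 M_Sun limit
```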

  5. Advances in the regionalization approach: geostatistical techniques for estimating flood quantiles

    NASA Astrophysics Data System (ADS)

    Chiarello, Valentina; Caporali, Enrica; Matthies, Hermann G.

    2015-04-01

    The knowledge of peak flow discharges and associated floods is of primary importance in engineering practice for the planning of water resources and risk assessment. Streamflow characteristics are usually estimated from measurements of river discharge at stream gauging stations. However, the lack of observations at sites of interest, as well as measurement inaccuracies, inevitably leads to the necessity of developing predictive models. Regional analysis is a classical approach to estimating river flow characteristics at sites where little or no data exist. Specific techniques are needed to regionalize the hydrological variables over the considered area. Top-kriging, or topological kriging, is a kriging interpolation procedure that takes into account the geometric organization and structure of the hydrographic network, the catchment area and the nested nature of catchments. The continuous processes in space defined for point variables are represented by a variogram. In Top-kriging, the measurements are not point values but are defined over a non-zero catchment area. Top-kriging is applied here over the geographical space of Tuscany, in Central Italy. The analysis is carried out on the discharge data of 57 consistent runoff gauges, recorded from 1923 to 2014. Top-kriging also gives an estimate of the prediction uncertainty in addition to the prediction itself. The results are validated using a cross-validation procedure implemented in the package rtop of the open-source statistical environment R, and are compared through different error measures. Top-kriging seems to perform better in nested catchments and larger-scale catchments, but not for headwater catchments or where there is high variability among neighbouring catchments.
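    The variogram underlying any kriging procedure is estimated from squared differences of paired observations binned by separation distance. The sketch below is for ordinary point-support variograms; Top-kriging generalizes this to measurements over non-zero catchment areas, which is not captured here. The grid and trend surface are illustrative:

```python
import numpy as np

def empirical_variogram(coords, values, bins):
    """Empirical semivariance per lag bin:
    gamma(h) = 0.5 * mean[(z_i - z_j)^2] over pairs whose separation falls in the bin."""
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    sq = 0.5 * (values[:, None] - values[None, :]) ** 2
    iu = np.triu_indices(len(values), k=1)       # each unordered pair counted once
    d, sq = d[iu], sq[iu]
    return np.array([sq[(d >= lo) & (d < hi)].mean()
                     for lo, hi in zip(bins[:-1], bins[1:])])

# illustrative 10x10 unit grid carrying a linear trend, so semivariance grows with lag
xs, ys = np.meshgrid(np.linspace(0, 1, 10), np.linspace(0, 1, 10))
coords = np.column_stack([xs.ravel(), ys.ravel()])
gamma = empirical_variogram(coords, coords[:, 0] + coords[:, 1],
                            bins=np.array([0.0, 0.3, 0.6, 1.0]))
```

A model variogram fitted to these binned values then supplies the weights used in the kriging prediction and its uncertainty.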

  6. Isolating the Role of Psychological Dysfunction in Smoking Cessation Failure: Relations of Personality and Psychopathology to Attaining Smoking Cessation Milestones

    PubMed Central

    Leventhal, Adam M.; Japuntich, Sandra J.; Piper, Megan E.; Jorenby, Douglas E.; Schlam, Tanya R.; Baker, Timothy B.

    2012-01-01

    Research exploring psychological dysfunction as a predictor of smoking cessation success may be limited by nonoptimal predictor variables (i.e., categorical psychodiagnostic measures vs. continuous personality-based manifestations of dysfunction) and imprecise outcomes (i.e., summative point prevalence abstinence vs. constituent cessation milestone measures). Accordingly, this study evaluated the unique and overlapping relations of broad-spectrum personality traits (positive emotionality, negative emotionality, and constraint) and past-year psychopathology (anxiety, mood, and substance use disorder) to point prevalence abstinence and three smoking cessation milestones: (1) initiating abstinence; (2) first lapse; and (3) transition from lapse to relapse. Participants were daily smokers (N=1365) enrolled in a smoking cessation treatment study. In single predictor regression models, each manifestation of internalizing dysfunction (lower positive emotionality, higher negative emotionality, and anxiety and mood disorder) predicted failure at one or more cessation milestone. In simultaneous predictor models, lower positive and higher negative emotionality significantly predicted failure to achieve milestones after controlling for psychopathology. Psychopathology did not predict any outcome when controlling for personality. Negative emotionality showed the most robust and consistent effects, significantly predicting failure to initiate abstinence, earlier lapse, and lower point prevalence abstinence rates. Substance use disorder and constraint did not predict cessation outcomes, and no single variable predicted lapse-to-relapse transition. These findings suggest that personality-related manifestations of internalizing dysfunction are more accurate markers of affective sources of relapse risk than mood and anxiety disorders. 
Further, individuals with high trait negative emotionality may require intensive intervention to promote the initiation and early maintenance of abstinence. PMID:22642858

  7. Energy conservation strategy in Hydraulic Power Packs using Variable Frequency Drive (IOP Conference Series: Materials Science and Engineering)

    NASA Astrophysics Data System (ADS)

    Ramesh, S.; Ashok, S. Denis; Nagaraj, Shanmukha; Reddy, M. Lohith Kumar; Naulakha, Niranjan Kumar; Adithyakumar, C. R.

    2018-02-01

    At present, energy is being consumed at such a rate that, if the trend continues, the available energy sources will eventually be exhausted. Energy conservation in a hydraulic power pack refers to reducing the energy consumed by the power pack. Many approaches to reducing energy consumption have been investigated, and one of them is the introduction of a variable frequency drive. The main objective of the present work is to reduce the energy consumed by the hydraulic power pack using a variable frequency drive, which varies the speed of the motor in response to electrical signals from a pressure switch that acts as the feedback element. Using this concept, the speed of the motor can be varied between specified limits. In the present work, a basic hydraulic power pack and a variable frequency drive based hydraulic power pack were designed and compared. The comparison was based on power consumed, temperature rise, noise levels, oil flow through the pressure relief valve, and total oil flow during the loading cycle. By comparing both circuits, it was found that the proposed system reduces power consumption by 78.4% while remaining as capable as the present system.
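    The feedback scheme described above can be sketched as a simple proportional control loop. This is an illustrative toy model, not the authors' implementation: the plant dynamics, pressure setpoint, gain, and speed limits are all assumed values.

```python
# Toy sketch of VFD-style pressure-feedback speed control: the drive adjusts
# motor speed from the pressure error signal, as described in the abstract.
# All constants (plant model, setpoint, gain, limits) are assumptions.

def vfd_step(speed_rpm, pressure_bar, setpoint_bar=150.0,
             gain=2.0, min_rpm=300.0, max_rpm=1500.0):
    """Proportional speed correction from the pressure feedback signal."""
    error = setpoint_bar - pressure_bar           # positive -> need more flow
    new_speed = speed_rpm + gain * error
    return max(min_rpm, min(max_rpm, new_speed))  # respect drive limits

def simulate(steps=500):
    speed, pressure = 1500.0, 0.0                 # start at full motor speed
    for _ in range(steps):
        # Toy plant: pressure rises with pump speed, bleeds through the load.
        pressure += 0.03 * speed - 0.1 * pressure
        speed = vfd_step(speed, pressure)
    return speed, pressure

final_speed, final_pressure = simulate()
```

    In this toy system the controller settles well below the maximum motor speed while holding the pressure setpoint, which is the mechanism by which a variable frequency drive saves energy relative to running the motor at full speed.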

  8. Characterizing Individual Differences in Functional Connectivity Using Dual-Regression and Seed-Based Approaches

    PubMed Central

    Smith, David V.; Utevsky, Amanda V.; Bland, Amy R.; Clement, Nathan; Clithero, John A.; Harsch, Anne E. W.; Carter, R. McKell; Huettel, Scott A.

    2014-01-01

    A central challenge for neuroscience lies in relating inter-individual variability to the functional properties of specific brain regions. Yet, considerable variability exists in the connectivity patterns between different brain areas, potentially producing reliable group differences. Using sex differences as a motivating example, we examined two separate resting-state datasets comprising a total of 188 human participants. Both datasets were decomposed into resting-state networks (RSNs) using a probabilistic spatial independent components analysis (ICA). We estimated voxelwise functional connectivity with these networks using a dual-regression analysis, which characterizes the participant-level spatiotemporal dynamics of each network while controlling for (via multiple regression) the influence of other networks and sources of variability. We found that males and females exhibit distinct patterns of connectivity with multiple RSNs, including both visual and auditory networks and the right frontal-parietal network. These results replicated across both datasets and were not explained by differences in head motion, data quality, brain volume, cortisol levels, or testosterone levels. Importantly, we also demonstrate that dual-regression functional connectivity is better at detecting inter-individual variability than traditional seed-based functional connectivity approaches. Our findings characterize robust—yet frequently ignored—neural differences between males and females, pointing to the necessity of controlling for sex in neuroscience studies of individual differences. Moreover, our results highlight the importance of employing network-based models to study variability in functional connectivity. PMID:24662574
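    The two-stage dual-regression procedure described above can be sketched in a few lines of linear algebra. This is a schematic with synthetic data and assumed array shapes, not the study's actual pipeline:

```python
import numpy as np

# Minimal dual-regression sketch on synthetic data: given group-level network
# spatial maps, recover subject-specific time courses (stage 1, spatial
# regression) and subject-specific spatial maps (stage 2, temporal regression).

rng = np.random.default_rng(0)
n_time, n_vox, n_nets = 120, 500, 3

group_maps = rng.normal(size=(n_vox, n_nets))      # group ICA spatial maps
true_tc = rng.normal(size=(n_time, n_nets))        # subject time courses
data = true_tc @ group_maps.T + 0.1 * rng.normal(size=(n_time, n_vox))

# Stage 1: regress the group maps onto the subject's data to get per-network
# time courses; fitting all networks at once controls for the others.
tc, *_ = np.linalg.lstsq(group_maps, data.T, rcond=None)  # (n_nets, n_time)
tc = tc.T

# Stage 2: regress those time courses onto the data for subject-level maps.
maps, *_ = np.linalg.lstsq(tc, data, rcond=None)          # (n_nets, n_vox)
```

    Because each stage fits all networks simultaneously, each network's estimate controls for the influence of the others, which is the property the abstract highlights over single-seed regression.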

  9. Modeling the cardiovascular system using a nonlinear additive autoregressive model with exogenous input

    NASA Astrophysics Data System (ADS)

    Riedl, M.; Suhrbier, A.; Malberg, H.; Penzel, T.; Bretthauer, G.; Kurths, J.; Wessel, N.

    2008-07-01

    The parameters of heart rate variability and blood pressure variability have proved to be useful analytical tools in cardiovascular physics and medicine. Model-based analysis of these variabilities additionally leads to new prognostic information about mechanisms behind regulations in the cardiovascular system. In this paper, we analyze the complex interaction between heart rate, systolic blood pressure, and respiration by nonparametrically fitted nonlinear additive autoregressive models with external inputs. To this end, we consider measurements of healthy persons and patients suffering from obstructive sleep apnea syndrome (OSAS), with and without hypertension. It is shown that the proposed nonlinear models are capable of describing short-term fluctuations in heart rate as well as systolic blood pressure significantly better than similar linear ones, which confirms the assumption of nonlinearly controlled heart rate and blood pressure. Furthermore, the comparison of the nonlinear and linear approaches reveals that the heart rate and blood pressure variability in healthy subjects is caused by a higher level of noise as well as nonlinearity than in patients suffering from OSAS. The residue analysis points at a further source of heart rate and blood pressure variability in healthy subjects, in addition to heart rate, systolic blood pressure, and respiration. Comparison of the nonlinear models within and among the different groups of subjects suggests the ability to discriminate between the cohorts, which could lead to a stratification of hypertension risk in OSAS patients.

  10. On climate prediction: how much can we expect from climate memory?

    NASA Astrophysics Data System (ADS)

    Yuan, Naiming; Huang, Yan; Duan, Jianping; Zhu, Congwen; Xoplaki, Elena; Luterbacher, Jürg

    2018-03-01

    Slowly varying internal variability in the climate system is an important source of climate predictability. However, it is still challenging for current dynamical models to fully capture this variability and its impacts on future climate. In this study, instead of simulating the internal multi-scale oscillations in dynamical models, we discuss the effects of internal variability in terms of climate memory. By decomposing the climate state x(t) at a given time point t into a memory part M(t) and a non-memory part ε(t), the effects of climate memory from the past 30 years on climate prediction are quantified. For variables with strong climate memory, a high fraction of the variance (over 20%) in x(t) is explained by the memory part M(t), and the effects of climate memory are non-negligible for most climate variables, with the exception of precipitation. For multi-step climate prediction, a power-law decay of the explained variance is found, indicating long-lasting climate memory effects: the variance explained by climate memory can remain above 10% for more than 10 time steps. Accordingly, past climate conditions can affect both short-term (monthly) and long-term (interannual, decadal, or even multidecadal) climate predictions. With the memory part M(t) precisely calculated from the Fractional Integral Statistical Model, one only needs to focus on the non-memory part ε(t), which is an important quantity in determining climate predictive skills.
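    The decomposition into a memory part and a non-memory part can be illustrated with the AR(∞) form of a fractionally integrated process. A minimal sketch, assuming a memory parameter d = 0.3 and a 30-step memory window (both illustrative choices, not the study's fitted values):

```python
import numpy as np

# Sketch of x(t) = M(t) + eps(t) for a fractionally integrated process
# (1 - B)^d x = eps, where M(t) is a weighted sum of the past n_past values.
# d and n_past are illustrative assumptions.

rng = np.random.default_rng(1)
d, n_past, n = 0.3, 30, 20000

# AR(inf) weights of (1 - B)^d via the recursion phi_j = phi_{j-1} (j-1-d)/j
w = np.empty(n_past)
w[0] = d
for k in range(1, n_past):
    w[k] = w[k - 1] * (k - d) / (k + 1)

x = np.zeros(n)
mem = np.zeros(n)
eps = rng.normal(size=n)
for t in range(n_past, n):
    mem[t] = w @ x[t - n_past:t][::-1]   # memory part M(t)
    x[t] = mem[t] + eps[t]               # plus non-memory part eps(t)

# Fraction of the variance of x(t) explained by the memory part M(t)
explained = np.var(mem[n_past:]) / np.var(x[n_past:])
```

    For d around 0.3 the memory part explains on the order of 20% of the variance, comparable to the threshold quoted above for strongly memory-dominated variables.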

  11. Expanding the functionality of speech recognition in radiology: creating a real-time methodology for measurement and analysis of occupational stress and fatigue.

    PubMed

    Reiner, Bruce I

    2013-02-01

    While occupational stress and fatigue have been well described throughout medicine, the radiology community is particularly susceptible due to declining reimbursements, heightened demands for service deliverables, and increasing exam volume and complexity. The resulting occupational stress can be variable in nature and dependent upon a number of intrinsic and extrinsic stressors. Intrinsic stressors largely account for inter-radiologist stress variability and relate to unique attributes of the radiologist such as personality, emotional state, education/training, and experience. Extrinsic stressors may account for intra-radiologist stress variability and include cumulative workload and task complexity. The creation of personalized stress profiles provides a mechanism for accounting for both inter- and intra-radiologist stress variability, which is essential in creating customizable stress intervention strategies. One viable option for real-time occupational stress measurement is voice stress analysis, which can be directly implemented through existing speech recognition technology and has proven effective in stress measurement and analysis outside of medicine. This technology operates by detecting stress in the acoustic properties of speech through a number of variables including duration, glottal source factors, pitch distribution, spectral structure, and intensity. The correlation of these speech-derived stress measures with outcomes data can be used to determine the user-specific inflection point at which stress becomes detrimental to clinical performance.

  12. Modeling of Pixelated Detector in SPECT Pinhole Reconstruction.

    PubMed

    Feng, Bing; Zeng, Gengsheng L

    2014-04-10

    A challenge for the pixelated detector is that the detector response to a gamma-ray photon varies with the incident angle and the incident location within a crystal. The normalization map obtained by measuring the flood of a point-source at a large distance can lead to artifacts in reconstructed images. In this work, we investigated a method of generating normalization maps by ray-tracing through the pixelated detector based on the imaging geometry and the photo-peak energy of the specific isotope. The normalization is defined for each pinhole as the normalized detector response for a point-source placed at the focal point of the pinhole. Ray-tracing is used to generate the ideal flood image for a point-source. Each crystal pitch area on the back of the detector is divided into 60 × 60 sub-pixels. Lines are obtained by connecting a point-source and the centers of sub-pixels inside each crystal pitch area. For each line, ray-tracing starts from the entrance point at the detector face and ends at the center of a sub-pixel on the back of the detector. Only the attenuation by NaI(Tl) crystals along each ray is assumed to contribute directly to the flood image. The attenuation by the silica (SiO2) reflector is also included in the ray-tracing. To calculate the normalization for a pinhole, we need the ideal flood image for a point-source at 360 mm distance (where the point-source was placed for the regular flood measurement), the ideal flood image for the point-source at the pinhole focal point, and the flood measurement at 360 mm distance. The normalizations are incorporated in the iterative OSEM reconstruction as a component of the projection matrix. Applications to single-pinhole and multi-pinhole imaging showed that this method greatly reduced the reconstruction artifacts.
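    The per-ray attenuation at the heart of this kind of ray-tracing can be sketched with the Beer-Lambert law. The geometry, pixel pitch, and attenuation coefficient below are assumed round numbers for illustration, not the system's actual values, and only the crystal attenuation along each ray contributes, as in the described method:

```python
import math

# Illustrative ray-tracing sketch for one crystal pixel: the detected fraction
# of photons along a ray is 1 - exp(-mu * L), with L the chord length in the
# NaI(Tl) crystal. MU_NAI, THICKNESS, and pitch are assumed values.

MU_NAI = 2.2           # cm^-1, assumed attenuation coefficient at photopeak
THICKNESS = 1.0        # cm, assumed crystal thickness
N_SUB = 60             # sub-pixels per side, as in the described method

def pixel_response(src, pixel_center, pitch=0.2):
    """Average detection probability over N_SUB x N_SUB sub-pixel rays."""
    sx, sy, sz = src
    total = 0.0
    for i in range(N_SUB):
        for j in range(N_SUB):
            # Center of sub-pixel (i, j) on the back of the detector (z = 0).
            px = pixel_center[0] + (i + 0.5) / N_SUB * pitch - pitch / 2
            py = pixel_center[1] + (j + 0.5) / N_SUB * pitch - pitch / 2
            dist = math.dist((sx, sy, sz), (px, py, 0.0))
            cos_t = sz / dist                 # obliquity of the ray
            path = THICKNESS / cos_t          # chord length in the crystal
            total += 1.0 - math.exp(-MU_NAI * path)
    return total / N_SUB**2

# A pixel far from the source axis sees more oblique rays, hence longer chords.
near = pixel_response((0.0, 0.0, 36.0), (0.0, 0.0))
far = pixel_response((0.0, 0.0, 36.0), (10.0, 0.0))
```

    Oblique rays traverse longer chords through the crystal, which is one reason the detector response varies with incident angle and why a single distant-flood normalization can bias the reconstruction.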

  13. Automatic classification of time-variable X-ray sources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lo, Kitty K.; Farrell, Sean; Murphy, Tara

    2014-05-01

    To maximize the discovery potential of future synoptic surveys, especially in the field of transient science, it will be necessary to use automatic classification to identify some of the astronomical sources. The data mining technique of supervised classification is suitable for this problem. Here, we present a supervised learning method to automatically classify variable X-ray sources in the Second XMM-Newton Serendipitous Source Catalog (2XMMi-DR2). Random Forest is our classifier of choice since it is one of the most accurate learning algorithms available. Our training set consists of 873 variable sources and their features are derived from time series, spectra, and other multi-wavelength contextual information. The 10-fold cross-validation accuracy on the training data is ∼97% on a 7-class data set. We applied the trained classification model to 411 unknown variable 2XMM sources to produce a probabilistically classified catalog. Using the classification margin and the Random Forest derived outlier measure, we identified 12 anomalous sources, of which 2XMM J180658.7–500250 appears to be the most unusual source in the sample. Its X-ray spectrum is suggestive of an ultraluminous X-ray source but its variability makes it highly unusual. Machine-learned classification and anomaly detection will facilitate scientific discoveries in the era of all-sky surveys.
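    The classification setup described above maps naturally onto standard tooling. A sketch using scikit-learn with synthetic stand-in features (the real features come from 2XMM time series and spectra; the class structure below is invented for illustration):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Sketch of the setup: a Random Forest scored by 10-fold cross-validation.
# The feature matrix is synthetic and well separated by construction.

rng = np.random.default_rng(42)
n_per_class, n_classes, n_features = 125, 7, 12

X = np.vstack([rng.normal(loc=3.0 * c, size=(n_per_class, n_features))
               for c in range(n_classes)])
y = np.repeat(np.arange(n_classes), n_per_class)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, y, cv=10)    # 10-fold CV accuracy per fold
mean_acc = scores.mean()
```

    cross_val_score reports per-fold accuracy; in the study the analogous 10-fold estimate was ∼97% across seven classes.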

  14. Estimation of Traffic Variables Using Point Processing Techniques

    DOT National Transportation Integrated Search

    1978-05-01

    An alternative approach to estimating aggregate traffic variables on freeways--spatial mean velocity and density--is presented. Vehicle arrival times at a given location on a roadway, typically a presence detector, are regarded as a point or counting...

  15. Modeling of an Adjustable Beam Solid State Light Project

    NASA Technical Reports Server (NTRS)

    Clark, Toni

    2015-01-01

    This proposal is for the development of a computational model of a prototype variable beam light source using optical modeling software, Zemax Optics Studio. The variable beam light source would be designed to generate flood, spot, and directional beam patterns, while maintaining the same average power usage. The optical model would demonstrate the possibility of such a light source and its ability to address several issues: commonality of design, human task variability, and light source design process improvements. An adaptive lighting solution that utilizes the same electronics footprint and power constraints while addressing variability of lighting needed for the range of exploration tasks can save costs and allow for the development of common avionics for lighting controls.

  16. Variability of the 2014-present inflation source at Mauna Loa volcano revealed using time-dependent modeling

    NASA Astrophysics Data System (ADS)

    Johanson, I. A.; Miklius, A.; Okubo, P.; Montgomery-Brown, E. K.

    2017-12-01

    Mauna Loa volcano is the largest active volcano on Earth and in the 20th century produced roughly one eruption every seven years. The 33-year quiescence since its last eruption in 1984 has been punctuated by three inflation episodes in which magma likely entered the shallow plumbing system but was not erupted. The most recent began in 2014 and is ongoing. Unlike prior inflation episodes, the current one is accompanied by a significant increase in shallow seismicity, a pattern that is similar to earlier pre-eruptive periods. We apply the Kalman filter based Network Inversion Filter (NIF) to the 2014-present inflation episode using data from a 27-station continuous GPS network on Mauna Loa. The model geometry consists of a point volume source and a tabular, dike-like body, which have previously been shown to provide a good fit to deformation data from a 2004-2009 inflation episode. The tabular body is discretized into 1 km × 1 km segments. For each day, the NIF solves for the rates of opening on the tabular body segments (subject to smoothing and positivity constraints), the volume change rate in the point source, and the slip rate on a deep décollement fault surface, which is constrained to be constant (no transient slip allowed). The Kalman filter in the NIF provides for smoothing both forwards and backwards in time. The model shows that the 2014-present inflation episode occurred as several sub-events, rather than steady inflation, with some spatial variability in the location of the inflation sub-events. In the model, opening in the tabular body is initially concentrated below the volcano's summit, in an area roughly outlined by shallow seismicity. In October 2015, opening in the tabular body shifted to be centered beneath the southwest portion of the summit, and seismicity became concentrated in this area. By late 2016, the opening rate on the tabular body had decreased and was once again centered under the central part of the summit.
This modeling approach has allowed us to track these features on a daily basis and capture the evolution of the inflation episode as it occurs.
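    The forwards-and-backwards smoothing at the core of the Network Inversion Filter can be illustrated with a scalar Kalman filter plus a Rauch-Tung-Striebel (RTS) backward pass. This is a minimal stand-in with a random-walk state and assumed noise levels, not the multi-parameter NIF itself:

```python
import numpy as np

# Scalar Kalman filter with RTS smoothing (forwards and backwards in time).
# The random-walk state model and the noise variances q, r are illustrative
# assumptions; the "state" stands in for, e.g., a source volume change rate.

def kalman_rts(y, q=0.01, r=0.25):
    n = len(y)
    xf = np.zeros(n); pf = np.zeros(n)      # filtered mean / variance
    xp = np.zeros(n); pp = np.zeros(n)      # predicted mean / variance
    x, p = 0.0, 1.0
    for t in range(n):
        xp[t], pp[t] = x, p + q             # predict (random-walk state)
        k = pp[t] / (pp[t] + r)             # Kalman gain
        x = xp[t] + k * (y[t] - xp[t])      # update with the observation
        p = (1 - k) * pp[t]
        xf[t], pf[t] = x, p
    xs = xf.copy()
    for t in range(n - 2, -1, -1):          # RTS backward (smoothing) pass
        g = pf[t] / pp[t + 1]
        xs[t] = xf[t] + g * (xs[t + 1] - xp[t + 1])
    return xs

rng = np.random.default_rng(3)
truth = np.concatenate([np.zeros(50), np.ones(50)])   # a step-like "sub-event"
smoothed = kalman_rts(truth + 0.5 * rng.normal(size=100))
```

    The backward pass revises each day's estimate using later data, which is how this class of filter sharpens the timing of transient sub-events.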

  17. Evidence for an Intermediate Mass Black Hole in NGC 5408 X-1

    NASA Technical Reports Server (NTRS)

    Strohmayer, Tod E.; Mushotzky, Richard F.

    2009-01-01

    We report the discovery with XMM-Newton of correlated spectral and timing behavior in the ultraluminous X-ray source (ULX) NGC 5408 X-1. An approximately 100 ks pointing with XMM-Newton obtained in January 2008 reveals a strong 10 mHz QPO in the > 1 keV flux, as well as flat-topped, band-limited noise breaking to a power law. The energy spectrum is again dominated by two components, a 0.16 keV thermal disk and a power law with an index of approximately 2.5. These new measurements, combined with results from our previous January 2006 pointing in which we first detected QPOs, show for the first time in a ULX a pattern of spectral and temporal correlations strongly analogous to that seen in Galactic black hole sources, but at much higher X-ray luminosity and longer characteristic time-scales. We find that the QPO frequency is proportional to the inferred disk flux, while the QPO and broad-band noise amplitude (root mean square, rms) are inversely proportional to the disk flux. Assuming that QPO frequency scales inversely with black hole mass at a given power-law spectral index, we derive mass estimates using the observed QPO frequency - spectral index relations from five stellar-mass black hole systems with dynamical mass constraints. The results from all sources are consistent with a mass range for NGC 5408 X-1 from 1000 to 9000 solar masses. We argue that these are conservative limits, and that a more likely range is from 2000 to 5000 solar masses. Moreover, the recent relation from Gierlinski et al. that relates black hole mass to the strength of variability at high frequencies (above the break in the power spectrum), and the variability plane results of McHardy et al. and Koerding et al., are also suggestive of such a high mass for NGC 5408 X-1. Importantly, none of the above estimates appears consistent with a black hole mass less than approximately 1000 solar masses for NGC 5408 X-1.
We argue that these new findings strongly support the conclusion that NGC 5408 X-1 harbors an intermediate mass black hole.

  18. Extreme AGN Captured in a Low State by XMM-Newton and NuSTAR

    NASA Astrophysics Data System (ADS)

    Frederick, Sara; Kara, Erin; Reynolds, Christopher S.

    2018-01-01

    The most variable active galactic nuclei (AGN), taken together, are a compelling wellspring of interesting accretion-related phenomena and can exhibit dramatic variability in the X-ray band down to timescales of a few minutes. We present the exemplifying case study of 1H 1934-063 (z = 0.0102), a narrow-line Seyfert 1 (NLS1) that is among the most variable AGN ever observed with XMM-Newton. We present spectroscopic and temporal analyses of a concurrent XMM-Newton and NuSTAR 120 ks observation, during which the source exhibited a steep (factor of 1.5) plummet and subsequent full recovery of flux that we explore in detail. Combined spectral and timing results point to a dramatic change in the continuum on timescales as short as a few ks. Similar to other highly variable Seyfert 1s, this AGN is X-ray bright and displays strong reflection spectral features. We find agreement with a change in the continuum, and we rule out absorption as the cause for this dramatic variability that is observed even at NuSTAR energies. We compare measurements from detailed time-resolved spectral fitting with Fourier-based timing results to constrain coronal geometry, dynamics, and emission/absorption processes dictating the nature of this variability. We also announce the discovery of a Fe-K time lag between the hard X-ray continuum emission (1-4 keV) and relativistically-blurred reprocessing by the inner accretion flow (0.3-1 keV).

  19. PI-line-based image reconstruction in helical cone-beam computed tomography with a variable pitch.

    PubMed

    Zou, Yu; Pan, Xiaochuan; Xia, Dan; Wang, Ge

    2005-08-01

    Current applications of helical cone-beam computed tomography (CT) involve primarily a constant pitch, where the translating speed of the table and the rotation speed of the source-detector remain constant. However, situations do exist where it may be more desirable to use a helical scan with a variable translating speed of the table, leading to a variable pitch. One such application could arise in helical cone-beam CT fluoroscopy for the determination of vascular structures through real-time imaging of contrast bolus arrival. Most of the existing reconstruction algorithms have been developed only for helical cone-beam CT with constant pitch, including the backprojection-filtration (BPF) and filtered-backprojection (FBP) algorithms that we proposed previously. It is possible to generalize some of these algorithms to reconstruct images exactly for helical cone-beam CT with a variable pitch. In this work, we generalize our BPF and FBP algorithms to reconstruct images directly from data acquired in helical cone-beam CT with a variable pitch. We have also performed a preliminary numerical study to demonstrate and verify the generalization of the two algorithms. The results of the study confirm that our generalized BPF and FBP algorithms can yield exact reconstruction in helical cone-beam CT with a variable pitch. It should be pointed out that our generalized BPF algorithm is the only one capable of exactly reconstructing region-of-interest images from data containing transverse truncations.

  20. Searches for point sources in the Galactic Center region

    NASA Astrophysics Data System (ADS)

    di Mauro, Mattia; Fermi-LAT Collaboration

    2017-01-01

    Several groups have demonstrated the existence of an excess in the gamma-ray emission around the Galactic Center (GC) with respect to the predictions from a variety of Galactic Interstellar Emission Models (GIEMs) and point source catalogs. The origin of this excess, peaked at a few GeV, is still under debate. A possible interpretation is that it comes from a population of unresolved Millisecond Pulsars (MSPs) in the Galactic bulge. We investigate the detection of point sources in the GC region using new tools which the Fermi-LAT Collaboration is developing in the context of searches for Dark Matter (DM) signals. These new tools perform very fast scans, iteratively testing for additional point sources at each pixel of the region of interest. We also show how to discriminate between point sources and structural residuals from the GIEM. We apply these methods to the GC region considering different GIEMs and testing the DM and MSP interpretations of the GC excess. Additionally, we create a list of promising MSP candidates that could represent the brightest sources of a bulge MSP population.

  1. Variable energy constant current accelerator structure

    DOEpatents

    Anderson, Oscar A.

    1990-01-01

    A variable energy, constant current ion beam accelerator structure is disclosed comprising an ion source capable of providing the desired ions, a pre-accelerator for establishing an initial energy level, a matching/pumping module having means for focusing and for maintaining the beam current, and at least one main accelerator module for continuing beam focus, with means capable of variably imparting acceleration to the beam so that a constant beam output current is maintained independent of the variable output energy. In a preferred embodiment, quadrupole electrodes are provided in both the matching/pumping module and the one or more accelerator modules, and are formed using four opposing cylinder electrodes which extend parallel to the beam axis and are spaced around the beam at 90° intervals with opposing electrodes maintained at the same potential. Adjacent cylinder electrodes of the quadrupole structure are maintained at different potentials to reshape the cross section of the charged particle beam to an ellipse at the midpoint along each quadrupole electrode unit in the accelerator modules. The beam is maintained in focus by alternating the major axis of the ellipse along the x and y axes respectively at adjacent quadrupoles. In another embodiment, electrostatic ring electrodes may be utilized instead of the quadrupole electrodes.

  2. Osmotic load from glucose polymers.

    PubMed

    Koo, W W; Poh, D; Leong, M; Tam, Y K; Succop, P; Checkland, E G

    1991-01-01

    Glucose polymer is a carbohydrate source with variable chain lengths of glucose units, which may result in variable osmolality. The osmolality of two commercial glucose polymers was measured in reconstituted powder infant formulas, and the change in osmolality of infant milk formulas at the same increases in energy density (67 kcal/dL to 81 and 97 kcal/dL) from the use of additional milk powder or glucose polymers was compared. All samples were prepared from powders (to the nearest 0.1 mg), and osmolality was measured by freezing point depression. For both glucose polymers, the within-batch variability of the measured osmolality was less than 3.5%, and the between-batch variability was less than 9.6%. The measured osmolality varied linearly with energy density (p < 0.001) and was highest in infant formula reconstituted from milk powder alone. However, significant differences in the measured osmolality exist between different glucose polymer preparations. At high energy densities (≥97 kcal/dL), infant milk formulas prepared with milk powder alone or with the addition of certain glucose polymer preparations may have high osmolality (≥450 mosm/kg) and theoretically predispose the infant to complications of hyperosmotic feeds.
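    The reported linear dependence of osmolality on energy density corresponds to a simple least-squares fit. The data points below are invented for illustration and are not the study's measurements:

```python
import numpy as np

# Sketch of a linear fit of osmolality vs. energy density, as in the reported
# relation. The three (energy, osmolality) pairs are illustrative values only.

energy = np.array([67.0, 81.0, 97.0])         # kcal/dL
osmolality = np.array([300.0, 370.0, 452.0])  # mosm/kg (invented)

slope, intercept = np.polyfit(energy, osmolality, 1)
predicted_at_97 = slope * 97.0 + intercept    # fitted osmolality at 97 kcal/dL
```

    With such a fit one can read off the energy density at which a formula is predicted to cross a clinically relevant osmolality threshold.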

  3. Analysis of non-point and point source pollution in China: case study in Shima Watershed in Guangdong Province

    NASA Astrophysics Data System (ADS)

    Fang, Huaiyang; Lu, Qingshui; Gao, Zhiqiang; Shi, Runhe; Gao, Wei

    2013-09-01

    China's economy has grown rapidly since 1978. Rapid economic growth led to fast growth in fertilizer and pesticide consumption, a significant portion of which entered the water and caused water quality degradation. At the same time, rapid economic growth also discharged more and more point source pollution into the water, and eutrophication has become a major threat to water bodies. Worsening environmental problems forced governments to take measures to control water pollution. We extracted land cover from Landsat TM images, calculated point source pollution with the export coefficient method, and then ran the SWAT model to simulate non-point source pollution. We found that the annual TP load from industrial pollution into rivers is 115.0 t for the entire watershed. Average annual TP loads from each sub-basin ranged from 0 to 189.4 tons. Higher TP loads from livestock and human living mainly occur in areas far from large towns or cities, where the TP loads from industry are relatively low. The mean annual TP load delivered to the streams was 246.4 tons; the highest TP loads occurred in the northern part of this area, and the lowest mainly in the middle part. Point source pollution therefore accounts for a high proportion of the total in this area, and governments should take measures to control it.
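    The export coefficient calculation reduces to summing per-source-category coefficients times their extents. A sketch with invented coefficients and unit counts (not the Shima watershed values):

```python
# Sketch of the export coefficient method: total load = sum over source
# categories of (export coefficient x number of units). All coefficients and
# unit counts below are invented for illustration.

tp_coeff_kg_per_unit_yr = {   # assumed kg TP per unit per year
    "rural_population": 0.5,  # per person
    "livestock": 1.1,         # per head
    "cropland_ha": 0.9,       # per hectare
}
units = {"rural_population": 50000.0, "livestock": 20000.0,
         "cropland_ha": 12000.0}

total_load_kg = sum(tp_coeff_kg_per_unit_yr[k] * units[k]
                    for k in tp_coeff_kg_per_unit_yr)
total_load_tons = total_load_kg / 1000.0
```

    Summing the category loads over each sub-basin gives the spatial distribution of loads that models like SWAT then route through the stream network.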

  4. Open-Source Data and the Study of Homicide.

    PubMed

    Parkin, William S; Gruenewald, Jeff

    2015-07-20

    To date, no discussion has taken place in the social sciences as to the appropriateness of using open-source data to augment, or replace, official data sources in homicide research. The purpose of this article is to examine whether open-source data have the potential to serve as a valid and reliable data source for testing theory and studying homicide. Official and open-source homicide data were collected as a case study in a single jurisdiction over a 1-year period. The data sets were compared to determine whether open sources could recreate the population of homicides and the variable responses collected in official data. Open-source data were able to replicate the population of homicides identified in the official data. Moreover, for every variable measured, the open sources captured as much, or more, of the information presented in the official data, and variables not available in official data, but potentially useful for testing theory, were identified in open sources. The results of the case study show that open-source data are potentially as effective as official data in identifying individual- and situational-level characteristics, provide access to variables not found in official homicide data, and offer geographic data that can be used to link macro-level characteristics to homicide events. © The Author(s) 2015.

  5. EUV brightness variations in the quiet Sun

    NASA Astrophysics Data System (ADS)

    Brković, A.; Rüedi, I.; Solanki, S. K.; Fludra, A.; Harrison, R. A.; Huber, M. C. E.; Stenflo, J. O.; Stucki, K.

    2000-01-01

    The Coronal Diagnostic Spectrometer (CDS) onboard the SOHO satellite has been used to obtain movies of quiet Sun regions at disc centre. These movies were used to study brightness variations of solar features at three different temperatures sampled simultaneously in the chromospheric He I 584.3 Å (2 × 10^4 K), the transition region O V 629.7 Å (2.5 × 10^5 K) and coronal Mg IX 368.1 Å (10^6 K) lines. In all parts of the quiet Sun, from darkest intranetwork to brightest network, we find significant variability in the He I and O V lines, while the variability in the Mg IX line is more marginal. The relative variability, defined by the rms of intensity normalised to the local intensity, is independent of brightness and strongest in the transition region line. Thus the relative variability is the same in the network and the intranetwork. More than half of the points on the solar surface show a relative variability, determined over a period of 4 hours, greater than 15.5% for the O V line, but only 5% of the points exhibit a variability above 25%. Most of the variability appears to take place on time-scales between 5 and 80 minutes for the He I and O V lines. Clear signs of "high variability" events are found. For these events the variability as a function of time seen in the different lines shows a good correlation. The correlation is higher for more variable events. These events coincide with the (time averaged) brightest points on the solar surface, i.e. they occur in the network. The spatial positions of the most variable points are identical in all the lines.
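    The relative variability measure used above (rms of intensity normalised to the local mean intensity) is straightforward to compute. The series below are synthetic; the 15.5% threshold quoted for the O V line is applied only for illustration:

```python
import numpy as np

# Relative variability = rms of the intensity fluctuations divided by the
# local mean intensity. The two synthetic series stand in for a quiet and a
# strongly variable pixel; real CDS time series would replace them.

def relative_variability(intensity):
    intensity = np.asarray(intensity, dtype=float)
    mean = intensity.mean()
    rms = np.sqrt(np.mean((intensity - mean) ** 2))
    return rms / mean

rng = np.random.default_rng(7)
quiet = 100.0 + 2.0 * rng.normal(size=240)         # ~2% fluctuations
active = 100.0 * (1 + 0.2 * rng.normal(size=240))  # ~20% fluctuations

high_variability = relative_variability(active) > 0.155   # the O V threshold
```

    Normalising by the local intensity is what makes the measure comparable between bright network and faint intranetwork pixels.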

  6. Point and Compact Hα Sources in the Interior of M33

    NASA Astrophysics Data System (ADS)

    Moody, J. Ward; Hintz, Eric G.; Joner, Michael D.; Roming, Peter W. A.; Hintz, Maureen L.

    2017-12-01

    A variety of interesting objects, such as Wolf-Rayet stars, tight OB associations, planetary nebulae, and X-ray binaries, can be discovered as point or compact sources in Hα surveys. How these objects are distributed through a galaxy sheds light on the galaxy's star formation rate and history, mass distribution, and dynamics. The nearby galaxy M33 is an excellent place to study the distribution of Hα-bright point sources in a flocculent spiral galaxy. We have reprocessed an archived WIYN continuum-subtracted Hα image of the inner 6.5 × 6.5 arcmin of M33 and, employing both eye and machine searches, have tabulated sources with a flux greater than approximately 10^-15 erg cm^-2 s^-1. We have effectively recovered previously mapped H II regions and have identified 152 unresolved point sources and 122 marginally resolved compact sources, of which 39 have not been previously identified in any archive. An additional 99 Hα sources were found to have sufficient archival flux values to generate a spectral energy distribution (SED). Using the SED, flux values, Hα flux value, and compactness, we classified 67 of these sources.

  7. [Estimation of nonpoint source pollutant loads and optimization of the best management practices (BMPs) in the Zhangweinan River basin].

    PubMed

    Xu, Hua-Shan; Xu, Zong-Xue; Liu, Pin

    2013-03-01

    One of the key techniques in establishing and implementing a TMDL (total maximum daily load) is to use a hydrological model to quantify non-point source pollutant loads, establish BMP scenarios, and reduce those loads. Non-point source pollutant loads for different hydrological years (wet, normal, and dry) were estimated using the SWAT model in the Zhangweinan River basin, and the spatial distribution characteristics of the loads were analyzed on the basis of the simulation results. During wet years, total nitrogen (TN) and total phosphorus (TP) accounted for 0.07% and 27.24% of the total non-point source pollutant loads, respectively. Spatially, agricultural and residential land on steep slopes contributes most of the non-point source pollutant load in the basin. Relative to the baseline period, 47 BMP scenarios were set up to simulate the reduction efficiency for 5 kinds of pollutants (organic nitrogen, organic phosphorus, nitrate nitrogen, dissolved phosphorus and mineral phosphorus) in 8 priority-control subbasins. By comparing cost-effectiveness among the scenarios, constructing vegetated ditches was identified as the best measure to reduce TN and TP, with unit pollutant reduction costs of 16.11-151.28 yuan·kg^-1 for TN and 100-862.77 yuan·kg^-1 for TP, making it the most cost-effective of the 47 BMP scenarios. The results could provide a scientific basis and technical support for environmental protection and sustainable utilization of water resources in the Zhangweinan River basin.
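
    The cost-effectiveness comparison used to rank BMP scenarios reduces to a unit-reduction cost (yuan per kg of pollutant removed); a minimal sketch, with hypothetical scenario names, costs, and reduction amounts rather than values from the SWAT study:

```python
# Hypothetical BMP scenarios: (name, annual cost in yuan, TN reduction in kg/yr).
# The names and numbers are illustrative, not values from the study.
scenarios = [
    ("vegetated ditch", 48_000, 2_600),
    ("buffer strip", 90_000, 1_900),
    ("fertilizer reduction", 30_000, 1_200),
]

def cost_per_kg(cost, reduction_kg):
    """Unit-reduction cost (yuan per kg removed) used to rank scenarios."""
    return cost / reduction_kg

# The most cost-effective scenario minimises the unit-reduction cost.
ranked = sorted(scenarios, key=lambda s: cost_per_kg(s[1], s[2]))
best = ranked[0][0]
```

    Ranking by cost per kilogram rather than by total reduction is what allows a modest, cheap measure to beat a larger but more expensive one.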

  8. Extrapolation of Functions of Many Variables by Means of Metric Analysis

    NASA Astrophysics Data System (ADS)

    Kryanev, Alexandr; Ivanov, Victor; Romanova, Anastasiya; Sevastianov, Leonid; Udumyan, David

    2018-02-01

    The paper considers the problem of extrapolating functions of several variables. It is assumed that the values of a function of m variables are given at a finite number of points in some domain D of m-dimensional space, and it is required to restore the value of the function at points outside D. The paper proposes a fundamentally new extrapolation method for functions of several variables, built on the interpolation scheme of metric analysis. The scheme consists of two stages. In the first stage, metric analysis is used to interpolate the function at points of D lying on the segment of the straight line connecting the centre of D with the point M at which the value of the function is to be restored. In the second stage, based on an autoregression model and metric analysis, the function values are predicted along that straight-line segment beyond the domain D up to the point M. A numerical example demonstrates the efficiency of the method.
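
    A heavily simplified sketch of the two-stage scheme, assuming inverse-distance weighting as a stand-in for the metric-analysis interpolation and a least-squares AR(1) model for the prediction stage (the paper's actual formulations are more elaborate):

```python
import numpy as np

def idw(x, pts, vals, eps=1e-9):
    """Inverse-distance weighting, a simple stand-in for the
    metric-analysis interpolation step."""
    d = np.linalg.norm(pts - x, axis=1)
    if d.min() < eps:                     # x coincides with a sample point
        return float(vals[d.argmin()])
    w = 1.0 / d**2
    return float(w @ vals / w.sum())

def extrapolate(pts, vals, m_point, n_inside=20, n_outside=5):
    """Stage 1: interpolate along the segment from the centre of D towards
    m_point, staying inside D.  Stage 2: fit an AR(1) model to that
    sequence by least squares and step it forward beyond D."""
    centre = pts.mean(axis=0)
    ts = np.linspace(0.0, 0.5, n_inside)  # interior half of the segment
    seq = np.array([idw(centre + t * (m_point - centre), pts, vals)
                    for t in ts])
    # AR(1): seq[k+1] ~ a * seq[k] + b, fitted by least squares
    A = np.column_stack([seq[:-1], np.ones(len(seq) - 1)])
    (a, b), *_ = np.linalg.lstsq(A, seq[1:], rcond=None)
    f = seq[-1]
    for _ in range(n_outside):            # predict along the line beyond D
        f = a * f + b
    return float(f)
```

    The split mirrors the paper's structure: interpolation supplies a one-dimensional sequence along the centre-to-M direction, and the autoregressive step carries that sequence past the boundary of D.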

  9. A stochastic-geometric model of soil variation in Pleistocene patterned ground

    NASA Astrophysics Data System (ADS)

    Lark, Murray; Meerschman, Eef; Van Meirvenne, Marc

    2013-04-01

    In this paper we examine the spatial variability of soil in parent material with complex spatial structure which arises from complex non-linear geomorphic processes. We show that this variability can be better modelled by a stochastic-geometric model than by a standard Gaussian random field. The benefits of the new model are seen in the reproduction of features of the target variable which influence processes like water movement and pollutant dispersal. Complex non-linear processes in the soil give rise to properties with non-Gaussian distributions. Even under a transformation to approximate marginal normality, such variables may have a more complex spatial structure than the Gaussian random field model of geostatistics can accommodate. In particular the extent to which extreme values of the variable are connected in spatially coherent regions may be misrepresented. As a result, for example, geostatistical simulation generally fails to reproduce the pathways for preferential flow in an environment where coarse infill of former fluvial channels or coarse alluvium of braided streams creates pathways for rapid movement of water. Multiple point geostatistics has been developed to deal with this problem. Multiple point methods proceed by sampling from a set of training images which can be assumed to reproduce the non-Gaussian behaviour of the target variable. The challenge is to identify appropriate sources of such images. In this paper we consider a mode of soil variation in which the soil varies continuously, exhibiting short-range lateral trends induced by local effects of the factors of soil formation which vary across the region of interest in an unpredictable way. The trends in soil variation are therefore only apparent locally, and the soil variation at regional scale appears random. We propose a stochastic-geometric model for this mode of soil variation called the Continuous Local Trend (CLT) model.
We consider a case study of soil formed in relict patterned ground with pronounced lateral textural variations arising from the presence of infilled ice-wedges of Pleistocene origin. We show how knowledge of the pedogenetic processes in this environment, along with some simple descriptive statistics, can be used to select and fit a CLT model for the apparent electrical conductivity (ECa) of the soil. We use the model to simulate realizations of the CLT process, and compare these with realizations of a fitted Gaussian random field. We show how statistics that summarize the spatial coherence of regions with small values of ECa, which are expected to have coarse texture and so larger saturated hydraulic conductivity, are better reproduced by the CLT model than by the Gaussian random field. This suggests that the CLT model could be used to generate an unlimited supply of training images to allow multiple point geostatistical simulation or prediction of this or similar variables.
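
    One statistic of the kind described, the spatial coherence of regions with small values, can be computed by flood-filling the below-quantile cells of a simulated field; the 0.2 quantile and 4-connectivity used here are assumptions for illustration, not the authors' exact statistic.

```python
import numpy as np
from collections import deque

def largest_low_patch_fraction(field, q=0.2):
    """Of the cells below the q-quantile of a 2-D field, the fraction that
    lie in the single largest 4-connected patch.  Coherent low-ECa channels
    give values near 1; scattered Gaussian-like lows give smaller values."""
    low = field < np.quantile(field, q)
    seen = np.zeros_like(low, dtype=bool)
    rows, cols = low.shape
    best = 0
    for i in range(rows):
        for j in range(cols):
            if low[i, j] and not seen[i, j]:
                # breadth-first flood fill of one connected patch
                size, queue = 0, deque([(i, j)])
                seen[i, j] = True
                while queue:
                    r, c = queue.popleft()
                    size += 1
                    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        rr, cc = r + dr, c + dc
                        if 0 <= rr < rows and 0 <= cc < cols \
                           and low[rr, cc] and not seen[rr, cc]:
                            seen[rr, cc] = True
                            queue.append((rr, cc))
                best = max(best, size)
    return best / max(int(low.sum()), 1)
```

    Applied to realizations of the two models, a statistic like this makes the contrast concrete: an infilled channel simulated by the CLT process keeps its low-ECa cells in one coherent patch, whereas a Gaussian random field with the same marginal distribution tends to scatter them.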

  10. Spectral Remote Sensing of Dust Sources on the U.S. Great Plains from 1930s Panchromatic Aerial Photography

    NASA Astrophysics Data System (ADS)

    Bolles, K.; Forman, S. L.

    2017-12-01

    Understanding the spatiotemporal dynamics of dust sources is essential to accurately quantify the various impacts of dust on the Earth system; however, a persistent deficiency in modeling dust emission is detailed knowledge of surface texture, geomorphology, and location of dust emissive surfaces, which strongly influence the effects of wind erosion. Particle emission is closely linked to both climatic and physical surface factors - interdependent variables that respond to climate nonlinearly and are mitigated by variability in land use or management practice. Recent efforts have focused on development of a preferential dust source (PDS) identification scheme to improve global dust-cycle models, which posits certain surfaces are more likely to emit dust than others, dependent upon associated sediment texture and geomorphological limitations which constrain sediment supply and availability. In this study, we outline an approach to identify and verify the physical properties and distribution of dust emissive surfaces in the U.S. Great Plains from historical aerial imagery in order to establish baseline records of dust sources, associated erodibility, and spatiotemporal variability, prior to the satellite era. We employ a multi-criteria, spatially-explicit model to identify counties that are "representative" of the broader landscape on the Great Plains during the 1930s. Parameters include: percentage of county cultivated and uncultivated per the 1935 Agricultural Census, average soil sand content, mean annual Palmer Drought Severity Index (PDSI), maximum annual temperature and percent difference to the 30-year normal maximum temperature, and annual precipitation and percent difference to the 30-year normal precipitation level. Within these areas we generate random points to select areas for photo reproduction. Selected frames are photogrammetrically scanned at 1200 dpi, radiometrically corrected, mosaicked and georectified to create an IKONOS-equivalent image. 
Gray-level co-occurrence matrices are calculated in a 3x3 moving window to determine textural properties of the mosaic and delineate bare surfaces of different sedimentological properties. Field stratigraphic assessments and spatially-referenced historical data are integrated within ArcGIS to ground-truth imagery.
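
    A minimal sketch of the GLCM texture step, computing the contrast statistic for a single window with a horizontal (0, 1) offset; the grey-level quantisation and the choice of contrast (rather than, say, homogeneity or entropy) are assumptions, and in the study this would be evaluated in a 3x3 moving window across the mosaic:

```python
import numpy as np

def glcm_contrast(window, levels=8):
    """Contrast from a grey-level co-occurrence matrix for the horizontal
    offset (0, 1): sum over (i - j)^2 * P(i, j).  High values indicate
    rough texture; values near zero indicate smooth, bare surfaces."""
    # quantise 8-bit intensities into a small number of grey levels
    g = np.floor(np.asarray(window) / 256.0 * levels).astype(int)
    g = g.clip(0, levels - 1)
    # count co-occurrences of horizontally adjacent grey-level pairs
    glcm = np.zeros((levels, levels))
    for a, b in zip(g[:, :-1].ravel(), g[:, 1:].ravel()):
        glcm[a, b] += 1
    p = glcm / glcm.sum()                 # normalise to probabilities
    i, j = np.indices(p.shape)
    return float(((i - j) ** 2 * p).sum())
```

    Sliding such a window over the georectified mosaic yields a texture image in which bare, sedimentologically uniform surfaces stand out as low-contrast regions.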

  11. Identifying Variability in Mental Models Within and Between Disciplines Caring for the Cardiac Surgical Patient.

    PubMed

    Brown, Evans K H; Harder, Kathleen A; Apostolidou, Ioanna; Wahr, Joyce A; Shook, Douglas C; Farivar, R Saeid; Perry, Tjorvi E; Konia, Mojca R

    2017-07-01

    The cardiac operating room is a complex environment requiring efficient and effective communication between multiple disciplines. The objectives of this study were to identify and rank critical time points during the perioperative care of cardiac surgical patients, and to assess variability in responses, as a correlate of a shared mental model, regarding the importance of these time points between and within disciplines. Using the Delphi technique, panelists from 3 institutions were tasked with developing a list of critical time points, which were subsequently assigned to pause point (PP) categories. Panelists then rated these PPs on a 100-point visual analog scale. Descriptive statistics were expressed as percentages, medians, and interquartile ranges (IQRs). We defined low response variability between panelists as an IQR ≤ 20, moderate response variability as an IQR > 20 and ≤ 40, and high response variability as an IQR > 40. Panelists identified a total of 12 PPs. The PPs identified by the highest number of panelists were (1) before surgical incision, (2) before aortic cannulation, (3) before cardiopulmonary bypass (CPB) initiation, (4) before CPB separation, and (5) at time of transfer of care from operating room (OR) to intensive care unit (ICU) staff. There was low variability among panelists' ratings of the PP "before surgical incision," moderate response variability for the PPs "before separation from CPB," "before transfer from OR table to bed," and "at time of transfer of care from OR to ICU staff," and high response variability for the remaining 8 PPs. In addition, the perceived importance of each of these PPs varies between disciplines and between institutions. Cardiac surgical providers recognize distinct critical time points during cardiac surgery. 
However, there is a high degree of variability within and between disciplines as to the importance of these times, suggesting an absence of a shared mental model among disciplines caring for cardiac surgical patients during the perioperative period. A lack of a shared mental model could be one of the factors contributing to preventable errors in cardiac operating rooms.
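
    The study's IQR cut-offs map directly onto a small classifier; a sketch with illustrative panelist ratings, not data from the study:

```python
import numpy as np

def response_variability(ratings):
    """Classify panelist agreement using the study's IQR cut-offs:
    low (IQR <= 20), moderate (20 < IQR <= 40), high (IQR > 40)."""
    q1, q3 = np.percentile(ratings, [25, 75])
    iqr = q3 - q1
    if iqr <= 20:
        return "low"
    return "moderate" if iqr <= 40 else "high"
```

    Using the IQR rather than the range or standard deviation keeps the agreement measure robust to a single outlying panelist.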

  12. Outlier Resistant Predictive Source Encoding for a Gaussian Stationary Nominal Source.

    DTIC Science & Technology

    1987-09-18

    …breakdown point and influence function. The proposed sequence of predictive encoders attains a strictly positive breakdown point and a uniformly bounded influence function, at the expense of increased mean difference-squared distortion and differential entropy, at the Gaussian nominal source.

  13. Impact of sampling techniques on measured stormwater quality data for small streams

    USGS Publications Warehouse

    Harmel, R.D.; Slade, R.M.; Haney, R.L.

    2010-01-01

    Science-based sampling methodologies are needed to enhance water quality characterization for setting appropriate water quality standards, developing Total Maximum Daily Loads, and managing nonpoint source pollution. Storm event sampling, which is vital for adequate assessment of water quality in small (wadeable) streams, is typically conducted by manual grab or integrated sampling or with an automated sampler. Although it is typically assumed that samples from a single point adequately represent mean cross-sectional concentrations, especially for dissolved constituents, this assumption of well-mixed conditions has received limited evaluation. Similarly, the impact of temporal (within-storm) concentration variability is rarely considered. Therefore, this study evaluated differences in stormwater quality measured in small streams with several common sampling techniques, which in essence evaluated within-channel and within-storm concentration variability. Constituent concentrations from manual grab samples and from integrated samples were compared for 31 events, then concentrations were also compared for seven events with automated sample collection. Comparison of sampling techniques indicated varying degrees of concentration variability within channel cross sections for both dissolved and particulate constituents, which is contrary to common assumptions of substantial variability in particulate concentrations and of minimal variability in dissolved concentrations. Results also indicated the potential for substantial within-storm (temporal) concentration variability for both dissolved and particulate constituents. Thus, failing to account for potential cross-sectional and temporal concentration variability in stormwater monitoring projects can introduce additional uncertainty in measured water quality data. Copyright © 2010 by the American Society of Agronomy, Crop Science Society of America, and Soil Science Society of America. All rights reserved.

  14. Observing the Fast X-ray Spectral Variability of NLS1 1H1934-063 with XMM-Newton and NuSTAR

    NASA Astrophysics Data System (ADS)

    Frederick, Sara; Kara, Erin; Reynolds, Christopher S.

    2017-08-01

    The most variable active galactic nuclei (AGN), taken together, are a compelling wellspring of interesting accretion-related phenomena. They can exhibit dramatic variability in the X-ray band on a range of timescales down to a few minutes. We present the exemplifying case study of 1H1934-063 (z = 0.0102), a narrow-line Seyfert 1 (NLS1) that is among the most variable AGN ever observed with XMM-Newton. We present spectral and temporal analyses of a concurrent XMM-Newton and NuSTAR observation taken in 2015 and lasting 120 ks, during which the source exhibited a steep (factor of 1.5) plummet and subsequent full recovery of flux that we explore in detail here. Combined spectral and timing results point to a dramatic change in the continuum on timescales as short as a few ks. Similar to other highly variable Seyfert 1s, this AGN is quite X-ray bright and displays strong reflection spectral features. We find agreement with a change in the continuum, and we rule out absorption as the cause for this dramatic variability observed even at NuSTAR energies. We compare detailed time-resolved spectral fitting with Fourier-based timing analysis in order to constrain coronal geometry, dynamics, and emission/absorption processes dictating the nature of this variability. We also announce the discovery of a Fe-K time lag between the hard X-ray continuum emission (1 - 4 keV) and its relativistically-blurred reflection off the inner accretion flow (0.3 - 1 keV).

  15. Ammonia and amino acid profiles in liver cirrhosis: effects of variables leading to hepatic encephalopathy.

    PubMed

    Holecek, Milan

    2015-01-01

    Hyperammonemia and severe amino acid imbalances play a central role in hepatic encephalopathy (HE). The article demonstrates that the main source of ammonia in cirrhotic subjects is the activated breakdown of glutamine (GLN) in enterocytes and the kidneys, and the main source of GLN is ammonia detoxification to GLN in the brain and skeletal muscle. Branched-chain amino acids (BCAA; valine, leucine, and isoleucine) decrease due to activated GLN synthesis in muscle. Aromatic amino acids (AAA; phenylalanine, tyrosine, and tryptophan) and methionine increase due to portosystemic shunts and the reduced metabolic capacity of the diseased liver. The effects on aminoacidemia of the following variables that may affect the course of liver disease are discussed: nutritional status, starvation, protein intake, inflammation, acute hepatocellular damage, bleeding from varices, portosystemic shunts, hepatic cancer, and renal failure. It is concluded that (1) neither ammonia nor amino acid concentrations correlate closely with the severity of liver disease; (2) the BCAA/AAA ratio could be used as a good index of liver impairment and for early detection of derangements in amino acid metabolism; (3) variables potentially leading to overt encephalopathy exert substantial but uneven effects; and (4) careful monitoring of ammonia and aminoacidemia may discover important break points in the course of liver disease and indicate the appropriate therapeutic approach. Of special importance might be isoleucine deficiency in bleeding from varices, arginine deficiency in sepsis, and a marked rise of GLN and ammonia levels that may appear in all events leading to HE. Copyright © 2015 Elsevier Inc. All rights reserved.

  16. Quantifying co-benefits of source-specific CO2 emission reductions in Canada and the US: An adjoint sensitivity analysis

    NASA Astrophysics Data System (ADS)

    Zhao, S.; Soltanzadeh, M.; Pappin, A. J.; Hakami, A.; Turner, M. D.; Capps, S.; Henze, D. K.; Percell, P.; Bash, J. O.; Napelenok, S. L.; Pinder, R. W.; Russell, A. G.; Nenes, A.; Baek, J.; Carmichael, G. R.; Stanier, C. O.; Chai, T.; Byun, D.; Fahey, K.; Resler, J.; Mashayekhi, R.

    2016-12-01

    Scenario-based studies that evaluate air quality co-benefits of the collective measures introduced under a climate policy scenario cannot distinguish between benefits accrued from CO2 reductions among sources of different types and at different locations. Location and sector dependencies are important factors that can be captured in an adjoint-based analysis of CO2 reduction co-benefits. The present study aims to quantify how the ancillary benefits of reducing criteria co-pollutants vary spatially and by sector. The adjoint of USEPA's CMAQ was applied to quantify the health benefits associated with emission reductions of criteria pollutants (NOx) in the on-road mobile sector, Electric Generation Units (EGUs), and other select sectors on a location-by-location basis across the US and Canada. These health benefits are then converted to CO2 emission reduction co-benefits by accounting for source-specific emission rates of criteria pollutants relative to CO2. We integrate the results from the adjoint of CMAQ with emission estimates from the 2011 NEI at the county level, and point source data from EPA's Air Markets Program Data and the National Pollutant Release Inventory (NPRI) for Canada. Our preliminary results show that the monetized health benefits (due to averted chronic mortality) associated with a reduction of 1 ton of CO2 emissions are up to $65/ton in Canada and $200/ton in the US for the on-road mobile sector. For EGU sources, co-benefits are estimated at up to $100/ton and $10/ton for the US and Canada, respectively. For Canada, the calculated co-benefits through gaseous pollutants including NOx are larger than those through PM2.5, due to the established association between NO2 exposure and chronic mortality. Calculated co-benefits show a great deal of spatial variability across emission locations for different sectors and sub-sectors. 
Implications of such spatial variability in devising control policy options that effectively address both climate and air quality objectives will be discussed.
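
    The conversion from an adjoint-derived health benefit per ton of criteria pollutant to a CO2 co-benefit is a co-emission-ratio scaling; a sketch with hypothetical emission rates and benefit values, not figures from the study:

```python
def co_benefit_per_ton_co2(benefit_per_ton_nox, nox_tons, co2_tons):
    """Scale a health benefit per ton of NOx by the source's
    NOx-to-CO2 co-emission ratio to get a co-benefit per ton of CO2."""
    return benefit_per_ton_nox * (nox_tons / co2_tons)

# Illustrative source: 2 t NOx and 1000 t CO2 per year, $30,000/t NOx benefit.
example = co_benefit_per_ton_co2(30_000, 2, 1_000)
```

    Because both the adjoint benefit and the co-emission ratio vary by location and sector, the same ton of CO2 abated can carry very different ancillary value at different sources.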

  17. Mapping algorithm for freeform construction using non-ideal light sources

    NASA Astrophysics Data System (ADS)

    Li, Chen; Michaelis, D.; Schreiber, P.; Dick, L.; Bräuer, A.

    2015-09-01

    Using conventional mapping algorithms for the construction of illumination freeform optics, arbitrary target patterns can be obtained for idealized sources, e.g. collimated light or point sources. Each freeform surface element generates an image point at the target, and the light intensity of an image point corresponds to the area of the freeform surface element that generates it. For sources with a pronounced extension and ray divergence, e.g. an LED at a small source-freeform distance, the image points are blurred, and the blurred patterns may differ from point to point. Moreover, due to Fresnel losses and vignetting, the relationship between the light intensity of image points and the area of freeform surface elements becomes complicated. These individual light distributions of each freeform element are taken into account in a mapping algorithm. To this end, steepest-descent procedures are used to adapt the mapping goal. A structured target pattern for an optics system with an ideal source is computed by applying the corresponding linear optimization matrices. Special weighting and smoothing factors are included in the procedure to achieve certain edge conditions and to ensure the manufacturability of the freeform surface. The corresponding linear optimization matrices, which are the light distribution patterns of each of the freeform surface elements, are obtained by conventional raytracing with a realistic source. Nontrivial source geometries, such as LED irregularities due to bonding or source fine structures, and complex ray divergence behavior can easily be considered. Additionally, Fresnel losses, vignetting and even stray light are taken into account. After optimization iterations with a realistic source, the initial mapping goal, a structured target pattern designed for an ideal source, can be achieved by the optics system. The algorithm is applied to several design examples. A few simple tasks are presented to discuss the abilities and limitations of this method. A homogeneous LED-illumination system design is also presented in which, despite a strongly tilted incident direction, a homogeneous distribution is achieved with a rather compact optics system and a short working distance, using a relatively large LED source. It is shown that the light distribution patterns from the freeform surface elements can differ significantly from one another. The generation of a structured target pattern, applying the weighting and smoothing factors, is discussed. Finally, freeform designs for much more complex sources, such as clusters of LED sources, are presented.

  18. A selective array activation method for the generation of a focused source considering listening position.

    PubMed

    Song, Min-Ho; Choi, Jung-Woo; Kim, Yang-Hann

    2012-02-01

    A focused source can provide an auditory illusion of a virtual source placed between the loudspeaker array and the listener. When a focused source is generated by a time-reversed acoustic focusing solution, its use as a virtual source is limited by artifacts caused by convergent waves traveling towards the focusing point. This paper proposes an array activation method to reduce these artifacts for a selected listening point inside an array of arbitrary shape. Results show that the energy of the convergent waves can be reduced by up to 60 dB over a large region including the selected listening point. © 2012 Acoustical Society of America.

  19. Nitrogen speciation and phosphorus fractionation dynamics in a lowland Chalk catchment.

    PubMed

    Yates, C A; Johnes, P J

    2013-02-01

    A detailed analysis of temporal and spatial trends in nitrogen (N) speciation and phosphorus (P) fractionation in the Wylye, a lowland Chalk sub-catchment of the Hampshire Avon, UK, is presented, identifying the sources contributing to nutrient enrichment, and temporal variability in the fractionation of nutrients in transit from the headwaters to the lower reaches of the river. Samples were collected weekly from ten monitoring stations, with daily sampling at three further sites over one year; monthly inorganic N and total reactive P (TRP) concentrations at three of the ten weekly monitoring stations over a ten-year period are also presented. The data indicate significant daily and seasonal variation in nutrient fractionation in the water column, resulting from plant uptake of dissolved organic and inorganic nutrient fractions in the summer months, increased delivery of both N and P from diffuse sources in the autumn to winter period and during high flow events, and lack of dilution of point source discharges to the Wylye from septic tanks, small package Sewage Treatment Works (STW) and urban Waste Water Treatment Works (WwTW) during the summer low flow period. Weekly data show that contributing source areas vary along the river, with headwater N and P strongly influenced by diffuse inorganic N and particulate P fluxes, and SRP and organic-rich point source contributions from STW and WwTW having a greater influence in the lower reaches. Long-term data show a decrease in TRP concentrations at all three monitoring stations, with the most pronounced decrease occurring downstream from Warminster WwTW, following the introduction of P stripping at the works in 2001. Inorganic N demonstrates no statistically significant change over the ten-year period of record in the rural headwaters, but an increase in the lower reaches downstream from the WwTW, which may be due to urban expansion in the lower catchment. Copyright © 2012 Elsevier B.V. All rights reserved.

  20. Chandra Observations of M31

    NASA Technical Reports Server (NTRS)

    Garcia, Michael; Lavoie, Anthony R. (Technical Monitor)

    2000-01-01

    We report on Chandra observations of the nearest spiral galaxy, M31. The nuclear source seen with previous X-ray observatories is resolved into five point sources. One of these sources is within 1 arcsec of the M31 central super-massive black hole. As compared to the other point sources in M31, this nuclear source has an unusually soft spectrum. Based on the spatial coincidence and the unusual spectrum, we identify this source with the central black hole. A bright transient is detected 26 arcsec to the west of the nucleus, which may be associated with a stellar-mass black hole. We will report on a comparison of the X-ray spectrum of the diffuse emission and point sources seen in the central few arcmin.
