Sample records for source direction estimation

  1. Determination of the direction to a source of antineutrinos via inverse beta decay in Double Chooz

    NASA Astrophysics Data System (ADS)

    Nikitenko, Ya.

    2016-11-01

Determining the direction to a source of neutrinos (and antineutrinos) is an important problem for the physics of supernovae and of the Earth. The direction to a source of antineutrinos can be estimated through the reaction of inverse beta decay. We show that the reactor neutrino experiment Double Chooz has unique capabilities for studying the antineutrino signal from point-like sources. Contemporary experimental data on antineutrino directionality are reviewed. A rigorous mathematical approach for neutrino direction studies is developed, and exact expressions are obtained for the precision of the simple mean estimator of the neutrino direction, for normal and exponential distributions, both for a finite sample and in the limiting case of many events.
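The scaling of the simple mean estimator's precision with sample size can be illustrated numerically. The sketch below is an illustration only; the 2-D geometry, displacement bias, and spread are assumptions, not the paper's actual detector model. Each simulated event contributes a displacement vector whose mean points weakly along the true source axis, and the estimator simply averages them.

```python
import numpy as np

rng = np.random.default_rng(0)

def mean_direction_std(n_events, bias=0.2, sigma=1.0, n_trials=2000):
    """Std. dev. (radians) of the simple mean direction estimator.

    Each event contributes a 2-D displacement normally distributed about a
    mean that points weakly along the true source axis (the x axis here),
    loosely mimicking the forward neutron displacement in inverse beta decay.
    """
    errors = np.empty(n_trials)
    for k in range(n_trials):
        disp = rng.normal([bias, 0.0], sigma, size=(n_events, 2))
        mx, my = disp.mean(axis=0)              # the simple mean estimator
        errors[k] = np.arctan2(my, mx)          # angular error vs. true axis
    return errors.std()

# Precision improves roughly as 1/sqrt(N) for large samples
err_500, err_2000 = mean_direction_std(500), mean_direction_std(2000)
```

Quadrupling the sample roughly halves the angular spread, consistent with the large-N limiting behavior the abstract refers to.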

  2. Recent Approaches to Estimate Associations Between Source-Specific Air Pollution and Health.

    PubMed

    Krall, Jenna R; Strickland, Matthew J

    2017-03-01

    Estimating health effects associated with source-specific exposure is important for better understanding how pollution impacts health and for developing policies to better protect public health. Although epidemiologic studies of sources can be informative, these studies are challenging to conduct because source-specific exposures (e.g., particulate matter from vehicles) often are not directly observed and must be estimated. We reviewed recent studies that estimated associations between pollution sources and health to identify methodological developments designed to address important challenges. Notable advances in epidemiologic studies of sources include approaches for (1) propagating uncertainty in source estimation into health effect estimates, (2) assessing regional and seasonal variability in emissions sources and source-specific health effects, and (3) addressing potential confounding in estimated health effects. Novel methodological approaches to address challenges in studies of pollution sources, particularly evaluation of source-specific health effects, are important for determining how source-specific exposure impacts health.

  3. Hardwall acoustical characteristics and measurement capabilities of the NASA Lewis 9 x 15 foot low speed wind tunnel

    NASA Technical Reports Server (NTRS)

    Rentz, P. E.

    1976-01-01

Experimental evaluations were performed of the acoustical characteristics and of the source sound power and directionality measurement capabilities of the NASA Lewis 9 x 15 foot low speed wind tunnel in the untreated, or hardwall, configuration. The results indicate that source sound power estimates can be made using only settling chamber sound pressure measurements. The accuracy of these estimates, expressed as one standard deviation, can be improved from ±4 dB to ±1 dB if sound pressure measurements in the preparation room and diffuser are also used and source directivity information is utilized. A simple procedure is presented. Acceptably accurate measurements of the direct acoustic field radiated by a source were found to be limited by the reverberant characteristics of the test section to within 3.0 feet of the source, for both omnidirectional and highly directional sources. Wind-on noise levels measured in the test section, settling chamber, and preparation room were found to depend on the sixth power of tunnel velocity. The levels were compared with various analytic models. Results are presented and discussed.

  4. Quantitative identification of riverine nitrogen from point, direct runoff and base flow sources.

    PubMed

    Huang, Hong; Zhang, Baifa; Lu, Jun

    2014-01-01

We present a methodological example for quantifying the contributions of riverine total nitrogen (TN) from point, direct runoff, and base flow sources by combining a recursive digital filter technique with statistical methods. First, we separated daily riverine flow into direct runoff and base flow using a recursive digital filter; then, a statistical model was established using daily simultaneous data for TN load, direct runoff rate, base flow rate, and temperature; finally, the TN loadings from direct runoff and base flow sources were estimated by inversion. As a case study, this approach was applied to identify the TN source contributions in the Changle River, eastern China. Results showed that, during 2005-2009, the total annual TN input to the river was 1,700.4±250.2 ton, and the contributions of point, direct runoff, and base flow sources were 17.8±2.8%, 45.0±3.6%, and 37.2±3.9%, respectively. The innovation of the approach is that nitrogen from direct runoff and base flow sources can be quantified separately. The approach is simple, yet detailed enough to take the major factors into account, providing an effective and reliable method for riverine nitrogen loading estimation and source apportionment.
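The flow-separation step can be sketched with the widely used one-parameter Lyne-Hollick recursive digital filter; the exact filter variant, parameter value, and example hydrograph below are assumptions, not the paper's data.

```python
import numpy as np

def separate_flow(flow, alpha=0.925):
    """One-pass Lyne-Hollick recursive digital filter (a common variant).

    Splits total streamflow into direct runoff (quickflow) and base flow.
    """
    flow = np.asarray(flow, dtype=float)
    quick = np.zeros_like(flow)
    for t in range(1, len(flow)):
        q = alpha * quick[t - 1] + 0.5 * (1.0 + alpha) * (flow[t] - flow[t - 1])
        quick[t] = min(max(q, 0.0), flow[t])    # constrain 0 <= quickflow <= flow
    return quick, flow - quick                  # (direct runoff, base flow)

# A storm pulse riding on a slowly receding base flow (m^3/s)
flow = [5.0, 5.0, 30.0, 60.0, 40.0, 20.0, 10.0, 7.0, 6.0, 5.5]
quick, base = separate_flow(flow)
```

In practice the filter is usually run in multiple forward and backward passes; a single forward pass is shown here for clarity.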

  5. Closed-Form 3-D Localization for Single Source in Uniform Circular Array with a Center Sensor

    NASA Astrophysics Data System (ADS)

    Bae, Eun-Hyon; Lee, Kyun-Kyung

A novel closed-form algorithm is presented for estimating the 3-D location (azimuth angle, elevation angle, and range) of a single source with a uniform circular array (UCA) that includes a center sensor. Exploiting the centrosymmetry of the UCA and the noncircularity of the source, the proposed algorithm first decouples and estimates the 2-D direction of arrival (DOA), i.e., the azimuth and elevation angles, and then estimates the range of the source. Despite its low computational complexity, the proposed algorithm provides estimation performance close to that of the benchmark 3-D MUSIC estimator.

  6. A comparison of NEWS and SPARROW models to understand sources of nitrogen delivered to US coastal areas

    EPA Science Inventory

    The relative contributions of different anthropogenic and natural sources of in-stream nitrogen (N) cannot be directly measured at whole-watershed scales. Hence, source attribution estimates beyond the scale of small catchments must rely on models. Although such estimates have be...

  7. Using multiple composite fingerprints to quantify fine sediment source contributions: A new direction

    USDA-ARS?s Scientific Manuscript database

    Sediment source fingerprinting provides a vital means for estimating sediment source contributions, which are needed not only for soil conservation planning but also for erosion model evaluation. A single optimum composite fingerprint has been widely used in the literature to estimate sediment prov...

  8. Strong Ground Motion Simulation and Source Modeling of the April 1, 2006 Tai-Tung Earthquake Using Empirical Green's Function Method

    NASA Astrophysics Data System (ADS)

    Huang, H.; Lin, C.

    2010-12-01

The Tai-Tung earthquake (ML = 6.2) occurred in the southeastern part of Taiwan on April 1, 2006. We examine the source model of this event using seismograms observed by the CWBSN at five stations surrounding the source area. An objective estimation method was used to obtain the parameters N and C needed for the empirical Green's function method of Irikura (1986). This "source spectral ratio fitting method" estimates the seismic moment ratio between a large and a small event, together with their corner frequencies, by fitting the observed source spectral ratio with the ratio of model source spectra (Miyake et al., 1999); it has the advantage of removing site effects when evaluating the parameters. The best source model of the 2006 Tai-Tung mainshock was estimated by comparing the observed waveforms with synthetics computed by the empirical Green's function method. The asperity is about 3.5 km long along the strike direction and 7.0 km wide along the dip direction. The rupture started at the lower left of the asperity and extended radially toward the upper right.
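The spectral-ratio fitting step can be sketched with an omega-square source model, fitting the ratio of the large and small events' spectra for the moment ratio and the two corner frequencies. The model form, noise level, and parameter values below are illustrative assumptions, not the study's data.

```python
import numpy as np
from scipy.optimize import curve_fit

def log_spectral_ratio(f, log_moment_ratio, fc_large, fc_small):
    """Log of the omega-square source spectral ratio between a large event
    (corner frequency fc_large) and a small one (fc_small)."""
    return (log_moment_ratio
            + np.log1p((f / fc_small) ** 2)
            - np.log1p((f / fc_large) ** 2))

# Synthetic "observed" ratio: moment ratio 300, corner frequencies 0.5 and 4 Hz
f = np.logspace(-1, 1, 200)                 # 0.1-10 Hz
rng = np.random.default_rng(1)
obs = log_spectral_ratio(f, np.log(300.0), 0.5, 4.0) + 0.05 * rng.standard_normal(f.size)

popt, _ = curve_fit(log_spectral_ratio, f, obs, p0=[np.log(100.0), 1.0, 2.0])
moment_ratio, fc_large, fc_small = np.exp(popt[0]), popt[1], popt[2]
# One common prescription (Irikura, 1986): N ~ fc_small / fc_large,
# and C = moment_ratio / N**3 -- exact usage in the study is an assumption here.
```

Fitting in log space keeps the low- and high-frequency parts of the ratio comparably weighted.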

  9. Strong Ground Motion Simulation and Source Modeling of the December 16, 1993 Tapu Earthquake, Taiwan, Using Empirical Green's Function Method

    NASA Astrophysics Data System (ADS)

    Huang, H.-C.; Lin, C.-Y.

    2012-04-01

The Tapu earthquake (ML 5.7) occurred in the southwestern part of Taiwan on December 16, 1993. We examine the source model of this event using seismograms observed by the CWBSN at eight stations surrounding the source area. An objective estimation method is used to obtain the parameters N and C needed for the empirical Green's function method of Irikura (1986). This "source spectral ratio fitting method" estimates the seismic moment ratio between a large and a small event, together with their corner frequencies, by fitting the observed source spectral ratio with the ratio of model source spectra (Miyake et al., 1999); it has the advantage of removing site effects when evaluating the parameters. The best source model of the 1993 Tapu mainshock is estimated by comparing the observed waveforms with synthetics computed by the empirical Green's function method. The asperity is about 2.1 km long along the strike direction and 1.5 km wide along the dip direction. The rupture started at the lower right of the asperity and extended radially toward the upper left.

  10. Strong Ground Motion Simulation and Source Modeling of the December 16, 1993 Tapu Earthquake, Taiwan, Using Empirical Green's Function Method

    NASA Astrophysics Data System (ADS)

    Huang, H.; Lin, C.

    2012-12-01

The Tapu earthquake (ML 5.7) occurred in the southwestern part of Taiwan on December 16, 1993. We examine the source model of this event using seismograms observed by the CWBSN at eight stations surrounding the source area. An objective estimation method is used to obtain the parameters N and C needed for the empirical Green's function method of Irikura (1986). This "source spectral ratio fitting method" estimates the seismic moment ratio between a large and a small event, together with their corner frequencies, by fitting the observed source spectral ratio with the ratio of model source spectra (Miyake et al., 1999); it has the advantage of removing site effects when evaluating the parameters. The best source model of the 1993 Tapu mainshock is estimated by comparing the observed waveforms with synthetics computed by the empirical Green's function method. The asperity is about 2.1 km long along the strike direction and 1.5 km wide along the dip direction. The rupture started at the lower right of the asperity and extended radially toward the upper left.

  11. Examining single-source secondary impacts estimated from brute-force, decoupled direct method, and advanced plume treatment approaches

    EPA Science Inventory

    In regulatory assessments, there is a need for reliable estimates of the impacts of precursor emissions from individual sources on secondary PM2.5 (particulate matter with aerodynamic diameter less than 2.5 microns) and ozone. Three potential methods for estimating th...

  12. Characterizing source-sink dynamics with genetic parentage assignments

    Treesearch

    M. Zachariah Peery; Steven R. Beissinger; Roger F. House; Martine Berube; Laurie A. Hall; Anna Sellas; Per J. Palsboll

    2008-01-01

    Source-sink dynamics have been suggested to characterize the population structure of many species, but the prevalence of source-sink systems in nature is uncertain because of inherent challenges in estimating migration rates among populations. Migration rates are often difficult to estimate directly with demographic methods, and indirect genetic methods are subject to...

  13. Estimating locations and total magnetization vectors of compact magnetic sources from scalar, vector, or tensor magnetic measurements through combined Helbig and Euler analysis

    USGS Publications Warehouse

    Phillips, J.D.; Nabighian, M.N.; Smith, D.V.; Li, Y.

    2007-01-01

The Helbig method for estimating total magnetization directions of compact sources from magnetic vector components is extended so that tensor magnetic gradient components can be used instead. Depths of the compact sources can be estimated using the Euler equation, and their dipole moment magnitudes can be estimated using a least squares fit to the vector component or tensor gradient component data. © 2007 Society of Exploration Geophysicists.

  14. Parameter Estimation of Multiple Frequency-Hopping Signals with Two Sensors

    PubMed Central

    Pan, Jin; Ma, Boyuan

    2018-01-01

This paper focuses on parameter estimation of multiple wideband emitting sources with time-varying frequencies, such as two-dimensional (2-D) direction of arrival (DOA) estimation and signal sorting, with a low-cost circular synthetic array (CSA) consisting of only two rotating sensors. Our basic idea is to decompose the received data, which is a superimposition of phase measurements from multiple sources, into separated groups and to estimate the DOA associated with each source separately. Motivated by joint parameter estimation, we adopt the expectation-maximization (EM) algorithm; our method involves two steps, namely the expectation step (E-step) and the maximization step (M-step). In the E-step, the correspondence of each signal with its emitting source is found. Then, in the M-step, the maximum-likelihood (ML) estimates of the DOA parameters are obtained. These two steps are executed iteratively and alternately to jointly determine the DOAs and sort the multiple signals. Closed-form DOA estimation formulae are developed by ML estimation based on the phase data, which also achieve optimal estimation. Directional ambiguity is addressed by another ML estimation method based on the received complex responses. The Cramer-Rao lower bound is derived for understanding the estimation accuracy and for performance comparison. The proposed method is verified with simulations. PMID:29617323
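The E-step/M-step alternation the paper describes (assign each measurement to its emitting source, then update each source's parameters by maximum likelihood) can be illustrated with a minimal EM loop on scalar measurements from two sources. This is a deliberately simplified stand-in, since the paper works with phase measurements and DOA parameters.

```python
import numpy as np

rng = np.random.default_rng(3)

# Mixed measurements from two "sources" with distinct constant parameters
x = np.concatenate([rng.normal(-2.0, 0.5, 300), rng.normal(3.0, 0.5, 300)])
rng.shuffle(x)

mu = np.array([-1.0, 1.0])                  # initial source-parameter guesses
sigma = 1.0                                 # fixed spread (a simplification)
for _ in range(50):
    # E-step: responsibility of each source for each measurement
    logp = -((x[:, None] - mu[None, :]) ** 2) / (2.0 * sigma ** 2)
    w = np.exp(logp - logp.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)
    # M-step: ML update of each source's parameter given the soft assignments
    mu = (w * x[:, None]).sum(axis=0) / w.sum(axis=0)

mu = np.sort(mu)                            # recovered source parameters
```

The soft assignments in the E-step play the role of signal sorting, and the weighted means in the M-step are the corresponding ML updates.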

  15. Blind source separation and localization using microphone arrays

    NASA Astrophysics Data System (ADS)

    Sun, Longji

The blind source separation and localization problem for audio signals is studied using microphone arrays. Pure delay mixtures of source signals, typically encountered in outdoor environments, are considered. Our proposed approach utilizes subspace methods, including the multiple signal classification (MUSIC) and estimation of signal parameters via rotational invariance techniques (ESPRIT) algorithms, to estimate the directions of arrival (DOAs) of the sources from the collected mixtures. Since audio signals are generally considered broadband, the DOA estimates at the frequencies with the largest sums of squared amplitude values are combined to obtain the final DOA estimates. Using the estimated DOAs, the corresponding mixing and demixing matrices are computed, and the source signals are recovered using the inverse short-time Fourier transform. Subspace methods take advantage of the spatial covariance matrix of the collected mixtures to achieve robustness to noise. While subspace methods have been studied for localizing radio-frequency signals, audio signals have special properties: they are nonstationary, naturally broadband, and analog, all of which make their separation and localization more challenging. Moreover, our algorithm is essentially equivalent to the beamforming technique, which suppresses signals in unwanted directions and recovers only the signals in the estimated DOAs. Several crucial issues related to our algorithm, and their solutions, are discussed, including source number estimation, spatial aliasing, artifact filtering, different ways of mixture generation, and source coordinate estimation using multiple arrays. Additionally, comprehensive simulations and experiments have been conducted to examine various aspects of the algorithm.
Unlike the existing blind source separation and localization methods, which are generally time consuming, our algorithm needs signal mixtures of only a short duration and therefore supports real-time implementation.
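A minimal narrowband MUSIC sketch for a uniform linear array shows the subspace idea this work builds on. The array geometry, SNR, and known source count here are illustrative assumptions; for broadband audio the method would be applied per frequency bin.

```python
import numpy as np

rng = np.random.default_rng(7)
M, N, d = 8, 400, 0.5                       # mics, snapshots, spacing (wavelengths)
doas = np.deg2rad([-20.0, 35.0])            # true source directions

def steering(theta):
    """ULA steering vectors for angles theta (radians), shape (M, len(theta))."""
    return np.exp(-2j * np.pi * d * np.arange(M)[:, None] * np.sin(np.atleast_1d(theta)))

A = steering(doas)
S = (rng.standard_normal((2, N)) + 1j * rng.standard_normal((2, N))) / np.sqrt(2)
X = A @ S + 0.1 * (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N)))

R = X @ X.conj().T / N                      # spatial covariance of the mixtures
_, V = np.linalg.eigh(R)                    # eigenvalues in ascending order
En = V[:, : M - 2]                          # noise subspace (2 sources assumed)

grid = np.deg2rad(np.linspace(-90, 90, 1801))
p = 1.0 / np.linalg.norm(En.conj().T @ steering(grid), axis=0) ** 2  # MUSIC spectrum

peaks = [i for i in range(1, len(p) - 1) if p[i] > p[i - 1] and p[i] > p[i + 1]]
peaks.sort(key=lambda i: p[i], reverse=True)
est = np.sort(np.rad2deg(grid[peaks[:2]]))  # two strongest peaks -> DOA estimates
```

The steering vectors at the true DOAs are orthogonal to the noise subspace, which is why the spectrum peaks there.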

  16. Flight parameter estimation using instantaneous frequency and direction of arrival measurements from a single acoustic sensor node.

    PubMed

    Lo, Kam W

    2017-03-01

When an airborne sound source travels past a stationary ground-based acoustic sensor node in a straight line, at constant altitude and at a constant speed that is not much less than the speed of sound in air, the movement of the source during the propagation of the signal from the source to the sensor node (commonly referred to as the "retardation effect") enables the full set of flight parameters of the source to be estimated by measuring the direction of arrival (DOA) of the signal at the sensor node over a sufficiently long period of time. This paper studies the possibility of using instantaneous frequency (IF) measurements from the sensor node to improve the precision of the flight parameter estimates when the source spectrum contains a harmonic line of constant frequency. A simplified Cramer-Rao lower bound analysis shows that the standard deviations of the flight parameter estimates can be reduced when IF measurements are used together with DOA measurements. Two flight parameter estimation algorithms that utilize both IF and DOA measurements are described, and their performance is evaluated using both simulated and real data.
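The retardation effect can be made concrete with a small sketch: for level flight past a ground sensor, the received instantaneous frequency of a constant-frequency harmonic line follows from the range rate at the emission (retarded) time, which has a closed form for this geometry. All parameter values below are illustrative assumptions.

```python
import numpy as np

def observed_frequency(t, f0=200.0, v=100.0, h=300.0, t0=0.0, c=343.0):
    """Received instantaneous frequency at time t from a harmonic source
    in level flight (speed v, altitude h, directly overhead at time t0)."""
    dt = t - t0
    # Propagation delay tau solves |source position at (t - tau)| = c * tau
    disc = c * c * v * v * dt * dt + h * h * (c * c - v * v)
    tau = (np.sqrt(disc) - v * v * dt) / (c * c - v * v)
    x = v * (t - tau - t0)          # horizontal source position at emission
    r = np.hypot(x, h)              # source-sensor range at emission
    range_rate = v * x / r          # d(range)/dt at the emission time
    return f0 / (1.0 + range_rate / c)

# Doppler-raised on approach, lowered after the pass
f_approach = observed_frequency(-10.0)
f_recede = observed_frequency(10.0)
```

The shape of this frequency track over time is what carries the flight-parameter information alongside the DOA track.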

  17. Special Operations Forces Interagency Counterterrorism Reference Manual

    DTIC Science & Technology

    2009-03-01


  18. Wearable Sensor Localization Considering Mixed Distributed Sources in Health Monitoring Systems

    PubMed Central

    Wan, Liangtian; Han, Guangjie; Wang, Hao; Shu, Lei; Feng, Nanxing; Peng, Bao

    2016-01-01

In health monitoring systems, the base station (BS) and the wearable sensors communicate with each other to construct a virtual multiple-input multiple-output (VMIMO) system. In real applications, the signal received by the BS is a distributed source because of scattering, reflection, diffraction, and refraction along the propagation path. In this paper, a 2D direction-of-arrival (DOA) estimation algorithm for incoherently distributed (ID) and coherently distributed (CD) sources is proposed based on multiple VMIMO systems. ID and CD sources are separated through the second-order blind identification (SOBI) algorithm. The traditional algorithm based on estimating signal parameters via the rotational invariance technique (ESPRIT) is valid only for one-dimensional (1D) DOA estimation for ID sources. By constructing the signal subspace, two rotational invariance relationships are constructed; we then extend ESPRIT to estimate 2D DOAs for ID sources. For DOA estimation of CD sources, two rotational invariance relationships are constructed based on generalized steering vectors (GSVs), and the ESPRIT-based algorithm is used to estimate the eigenvalues of the two rotational invariance matrices, which contain the angular parameters. The expressions for azimuth and elevation of ID and CD sources have closed forms, which means that spectral peak searching is avoided. Therefore, compared to traditional 2D DOA estimation algorithms, the proposed algorithm has significantly lower computational complexity. The intersection point of two rays, which come from two different directions measured by two uniform rectangular arrays (URAs), can be regarded as the location of the biosensor (wearable sensor). Three BSs adopting the smart antenna (SA) technique cooperate with each other to locate the wearable sensors using the angulation positioning method. Simulation results demonstrate the effectiveness of the proposed algorithm. PMID:26985896
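The rotational-invariance idea behind ESPRIT can be sketched in its standard 1D uniform-linear-array form; the paper's 2D extension for distributed sources is more involved, and everything below (geometry, SNR, point sources) is an illustrative assumption rather than the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(11)
M, N, d = 8, 400, 0.5                       # sensors, snapshots, spacing (wavelengths)
doas = np.deg2rad([-10.0, 25.0])            # true directions of arrival

# Simulated ULA snapshots: X = A S + noise
A = np.exp(-2j * np.pi * d * np.arange(M)[:, None] * np.sin(doas))
S = (rng.standard_normal((2, N)) + 1j * rng.standard_normal((2, N))) / np.sqrt(2)
X = A @ S + 0.05 * (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N)))

R = X @ X.conj().T / N                      # spatial covariance matrix
_, V = np.linalg.eigh(R)                    # eigenvalues in ascending order
Es = V[:, -2:]                              # signal subspace (2 sources)

# Rotational invariance between the two overlapping subarrays:
# Es[1:] ~ Es[:-1] @ Phi, and the eigenvalues of Phi encode the DOAs
Phi = np.linalg.lstsq(Es[:-1], Es[1:], rcond=None)[0]
est = np.sort(np.rad2deg(np.arcsin(-np.angle(np.linalg.eigvals(Phi)) / (2 * np.pi * d))))
```

Because the DOAs come from eigenvalues rather than a spectrum search, the estimate is closed-form, which is the computational advantage the abstract emphasizes.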

  19. Wearable Sensor Localization Considering Mixed Distributed Sources in Health Monitoring Systems.

    PubMed

    Wan, Liangtian; Han, Guangjie; Wang, Hao; Shu, Lei; Feng, Nanxing; Peng, Bao

    2016-03-12

In health monitoring systems, the base station (BS) and the wearable sensors communicate with each other to construct a virtual multiple-input multiple-output (VMIMO) system. In real applications, the signal received by the BS is a distributed source because of scattering, reflection, diffraction, and refraction along the propagation path. In this paper, a 2D direction-of-arrival (DOA) estimation algorithm for incoherently distributed (ID) and coherently distributed (CD) sources is proposed based on multiple VMIMO systems. ID and CD sources are separated through the second-order blind identification (SOBI) algorithm. The traditional algorithm based on estimating signal parameters via the rotational invariance technique (ESPRIT) is valid only for one-dimensional (1D) DOA estimation for ID sources. By constructing the signal subspace, two rotational invariance relationships are constructed; we then extend ESPRIT to estimate 2D DOAs for ID sources. For DOA estimation of CD sources, two rotational invariance relationships are constructed based on generalized steering vectors (GSVs), and the ESPRIT-based algorithm is used to estimate the eigenvalues of the two rotational invariance matrices, which contain the angular parameters. The expressions for azimuth and elevation of ID and CD sources have closed forms, which means that spectral peak searching is avoided. Therefore, compared to traditional 2D DOA estimation algorithms, the proposed algorithm has significantly lower computational complexity. The intersection point of two rays, which come from two different directions measured by two uniform rectangular arrays (URAs), can be regarded as the location of the biosensor (wearable sensor). Three BSs adopting the smart antenna (SA) technique cooperate with each other to locate the wearable sensors using the angulation positioning method. Simulation results demonstrate the effectiveness of the proposed algorithm.

  20. Estimation of source location and ground impedance using a hybrid multiple signal classification and Levenberg-Marquardt approach

    NASA Astrophysics Data System (ADS)

    Tam, Kai-Chung; Lau, Siu-Kit; Tang, Shiu-Keung

    2016-07-01

A microphone array signal processing method for locating a stationary point source over a locally reactive ground and for estimating the ground impedance is examined in detail in the present study. A non-linear least squares approach using the Levenberg-Marquardt method is proposed to overcome the problem of unknown ground impedance. The multiple signal classification (MUSIC) method is used to give the initial estimate of the source location, while forward-backward spatial smoothing is adopted as a pre-processor for the source localization to minimize the effects of source coherence. The accuracy and robustness of the proposed signal processing method are examined. Results show that source localization in the horizontal direction by MUSIC is satisfactory. However, source coherence drastically reduces the accuracy of the source height estimate. The further application of the Levenberg-Marquardt method, with the results from MUSIC as initial inputs, significantly improves the accuracy of the source height estimation. The proposed method provides effective and robust estimation of the ground surface impedance.
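The refinement step can be sketched with scipy's least_squares, which implements the Levenberg-Marquardt method: a coarse initial location guess (standing in for the MUSIC estimate) is refined against range measurements to a small microphone array. The geometry, measurement model, and noise level here are illustrative assumptions, not the paper's propagation model.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(5)
mics = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0],
                 [1.0, 1.0, 0.0], [0.5, 0.5, 1.0]])      # microphone positions (m)
src = np.array([2.0, 1.5, 0.8])                          # true source position
meas = np.linalg.norm(mics - src, axis=1) + rng.normal(0.0, 1e-3, len(mics))

def residuals(p):
    """Difference between modeled and measured source-microphone ranges."""
    return np.linalg.norm(mics - p, axis=1) - meas

# Coarse initial guess (standing in for the MUSIC estimate), refined by LM
fit = least_squares(residuals, x0=[1.5, 1.0, 0.5], method="lm")
```

As in the paper, a good initial estimate matters: Levenberg-Marquardt refines locally, so the coarse localizer keeps it out of poor local minima.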

  1. Biquaternion beamspace with its application to vector-sensor array direction findings and polarization estimations

    NASA Astrophysics Data System (ADS)

    Li, Dan; Xu, Feng; Jiang, Jing Fei; Zhang, Jian Qiu

    2017-12-01

In this paper, a biquaternion beamspace, constructed by projecting the original data of an electromagnetic vector-sensor array into a subspace of lower dimension via a quaternion transformation matrix, is first proposed. To estimate the direction and polarization angles of sources, biquaternion beamspace multiple signal classification (BB-MUSIC) estimators are then formulated. The analytical results show that the biquaternion beamspaces offer additional degrees of freedom to achieve three goals simultaneously. The first is to reduce the memory needed to store the data covariance matrix and the computational cost of its eigendecomposition. The second is to decouple the estimation of the sources' polarization parameters from that of their direction angles. The third is to blindly whiten the coherent noise of the six constituent antennas in each vector-sensor. It is also shown that the existing biquaternion multiple signal classification (BQ-MUSIC) estimator is a specific case of our BB-MUSIC estimators. The simulation results verify the correctness and effectiveness of the analytical results.

  2. Sparse Method for Direction of Arrival Estimation Using Denoised Fourth-Order Cumulants Vector.

    PubMed

    Fan, Yangyu; Wang, Jianshu; Du, Rui; Lv, Guoyun

    2018-06-04

Fourth-order cumulants (FOCs) vector-based direction of arrival (DOA) estimation methods for non-Gaussian sources may suffer from poor performance for limited snapshots or from difficulty in setting parameters. In this paper, a novel FOCs vector-based sparse DOA estimation method is proposed. Firstly, by utilizing the concept of a fourth-order difference co-array (FODCA), an advanced FOCs vector denoising or dimension reduction procedure is presented for arbitrary array geometries. Then, a novel single measurement vector (SMV) model is established from the denoised FOCs vector and efficiently solved by an off-grid sparse Bayesian inference (OGSBI) method. The estimation errors of the FOCs are integrated in the SMV model and are approximately estimated in a simple way. A necessary condition on the number of identifiable sources is presented: in order to uniquely identify all sources, the number of sources K must satisfy K ≤ (M^4 - 2M^3 + 7M^2 - 6M)/8. The proposed method suits any geometry, does not need prior knowledge of the number of sources, is insensitive to the associated parameters, and has maximum identifiability O(M^4), where M is the number of sensors in the array. Numerical simulations illustrate the superior performance of the proposed method.
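The identifiability condition can be evaluated directly; for instance, for an 8-sensor array the bound K ≤ (M^4 - 2M^3 + 7M^2 - 6M)/8 works out as follows.

```python
def max_identifiable_sources(m):
    """Paper's necessary condition: K <= (M^4 - 2M^3 + 7M^2 - 6M) / 8."""
    return (m ** 4 - 2 * m ** 3 + 7 * m ** 2 - 6 * m) // 8

bound = max_identifiable_sources(8)   # the bound grows as O(M^4)
```

With M = 8 the bound is 434 sources, far more than the M - 1 resolvable with covariance-based methods on the same array, which is the appeal of the fourth-order co-array.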

  3. 3-component beamforming analysis of ambient seismic noise field for Love and Rayleigh wave source directions

    NASA Astrophysics Data System (ADS)

    Juretzek, Carina; Hadziioannou, Céline

    2014-05-01

Our knowledge of the common and distinct origins of the Love and Rayleigh waves observed in the microseism band of the ambient seismic noise field is still limited, including our understanding of source locations and source mechanisms. Multi-component array methods are suitable for addressing this issue. In this work, we use a 3-component beamforming algorithm to obtain source directions and polarization states of the ambient seismic noise field within the primary and secondary microseism bands recorded at the Gräfenberg array in southern Germany. The method makes it possible to distinguish between the differently polarized waves present in the seismic noise field, and it estimates Love and Rayleigh wave source directions and their seasonal variations using one year of array data. We find mainly coinciding directions for the strongest acting sources of both wave types in the primary microseism band, and different source directions in the secondary microseism band.

  4. 3D magnetic sources' framework estimation using Genetic Algorithm (GA)

    NASA Astrophysics Data System (ADS)

    Ponte-Neto, C. F.; Barbosa, V. C.

    2008-05-01

We present a method for inverting total-field anomalies to determine the frameworks of simple 3D magnetic sources such as batholiths, dikes, sills, geological contacts, and kimberlite and lamproite pipes. We use a GA to obtain the magnetic sources' frameworks and their magnetic features simultaneously. Specifically, we estimate the magnetization direction (inclination and declination), the total dipole moment intensity, and the horizontal and vertical positions, in Cartesian coordinates, of a finite set of elementary magnetic dipoles. The spatial distribution of these magnetic dipoles composes the skeletal outline of the geologic sources. We assume that the geologic sources have a homogeneous magnetization distribution, so all dipoles share the same magnetization direction and dipole moment intensity. To implement the GA, we use real-valued encoding with crossover, mutation, and elitism. To obtain a unique and stable solution, we set bounds on declination and inclination of [0°, 360°] and [-90°, 90°], respectively. We also impose a criterion of minimum scattering of the dipole-position coordinates, to guarantee that the spatial distribution of the dipoles (defining the source skeleton) is as close as possible to a continuous distribution. To this end, we fix the upper and lower bounds of the dipole moment intensity and evaluate the dipole-position estimates; if the dipole scattering is greater than a value expected by the interpreter, the upper bound of the dipole moment intensity is reduced by 10%. We repeat this procedure until the dipole scattering and the data fit are acceptable. We apply our method to noise-corrupted magnetic data from simulated 3D magnetic sources with simple geometries located at different depths. In tests simulating sources such as a sphere and a cube, all estimates of the dipole coordinates agree with the centers of mass of these sources. For prismatic sources elongated in an arbitrary direction, the estimated dipole-position coordinates coincide with the principal axes of the sources. In tests with synthetic data simulating the magnetic anomaly produced by intrusive 2D structures such as dikes and sills, the estimated dipole coordinates coincide with the principal planes of these 2D sources. We also inverted aeromagnetic data from Serra do Cabral, in southeastern Brazil, and estimated dipoles distributed on a horizontal plane at a depth of 30 km, with inclination and declination of 59.1° and -48.0°, respectively. The results show close agreement with previous interpretations.
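The GA machinery the abstract describes (real-valued encoding, crossover, mutation, elitism, and bounded parameters) can be sketched on a toy misfit function. The operators and settings below are generic illustrative choices, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(9)

def ga_minimize(misfit, lower, upper, pop=60, gens=150, elite=2, pm=0.1):
    """Minimize `misfit` over box bounds with a real-valued GA
    (arithmetic crossover, Gaussian mutation, elitism)."""
    lower, upper = np.asarray(lower, float), np.asarray(upper, float)
    P = rng.uniform(lower, upper, size=(pop, len(lower)))
    for _ in range(gens):
        f = np.array([misfit(p) for p in P])
        P = P[np.argsort(f)]                          # best individuals first
        children = [P[:elite].copy()]                 # elitism: keep the best
        while sum(len(c) for c in children) < pop:
            i, j = rng.integers(0, pop // 2, size=2)  # parents from better half
            w = rng.uniform(size=len(lower))
            child = w * P[i] + (1 - w) * P[j]         # arithmetic crossover
            mask = rng.uniform(size=len(lower)) < pm
            child[mask] += rng.normal(0.0, 0.1 * (upper - lower))[mask]  # mutation
            children.append(np.clip(child, lower, upper)[None])
        P = np.vstack(children)[:pop]
    f = np.array([misfit(p) for p in P])
    return P[np.argmin(f)]

# Toy misfit with a known optimum at "declination" 120 deg, "inclination" -30 deg
best = ga_minimize(lambda p: (p[0] - 120.0) ** 2 + (p[1] + 30.0) ** 2,
                   lower=[0.0, -90.0], upper=[360.0, 90.0])
```

In the paper the misfit would instead compare observed and predicted total-field anomalies, with the dipole positions and moment intensity included among the unknowns.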

  5. Impact of the galactic acceleration on the terrestrial reference frame and the scale factor in VLBI

    NASA Astrophysics Data System (ADS)

    Krásná, Hana; Titov, Oleg

    2017-04-01

The relative motion of the solar system barycentre around the Galactic centre can also be described as an acceleration of the solar system directed towards the centre of the Galaxy. So far, this effect has been omitted in the a priori modelling of the Very Long Baseline Interferometry (VLBI) observable. It results in a systematic dipole proper motion (Secular Aberration Drift, SAD) of the extragalactic radio sources that build the celestial reference frame, with a theoretical maximum magnitude of 5-7 microarcsec/year. In this work, we present our estimate of the SAD vector obtained within a global adjustment of the VLBI measurements (1979.0 - 2016.5) using the software VieVS. We focus on the influence of the observed radio sources with the maximum SAD effect on the terrestrial reference frame. We show that the scale factor from the VLBI measurements, estimated for each source individually, discloses a clear systematic effect aligned with the Galactic centre-anticentre direction. Radio sources located near the Galactic anticentre may therefore cause a strong systematic effect, especially in the early VLBI years. For instance, radio source 0552+398 causes a difference of up to 1 mm in the estimated baseline length. Furthermore, we discuss the scale factor estimated for each radio source after removal of the SAD systematic.
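The quoted 5-7 microarcsec/year amplitude follows from the galactocentric acceleration divided by the speed of light. A back-of-the-envelope check, using approximate values for the solar circular speed and galactocentric distance (both assumptions here, not the paper's adopted values):

```python
import math

# Approximate galactocentric parameters (assumed values)
V = 236e3                  # solar circular speed around the Galactic centre, m/s
R = 8.2 * 3.086e19         # galactocentric distance (8.2 kpc), m
c = 2.998e8                # speed of light, m/s

a = V ** 2 / R             # galactocentric acceleration of the barycentre, m/s^2
A = a / c                  # secular aberration drift amplitude, rad/s
A_uas_per_yr = A * 3.156e7 * (180.0 / math.pi) * 3600.0 * 1e6  # microarcsec/yr
# The induced proper motion of a source at angle theta from the Galactic
# centre is A_uas_per_yr * sin(theta), i.e. a dipole pattern on the sky.
```

With these inputs the amplitude comes out near 5 microarcsec/year, consistent with the range quoted in the abstract.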

  6. Signal location using generalized linear constraints

    NASA Astrophysics Data System (ADS)

    Griffiths, Lloyd J.; Feldman, D. D.

    1992-01-01

This report has presented a two-part method for estimating the directions of arrival of uncorrelated narrowband sources when there are arbitrary phase errors and angle-independent gain errors. The signal steering vectors are estimated in the first part of the method; in the second part, the arrival directions are estimated. It should be noted that the second part of the method can be tailored to incorporate additional information about the nature of the phase errors. For example, if the phase errors are known to be caused solely by element misplacement, the element locations can be estimated concurrently with the DOAs by trying to match the theoretical steering vectors to the estimated ones. Simulation results suggest that, for general perturbations, the method can resolve closely spaced sources under conditions for which a standard high-resolution DOA method such as MUSIC fails.
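For reference, the MUSIC baseline mentioned above works by projecting candidate steering vectors onto the noise subspace of the sample covariance. A minimal sketch for a half-wavelength uniform linear array, with all array and source parameters invented for illustration:

```python
import numpy as np

def music_doa(X, n_sources, n_grid=1800):
    """MUSIC pseudospectrum for a half-wavelength-spaced ULA.
    X: (n_sensors, n_snapshots) complex snapshot matrix."""
    M = X.shape[0]
    R = X @ X.conj().T / X.shape[1]            # sample covariance
    eigvals, eigvecs = np.linalg.eigh(R)       # eigenvalues ascending
    En = eigvecs[:, :M - n_sources]            # noise subspace
    grid = np.linspace(-90, 90, n_grid)
    k = np.arange(M)[:, None]
    A = np.exp(1j * np.pi * k * np.sin(np.radians(grid)))  # steering matrix
    denom = np.sum(np.abs(En.conj().T @ A) ** 2, axis=0)
    return grid, 1.0 / denom                   # pseudospectrum peaks at DOAs

# two uncorrelated sources at -20 and 25 degrees, 8 sensors, 20 dB SNR
rng = np.random.default_rng(0)
M, N, doas = 8, 2000, np.array([-20.0, 25.0])
A = np.exp(1j * np.pi * np.arange(M)[:, None] * np.sin(np.radians(doas)))
S = (rng.standard_normal((2, N)) + 1j * rng.standard_normal((2, N))) / np.sqrt(2)
noise = 0.1 * (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N)))
X = A @ S + noise
grid, P = music_doa(X, n_sources=2)
# the two largest peaks of P fall near the true DOAs
```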

  7. Evaluating measurements of carbon dioxide emissions using a precision source--A natural gas burner.

    PubMed

    Bryant, Rodney; Bundy, Matthew; Zong, Ruowen

    2015-07-01

A natural gas burner has been used as a precise and accurate source for generating large quantities of carbon dioxide (CO2) to evaluate emissions measurements at near-industrial scale. Two methods for determining carbon dioxide emissions from stationary sources are considered here: predicting emissions from fuel consumption measurements (predicted emissions measurements), and directly measuring emissions quantities in the flue gas (direct emissions measurements). Uncertainty for the predicted emissions measurement was estimated at less than 1%. Uncertainty estimates for the direct emissions measurement of carbon dioxide were on the order of ±4%. The relative difference between the direct emissions measurements and the predicted emissions measurements was within the range of the measurement uncertainty, thereby demonstrating good agreement. The study demonstrates how independent methods are used to validate source emissions measurements, while also demonstrating how a fire research facility can be used as a precision test-bed to evaluate and improve carbon dioxide emissions measurements from stationary sources. Fossil-fuel-consuming stationary sources such as electric power plants and industrial facilities account for more than half of the CO2 emissions in the United States. Therefore, accurate emissions measurements from these sources are critical for evaluating efforts to reduce greenhouse gas emissions. This study demonstrates how a surrogate for a stationary source, a fire research facility, can be used to evaluate the accuracy of measurements of CO2 emissions.
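The "predicted emissions" route reduces to simple stoichiometry. A hedged sketch with illustrative numbers (not the paper's data), treating natural gas as pure methane so that complete combustion converts each kilogram of fuel into MW_CO2/MW_CH4 kilograms of CO2:

```python
MW_CH4, MW_CO2 = 16.043, 44.009            # molar masses, g/mol
fuel_rate_kg_s = 0.05                      # assumed measured fuel mass flow, kg/s

# CH4 + 2 O2 -> CO2 + 2 H2O: one mole of CO2 per mole of fuel burned
co2_rate_kg_s = fuel_rate_kg_s * (MW_CO2 / MW_CH4)
print(round(co2_rate_kg_s, 4))             # 0.1372
```

Real natural gas contains ethane and heavier components, so a facility measurement would use the metered gas composition rather than this pure-methane assumption.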

  8. Can we estimate total magnetization directions from aeromagnetic data using Helbig's integrals?

    USGS Publications Warehouse

    Phillips, J.D.

    2005-01-01

An algorithm that implements Helbig's (1963) integrals for estimating the vector components (mx, my, mz) of the magnetic dipole moment from the first-order moments of the vector magnetic field components (ΔX, ΔY, ΔZ) is tested on real and synthetic data. After a grid of total field aeromagnetic data is converted to vector component grids using Fourier filtering, Helbig's infinite integrals are evaluated as finite integrals in small moving windows using a quadrature algorithm based on the 2-D trapezoidal rule. Prior to integration, best-fit planar surfaces must be removed from the component data within the data windows in order to make the results independent of the coordinate system origin. Two different approaches are described for interpreting the results of the integration. In the "direct" method, results from pairs of different window sizes are compared to identify grid nodes where the angular difference between solutions is small. These solutions provide valid estimates of total magnetization directions for compact sources such as spheres or dipoles, but not for horizontally elongated or 2-D sources. In the "indirect" method, which is more forgiving of source geometry, results of the quadrature analysis are scanned for solutions that are parallel to a specified total magnetization direction.

  9. Information-Driven Active Audio-Visual Source Localization

    PubMed Central

    Schult, Niclas; Reineking, Thomas; Kluss, Thorsten; Zetzsche, Christoph

    2015-01-01

    We present a system for sensorimotor audio-visual source localization on a mobile robot. We utilize a particle filter for the combination of audio-visual information and for the temporal integration of consecutive measurements. Although the system only measures the current direction of the source, the position of the source can be estimated because the robot is able to move and can therefore obtain measurements from different directions. These actions by the robot successively reduce uncertainty about the source’s position. An information gain mechanism is used for selecting the most informative actions in order to minimize the number of actions required to achieve accurate and precise position estimates in azimuth and distance. We show that this mechanism is an efficient solution to the action selection problem for source localization, and that it is able to produce precise position estimates despite simplified unisensory preprocessing. Because of the robot’s mobility, this approach is suitable for use in complex and cluttered environments. We present qualitative and quantitative results of the system’s performance and discuss possible areas of application. PMID:26327619
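The core idea — bearing-only measurements from different robot poses progressively constraining a position posterior — can be sketched with a minimal bootstrap particle filter. All geometry, noise levels, and poses below are invented; this is not the paper's audio-visual pipeline or its information-gain action selection:

```python
import numpy as np

rng = np.random.default_rng(1)
source = np.array([3.0, 2.0])            # true (unknown) source position, m
sigma = np.radians(5.0)                  # assumed bearing-noise std

# particles spread uniformly over a 10 m x 10 m search area
P = rng.uniform(-5, 5, size=(5000, 2))
w = np.ones(len(P)) / len(P)

# the robot takes a bearing measurement from each of three poses
for robot in [np.array([0.0, 0.0]), np.array([2.0, -1.0]), np.array([-1.0, 3.0])]:
    z = np.arctan2(*(source - robot)[::-1]) + rng.normal(0, sigma)
    pred = np.arctan2(P[:, 1] - robot[1], P[:, 0] - robot[0])
    err = np.angle(np.exp(1j * (z - pred)))      # wrapped bearing error
    w *= np.exp(-0.5 * (err / sigma) ** 2)       # Gaussian likelihood
    w /= w.sum()
    idx = rng.choice(len(P), size=len(P), p=w)   # resample
    P = P[idx] + rng.normal(0, 0.05, P.shape)    # small jitter
    w = np.ones(len(P)) / len(P)

estimate = P.mean(axis=0)   # posterior mean converges toward the source
```

A single bearing leaves the range unresolved; it is the motion between poses that triangulates the source, which is exactly why the paper's action selection matters.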

  10. Direction of arrival estimation using blind separation of sources

    NASA Astrophysics Data System (ADS)

    Hirari, Mehrez; Hayakawa, Masashi

    1999-05-01

The estimation of direction of arrival (DOA) and polarization of an incident electromagnetic (EM) wave is of great importance in many applications. In this paper we propose a new approach to DOA estimation for polarized EM waves using blind separation of sources. In this approach we use a vector sensor, a sensor whose output is a complete set of the EM field components of the irradiating wave, and we reconstruct the waveforms of all the original signals, that is, all the EM components of the sources' fields. From the waveform of each source we calculate its amplitude and phase, and consequently its DOA and polarization, using the field analysis method. The separation of sources is conducted iteratively using a recurrent Hopfield-like single-layer neural network. Simulation results for two sources have been investigated. We have considered coherent and incoherent sources, as well as the case of DOAs varying vis-à-vis the sensor and a varying polarization. These cases are seldom treated by other approaches even though they arise in real-world applications. With the proposed method we have obtained nearly real-time tracking of the DOA and polarization of incident sources with a significant reduction in both memory and computation costs.

  11. Direct and Indirect Measurements and Modeling of Methane Emissions in Indianapolis, Indiana.

    PubMed

    Lamb, Brian K; Cambaliza, Maria O L; Davis, Kenneth J; Edburg, Steven L; Ferrara, Thomas W; Floerchinger, Cody; Heimburger, Alexie M F; Herndon, Scott; Lauvaux, Thomas; Lavoie, Tegan; Lyon, David R; Miles, Natasha; Prasad, Kuldeep R; Richardson, Scott; Roscioli, Joseph Robert; Salmon, Olivia E; Shepson, Paul B; Stirm, Brian H; Whetstone, James

    2016-08-16

This paper describes process-based estimation of CH4 emissions from sources in Indianapolis, IN, and compares these with atmospheric inferences of whole-city emissions. Emissions from the natural gas distribution system were estimated from measurements at metering and regulating stations and from pipeline leaks. Tracer methods and inverse plume modeling were used to estimate emissions from the major landfill and wastewater treatment plant. These direct source measurements informed the compilation of a methane emission inventory for the city equal to 29 Gg/yr (5% to 95% confidence limits, 15 to 54 Gg/yr). Emission estimates for the whole city based on an aircraft mass balance method and from inverse modeling of CH4 tower observations were 41 ± 12 Gg/yr and 81 ± 11 Gg/yr, respectively. Footprint modeling using 11 days of ethane/methane tower data indicated that landfills, wastewater treatment, wetlands, and other biological sources contribute 48% while natural gas usage and other fossil fuel sources contribute 52% of the city total. With the biogenic CH4 emissions omitted, the top-down estimates are 3.5-6.9 times the nonbiogenic city inventory. Mobile mapping of CH4 concentrations showed low-level enhancement of CH4 throughout the city, reflecting diffuse natural gas leakage and downstream usage as possible sources for the missing residual in the inventory.

  12. Acoustic source localization in mixed field using spherical microphone arrays

    NASA Astrophysics Data System (ADS)

    Huang, Qinghua; Wang, Tong

    2014-12-01

Spherical microphone arrays have recently been used for source localization in three-dimensional space. In this paper, a two-stage algorithm is developed to localize mixed far-field and near-field acoustic sources in a free-field environment. In the first stage, an array signal model is constructed in the spherical harmonics domain. The recurrence relation of spherical harmonics is independent of the far-field and near-field mode strengths, so it is used to develop a spherical ESPRIT-like approach (estimation of signal parameters via rotational invariance techniques) to estimate directions of arrival (DOAs) for both far-field and near-field sources. In the second stage, based on the estimated DOAs, a simple one-dimensional MUSIC spectrum is exploited to distinguish far-field from near-field sources and to estimate the ranges of the near-field sources. The proposed algorithm avoids multidimensional search and parameter pairing. Simulation results demonstrate good performance in localizing far-field, near-field, and mixed-field sources.

  13. Joint Estimation of Source Range and Depth Using a Bottom-Deployed Vertical Line Array in Deep Water

    PubMed Central

    Li, Hui; Yang, Kunde; Duan, Rui; Lei, Zhixiong

    2017-01-01

This paper presents a joint estimation method of source range and depth using a bottom-deployed vertical line array (VLA). The method utilizes the arrival-angle information of the direct (D) path in the space domain and the interference characteristic of the D and surface-reflected (SR) paths in the frequency domain. The former uses a ray-tracing technique to backpropagate the rays and produce an ambiguity surface of source range. The latter utilizes Lloyd's mirror principle to obtain an ambiguity surface of source depth. The acoustic transmission duct is the well-known reliable acoustic path (RAP). The ambiguity surface of the combined estimation is a dimensionless ad hoc function. Numerical efficiency and experimental verification show that the proposed method is a good candidate for initial coarse estimation of source position. PMID:28590442
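The Lloyd's mirror part of the method rests on a simple geometric fact: the D and SR paths interfere with a frequency-domain fringe spacing df = c/(R2 - R1), and for ranges much larger than the depths that path difference is approximately 2·z_s·z_r/sqrt(r^2 + z_r^2), so an observed fringe spacing constrains the source depth. A sketch with invented geometry (not the paper's experiment):

```python
import math

c = 1500.0                            # nominal sound speed, m/s (assumption)
r, z_s, z_r = 20000.0, 100.0, 4000.0  # range, source depth, receiver depth, m

R1 = math.hypot(r, z_r - z_s)         # direct path length
R2 = math.hypot(r, z_r + z_s)         # surface-reflected path length
df = c / (R2 - R1)                    # interference fringe spacing, Hz

# inversion: for r >> depths, R2 - R1 ~= 2 * z_s * z_r / sqrt(r^2 + z_r^2)
delta = c / df                        # path-length difference from fringes
z_est = delta * math.hypot(r, z_r) / (2.0 * z_r)   # recovered source depth
```

Here the "observed" fringe spacing is synthesized from the forward model, so the inversion recovers the assumed 100 m depth almost exactly; with real data the range r would come from the D-path ray backpropagation stage.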

  14. Getting Astrophysical Information from LISA Data

    NASA Technical Reports Server (NTRS)

    Stebbins, R. T.; Bender, P. L.; Folkner, W. M.

    1997-01-01

Gravitational wave signals from a large number of astrophysical sources will be present in the LISA data. Information about as many sources as possible must be estimated from time series of strain measurements. Several types of signals are expected to be present: simple periodic signals from relatively stable binary systems, chirped signals from coalescing binary systems, complex waveforms from highly relativistic binary systems, stochastic backgrounds from galactic and extragalactic binary systems, and possibly stochastic backgrounds from the early Universe. The orbital motion of the LISA antenna will modulate the phase and amplitude of all these signals, except the isotropic backgrounds, and thereby give information on the directions of sources. Here we describe a candidate process for disentangling the gravitational wave signals and estimating the relevant astrophysical parameters from one year of LISA data. Nearly all of the sources will be identified by searching with templates based on source parameters and directions.
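The template-search idea for the simplest signal class (periodic binaries) can be sketched as a correlation against a bank of monochromatic templates; complex templates make the detection statistic independent of the unknown phase. All numbers below are invented and this is not LISA's actual pipeline:

```python
import numpy as np

rng = np.random.default_rng(5)
n, f_true = 2048, 0.1234                 # samples at unit rate, true frequency
t = np.arange(n, dtype=float)
# weak sinusoid with unknown phase, buried in unit-variance noise
data = 0.5 * np.sin(2 * np.pi * f_true * t + 0.7) + rng.standard_normal(n)

freqs = np.linspace(0.05, 0.45, 801)     # template frequency grid
# |sum_n data[n] * exp(-2*pi*i*f*t[n])| for every template frequency
stats = np.abs(np.exp(-2j * np.pi * freqs[:, None] * t[None, :]) @ data)
f_est = freqs[int(np.argmax(stats))]     # best-matching template
```

In the real problem the templates would also carry the orbital phase/amplitude modulation that encodes source direction, turning the same maximization into a joint frequency-direction search.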

  15. Off-Grid Direction of Arrival Estimation Based on Joint Spatial Sparsity for Distributed Sparse Linear Arrays

    PubMed Central

    Liang, Yujie; Ying, Rendong; Lu, Zhenqi; Liu, Peilin

    2014-01-01

In the design phase of sensor arrays for array signal processing, the estimation performance and system cost are largely determined by the array aperture size. In this article, we address the problem of joint direction-of-arrival (DOA) estimation with distributed sparse linear arrays (SLAs) and propose an off-grid synchronous approach based on distributed compressed sensing to obtain a larger array aperture. We focus on the complex source distributions found in practical applications and classify the sources into common and innovation parts according to whether a source's signal impinges on all the SLAs or on only one. For each SLA, we construct a corresponding virtual uniform linear array (ULA) to create a random linear map between the signals observed by these two arrays. The signal ensembles, including the common/innovation sources for the different SLAs, are abstracted as a joint spatial sparsity model, and we use minimization of the concatenated atomic norm via semidefinite programming to solve the joint DOA estimation problem. Joint processing of the signals observed by all the SLAs exploits the redundancy introduced by the common sources and reduces the required array size. The numerical results illustrate the advantages of the proposed approach. PMID:25420150

  16. Estimation of diffuse and point source microbial pollution in the ribble catchment discharging to bathing waters in the north west of England.

    PubMed

    Wither, A; Greaves, J; Dunhill, I; Wyer, M; Stapleton, C; Kay, D; Humphrey, N; Watkins, J; Francis, C; McDonald, A; Crowther, J

    2005-01-01

Achieving compliance with the mandatory standards of the 1976 Bathing Water Directive (76/160/EEC) is required at all U.K. identified bathing waters. In recent years, the Fylde coast has seen significant investment in 'point source' control, which has not proven, in isolation, sufficient to achieve compliance with the mandatory, let alone the guide, levels of water quality in the Directive. The potential impact of riverine sources of pollution was first confirmed by a study in 1997. The completion of sewerage system enhancements made it possible to study faecal indicator delivery from upstream sources, comprising both point sources and diffuse agricultural sources. A research project to define these elements commenced in 2001. Initially, a desk study, reported here, estimated the principal infrastructure contributions within the Ribble catchment. A second phase of this investigation involved acquiring empirical water quality and hydrological data from the catchment during the 2002 bathing season. These data have been used to calibrate the 'budgets' and 'delivery' modelling and are still being analysed. This paper reports the initial desk study approach to faecal indicator budget estimation using available data from the sewerage infrastructure and catchment sources of faecal indicators.

  17. A source number estimation method for single optical fiber sensor

    NASA Astrophysics Data System (ADS)

    Hu, Junpeng; Huang, Zhiping; Su, Shaojing; Zhang, Yimeng; Liu, Chunwu

    2015-10-01

The single-channel blind source separation (SCBSS) technique is of great significance in many fields, such as optical fiber communication, sensor detection, and image processing. Realizing blind source separation (BSS) from the data received by a single optical fiber sensor has a wide range of applications. The performance of many BSS algorithms and signal processing methods degrades with inaccurate source number estimation. Many excellent algorithms have been proposed to estimate the source number in array signal processing with multiple sensors, but they cannot be applied directly to the single-sensor case. This paper presents a source number estimation method for the data received by a single optical fiber sensor. Through delay processing, the single-sensor data are converted to a multi-dimensional form and a data covariance matrix is constructed, so that the estimation algorithms used in array signal processing can be applied. Information theoretic criteria (ITC), represented by AIC and MDL, and Gerschgorin's disk estimation (GDE) are introduced to estimate the source number of the single optical fiber sensor's received signal. To improve the performance of these estimators at low signal-to-noise ratio (SNR), a smoothing process is applied to the data covariance matrix, which reduces the fluctuation and uncertainty of its eigenvalues. Simulation results show that ITC-based methods cannot estimate the source number effectively under colored noise. The GDE method, although its performance degrades at low SNR, is able to estimate the number of sources accurately under colored noise. The experiments also show that the proposed method can be applied to estimate the source number from single-sensor data.
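The ITC step works on the eigenvalues of the data covariance matrix. A generic Wax-Kailath AIC/MDL sketch on simulated multichannel data (all parameters invented; this omits the paper's delay-embedding and smoothing stages):

```python
import numpy as np

def itc_source_number(eigs, N):
    """AIC and MDL source-count estimates from covariance eigenvalues
    (sorted descending), N snapshots -- the Wax-Kailath criteria."""
    M = len(eigs)
    aic, mdl = [], []
    for k in range(M):
        tail = eigs[k:]
        g = np.exp(np.mean(np.log(tail))) / np.mean(tail)  # geo/arith mean ratio
        ll = -N * (M - k) * np.log(g)                      # log-likelihood term
        aic.append(ll + k * (2 * M - k))
        mdl.append(ll + 0.5 * k * (2 * M - k) * np.log(N))
    return int(np.argmin(aic)), int(np.argmin(mdl))

# simulated: 2 sources on an 8-channel covariance, white noise
rng = np.random.default_rng(2)
M, N, d = 8, 1000, 2
A = rng.standard_normal((M, d)) + 1j * rng.standard_normal((M, d))
S = rng.standard_normal((d, N)) + 1j * rng.standard_normal((d, N))
X = A @ S + 0.5 * (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N)))
eigs = np.linalg.eigvalsh(X @ X.conj().T / N)[::-1]
k_aic, k_mdl = itc_source_number(eigs, N)  # expect 2 (AIC may overestimate)
```

As the abstract notes, both criteria assume white noise; under colored noise the noise eigenvalues are no longer equal and this likelihood term breaks down, which is the motivation for the GDE alternative.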

  18. Two-Dimensional DOA and Polarization Estimation for a Mixture of Uncorrelated and Coherent Sources with Sparsely-Distributed Vector Sensor Array

    PubMed Central

    Si, Weijian; Zhao, Pinjiao; Qu, Zhiyu

    2016-01-01

    This paper presents an L-shaped sparsely-distributed vector sensor (SD-VS) array with four different antenna compositions. With the proposed SD-VS array, a novel two-dimensional (2-D) direction of arrival (DOA) and polarization estimation method is proposed to handle the scenario where uncorrelated and coherent sources coexist. The uncorrelated and coherent sources are separated based on the moduli of the eigenvalues. For the uncorrelated sources, coarse estimates are acquired by extracting the DOA information embedded in the steering vectors from estimated array response matrix of the uncorrelated sources, and they serve as coarse references to disambiguate fine estimates with cyclical ambiguity obtained from the spatial phase factors. For the coherent sources, four Hankel matrices are constructed, with which the coherent sources are resolved in a similar way as for the uncorrelated sources. The proposed SD-VS array requires only two collocated antennas for each vector sensor, thus the mutual coupling effects across the collocated antennas are reduced greatly. Moreover, the inter-sensor spacings are allowed beyond a half-wavelength, which results in an extended array aperture. Simulation results demonstrate the effectiveness and favorable performance of the proposed method. PMID:27258271

  19. State energy price and expenditure report 1994

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    1997-06-01

The State Energy Price and Expenditure Report (SEPER) presents energy price and expenditure estimates individually for the 50 States and the District of Columbia and in aggregate for the United States. The price and expenditure estimates developed in the State Energy Price and Expenditure Data System (SEPEDS) are provided by energy source and economic sector and are published for the years 1970 through 1994. Consumption estimates used to calculate expenditures and the documentation for those estimates are taken from the State Energy Data Report 1994, Consumption Estimates (SEDR), published in October 1996. Expenditures are calculated by multiplying the price estimates by the consumption estimates, which are adjusted to remove process fuel; intermediate petroleum products; and other consumption that has no direct fuel costs, i.e., hydroelectric, geothermal, wind, solar, and photovoltaic energy sources. Documentation is included describing the development of price estimates, data sources, and calculation methods. 316 tabs.

  20. A Bayesian Multivariate Receptor Model for Estimating Source Contributions to Particulate Matter Pollution using National Databases.

    PubMed

    Hackstadt, Amber J; Peng, Roger D

    2014-11-01

    Time series studies have suggested that air pollution can negatively impact health. These studies have typically focused on the total mass of fine particulate matter air pollution or the individual chemical constituents that contribute to it, and not source-specific contributions to air pollution. Source-specific contribution estimates are useful from a regulatory standpoint by allowing regulators to focus limited resources on reducing emissions from sources that are major contributors to air pollution and are also desired when estimating source-specific health effects. However, researchers often lack direct observations of the emissions at the source level. We propose a Bayesian multivariate receptor model to infer information about source contributions from ambient air pollution measurements. The proposed model incorporates information from national databases containing data on both the composition of source emissions and the amount of emissions from known sources of air pollution. The proposed model is used to perform source apportionment analyses for two distinct locations in the United States (Boston, Massachusetts and Phoenix, Arizona). Our results mirror previous source apportionment analyses that did not utilize the information from national databases and provide additional information about uncertainty that is relevant to the estimation of health effects.
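The receptor-modeling core of such an analysis is a nonnegative decomposition of an ambient-concentration matrix into source contributions and source profiles. The sketch below uses plain multiplicative-update NMF as a stand-in, not the paper's Bayesian model with national-database priors; all dimensions and data are synthetic:

```python
import numpy as np

# X (days x species) ~= G (days x sources) @ F (sources x species), all >= 0
rng = np.random.default_rng(4)
days, species, k = 200, 10, 2
G_true = rng.gamma(2.0, 1.0, (days, k))     # synthetic source contributions
F_true = rng.gamma(2.0, 1.0, (k, species))  # synthetic source profiles
X = G_true @ F_true

# multiplicative updates minimizing ||X - G F||_F (Lee-Seung style)
G = rng.random((days, k)) + 0.1
F = rng.random((k, species)) + 0.1
for _ in range(500):
    G *= (X @ F.T) / np.maximum(G @ F @ F.T, 1e-12)
    F *= (G.T @ X) / np.maximum(G.T @ G @ F, 1e-12)

rel_err = np.linalg.norm(X - G @ F) / np.linalg.norm(X)  # should be small
```

The identifiability problem the abstract highlights is visible here: G and F are only determined up to scaling and permutation (and sometimes rotation), which is exactly the model uncertainty the Bayesian approach quantifies with posterior model probabilities.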

  1. A novel pathway of direct methane production and emission by eukaryotes including plants, animals and fungi: An overview

    NASA Astrophysics Data System (ADS)

    Liu, Jiangong; Chen, Huai; Zhu, Qiuan; Shen, Yan; Wang, Xue; Wang, Meng; Peng, Changhui

    2015-08-01

Methane (CH4) is a powerful greenhouse gas with a global warming potential 28 times that of carbon dioxide (CO2). CH4 is responsible for approximately 20% of the Earth's warming since pre-industrial times. Knowledge of the sources of CH4 is crucial due to the recent substantial interannual variability of growth rates and uncertainties regarding individual sources. The prevailing paradigm is that methanogenesis carried out by methanogenic archaea occurs primarily under strictly anaerobic conditions. However, in the past decade, studies have confirmed direct CH4 release from three important kingdoms of eukaryotes (Plantae, Animalia and Fungi) even in the presence of oxygen. This novel CH4 production pathway has been aptly termed "aerobic CH4 production" to distinguish it from the well-known anaerobic CH4 production pathway, which involves catalytic activity by methanogenic archaeal enzymes. In this review, we collated recent experimental evidence from the published literature and documented this novel pathway of direct CH4 production and emission by eukaryotes. The mechanisms involved in this pathway may be related to protective strategies of eukaryotes in response to changing environmental stresses, with CH4 a by-product or end-product during or at the end of the process(es) that originates from organic methyl-type compounds. Based on the existing, albeit uncertain estimates, plants seem to contribute less to the global CH4 budget (3-24%) compared to previous estimates (10-37%). We still lack estimates of CH4 emissions by animals and fungi. Overall, there is an urgent need to identify the precursors for this novel CH4 source and improve our understanding of the mechanisms of direct CH4 production and the impacts of environmental stresses. An estimate of this new CH4 source, which was not considered as a CH4 source by the Intergovernmental Panel on Climate Change (IPCC) (2013), could be useful for better quantitation of the global CH4 budget.

  2. Direct Thermodynamic Measurements of the Energetics of Information Processing

    DTIC Science & Technology

    2017-08-08


  3. Advanced Beamforming Concepts: Source Localization Using the Bispectrum, Gabor Transform, Wigner-Ville Distribution, and Nonstationary Signal Representation

    DTIC Science & Technology

    1991-12-01

J. C. Allen. The bispectrum yields a bispectral direction finder, while estimates of time-frequency distributions produce Wigner-Ville and Gabor direction finders for source localization.

4. [Perception by teenagers and adults of amplitude-modulated sound sequences used in models of sound source movement].

    PubMed

    Andreeva, I G; Vartanian, I A

    2012-01-01

The ability to evaluate the direction of amplitude changes of sound stimuli was studied in adults and in 11-12- and 15-16-year-old teenagers. Sequences of fragments of a 1 kHz tone whose amplitude changed with time were used as models of approaching and receding sound sources. The 11-12-year-old teenagers made significantly more errors in estimating the direction of amplitude changes than the other two groups, including in repeated experiments. The structure of errors, that is, the ratio of errors for stimuli increasing versus decreasing in amplitude, also differed between teenagers and adults. The possible effect of nonspecific activation of the cerebral cortex in teenagers on decision-making about complex sound stimuli, including the estimation of approach and withdrawal of a sound source, is discussed.

  5. A portable inspection system to estimate direct glare of various LED modules

    NASA Astrophysics Data System (ADS)

    Chen, Po-Li; Liao, Chun-Hsiang; Li, Hung-Chung; Jou, Shyh-Jye; Chen, Han-Ting; Lin, Yu-Hsin; Tang, Yu-Hsiang; Peng, Wei-Jei; Kuo, Hui-Jean; Sun, Pei-Li; Lee, Tsung-Xian

    2015-07-01

Glare is caused by both direct and indirect light sources, and discomfort glare produces visual discomfort, annoyance, or loss of visual performance and visibility. Direct glare is caused by light sources in the field of view, whereas reflected glare is caused by bright reflections from polished or glossy surfaces that are directed toward an observer. To improve the visual comfort of our living environment, a portable inspection system to estimate the direct glare of various commercial LED modules with color temperatures ranging from 3100 K to 5300 K was developed in this study. The system utilizes HDR images to obtain the illumination distribution of LED modules; it was first calibrated for brightness and chromaticity and corrected for flat field, dark corners, and curvature by the installed algorithm. The index of direct glare is then estimated automatically after image capture, and the operator can assess the performance of an LED module and its possible effects on observers whenever the index falls outside the expected range. We expect that this quick-response smart inspection system can be applied in several new fields and markets, such as home energy diagnostics, environmental lighting, and UGR monitoring.
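The UGR monitoring mentioned above is based on the CIE Unified Glare Rating, UGR = 8 log10((0.25/Lb) Σ L²ω/p²), where Lb is the background luminance, L and ω are each glare source's luminance and solid angle, and p is its Guth position index. A minimal sketch with made-up photometric values (not the paper's measurement data):

```python
import math

def ugr(background_lum, sources):
    """CIE Unified Glare Rating.
    background_lum: background luminance Lb, cd/m^2.
    sources: list of (luminance cd/m^2, solid angle sr, Guth position index)."""
    s = sum(L ** 2 * omega / p ** 2 for L, omega, p in sources)
    return 8.0 * math.log10(0.25 / background_lum * s)

# one bright LED module seen against a dim background (illustrative numbers)
value = ugr(20.0, [(2000.0, 0.01, 1.0)])
print(round(value, 2))   # 21.59
```

In an HDR-imaging system each pixel cluster above a luminance threshold would be treated as a glare source, with L, ω, and p derived from the calibrated image geometry.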

  6. A Doppler centroid estimation algorithm for SAR systems optimized for the quasi-homogeneous source

    NASA Technical Reports Server (NTRS)

    Jin, Michael Y.

    1989-01-01

    Radar signal processing applications frequently require an estimate of the Doppler centroid of a received signal. The Doppler centroid estimate is required for synthetic aperture radar (SAR) processing. It is also required for some applications involving target motion estimation and antenna pointing direction estimation. In some cases, the Doppler centroid can be accurately estimated based on available information regarding the terrain topography, the relative motion between the sensor and the terrain, and the antenna pointing direction. Often, the accuracy of the Doppler centroid estimate can be improved by analyzing the characteristics of the received SAR signal. This kind of signal processing is also referred to as clutterlock processing. A Doppler centroid estimation (DCE) algorithm is described which contains a linear estimator optimized for the type of terrain surface that can be modeled by a quasi-homogeneous source (QHS). Information on the following topics is presented: (1) an introduction to the theory of Doppler centroid estimation; (2) analysis of the performance characteristics of previously reported DCE algorithms; (3) comparison of these analysis results with experimental results; (4) a description and performance analysis of a Doppler centroid estimator which is optimized for a QHS; and (5) comparison of the performance of the optimal QHS Doppler centroid estimator with that of previously reported methods.
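A common clutterlock building block is the pulse-pair (correlation) estimator, which reads the Doppler centroid off the phase of the lag-one azimuth autocorrelation. The sketch below uses that generic estimator with invented parameters; it is not the paper's QHS-optimized linear estimator:

```python
import numpy as np

rng = np.random.default_rng(3)
prf, f_dc, n = 1700.0, 300.0, 200_000        # invented PRF, centroid, length
t = np.arange(n) / prf

# band-limited complex clutter shifted in frequency to the Doppler centroid
w = rng.standard_normal(n) + 1j * rng.standard_normal(n)
clutter = np.convolve(w, np.ones(8) / 8, mode='same')
x = clutter * np.exp(2j * np.pi * f_dc * t)

r1 = np.sum(np.conj(x[:-1]) * x[1:])         # lag-one autocorrelation
f_est = np.angle(r1) * prf / (2 * np.pi)     # centroid, ambiguous mod PRF
```

The white noise must be band-limited (here by a crude moving average) for the lag-one correlation to be nonzero; the estimate is only unambiguous within ±PRF/2, which is why topography- and geometry-based predictions are still needed to resolve the ambiguity.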

  7. Deconvolution enhanced direction of arrival estimation using one- and three-component seismic arrays applied to ocean induced microseisms

    NASA Astrophysics Data System (ADS)

    Gal, M.; Reading, A. M.; Ellingsen, S. P.; Koper, K. D.; Burlacu, R.; Gibbons, S. J.

    2016-07-01

Microseisms in the period of 2-10 s are generated in deep oceans and near coastal regions. It is common for microseisms from multiple sources to arrive at the same time at a given seismometer. It is therefore desirable to be able to measure multiple slowness vectors accurately. Popular ways to estimate the direction of arrival of ocean induced microseisms are the conventional (fk) or adaptive (Capon) beamformer. These techniques give robust estimates, but are limited in their resolution capabilities and hence do not always detect all arrivals. One of the limiting factors in determining direction of arrival with seismic arrays is the array response, which can strongly influence the estimation of weaker sources. In this work, we aim to improve the resolution for weaker sources and evaluate the performance of two deconvolution algorithms, Richardson-Lucy deconvolution and a new implementation of CLEAN-PSF. The algorithms are tested with three arrays of different aperture (ASAR, WRA and NORSAR) using 1 month of real data each and compared with the conventional approaches. We find that both algorithms improve on the conventional methods, with CLEAN-PSF performing best. We then extend the CLEAN-PSF framework to three components (3C) and evaluate 1 yr of data from the Pilbara Seismic Array in northwest Australia. The 3C CLEAN-PSF analysis is capable of resolving a previously undetected Sn phase.
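Richardson-Lucy deconvolution sharpens a beam-power map by iteratively dividing the data by the current estimate blurred with the array response. A 1-D toy sketch (synthetic spikes and a Gaussian stand-in for the array response, not the paper's arrays):

```python
import numpy as np

def richardson_lucy(d, psf, n_iter=200):
    """1-D Richardson-Lucy deconvolution for nonnegative data and PSF."""
    u = np.full_like(d, d.mean())            # flat initial estimate
    psf_m = psf[::-1]                        # mirrored PSF
    for _ in range(n_iter):
        conv = np.convolve(u, psf, mode='same')
        ratio = d / np.maximum(conv, 1e-12)  # guard against divide-by-zero
        u = u * np.convolve(ratio, psf_m, mode='same')
    return u

# two point arrivals blurred by a broad "array response"
x = np.zeros(128)
x[40], x[80] = 1.0, 0.6
psf = np.exp(-0.5 * (np.arange(-15, 16) / 4.0) ** 2)
psf /= psf.sum()
d = np.convolve(x, psf, mode='same')
u = richardson_lucy(d, psf)
# the deconvolved estimate re-concentrates power near bins 40 and 80
```

The multiplicative update preserves nonnegativity and total power, which suits beam-power maps; CLEAN-style algorithms instead subtract scaled copies of the point-spread function at successive peaks.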

  8. Assessment of source-specific health effects associated with an unknown number of major sources of multiple air pollutants: a unified Bayesian approach.

    PubMed

    Park, Eun Sug; Hopke, Philip K; Oh, Man-Suk; Symanski, Elaine; Han, Daikwon; Spiegelman, Clifford H

    2014-07-01

    There has been increasing interest in assessing health effects associated with multiple air pollutants emitted by specific sources. A major difficulty with achieving this goal is that the pollution source profiles are unknown and source-specific exposures cannot be measured directly; rather, they need to be estimated by decomposing ambient measurements of multiple air pollutants. This estimation process, called multivariate receptor modeling, is challenging because of the unknown number of sources and unknown identifiability conditions (model uncertainty). The uncertainty in source-specific exposures (source contributions), as well as uncertainty in the number of major pollution sources and identifiability conditions, has been largely ignored in previous studies. This paper presents a multipollutant approach that can deal with model uncertainty in multivariate receptor models while simultaneously accounting for parameter uncertainty in estimated source-specific exposures when assessing source-specific health effects. The methods are applied to daily ambient air measurements of the chemical composition of fine particulate matter (PM2.5), weather data, and counts of cardiovascular deaths from 1995 to 1997 for Phoenix, AZ, USA. Our approach for evaluating source-specific health effects yields not only estimates of source contributions, along with their uncertainties and associated health effect estimates, but also estimates of model uncertainty (posterior model probabilities) that have been ignored in previous studies. The results from our methods agreed in general with those from previously conducted workshops/studies on the source apportionment of PM health effects in terms of the number of major contributing sources, estimated source profiles, and contributions. However, some of the adverse source-specific health effects identified in the previous studies were not statistically significant in our analysis, probably because we incorporated into the estimation of health effect parameters the parameter uncertainty in estimated source contributions that previous studies had ignored.

  9. An evaluation of talker localization based on direction of arrival estimation and statistical sound source identification

    NASA Astrophysics Data System (ADS)

    Nishiura, Takanobu; Nakamura, Satoshi

    2002-11-01

    It is very important to capture distant-talking speech with high quality for a hands-free speech interface. A microphone array is an ideal candidate for this purpose. However, this approach requires localizing the target talker. Conventional talker localization algorithms in multiple sound source environments not only have difficulty localizing the multiple sound sources accurately, but also have difficulty localizing the target talker among known multiple sound source positions. To cope with these problems, we propose a new talker localization algorithm consisting of two algorithms. One is a DOA (direction of arrival) estimation algorithm for multiple sound source localization based on the CSP (cross-power spectrum phase) coefficient addition method. The other is a statistical sound source identification algorithm based on a GMM (Gaussian mixture model) for localizing the target talker position among the localized multiple sound sources. In this paper, we particularly focus on the talker localization performance based on the combination of these two algorithms with a microphone array. We conducted evaluation experiments in real noisy reverberant environments. As a result, we confirmed that multiple sound signals can be identified accurately as "speech" or "non-speech" by the proposed algorithm. [Work supported by ATR, and MEXT of Japan.]
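The CSP coefficient used in the first stage is what the array-processing literature calls GCC-PHAT: the cross-power spectrum is whitened to unit magnitude so only phase (delay) information survives. A minimal sketch for one microphone pair follows (function names are mine; the paper's coefficient-addition step would sum such coefficients over multiple pairs):

```python
import numpy as np

def gcc_phat(x, y, fs, max_tau=None):
    """CSP / GCC-PHAT time-delay estimate between two microphone signals."""
    n = len(x) + len(y)
    X = np.fft.rfft(x, n=n)
    Y = np.fft.rfft(y, n=n)
    cross = X * np.conj(Y)
    cross /= np.maximum(np.abs(cross), 1e-12)   # phase transform weighting
    cc = np.fft.irfft(cross, n=n)
    max_shift = n // 2 if max_tau is None else min(int(fs * max_tau), n // 2)
    # Rearrange so negative lags precede positive lags, then pick the peak.
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
    shift = np.argmax(cc) - max_shift
    return shift / fs

# A 5-sample delay at fs = 16 kHz should be recovered.
fs = 16000
rng = np.random.default_rng(0)
s = rng.standard_normal(4096)
delayed = np.roll(s, 5)
print(round(gcc_phat(delayed, s, fs) * fs))  # -> 5
```

With an array, each pair's delay constrains the direction of arrival; summing CSP coefficients across pairs before peak picking is what makes the addition method robust in reverberation.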

  10. No-search algorithm for direction of arrival estimation

    NASA Astrophysics Data System (ADS)

    Tuncer, T. Engin; Özgen, M. Tankut

    2009-10-01

    Direction of arrival estimation (DOA) is an important problem in ionospheric research and electromagnetics as well as many other fields. When superresolution techniques are used, a computationally expensive search should be performed in general. In this paper, a no-search algorithm is presented. The idea is to separate the source signals in the time-frequency plane by using the Short-Time Fourier Transform. The direction vector for each source is found by coherent summation over the instantaneous frequency (IF) tracks of the individual sources which are found automatically by employing morphological image processing. Both overlapping and nonoverlapping source IF tracks can be processed and identified by the proposed approach. The CLEAN algorithm is adopted in order to isolate the IF tracks of the overlapping sources with different powers. The proposed method is very effective in finding the IF tracks and can be applied for signals with arbitrary IF characteristics. While the proposed method can be applied to any sensor geometry, planar uniform circular arrays (UCA) bring additional advantages. Different properties of the UCA are presented, and it is shown that the DOA angles can be found as the mean-square error optimum solution of a linear matrix equation. Several simulations are done, and it is shown that the proposed approach performs significantly better than the conventional methods.
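Separating sources on the time-frequency plane starts from an instantaneous-frequency track. The paper extracts tracks with morphological image processing; the basic idea can be sketched far more simply with per-frame STFT peak picking (frame sizes and the chirp are illustrative assumptions):

```python
import numpy as np

def if_track(signal, fs, frame=256, hop=64):
    """Instantaneous-frequency track via per-frame STFT peak picking."""
    window = np.hanning(frame)
    freqs = np.fft.rfftfreq(frame, d=1.0 / fs)
    track = []
    for start in range(0, len(signal) - frame, hop):
        spectrum = np.abs(np.fft.rfft(signal[start:start + frame] * window))
        track.append(freqs[np.argmax(spectrum)])
    return np.array(track)

# A linear chirp from 100 Hz upward: the track should rise accordingly.
fs = 8000
t = np.arange(fs) / fs
chirp = np.sin(2 * np.pi * (100 * t + 150 * t ** 2))  # f(t) = 100 + 300 t
track = if_track(chirp, fs)
print(track[0] < track[-1])  # -> True
```

Peak picking fails when two sources' tracks cross, which is exactly why the paper adds morphological processing and CLEAN-based isolation for overlapping tracks.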

  11. Comparison of actual and seismologically inferred stress drops in dynamic models of microseismicity

    NASA Astrophysics Data System (ADS)

    Lin, Y. Y.; Lapusta, N.

    2017-12-01

    Estimating source parameters for small earthquakes is commonly based on either Brune or Madariaga source models. These models assume circular rupture that starts from the center of a fault and spreads axisymmetrically with a constant rupture speed. The resulting stress drops are moment-independent, with large scatter. However, more complex source behaviors are commonly discovered by finite-fault inversions for both large and small earthquakes, including directivity, heterogeneous slip, and non-circular shapes. Recent studies (Noda, Lapusta, and Kanamori, GJI, 2013; Kaneko and Shearer, GJI, 2014; JGR, 2015) have shown that slip heterogeneity and directivity can result in large discrepancies between the actual and estimated stress drops. We explore the relation between the actual and seismologically estimated stress drops for several types of numerically produced microearthquakes. For example, an asperity-type circular fault patch with increasing normal stress towards the middle of the patch, surrounded by a creeping region, is a potentially common microseismicity source. In such models, a number of events rupture the portion of the patch near its circumference, producing ring-like ruptures, before a patch-spanning event occurs. We calculate the far-field synthetic waveforms for our simulated sources and estimate their spectral properties. The distribution of corner frequencies over the focal sphere is markedly different for the ring-like sources compared to the Madariaga model. Furthermore, most waveforms for the ring-like sources are better fitted by a high-frequency fall-off rate different from the commonly assumed value of 2 (from the so-called omega-squared model), with the average value over the focal sphere being 1.5. The application of Brune- or Madariaga-type analysis to these sources results in stress drop estimates that differ from the actual stress drops by a factor of up to 125 in the models we considered.
We will report on our current studies of other types of seismic sources, such as repeating earthquakes and foreshock-like events, and whether the potentially realistic and common sources different from the standard Brune and Madariaga models can be identified from their focal spectral signatures and studied using a more tailored seismological analysis.
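The factor-of-125 discrepancies above are between actual stress drops and those inferred from the standard circular-crack relations. Those relations are compact; here is a sketch of the usual moment and corner-frequency to stress-drop conversion, where the constant k is itself a model assumption:

```python
import numpy as np

def circular_crack_stress_drop(m0, fc, beta, k=0.372):
    """Stress drop from seismic moment and corner frequency.

    m0:   seismic moment, N*m
    fc:   corner frequency, Hz
    beta: shear-wave speed, m/s
    k:    model constant relating fc and radius (0.372 for the Brune model;
          Madariaga-type models use different values for P and S waves)
    """
    radius = k * beta / fc                  # circular source radius, m
    return 7.0 * m0 / (16.0 * radius ** 3)  # stress drop, Pa

# An M ~ 2 event (m0 = 2e12 N*m) with fc = 10 Hz and beta = 3500 m/s:
sd = circular_crack_stress_drop(2e12, 10.0, 3500.0)
print(f"{sd / 1e6:.2f} MPa")
```

Because the radius enters cubed, a modest bias in the apparent corner frequency (as for the ring-like sources above) is amplified into a very large bias in the estimated stress drop.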

  12. Two-dimensional grid-free compressive beamforming.

    PubMed

    Yang, Yang; Chu, Zhigang; Xu, Zhongming; Ping, Guoli

    2017-08-01

    Compressive beamforming realizes the direction-of-arrival (DOA) estimation and strength quantification of acoustic sources by solving an underdetermined system of equations relating microphone pressures to a source distribution via compressive sensing. The conventional method assumes DOAs of sources to lie on a grid; its performance degrades due to basis mismatch when the assumption is not satisfied. To overcome this limitation for measurements with planar microphone arrays, a two-dimensional grid-free compressive beamforming is developed. First, a continuum-based atomic norm minimization is defined to denoise the measured pressure and thus obtain the pressure from sources. Next, a positive semidefinite program is formulated to approximate the atomic norm minimization. Subsequently, a reasonably fast algorithm based on the alternating direction method of multipliers is presented to solve the positive semidefinite program. Finally, the matrix enhancement and matrix pencil method is introduced to process the obtained pressure and reconstruct the source distribution. Both simulations and experiments demonstrate that under certain conditions, the grid-free compressive beamforming can provide high-resolution and low-contamination imaging, allowing accurate and fast estimation of two-dimensional DOAs and quantification of source strengths, even with non-uniform arrays and noisy measurements.

  13. Study of the Seismic Source in the Jalisco Block

    NASA Astrophysics Data System (ADS)

    Gutierrez, Q. J.; Escudero, C. R.; Nunez-Cornu, F. J.; Ochoa, J.; Cruz, L. H.

    2013-05-01

    Directly measuring an earthquake's fault dimensions, orientation, and direction of slip is a complicated task; a better approach uses the seismic wave spectrum and the direction of P-wave first motions observed at each station. With these methods we can estimate seismic source parameters such as the stress drop, the corner frequency (which is linked to the rupture duration), the fault radius (for the particular case of a circular fault), the rupture area, the seismic moment, the moment magnitude, and the focal mechanisms. The study area where the source parameters were estimated comprises the complex tectonic configuration of the Jalisco block, which is delimited by the Mesoamerican trench to the west, the Colima graben to the south, and the Tepic-Zacoalco rift to the north. The data were recorded by the MARS (Mapping the Riviera Subduction Zone) network and the RESAJ network. MARS had 50 stations deployed in the Jalisco block and operated from January 1, 2006 until June 2007; recorded magnitudes ranged from 3 to 6.5 mb. RESAJ has 10 stations within the state of Jalisco and has recorded continuously since October 2011. Before applying the method we first remove the trend, the mean, and the instrument response, and correct for attenuation; the S wave is then picked manually, and the multitaper method is used to obtain its spectrum and thereby estimate the corner frequency and the spectral level. We substitute the obtained values into the equations of the Brune model to calculate the source parameters. To calculate focal mechanisms, the HASH software was used, which determines the most likely mechanism. The main purpose of this study is to estimate earthquake seismic source parameters in order to help understand the physics of the earthquake rupture mechanism in the area.
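Estimating the corner frequency and spectral level as described reduces to fitting the Brune omega-square spectrum to an observed S-wave amplitude spectrum. A hedged sketch using a simple grid search follows; the study uses multitaper spectra of picked S waves, and the synthetic data and search ranges here are illustrative:

```python
import numpy as np

def fit_brune(freqs, amps):
    """Grid-search fit of the Brune omega-square model
    Omega(f) = Omega0 / (1 + (f/fc)^2) to a displacement amplitude spectrum."""
    best = (None, None, np.inf)
    for fc in np.logspace(-1, 2, 400):          # trial corner frequencies, Hz
        shape = 1.0 / (1.0 + (freqs / fc) ** 2)
        # Closed-form least-squares spectral level for this trial fc.
        omega0 = np.dot(amps, shape) / np.dot(shape, shape)
        resid = np.sum((amps - omega0 * shape) ** 2)
        if resid < best[2]:
            best = (omega0, fc, resid)
    return best[0], best[1]

# Recover the parameters of a noise-free synthetic Brune spectrum.
freqs = np.linspace(0.1, 50.0, 500)
true_omega0, true_fc = 3.0, 5.0
amps = true_omega0 / (1.0 + (freqs / true_fc) ** 2)
omega0, fc = fit_brune(freqs, amps)
print(round(omega0, 2), round(fc, 2))
```

The fitted spectral level is proportional to the seismic moment, and the corner frequency feeds the circular-fault radius and stress-drop relations of the Brune model.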

  14. Trial Results of Ship Motions and Their Influence on Aircraft Operations for ISCS GUAM

    DTIC Science & Technology

    1975-12-01

    vide an estimate of the relative frequency and thus importance of ship motions as a source of Harrier operation cancellations. It may be seen that of the...example. If wind speed is considered to be the only source of restrictions in aircraft operations, estimates of the maximum total number of operational days...for various components of ship motion is directly related

  15. Wide-band array signal processing via spectral smoothing

    NASA Technical Reports Server (NTRS)

    Xu, Guanghan; Kailath, Thomas

    1989-01-01

    A novel algorithm for the estimation of direction-of-arrivals (DOA) of multiple wide-band sources via spectral smoothing is presented. The proposed algorithm does not require an initial DOA estimate or a specific signal model. The advantages of replacing the MUSIC search with an ESPRIT search are discussed.

  16. 2-D or not 2-D, that is the question: A Northern California test

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mayeda, K; Malagnini, L; Phillips, W S

    2005-06-06

    Reliable estimates of the seismic source spectrum are necessary for accurate magnitude, yield, and energy estimation. In particular, how seismic radiated energy scales with increasing earthquake size has been the focus of recent debate within the community and has direct implications on earthquake source physics studies as well as hazard mitigation. The 1-D coda methodology of Mayeda et al. has provided the lowest variance estimate of the source spectrum when compared against traditional approaches that use direct S-waves, thus making it ideal for networks that have sparse station distribution. The 1-D coda methodology has been mostly confined to regions of approximately uniform complexity. For larger, more geophysically complicated regions, 2-D path corrections may be required. The complicated tectonics of the northern California region coupled with high quality broadband seismic data provides for an ideal "apples-to-apples" test of 1-D and 2-D path assumptions on direct waves and their coda. Using the same station and event distribution, we compared 1-D and 2-D path corrections and observed the following results: (1) 1-D coda results reduced the amplitude variance relative to direct S-waves by roughly a factor of 8 (800%); (2) Applying a 2-D correction to the coda resulted in up to 40% variance reduction from the 1-D coda results; (3) 2-D direct S-wave results, though better than 1-D direct waves, were significantly worse than the 1-D coda. We found that coda-based moment-rate source spectra derived from the 2-D approach were essentially identical to those from the 1-D approach for frequencies less than ~0.7 Hz; however, for the high frequencies (0.7 ≤ f ≤ 8.0 Hz), the 2-D approach resulted in inter-station scatter that was generally 10-30% smaller. For complex regions where data are plentiful, a 2-D approach can significantly improve upon the simple 1-D assumption. 
In regions where only a 1-D coda correction is available, it is still preferable over 2-D direct wave-based measures.

  17. A process-based emission model for volatile organic compounds from silage sources on farms

    USDA-ARS?s Scientific Manuscript database

    Silage on dairy farms can emit large amounts of volatile organic compounds (VOCs), a precursor in the formation of tropospheric ozone. Because of the challenges associated with direct measurements, process-based modeling is another approach for estimating emissions of air pollutants from sources suc...

  18. Kalman Filters for Time Delay of Arrival-Based Source Localization

    NASA Astrophysics Data System (ADS)

    Klee, Ulrich; Gehrig, Tobias; McDonough, John

    2006-12-01

    In this work, we propose an algorithm for acoustic source localization based on time delay of arrival (TDOA) estimation. In earlier work by other authors, an initial closed-form approximation was first used to estimate the true position of the speaker followed by a Kalman filtering stage to smooth the time series of estimates. In the proposed algorithm, this closed-form approximation is eliminated by employing a Kalman filter to directly update the speaker's position estimate based on the observed TDOAs. In particular, the TDOAs comprise the observation associated with an extended Kalman filter whose state corresponds to the speaker's position. We tested our algorithm on a data set consisting of seminars held by actual speakers. Our experiments revealed that the proposed algorithm provides source localization accuracy superior to the standard spherical and linear intersection techniques. Moreover, the proposed algorithm, although relying on an iterative optimization scheme, proved efficient enough for real-time operation.
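The core of the proposed method, using TDOAs directly as the observation of an extended Kalman filter over the speaker position, can be sketched as follows. This is a 2-D toy with a static source and iterated updates on one noiseless observation; the geometry, noise covariances, and iteration count are all illustrative assumptions:

```python
import numpy as np

C = 343.0  # speed of sound in air, m/s

def predict_tdoas(pos, mics):
    """TDOAs of each microphone relative to mic 0 for a source at pos."""
    d = np.linalg.norm(mics - pos, axis=1)
    return (d[1:] - d[0]) / C

def tdoa_jacobian(pos, mics):
    d = np.linalg.norm(mics - pos, axis=1)
    grad = (pos - mics) / d[:, None]       # gradient of each range w.r.t. pos
    return (grad[1:] - grad[0]) / C

def ekf_update(pos, P, observed, mics, R):
    """One extended Kalman filter measurement update for a static source."""
    H = tdoa_jacobian(pos, mics)
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    pos = pos + K @ (observed - predict_tdoas(pos, mics))
    P = (np.eye(len(pos)) - K @ H) @ P
    return pos, P

mics = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 4.0], [4.0, 4.0]])
truth = np.array([1.5, 2.5])
observed = predict_tdoas(truth, mics)      # noiseless TDOAs
pos, P = np.array([2.0, 2.0]), np.eye(2)   # initial guess and covariance
R = np.eye(3) * 1e-10                      # TDOA measurement noise covariance
for _ in range(30):
    P = P + np.eye(2) * 1e-2               # random-walk process noise
    pos, P = ekf_update(pos, P, observed, mics, R)
print(np.round(pos, 2))
```

In the paper the state also carries speaker dynamics and the observations arrive sequentially; the point of the sketch is only that the TDOA vector itself, not a closed-form position fix, drives the Kalman update.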

  19. Adaptive Sparse Representation for Source Localization with Gain/Phase Errors

    PubMed Central

    Sun, Ke; Liu, Yimin; Meng, Huadong; Wang, Xiqin

    2011-01-01

    Sparse representation (SR) algorithms can be implemented for high-resolution direction of arrival (DOA) estimation. Additionally, SR can effectively separate the coherent signal sources because the spectrum estimation is based on the optimization technique, such as the L1 norm minimization, but not on subspace orthogonality. However, in the actual source localization scenario, an unknown gain/phase error between the array sensors is inevitable. Due to this nonideal factor, the predefined overcomplete basis mismatches the actual array manifold so that the estimation performance is degraded in SR. In this paper, an adaptive SR algorithm is proposed to improve the robustness with respect to the gain/phase error, where the overcomplete basis is dynamically adjusted using multiple snapshots and the sparse solution is adaptively acquired to match with the actual scenario. The simulation results demonstrate the estimation robustness to the gain/phase error using the proposed method. PMID:22163875

  20. Fast Noncircular 2D-DOA Estimation for Rectangular Planar Array

    PubMed Central

    Xu, Lingyun; Wen, Fangqing

    2017-01-01

    A novel scheme is proposed for direction finding with uniform rectangular planar array. First, the characteristics of noncircular signals and Euler’s formula are exploited to construct a new real-valued rectangular array data. Then, the rotational invariance relations for real-valued signal space are depicted in a new way. Finally the real-valued propagator method is utilized to estimate the pairing two-dimensional direction of arrival (2D-DOA). The proposed algorithm provides better angle estimation performance and can discern more sources than the 2D propagator method. At the same time, it has very close angle estimation performance to the noncircular propagator method (NC-PM) with reduced computational complexity. PMID:28417926

  1. Directly comparing gravitational wave data to numerical relativity simulations: systematics

    NASA Astrophysics Data System (ADS)

    Lange, Jacob; O'Shaughnessy, Richard; Healy, James; Lousto, Carlos; Zlochower, Yosef; Shoemaker, Deirdre; Lovelace, Geoffrey; Pankow, Christopher; Brady, Patrick; Scheel, Mark; Pfeiffer, Harald; Ossokine, Serguei

    2017-01-01

    We compare synthetic data directly to complete numerical relativity simulations of binary black holes. In doing so, we circumvent ad-hoc approximations introduced in semi-analytical models previously used in gravitational wave parameter estimation and compare the data against the most accurate waveforms including higher modes. In this talk, we focus on the synthetic studies that test potential sources of systematic errors. We also run "end-to-end" studies of intrinsically different synthetic sources to show we can recover parameters for different systems.

  2. Gain-loss study of lower San Pedro Creek and the San Antonio River, San Antonio, Texas, May-October 1999

    USGS Publications Warehouse

    Ockerman, Darwin J.

    2002-01-01

    Five streamflow gain-loss measurement surveys were made along lower San Pedro Creek and the San Antonio River from Mitchell Street to South Loop 410 east of Kelly Air Force Base in San Antonio, Texas, during May–October 1999. All of the measurements were made during dry periods, when stormwater runoff was not occurring and effects of possible bank storage were minimized. San Pedro Creek and the San Antonio River were divided into six subreaches, and streamflow measurements were made simultaneously at the boundaries of these subreaches so that streamflow gains or losses and estimates of inflow from or outflow to shallow ground water could be quantified for each subreach. There are two possible sources of ground-water inflow to lower San Pedro Creek and the San Antonio River east of Kelly Air Force Base. One source is direct inflow of shallow ground water into the streams. The other source is ground water that enters tributaries that flow into the San Antonio River. The estimated mean direct inflow of ground water to the combined San Pedro Creek and San Antonio River study reach was 3.0 cubic feet per second or 1.9 million gallons per day. The mean tributary inflow of ground water was estimated to be 1.9 cubic feet per second or 1.2 million gallons per day. The total estimated inflow of shallow ground water was 4.9 cubic feet per second or 3.2 million gallons per day. The amount of inflow from springs and seeps (estimated by observation) is much less than the amount of direct ground-water inflow estimated from the gain-loss measurements. Therefore, the presence of springs and seeps might not be a reliable indicator of the source of shallow ground water entering the river. Most of the shallow ground water that enters the San Antonio River from tributary inflow enters from the west side, through Concepcion Creek, inflows near Riverside Golf Course, and Six-Mile Creek. 

  3. A Direct Position-Determination Approach for Multiple Sources Based on Neural Network Computation.

    PubMed

    Chen, Xin; Wang, Ding; Yin, Jiexin; Wu, Ying

    2018-06-13

    The most widely used localization technology is the two-step method that localizes transmitters by measuring one or more specified positioning parameters. Direct position determination (DPD) is a promising technique that directly localizes transmitters from sensor outputs and can offer superior localization performance. However, existing DPD algorithms such as maximum likelihood (ML)-based and multiple signal classification (MUSIC)-based estimations are computationally expensive, making it difficult to satisfy real-time demands. To solve this problem, we propose the use of a modular neural network for multiple-source DPD. In this method, the area of interest is divided into multiple sub-areas. Multilayer perceptron (MLP) neural networks are employed to detect the presence of a source in a sub-area and filter sources in other sub-areas, and radial basis function (RBF) neural networks are utilized for position estimation. Simulation results show that a number of appropriately trained neural networks can be successfully used for DPD. The performance of the proposed MLP-MLP-RBF method is comparable to the performance of the conventional MUSIC-based DPD algorithm for various signal-to-noise ratios and signal power ratios. Furthermore, the MLP-MLP-RBF network is less computationally intensive than the classical DPD algorithm and is therefore an attractive choice for real-time applications.

  4. Sparsity-Aware DOA Estimation Scheme for Noncircular Source in MIMO Radar.

    PubMed

    Wang, Xianpeng; Wang, Wei; Li, Xin; Liu, Qi; Liu, Jing

    2016-04-14

    In this paper, a novel sparsity-aware direction of arrival (DOA) estimation scheme for a noncircular source is proposed in multiple-input multiple-output (MIMO) radar. In the proposed method, the reduced-dimensional transformation technique is adopted to eliminate the redundant elements. Then, exploiting the noncircularity of signals, a joint sparsity-aware scheme based on the reweighted l1 norm penalty is formulated for DOA estimation, in which the diagonal elements of the weight matrix are the coefficients of the noncircular MUSIC-like (NC MUSIC-like) spectrum. Compared to the existing l1 norm penalty-based methods, the proposed scheme provides higher angular resolution and better DOA estimation performance. Results from numerical experiments are used to show the effectiveness of our proposed method.

  5. User's guide for RAM. Volume II. Data preparation and listings

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Turner, D.B.; Novak, J.H.

    1978-11-01

    The information presented in this user's guide is directed to air pollution scientists having an interest in applying air quality simulation models. RAM is a method of estimating short-term dispersion using the Gaussian steady-state model. These algorithms can be used for estimating air quality concentrations of relatively nonreactive pollutants for averaging times from an hour to a day from point and area sources. The algorithms are applicable for locations with level or gently rolling terrain where a single wind vector for each hour is a good approximation to the flow over the source area considered. Calculations are performed for each hour. Hourly meteorological data required are wind direction, wind speed, temperature, stability class, and mixing height. Emission information required of point sources consists of source coordinates, emission rate, physical height, stack diameter, stack gas exit velocity, and stack gas temperature. Emission information required of area sources consists of southwest corner coordinates, source side length, total area emission rate and effective area source height. Computation time is kept to a minimum by the manner in which concentrations from area sources are estimated using a narrow plume hypothesis and using the area source squares as given rather than breaking down all sources into an area of uniform elements. Options are available to the user to allow use of three different types of receptor locations: (1) those whose coordinates are input by the user, (2) those whose coordinates are determined by the model and are downwind of significant point and area sources where maxima are likely to occur, and (3) those whose coordinates are determined by the model to give good area coverage of a specific portion of the region. Computation time is also decreased by keeping the number of receptors to a minimum. Volume II presents RAM example outputs, typical run streams, variable glossaries, and Fortran source codes.
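The Gaussian steady-state point-source calculation at the heart of RAM has a compact closed form. A sketch of that formula with illustrative numbers follows; RAM itself derives sigma_y and sigma_z from stability class and downwind distance and adds plume rise, neither of which is modeled here:

```python
import numpy as np

def gaussian_plume(q, u, y, z, h, sigma_y, sigma_z):
    """Steady-state Gaussian plume concentration with ground reflection.

    q: emission rate (g/s); u: wind speed (m/s); y, z: receptor crosswind
    and vertical position (m); h: effective stack height (m);
    sigma_y, sigma_z: dispersion parameters at the receptor's downwind
    distance (m). Returns concentration in g/m^3.
    """
    lateral = np.exp(-0.5 * (y / sigma_y) ** 2)
    # Image-source term reflects the plume off the ground (z = 0).
    vertical = (np.exp(-0.5 * ((z - h) / sigma_z) ** 2)
                + np.exp(-0.5 * ((z + h) / sigma_z) ** 2))
    return q / (2 * np.pi * u * sigma_y * sigma_z) * lateral * vertical

# Ground-level, centerline concentration downwind of a 50 m stack
# (illustrative sigma values for roughly 1 km downwind).
c = gaussian_plume(q=100.0, u=5.0, y=0.0, z=0.0,
                   h=50.0, sigma_y=80.0, sigma_z=40.0)
print(f"{c * 1e6:.1f} ug/m^3")
```

RAM's narrow-plume treatment of area sources is an optimization layered on top of this same kernel.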

  6. Source localization in an ocean waveguide using supervised machine learning.

    PubMed

    Niu, Haiqiang; Reeves, Emma; Gerstoft, Peter

    2017-09-01

    Source localization in ocean acoustics is posed as a machine learning problem in which data-driven methods learn source ranges directly from observed acoustic data. The pressure received by a vertical linear array is preprocessed by constructing a normalized sample covariance matrix and used as the input for three machine learning methods: feed-forward neural networks (FNN), support vector machines (SVM), and random forests (RF). The range estimation problem is solved both as a classification problem and as a regression problem by these three machine learning algorithms. The results of range estimation for the Noise09 experiment are compared for FNN, SVM, RF, and conventional matched-field processing and demonstrate the potential of machine learning for underwater source localization.
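The preprocessing step, forming a normalized sample covariance matrix so the learned estimator sees relative array structure rather than absolute source level, can be sketched as follows (shapes and the exact normalization are my assumptions about the general approach, not the paper's precise recipe):

```python
import numpy as np

def normalized_scm(pressure):
    """Normalized sample covariance matrix from vertical-array snapshots.

    pressure: complex array, shape (n_sensors, n_snapshots).
    Returns a Hermitian, unit-trace matrix suitable as classifier input.
    """
    # Normalize each snapshot to unit norm so only the relative phase and
    # amplitude structure across the array survives.
    snapshots = pressure / np.linalg.norm(pressure, axis=0, keepdims=True)
    scm = snapshots @ snapshots.conj().T / pressure.shape[1]
    return scm / np.trace(scm).real

rng = np.random.default_rng(1)
p = rng.standard_normal((16, 100)) + 1j * rng.standard_normal((16, 100))
c = normalized_scm(p)
print(np.allclose(np.trace(c).real, 1.0), np.allclose(c, c.conj().T))  # -> True True
```

The real and imaginary parts of this matrix (or its upper triangle) are then flattened into the feature vector fed to the FNN, SVM, or RF range estimator.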

  7. SOURCE SAMPLING FINE PARTICULATE MATTER--INSTITUTIONAL OIL-FIRED BOILER

    EPA Science Inventory

    EPA seeks to understand the correlation between ambient fine PM and adverse human health effects, and there are no reliable emission factors to use for estimating PM2.5 or NH3. The most common source of directly emitted PM2.5 is incomplete combustion of fossil or biomass fuels. M...

  8. Adaptive near-field beamforming techniques for sound source imaging.

    PubMed

    Cho, Yong Thung; Roan, Michael J

    2009-02-01

    Phased array signal processing techniques such as beamforming have a long history in applications such as sonar for detection and localization of far-field sound sources. Two sometimes competing challenges arise in any type of spatial processing; these are to minimize contributions from directions other than the look direction and minimize the width of the main lobe. To tackle this problem a large body of work has been devoted to the development of adaptive procedures that attempt to minimize side lobe contributions to the spatial processor output. In this paper, two adaptive beamforming procedures, minimum variance distortionless response and weight optimization to minimize maximum side lobes, are modified for use in source visualization applications to estimate beamforming pressure and intensity using near-field pressure measurements. These adaptive techniques are compared to a fixed near-field focusing technique (both techniques use near-field beamforming weightings focusing at source locations estimated based on spherical wave array manifold vectors with spatial windows). Sound source resolution accuracies of near-field imaging procedures with different weighting strategies are compared using numerical simulations both in anechoic and reverberant environments with random measurement noise. Also, experimental results are given for near-field sound pressure measurements of an enclosed loudspeaker.

  9. Assessing Model Characterization of Single Source ...

    EPA Pesticide Factsheets

    Aircraft measurements made downwind from specific coal fired power plants during the 2013 Southeast Nexus field campaign provide a unique opportunity to evaluate single source photochemical model predictions of both O3 and secondary PM2.5 species. The model did well at predicting downwind plume placement. The model shows similar patterns of an increasing fraction of PM2.5 sulfate ion to the sum of SO2 and PM2.5 sulfate ion by distance from the source compared with ambient based estimates. The model was less consistent in capturing downwind ambient based trends in conversion of NOX to NOY from these sources. Source sensitivity approaches capture near-source O3 titration by fresh NO emissions, in particular subgrid plume treatment. However, capturing this near-source chemical feature did not translate into better downwind peak estimates of single source O3 impacts. The model estimated O3 production from these sources but often was lower than ambient based source production. The downwind transect ambient measurements, in particular secondary PM2.5 and O3, have some level of contribution from other sources which makes direct comparison with model source contribution challenging. Model source attribution results suggest contribution to secondary pollutants from multiple sources even where primary pollutants indicate the presence of a single source. The National Exposure Research Laboratory (NERL) Computational Exposure Division (CED) develops and evaluates data, deci

  10. Application of the Approximate Bayesian Computation methods in the stochastic estimation of atmospheric contamination parameters for mobile sources

    NASA Astrophysics Data System (ADS)

    Kopka, Piotr; Wawrzynczak, Anna; Borysiewicz, Mieczyslaw

    2016-11-01

    In this paper the Bayesian methodology, known as Approximate Bayesian Computation (ABC), is applied to the problem of the atmospheric contamination source identification. The algorithm input data are on-line arriving concentrations of the released substance registered by the distributed sensors network. This paper presents the Sequential ABC algorithm in detail and tests its efficiency in estimation of probabilistic distributions of atmospheric release parameters of a mobile contamination source. The developed algorithms are tested using the data from Over-Land Atmospheric Diffusion (OLAD) field tracer experiment. The paper demonstrates estimation of seven parameters characterizing the contamination source, i.e.: contamination source starting position (x,y), the direction of the motion of the source (d), its velocity (v), release rate (q), start time of release (ts) and its duration (td). The online-arriving new concentrations dynamically update the probability distributions of search parameters. The atmospheric dispersion Second-order Closure Integrated PUFF (SCIPUFF) Model is used as the forward model to predict the concentrations at the sensors locations.
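Sequential ABC refines a population of candidate parameters over arriving data, but the underlying accept/reject logic can be shown with plain rejection ABC and a toy forward model standing in for SCIPUFF. Everything here is illustrative: the real problem estimates seven source parameters against sensor-network concentrations:

```python
import numpy as np

rng = np.random.default_rng(7)

def forward(q, distances):
    """Toy forward model: concentration falls off as q / d^2 (a stand-in
    for the atmospheric dispersion model; purely illustrative)."""
    return q / distances ** 2

def abc_rejection(observed, distances, n_draws=20000, eps=0.5):
    """Rejection ABC: keep prior draws whose simulated sensor readings
    fall within eps of the observations (Euclidean distance)."""
    accepted = []
    for _ in range(n_draws):
        q = rng.uniform(0.0, 20.0)            # flat prior on release rate
        sim = forward(q, distances)
        if np.linalg.norm(sim - observed) < eps:
            accepted.append(q)
    return np.array(accepted)

distances = np.array([1.0, 2.0, 3.0])
observed = forward(10.0, distances)           # noiseless readings for q = 10
posterior = abc_rejection(observed, distances)
print(round(posterior.mean(), 1))
```

The accepted draws approximate the posterior without ever evaluating a likelihood, which is the property that makes ABC attractive when the forward model is a black-box dispersion code.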

  11. Source spectral variation and yield estimation for small, near-source explosions

    NASA Astrophysics Data System (ADS)

    Yoo, S.; Mayeda, K. M.

    2012-12-01

    Significant S-wave generation is always observed from explosion sources which can lead to difficulty in discriminating explosions from natural earthquakes. While there are numerous S-wave generation mechanisms that are currently the topic of significant research, the mechanisms all remain controversial and appear to be dependent upon the near-source emplacement conditions of that particular explosion. To better understand the generation and partitioning of the P and S waves from explosion sources and to enhance the identification and discrimination capability of explosions, we investigate near-source explosion data sets from the 2008 New England Damage Experiment (NEDE), the Humble-Redwood (HR) series of explosions, and a Massachusetts quarry explosion experiment. We estimate source spectra and characteristic source parameters using moment tensor inversions, direct P and S waves multi-taper analysis, and improved coda spectral analysis using high quality waveform records from explosions from a variety of emplacement conditions (e.g., slow/fast burning explosive, fully tamped, partially tamped, single/ripple-fired, and below/above ground explosions). The results from direct and coda waves are compared to theoretical explosion source model predictions. These well-instrumented experiments provide us with excellent data from which to document the characteristic spectral shape, relative partitioning between P and S-waves, and amplitude/yield dependence as a function of HOB/DOB. The final goal of this study is to populate a comprehensive seismic source reference database for small yield explosions based on the results and to improve nuclear explosion monitoring capability.

  12. Economic impact of cystic echinococcosis in Peru.

    PubMed

    Moro, Pedro L; Budke, Christine M; Schantz, Peter M; Vasquez, Julio; Santivañez, Saul J; Villavicencio, Jaime

    2011-05-01

    Cystic echinococcosis (CE) constitutes an important public health problem in Peru. However, no studies have attempted to estimate the monetary and non-monetary impact of CE in Peruvian society. We used official and published sources of epidemiological and economic information to estimate direct and indirect costs associated with livestock production losses and human disease in addition to surgical CE-associated disability adjusted life years (DALYs) lost. The total estimated cost of human CE in Peru was U.S.$2,420,348 (95% CI:1,118,384-4,812,722) per year. Total estimated livestock-associated costs due to CE ranged from U.S.$196,681 (95% CI:141,641-251,629) if only direct losses (i.e., cattle and sheep liver destruction) were taken into consideration to U.S.$3,846,754 (95% CI:2,676,181-4,911,383) if additional production losses (liver condemnation, decreased carcass weight, wool losses, decreased milk production) were accounted for. An estimated 1,139 (95% CI: 861-1,489) DALYs were also lost due to surgical cases of CE. This preliminary and conservative assessment of the socio-economic impact of CE on Peru, which is based largely on official sources of information, very likely underestimates the true extent of the problem. Nevertheless, these estimates illustrate the negative economic impact of CE in Peru.

  13. The Height of a White-Light Flare and its Hard X-Ray Sources

    NASA Technical Reports Server (NTRS)

    Oliveros, Juan-Carlos Martinez; Hudson, Hugh S.; Hurford, Gordon J.; Krucker, Säm; Lin, R. P.; Lindsey, Charles; Couvidat, Sebastien; Schou, Jesper; Thompson, W. T.

    2012-01-01

    We describe observations of a white-light (WL) flare (SOL2011-02-24T07:35:00, M3.5) close to the limb of the Sun, from which we obtain estimates of the heights of the optical continuum sources and those of the associated hard X-ray (HXR) sources. For this purpose, we use HXR images from the Reuven Ramaty High Energy Spectroscopic Imager and optical images at 6173 Ang. from the Solar Dynamics Observatory. We find that the centroids of the impulsive-phase emissions in WL and HXRs (30-80 keV) match closely in central distance (angular displacement from Sun center), within uncertainties of order 0.2 arcsec. This directly implies a common source height for these radiations, strengthening the connection between visible flare continuum formation and the accelerated electrons. We also estimate the absolute heights of these emissions as vertical distances from Sun center. Such a direct estimation has not been done previously, to our knowledge. Using a simultaneous 195 Ang. image from the Solar-Terrestrial RElations Observatory spacecraft to identify the heliographic coordinates of the flare footpoints, we determine mean heights above the photosphere (as normally defined; tau = 1 at 5000 Ang.) of 305 +/- 170 km and 195 +/- 70 km, respectively, for the centroids of the HXR and WL footpoint sources of the flare. These heights are unexpectedly low in the atmosphere, and are consistent with the expected locations of tau = 1 for the 6173 Ang. and the approximately 40 keV photons observed, respectively.

  14. Adaptively Reevaluated Bayesian Localization (ARBL). A Novel Technique for Radiological Source Localization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miller, Erin A.; Robinson, Sean M.; Anderson, Kevin K.

    2015-01-19

    Here we present a novel technique for the localization of radiological sources in urban or rural environments from an aerial platform. The technique is based on a Bayesian approach to localization, in which measured count rates in a time series are compared with predicted count rates from a series of pre-calculated test sources to define a likelihood. This technique is then expanded by using a localized treatment with a limited field of view (FOV), coupled with a likelihood-ratio reevaluation, allowing for real-time computation on commodity hardware for arbitrarily complex detector models and terrain. In particular, detectors with inherent asymmetry of response (such as those employing internal collimation or self-shielding for enhanced directional awareness) are leveraged by this approach to provide improved localization. Results from the localization technique are shown for simulated flight data using monolithic as well as directionally-aware detector models, and the capability of the methodology to locate radioisotopes is estimated for several test cases. This localization technique is shown to facilitate urban search by allowing quick and adaptive estimates of source location, in many cases from a single flyover near a source. In particular, this method represents a significant advancement over earlier methods such as full-field Bayesian likelihood, which is not generally fast enough to allow for broad-field search in real time, and highest-net-counts estimation, which has a localization error that depends strongly on flight path and cannot generally operate without exhaustive search.
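The likelihood comparison at the heart of such an approach can be illustrated with a toy grid search: score a measured count-rate series against count rates predicted for each candidate source position under a Poisson model, and keep the most likely candidate. This sketch is a hypothetical simplification (a 2-D grid, an unshielded inverse-square detector response, and noise-free "measurements"), not the ARBL implementation:

```python
import math

def poisson_loglike(counts, expected):
    # Poisson log-likelihood of the measured count series given expected rates
    return sum(c * math.log(e) - e - math.lgamma(c + 1)
               for c, e in zip(counts, expected))

def predicted_counts(src, path, strength, background):
    # inverse-square detector response plus a constant background rate
    return [background + strength / ((px - src[0]) ** 2 + (py - src[1]) ** 2 + 1.0)
            for px, py in path]

path = [(float(x), 0.0) for x in range(11)]               # straight flight line
counts = predicted_counts((3.0, 4.0), path, 500.0, 10.0)  # noise-free "measurements"

# score every candidate grid position and keep the most likely one
candidates = [(float(gx), float(gy)) for gx in range(11) for gy in range(1, 11)]
best = max(candidates,
           key=lambda g: poisson_loglike(counts, predicted_counts(g, path, 500.0, 10.0)))
```

The paper's contribution is making this tractable in real time by restricting the comparison to a limited field of view and reevaluating likelihood ratios as the platform moves.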

  15. Source apportionment of PM(2.5) in the harbour-industrial area of Brindisi (Italy): identification and estimation of the contribution of in-port ship emissions.

    PubMed

    Cesari, D; Genga, A; Ielpo, P; Siciliano, M; Mascolo, G; Grasso, F M; Contini, D

    2014-11-01

    Harbours are important for the economic and social development of coastal areas, but they also represent an anthropogenic source of emissions often located near urban centres and industrial areas. This increases the difficulty of distinguishing the harbour contribution from that of other sources. The aim of this work is the characterisation of the main sources of PM2.5 acting on the Brindisi harbour-industrial area, trying to pinpoint the contribution of in-port ship emissions to primary and secondary PM2.5. Brindisi is an important port-city of the Adriatic Sea, considered a hot-spot for anthropogenic environmental pressures at the national level. Measurements were performed collecting PM2.5 samples and characterising the concentrations of 23 chemical species (water-soluble organic and inorganic carbon; major ions: SO4(2-), NO3(-), NH4(+), Cl(-), C2O4(2-), Na(+), K(+), Mg(2+), Ca(2+); and elements: Ni, Cu, V, Mn, As, Pb, Cr, Sb, Fe, Al, Zn, and Ti). These species represent, on average, 51.4% of PM2.5 and were used for source apportionment via PMF. The contributions of eight sources were estimated: crustal (16.4±0.9% of PM2.5), aged marine (2.6±0.5%), crustal carbonates (7.7±0.3%), ammonium sulphate (27.3±0.8%), biomass burning-fires (11.7±0.7%), traffic (16.4±1.7%), industrial (0.4±0.3%) and a mixed oil combustion-industrial source including ship emissions in the harbour (15.3±1.3%). The PMF did not separate the in-port ship emission contribution from industrial releases. However, correlating the estimated contributions with meteorology showed directionality: the oil combustion and sulphate contributions, as well as the V/Ni ratio, increased in the harbour direction with respect to the direction of the urban area. This allowed the use of V as a marker of the primary ship contribution to PM2.5 (2.8±1.1%). The secondary contribution of oil combustion to non-sea-salt sulphate, nssSO4(2-), was estimated to be 1.3 μg/m(3) (about 40% of total nssSO4(2-), or 11% of PM2.5).
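The receptor-modelling step described above factorizes a samples-by-species concentration matrix into nonnegative source contributions and source profiles. The sketch below runs plain multiplicative-update NMF on synthetic data as a simplified stand-in for PMF (real PMF additionally weights each matrix element by its measurement uncertainty); the dimensions mirror the study (23 species, 8 sources), everything else is illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_species, n_sources = 200, 23, 8   # dimensions mirror the study

# synthetic nonnegative data: contributions G_true times profiles F_true
G_true = rng.random((n_samples, n_sources))
F_true = rng.random((n_sources, n_species))
X = G_true @ F_true

# multiplicative-update NMF (Lee-Seung); unlike PMF, no per-element
# uncertainty weighting is applied here
G = rng.random((n_samples, n_sources)) + 0.1
F = rng.random((n_sources, n_species)) + 0.1
for _ in range(500):
    F *= (G.T @ X) / (G.T @ G @ F + 1e-12)
    G *= (X @ F.T) / (G @ F @ F.T + 1e-12)

rel_err = np.linalg.norm(X - G @ F) / np.linalg.norm(X)
```

The multiplicative updates keep both factors nonnegative by construction, which is what lets the recovered columns of F be interpreted as chemical source profiles.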

  16. Tuning into Scorpius X-1: adapting a continuous gravitational-wave search for a known binary system

    NASA Astrophysics Data System (ADS)

    Meadors, Grant David; Goetz, Evan; Riles, Keith

    2016-05-01

    We describe how the TwoSpect data analysis method for continuous gravitational waves (GWs) has been tuned for directed sources such as the low-mass X-ray binary (LMXB) Scorpius X-1 (Sco X-1). Simulations of the orbital and GW parameters of Sco X-1 were generated for a comparison of five search algorithms. Whereas that comparison focused on relative performance, here the simulations help quantify the sensitivity enhancement and parameter-estimation abilities of this directed method, derived from an all-sky search for unknown sources, using doubly Fourier-transformed data. Sensitivity is shown to be enhanced when the source sky location and period are known, because we can run a fully templated search, bypassing the all-sky hierarchical stage that uses an incoherent harmonic sum. The GW strain and frequency, as well as the projected semi-major axis of the binary system, are recovered, and their uncertainties estimated, for simulated signals that are detected. Upper limits on GW strain are set for undetected signals. Applications to future GW observatory data are discussed. Robust against spin wandering and computationally tractable despite an unknown frequency, this directed search is an important new tool for finding gravitational signals from LMXBs.

  17. Ammonia emissions from non-agricultural sources in the UK

    NASA Astrophysics Data System (ADS)

    Sutton, M. A.; Dragosits, U.; Tang, Y. S.; Fowler, D.

    A detailed literature review has been undertaken of the magnitude of non-agricultural sources of ammonia (NH3) in the United Kingdom. Key elements of the work included estimation of nitrogen (N) excreted by different sources (birds, animals, babies, human sweat), review of miscellaneous combustion sources, as well as identification of industrial sources and use of NH3 as a solvent. Overall, the total non-agricultural emission of NH3 from the UK in 1996 is estimated here as 54 (27-106) kt NH3-N yr-1, although this includes 11 (6-23) kt yr-1 from agriculture-related sources (sewage sludge spreading, biomass burning and agro-industry). Compared with previous estimates for 1990, component source magnitudes have changed both because of revised average emissions per source unit (emission factors) and changes in source activity between 1990 and 1996. Sources with larger average emission factors than before include horses, wild animals and sea bird colonies, industry, sugar beet processing, household products and non-agricultural fertilizer use, with the last three sources being included for the first time. Sources with smaller emission factors than before include: land spreading of sewage sludge, direct human emissions (sweat, breath, smoking, infants), pets (cats and dogs) and fertilizer manufacture. Between 1990 and 1996 source activities increased for sewage spreading (due to reduced dumping at sea) and transport (due to increased use of catalytic converters), but decreased for coal combustion. Combined with the current UK estimate of agricultural NH3 emissions of 229 kt N yr-1 (1996), total UK NH3 emissions are estimated at 283 kt N yr-1. Allowing for an import of reduced nitrogen (NHx) of 30 kt N yr-1 and deposition of 230 kt N yr-1, these figures imply an export of 83 kt NH3-N yr-1.
Although this export is larger than previously estimated, due to the larger contribution of non-agricultural NH3 emissions, it is still insufficient to balance the UK budget, for which around 150 kt NH3-N yr-1 are estimated to be exported. The shortfall in the budget is, nevertheless, well within the range of uncertainty of the total emissions.
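The budget closure quoted above can be checked directly from the figures in the abstract (all values in kt N per year):

```python
# UK ammonia-nitrogen budget, using the figures quoted in the abstract
non_agricultural = 54
agricultural = 229
total_emissions = non_agricultural + agricultural        # 283 kt N/yr

imported = 30    # import of reduced nitrogen (NHx)
deposited = 230  # deposition over the UK
implied_export = total_emissions + imported - deposited  # 83 kt N/yr
```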

  18. Comparison of ocean mass content change from direct and inversion based approaches

    NASA Astrophysics Data System (ADS)

    Uebbing, Bernd; Kusche, Jürgen; Rietbroek, Roelof

    2017-04-01

    The GRACE satellite mission provides an indispensable tool for measuring oceanic mass variations. Such time series are essential for separating global mean sea-level rise into thermosteric and mass-driven contributions, and thus for constraining ocean heat content and (deep) ocean warming when viewed together with altimetry and Argo data. However, published estimates over the GRACE era differ, and not only because of the time windows considered. Here, we look into the sources of such differences with direct and inversion-based approaches. Deriving ocean mass time series requires several processing steps: choosing a GRACE (and altimetry and Argo) product, the data coverage, and the masks and filters to be applied in either the spatial or spectral domain; in addition, corrections related to spatial leakage, GIA and geocenter motion need to be accounted for. In this study, we quantify the effects of the individual processing choices and assumptions of the direct and inversion-based approaches used to derive ocean mass content change. Furthermore, we compile the different estimates from the existing literature and sources to highlight the differences.

  19. Standardized shrinking LORETA-FOCUSS (SSLOFO): a new algorithm for spatio-temporal EEG source reconstruction.

    PubMed

    Liu, Hesheng; Schimpf, Paul H; Dong, Guoya; Gao, Xiaorong; Yang, Fusheng; Gao, Shangkai

    2005-10-01

    This paper presents a new algorithm called Standardized Shrinking LORETA-FOCUSS (SSLOFO) for solving the electroencephalogram (EEG) inverse problem. Multiple techniques are combined in a single procedure to robustly reconstruct the underlying source distribution with high spatial resolution. This algorithm uses a recursive process which takes the smooth estimate of sLORETA as initialization and then employs the re-weighted minimum norm introduced by FOCUSS. An important technique called standardization is involved in the recursive process to enhance the localization ability. The algorithm is further improved by automatically adjusting the source space according to the estimate of the previous step, and by the inclusion of temporal information. Simulation studies are carried out on both spherical and realistic head models. The algorithm achieves very good localization ability on noise-free data. It is capable of recovering complex source configurations with arbitrary shapes and can produce high quality images of extended source distributions. We also characterized the performance with noisy data in a realistic head model. An important feature of this algorithm is that the temporal waveforms are clearly reconstructed, even for closely spaced sources. This provides a convenient way to estimate neural dynamics directly from the cortical sources.
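The re-weighted minimum-norm recursion at the core of SSLOFO follows the FOCUSS pattern: each iteration solves a weighted minimum-norm problem with weights taken from the previous estimate, progressively concentrating energy on a sparse support. Below is a minimal FOCUSS-style sketch on a random lead-field (plain minimum norm standing in for the sLORETA initialization, and without the standardization and source-space-shrinking steps the paper adds); all dimensions and values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
n_sensors, n_dipoles = 16, 40
L = rng.standard_normal((n_sensors, n_dipoles))   # random stand-in lead field

x_true = np.zeros(n_dipoles)                      # sparse ground truth:
x_true[[5, 22]] = [1.0, -0.8]                     # two active sources
y = L @ x_true                                    # noise-free sensor data

# minimum-norm initialization (standing in for sLORETA), then FOCUSS
# re-weighting: weights are the magnitudes of the previous estimate
x = np.linalg.pinv(L) @ y
for _ in range(30):
    W = np.diag(np.abs(x))
    x = W @ np.linalg.pinv(L @ W) @ y

support = set(np.argsort(np.abs(x))[-2:])         # two strongest dipoles
```

The re-weighting is what sharpens the smooth minimum-norm estimate into a focal one; SSLOFO's standardization step additionally corrects the localization bias of this recursion.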

  20. Sources of atmospheric methane in the south Florida environment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Harriss, R.C.; Sebacher, D.I.; Bartlett, K.B.

    1988-09-01

    Direct measurement of methane (CH4) flux from wetland ecosystems of south Florida demonstrates that freshwater wet prairies and inundated saw-grass marsh are the dominant sources of atmospheric CH4 in the region. Fluctuations in soil moisture are an important environmental factor controlling both seasonal and interannual fluctuations in CH4 emissions from undisturbed wetlands. Land use estimates for 1900 and 1973 were used to calculate regional CH4 flux. Human settlement in south Florida has modified wetland sources of CH4, reducing the natural prairie and marsh sources by 37%. During the same period, impoundments and disturbed wetlands were created which produce CH4 at rates approximately 50% higher than the natural wetlands they replaced. Preliminary estimates of urban and ruminant sources of CH4, based on extrapolation from literature data, indicate these sources may now contribute approximately 23% of the total regional source. It was estimated that the integrated effects of urban and agricultural development in south Florida between 1900 and 1973 resulted in a 26% enhancement of CH4 flux to the troposphere. 35 refs., 3 figs., 6 tabs.

  1. Contributions of atmospheric nitrogen deposition to U.S. estuaries: Summary and conclusions: Chapter 8

    USGS Publications Warehouse

    Stacey, Paul E.; Greening, Holly; Kremer, James N.; Peterson, David; Tomasko, David A.; Valigura, Richard A.; Alexander, Richard B.; Castro, Mark S.; Meyers, Tilden P.; Paerl, Hans W.; Stacey, Paul E.; Turner, R. Eugene

    2001-01-01

    A NOAA project was initiated in 1998, with support from the U.S. EPA, to develop state-of-the-art estimates of atmospheric N deposition to estuarine watersheds and water surfaces and its delivery to the estuaries. Work groups were formed to address N deposition rates, indirect (from the watershed) yields from atmospheric and other anthropogenic sources, and direct deposition on the estuarine waterbodies, and to evaluate the levels of uncertainty within the estimates. Watershed N yields were estimated using both a land-use-based process approach and a national (SPARROW) model, compared to each other, and compared to estimates of N yield from the literature. The total N yields predicted by the national model were similar to values found in the literature, while the land-use-derived estimates were consistently higher. Atmospheric N yield estimates were within a similar range for the two approaches, but tended to be higher in the land-use-based estimates and were not well correlated. Median atmospheric N yields were around 15% of the total N yield for both groups, but ranged as high as 60% when both direct and indirect deposition were considered. Although not the dominant source of anthropogenic N, atmospheric N is, and will undoubtedly continue to be, an important factor in culturally eutrophied estuarine systems, warranting additional research and management attention.

  2. Two-Component Structure of the Radio Source 0014+813 from VLBI Observations within the CONT14 Program

    NASA Astrophysics Data System (ADS)

    Titov, O. A.; Lopez, Yu. R.

    2018-03-01

    We consider a method of reconstructing the structure delay of extended radio sources without constructing their radio images. The residuals derived after the adjustment of geodetic VLBI observations are used for this purpose. We show that the simplest model of a radio source consisting of two point components can be represented by four parameters (the angular separation of the components, the mutual orientation relative to the poleward direction, the flux-density ratio, and the spectral index difference) that are determined for each baseline of a multi-baseline VLBI network. The efficiency of this approach is demonstrated by estimating the coordinates of the radio source 0014+813 observed during the two-week CONT14 program organized by the International VLBI Service (IVS) in May 2014. Large systematic deviations have been detected in the residuals of the observations for the radio source 0014+813. The averaged characteristics of the radio structure of 0014+813 at a frequency of 8.4 GHz can be calculated from these deviations. Our modeling using four parameters has confirmed that the source consists of two components at an angular separation of 0.5 mas in the north-south direction. Using the structure delay when adjusting the CONT14 observations leads to a correction of the average declination estimate for the radio source 0014+813 by 0.070 mas.

  3. Infant mortality in the Marshall Islands.

    PubMed

    Levy, S J; Booth, H

    1988-12-01

    Levy and Booth present previously unpublished infant mortality rates for the Marshall Islands. They use an indirect method to estimate infant mortality from the 1973 and 1980 censuses, then apply indirect and direct methods of estimation to data from the Marshall Islands Women's Health Survey of 1985. Comparing the results with estimates of infant mortality obtained from vital registration data enables them to estimate the extent of underregistration of infant deaths. The authors conclude that the 1973 census appears to be the most valid information source. Direct estimates from the Women's Health Survey data suggest that infant mortality has increased since 1970-1974, whereas the indirect estimates indicate a decreasing trend in infant mortality rates, converging with the direct estimates in more recent years. In view of increased efforts to improve maternal and child health in the mid-1970s, the decreasing trend is plausible. It is impossible to accurately estimate infant mortality in the Marshall Islands during 1980-1984 from the available data. Estimates based on registration data for 1975-1979 are at least 40% too low. The authors speculate that the estimate of 33 deaths per 1000 live births obtained from registration data for 1984 is 40-50% too low. In round figures, a value of 60 deaths per 1000 may be taken as the final estimate for 1980-1984.
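The final adjustment quoted above is simple arithmetic: if registration misses 40-50% of infant deaths, the true rate is the registered rate divided by the fraction actually captured.

```python
# registered 1984 infant mortality rate, deaths per 1,000 live births
registered = 33.0

# implied true rate under 40% and 50% underregistration
low = registered / (1 - 0.40)    # 55 per 1,000
high = registered / (1 - 0.50)   # 66 per 1,000
midpoint = (low + high) / 2      # ~60, the round figure adopted
```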

  4. Sparsity-Aware DOA Estimation Scheme for Noncircular Source in MIMO Radar

    PubMed Central

    Wang, Xianpeng; Wang, Wei; Li, Xin; Liu, Qi; Liu, Jing

    2016-01-01

    In this paper, a novel sparsity-aware direction of arrival (DOA) estimation scheme for a noncircular source is proposed in multiple-input multiple-output (MIMO) radar. In the proposed method, the reduced-dimensional transformation technique is adopted to eliminate the redundant elements. Then, exploiting the noncircularity of signals, a joint sparsity-aware scheme based on the reweighted l1 norm penalty is formulated for DOA estimation, in which the diagonal elements of the weight matrix are the coefficients of the noncircular MUSIC-like (NC MUSIC-like) spectrum. Compared to the existing l1 norm penalty-based methods, the proposed scheme provides higher angular resolution and better DOA estimation performance. Results from numerical experiments are used to show the effectiveness of our proposed method. PMID:27089345

  5. The Chandra Source Catalog: Background Determination and Source Detection

    NASA Astrophysics Data System (ADS)

    McCollough, Michael; Rots, Arnold; Primini, Francis A.; Evans, Ian N.; Glotfelty, Kenny J.; Hain, Roger; Anderson, Craig S.; Bonaventura, Nina R.; Chen, Judy C.; Davis, John E.; Doe, Stephen M.; Evans, Janet D.; Fabbiano, Giuseppina; Galle, Elizabeth C.; Danny G. Gibbs, II; Grier, John D.; Hall, Diane M.; Harbo, Peter N.; He, Xiang Qun (Helen); Houck, John C.; Karovska, Margarita; Kashyap, Vinay L.; Lauer, Jennifer; McCollough, Michael L.; McDowell, Jonathan C.; Miller, Joseph B.; Mitschang, Arik W.; Morgan, Douglas L.; Mossman, Amy E.; Nichols, Joy S.; Nowak, Michael A.; Plummer, David A.; Refsdal, Brian L.; Siemiginowska, Aneta L.; Sundheim, Beth A.; Tibbetts, Michael S.; van Stone, David W.; Winkelman, Sherry L.; Zografou, Panagoula

    2009-09-01

    The Chandra Source Catalog (CSC) is a major project in which all of the pointed imaging observations taken by the Chandra X-Ray Observatory are used to generate one of the most extensive X-ray source catalogs produced to date. Early in the development of the CSC it was recognized that the ability to estimate local background levels in an automated fashion would be critical for essential CSC tasks such as source detection, photometry, sensitivity estimates, and source characterization. We present a discussion of how such background maps are created directly from the Chandra data and how they are used in source detection. The general background for Chandra observations is rather smoothly varying, containing only low spatial frequency components. However, in the case of ACIS data, a high spatial frequency component is added that is due to the readout streaks of the CCD chips. We discuss how these components can be estimated reliably using the Chandra data and what limitations and caveats should be considered in their use. We will discuss the source detection algorithm used for the CSC and the effects of the background images on the detection results. We will also touch on some of the Catalog Inclusion and Quality Assurance criteria applied to the source detection results. This work is supported by NASA contract NAS8-03060 (CXC).
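The background-estimation-plus-detection idea can be illustrated with a toy sketch: estimate a background level from the image itself, then flag pixels that exceed it by several Poisson standard deviations. Here a single global median stands in for the CSC's spatially varying background maps (which also model the ACIS readout-streak component), and the image, source, and threshold are all made up:

```python
import numpy as np

rng = np.random.default_rng(7)
img = rng.poisson(2.0, size=(64, 64)).astype(float)  # smooth Poisson background
img[30:33, 40:43] += 50.0                            # one bright point source

# crude background estimate: the global median of the image
bkg = float(np.median(img))

# flag pixels exceeding the background by 5 Poisson standard deviations
detections = img > bkg + 5.0 * np.sqrt(bkg)
```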

  6. Chandra Source Catalog: Background Determination and Source Detection

    NASA Astrophysics Data System (ADS)

    McCollough, Michael L.; Rots, A. H.; Primini, F. A.; Evans, I. N.; Glotfelty, K. J.; Hain, R.; Anderson, C. S.; Bonaventura, N. R.; Chen, J. C.; Davis, J. E.; Doe, S. M.; Evans, J. D.; Fabbiano, G.; Galle, E.; Gibbs, D. G.; Grier, J. D.; Hall, D. M.; Harbo, P. N.; He, X.; Houck, J. C.; Karovska, M.; Lauer, J.; McDowell, J. C.; Miller, J. B.; Mitschang, A. W.; Morgan, D. L.; Nichols, J. S.; Nowak, M. A.; Plummer, D. A.; Refsdal, B. L.; Siemiginowska, A. L.; Sundheim, B. A.; Tibbetts, M. S.; Van Stone, D. W.; Winkelman, S. L.; Zografou, P.

    2009-01-01

    The Chandra Source Catalog (CSC) is a major project in which all of the pointed imaging observations taken by the Chandra X-Ray Observatory will be used to generate the most extensive X-ray source catalog produced to date. Early in the development of the CSC it was recognized that the ability to estimate local background levels in an automated fashion would be critical for essential CSC tasks such as source detection, photometry, sensitivity estimates, and source characterization. We present a discussion of how such background maps are created directly from the Chandra data and how they are used in source detection. The general background for Chandra observations is rather smoothly varying, containing only low spatial frequency components. However, in the case of ACIS data, a high spatial frequency component is added that is due to the readout streaks of the CCD chips. We discuss how these components can be estimated reliably using the Chandra data and what limitations and caveats should be considered in their use. We will discuss the source detection algorithm used for the CSC and the effects of the background images on the detection results. We will also touch on some of the Catalog Inclusion and Quality Assurance criteria applied to the source detection results. This work is supported by NASA contract NAS8-03060 (CXC).

  7. Direct-location versus verbal report methods for measuring auditory distance perception in the far field.

    PubMed

    Etchemendy, Pablo E; Spiousas, Ignacio; Calcagno, Esteban R; Abregú, Ezequiel; Eguia, Manuel C; Vergara, Ramiro O

    2018-06-01

    In this study we evaluated whether a method of direct location is an appropriate response method for measuring auditory distance perception of far-field sound sources. We designed an experimental set-up that allows participants to indicate the distance at which they perceive the sound source by moving a visual marker. We termed this method Cross-Modal Direct Location (CMDL), since the response procedure involves the visual modality while the stimulus is presented through the auditory modality. Three experiments were conducted with sound sources located from 1 to 6 m. The first compared the perceived distances obtained using either the CMDL device or verbal report (VR), the response method most frequently used for reporting auditory distance in the far field, and found differences in response compression and bias. In Experiment 2, participants reported visual distance estimates to the visual marker, which were found to be highly accurate. We then asked the same group of participants to report VR estimates of auditory distance and found that the spatial visual information obtained from the previous task did not influence their reports. Finally, Experiment 3 compared the same responses as Experiment 1, but with the two methods interleaved, showing a weak but complex mutual influence. However, the estimates obtained with each method remained statistically different. Our results show that the auditory distance psychophysical functions obtained with the CMDL method are less susceptible to the previously reported underestimation for distances over 2 m.

  8. 2-D Path Corrections for Local and Regional Coda Waves: A Test of Transportability

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mayeda, K M; Malagnini, L; Phillips, W S

    2005-07-13

    Reliable estimates of the seismic source spectrum are necessary for accurate magnitude, yield, and energy estimation. In particular, how seismic radiated energy scales with increasing earthquake size has been the focus of recent debate within the community and has direct implications for earthquake source physics studies as well as hazard mitigation. The 1-D coda methodology of Mayeda et al. [2003] has provided the lowest-variance estimate of the source spectrum when compared against traditional approaches that use direct S-waves, thus making it ideal for networks that have sparse station distribution. The 1-D coda methodology has been mostly confined to regions of approximately uniform complexity. For larger, more geophysically complicated regions, 2-D path corrections may be required. We will compare the performance of 1-D versus 2-D path corrections in a variety of regions. First, the complicated tectonics of the northern California region coupled with high-quality broadband seismic data provide for an ideal "apples-to-apples" test of 1-D and 2-D path assumptions on direct waves and their coda. Next, we will compare results for the Italian Alps using high-frequency data from the University of Genoa. For Northern California, we used the same station and event distribution and compared 1-D and 2-D path corrections and observed the following results: (1) 1-D coda results reduced the amplitude variance relative to direct S-waves by roughly a factor of 8 (800%); (2) applying a 2-D correction to the coda resulted in up to 40% variance reduction from the 1-D coda results; (3) 2-D direct S-wave results, though better than 1-D direct waves, were significantly worse than the 1-D coda.
We found that coda-based moment-rate source spectra derived from the 2-D approach were essentially identical to those from the 1-D approach for frequencies less than ~0.7 Hz; however, for the high frequencies (0.7 ≤ f ≤ 8.0 Hz), the 2-D approach resulted in inter-station scatter that was generally 10-30% smaller. For complex regions where data are plentiful, a 2-D approach can significantly improve upon the simple 1-D assumption. In regions where only a 1-D coda correction is available, it is still preferable over 2-D direct-wave-based measures.

  9. Sayarim Infrasound Calibration Explosion: Near-Source and Local Observations and Yield Estimation

    DTIC Science & Technology

    2010-09-01

    Institute of Israel (GII) at Sayarim Military Range (SMR), Negev Desert, Israel, on 26 August 2009. Near-source high-pressure values, measured...possible an upward directivity effect and asymmetric energy radiation to the atmosphere. Clear infrasound signals were well observed at distances up to...

  10. Method and Apparatus for Reducing Noise from Near Ocean Surface Sources

    DTIC Science & Technology

    2001-10-01

    reducing the acoustic noise from near-surface sources using an array processing technique that utilizes Multiple Signal Classification (MUSIC)...sources without degrading the signal level and quality of the TOI. The present invention utilizes a unique application of the MUSIC beamforming...specific algorithm that utilizes a MUSIC technique and estimates the direction of arrival (DOA) of the acoustic signals and generates output

  11. 'Between one and three million': towards the demographic reconstruction of a decade of Cambodian history (1970-79).

    PubMed

    Heuveline, P

    1998-03-01

    Estimates of mortality in Cambodia during the Khmer Rouge regime (1975-79) range from 20,000 deaths, according to former Khmer Rouge sources, to over three million victims, according to Vietnamese government sources. This paper uses an unusual data source - the 1992 electoral lists registered by the United Nations - to estimate the population size after the Khmer Rouge regime and the extent of "excess" mortality in the 1970s. These data also provide the first breakdown of population by single year of age, which allows analysis of the age structure of "excess" mortality and inference of the relative importance of violence as a cause of death in that period. The estimates derived here are more comparable with the higher estimates made in the past. In addition, the analysis of likely causes of death that could have generated the age pattern of "excess" mortality clearly shows a larger contribution of direct or violent mortality than has been previously recognized.

  12. The direct cost of epilepsy in the United States: A systematic review of estimates.

    PubMed

    Begley, Charles E; Durgin, Tracy L

    2015-09-01

    To develop estimates of the direct cost of epilepsy in the United States for the general epilepsy population and sub-populations by systematically comparing similarities and differences in types of estimates and estimation methods from recently published studies. Papers published since 1995 were identified by systematic literature search. Information on types of estimates, study designs, data sources, types of epilepsy, and estimation methods was extracted from each study. Annual per person cost estimates from methodologically similar studies were identified, converted to 2013 U.S. dollars, and compared. From 4,104 publications discovered in the literature search, 21 were selected for review. Three were added that were published after the search. Eighteen were identified that reported estimates of average annual direct costs for the general epilepsy population in the United States. For general epilepsy populations (comprising all clinically defined subgroups), total direct healthcare costs per person ranged from $10,192 to $47,862 and epilepsy-specific costs ranged from $1,022 to $19,749. Four recent studies using claims data from large general populations yielded relatively similar epilepsy-specific annual cost estimates ranging from $8,412 to $11,354. Although more difficult to compare, studies examining direct cost differences for epilepsy sub-populations indicated a consistent pattern of markedly higher costs for those with uncontrolled or refractory epilepsy, and for those with comorbidities. This systematic review found that various approaches have been used to estimate the direct costs of epilepsy in the United States. However, recent studies using large claims databases and similar methods allow estimation of the direct cost burden of epilepsy for the general disease population, and show that it is greater for some patient subgroups. 
Additional research is needed to further understand the broader economic burden of epilepsy and how it varies across subpopulations. Wiley Periodicals, Inc. © 2015 International League Against Epilepsy.

  13. Determination of differential arrival times by cross-correlating worldwide seismological data

    NASA Astrophysics Data System (ADS)

    Godano, M.; Nolet, G.; Zaroli, C.

    2012-12-01

    Cross-correlation delays are the preferred body-wave observables in global tomography. Heterogeneity is the main factor influencing delay times found by cross-correlation: not only the waveform, but also the arrival time itself is affected by differences in seismic velocity encountered along the way. An accurate method for estimating differential times of seismic arrivals across a regional array by cross-correlation was developed by VanDecar and Crosson [1990]. For the estimation of global travel-time delays in different frequency bands, Sigloch and Nolet [2006] developed a method for estimating body-wave delays using a matched filter, which requires the separate estimation of the source time function. Sigloch et al. [2008] found that waveforms often cluster in and opposite the direction of rupture propagation on the fault, confirming that the directivity effect is a major factor in shaping the waveform of large events. We propose a generalization of the VanDecar-Crosson method to which we add a correction for the directivity effect in the seismological data. The new method allows large events to be treated without the need to estimate the source time function for the computation of a matched synthetic waveform. The procedure consists of (1) detecting the directivity effect in the data and determining a rupture model (unilateral or bilateral) that explains the differences in pulse duration among the stations, (2) determining an apparent fault rupture length that explains the pulse durations, (3) removing the delay due to the directivity effect from the pulse durations by stretching or contracting the seismograms for directive and anti-directive stations, respectively, and (4) applying a generalized VanDecar-Crosson method using only delays between pairs of stations that have an acceptable correlation coefficient. We validate our method by performing tests on synthetic data. 
Results show that the errors between theoretical and measured differential arrival times are significantly reduced for the corrected data. We illustrate our method on data from several real earthquakes.
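The core measurement underlying this family of methods is the cross-correlation lag between two station recordings of the same arrival. A minimal sketch (with synthetic Ricker-wavelet waveforms, not real seismograms):

```python
import numpy as np

# Estimate the lag that best aligns two station recordings of the same
# arrival by locating the maximum of their full cross-correlation.
dt = 0.05                                   # sample interval (s), assumed
t = np.arange(0, 40, dt)

def pulse(t, t0, width=1.0):
    a = ((t - t0) / width) ** 2
    return (1 - 2 * a) * np.exp(-a)         # Ricker wavelet

true_delay = 1.35                           # s, station 2 arrives later
s1 = pulse(t, 10.0)
s2 = pulse(t, 10.0 + true_delay)

# Full cross-correlation; the lag of its maximum is the delay estimate.
cc = np.correlate(s2, s1, mode="full")
lags = (np.arange(cc.size) - (len(s1) - 1)) * dt
est_delay = lags[np.argmax(cc)]
print(f"estimated delay: {est_delay:.2f} s")   # close to 1.35 s
```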

  14. Bayesian estimation of a source term of radiation release with approximately known nuclide ratios

    NASA Astrophysics Data System (ADS)

    Tichý, Ondřej; Šmídl, Václav; Hofman, Radek

    2016-04-01

    We are concerned with estimation of a source term in the case of an accidental release from a known location, e.g. a power plant. Usually, the source term of an accidental release of radiation comprises a mixture of nuclides. The gamma dose rate measurements do not provide direct information on the source-term composition. However, physical properties of the respective nuclides (deposition properties, decay half-life) can be used when uncertain information on nuclide ratios is available, e.g. from the known reactor inventory. The proposed method is based on a linear inverse model where the observation vector y arises as a linear combination y = Mx of a source-receptor-sensitivity (SRS) matrix M and the source term x. The task is to estimate the unknown source term x. The problem is ill-conditioned, and further regularization is needed to obtain a reasonable solution. In this contribution, we assume that the nuclide ratios of the release are known with some degree of uncertainty. This knowledge is used to form the prior covariance matrix of the source term x. Due to uncertainty in the ratios, the diagonal elements of the covariance matrix are considered unknown. Positivity of the source-term estimate is guaranteed by using a multivariate truncated Gaussian distribution. Following the Bayesian approach, we estimate all parameters of the model from the data, so that y, M, and the known ratios are the only inputs of the method. Since inference of the model is intractable, we follow the Variational Bayes method, yielding an iterative algorithm for estimation of all model parameters. Performance of the method is studied on a simulated 6-hour power plant release in which 3 nuclides are released and 2 nuclide ratios are approximately known. A comparison with a method assuming unknown nuclide ratios is given to demonstrate the usefulness of the proposed approach. 
This research is supported by EEA/Norwegian Financial Mechanism under project MSMT-28477/2014 Source-Term Determination of Radionuclide Releases by Inverse Atmospheric Dispersion Modelling (STRADI).
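The skeleton of this inverse problem can be sketched without the full Variational Bayes machinery: solve y = Mx for a nonnegative x with a simple quadratic regularizer. The sketch below uses projected gradient descent as a crude analogue of the truncated-Gaussian prior; the SRS matrix, true source term, and noise level are all synthetic, and this is not the authors' algorithm:

```python
import numpy as np

# Regularized nonnegative least squares for y = M x, a stand-in for the
# Variational Bayes treatment (which also learns the prior covariance).
rng = np.random.default_rng(1)
n_obs, n_src = 40, 12
M = rng.uniform(0.0, 1.0, (n_obs, n_src))       # source-receptor-sensitivity matrix
x_true = np.abs(rng.normal(5.0, 2.0, n_src))    # true (nonnegative) source term
y = M @ x_true + rng.normal(0.0, 0.05, n_obs)   # noisy dose-rate-like observations

# Projected gradient descent on 0.5||Mx - y||^2 + 0.5*lam*||x||^2,
# projecting onto x >= 0 after each step.
lam = 1e-3
alpha = 1.0 / np.linalg.norm(M, 2) ** 2         # step size from the Lipschitz constant
x_est = np.zeros(n_src)
for _ in range(5000):
    grad = M.T @ (M @ x_est - y) + lam * x_est
    x_est = np.maximum(0.0, x_est - alpha * grad)

rel_err = np.linalg.norm(x_est - x_true) / np.linalg.norm(x_true)
print(f"relative error of recovered source term: {rel_err:.3f}")
```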

  15. Calculating depths to shallow magnetic sources using aeromagnetic data from the Tucson Basin

    USGS Publications Warehouse

    Casto, Daniel W.

    2001-01-01

    Using gridded high-resolution aeromagnetic data, the performance of several automated 3-D depth-to-source methods was evaluated over shallow control sources based on how close their depth estimates came to the actual depths to the tops of the sources. For all three control sources, only the simple analytic signal method, the local wavenumber method applied to the vertical integral of the magnetic field, and the horizontal gradient method applied to the pseudo-gravity field provided median depth estimates that were close (-11% to +14% error) to the actual depths. Careful attention to data processing was required in order to calculate a sufficient number of depth estimates and to reduce the occurrence of false depth estimates. For example, to eliminate sampling bias, high-frequency noise and interference from deeper sources, it was necessary to filter the data before calculating derivative grids and subsequent depth estimates. To obtain smooth spatial derivative grids using finite differences, the data had to be gridded at intervals less than one percent of the anomaly wavelength. Before finding peak values in the derived signal grids, it was necessary to remove calculation noise by applying a low-pass filter in the grid-line directions and to re-grid at an interval that enabled the search window to encompass only the peaks of interest. Using the methods that worked best over the control sources, depth estimates over geologic sites of interest suggested the possible occurrence of volcanics nearly 170 meters beneath a city landfill. Also, a throw of around 2 kilometers was determined for a detachment fault that has a displacement of roughly 6 kilometers.
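The "simple analytic signal method" named above can be illustrated with the textbook 2-D contact model, for which the analytic-signal amplitude over a profile is bell-shaped, |A(x)| = K/sqrt(h² + x²), and falls to 1/√2 of its peak exactly at x = ±h. Measuring that width therefore recovers the depth h. The sketch below is illustrative only; the depth, profile, and model are assumed, not taken from the study:

```python
import numpy as np

# Depth estimation from the width of a synthetic analytic-signal peak
# over a 2-D magnetic contact: |A(x)| = K / sqrt(h^2 + x^2).
h_true = 50.0                       # depth to top of source (m), assumed
x = np.arange(-500.0, 500.0, 1.0)   # profile coordinate (m)
A = 1.0 / np.sqrt(h_true**2 + x**2)

peak = A.max()
inside = np.abs(A) >= peak / np.sqrt(2)   # samples above the 1/sqrt(2) level
width = x[inside][-1] - x[inside][0]      # full width at that level
h_est = width / 2.0                       # half-width equals the depth
print(f"estimated depth: {h_est:.0f} m")
```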

  16. Stochastic sediment property inversion in Shallow Water 06.

    PubMed

    Michalopoulou, Zoi-Heleni

    2017-11-01

    Received time series at a short distance from the source allow the identification of distinct paths; four of these are the direct arrival, the surface and bottom reflections, and the sediment reflection. In this work, a Gibbs sampling method is used for the estimation of the arrival times of these paths and the corresponding probability density functions. The arrival times for the first three paths are then employed, along with linearization, for the estimation of source range and depth, water-column depth, and sound speed in the water. By propagating the densities of the arrival times through the linearized inverse problem, densities are also obtained for the above parameters, providing maximum a posteriori estimates. These estimates are employed to calculate densities and point estimates of sediment sound speed and thickness using a non-linear, grid-based model. Density computation is an important aspect of this work, because these densities express the uncertainty in the inversion for sediment properties.
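The density-propagation idea can be sketched generically: draw samples from the estimated arrival-time densities and push them through a linearized (affine) inverse to obtain densities for the geometric parameters. Everything below (the sensitivity matrix G, linearization point, and arrival-time densities) is synthetic, chosen only to illustrate the mechanics:

```python
import numpy as np

# Monte Carlo propagation of arrival-time uncertainty through an
# assumed linearized inverse m = m0 + G (t - t0).
rng = np.random.default_rng(2)
t0 = np.array([1.20, 1.45, 1.80])            # mean arrival times (s), 3 paths
sigma_t = np.array([0.01, 0.015, 0.02])      # their posterior std devs (s)
m0 = np.array([1000.0, 60.0])                # linearization point: range (m), depth (m)
G = np.array([[800.0, -300.0, 150.0],        # sensitivity of range to each arrival
              [-50.0, 400.0, -120.0]])       # sensitivity of depth to each arrival

t_samples = t0 + sigma_t * rng.standard_normal((20000, 3))
m_samples = m0 + (t_samples - t0) @ G.T      # propagated parameter samples

point_est = m_samples.mean(axis=0)           # point estimate from the density
spread = m_samples.std(axis=0)               # propagated uncertainty
print("range, depth estimates:", point_est, "+/-", spread)
```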

  17. Bounding the Role of Black Carbon in the Climate System: A Scientific Assessment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bond, Tami C.; Doherty, Sarah J.; Fahey, D. W.

    2013-06-06

    Black carbon aerosol plays a unique and important role in Earth’s climate system. Black carbon is a type of carbonaceous material with a unique combination of physical properties. Predominant sources are combustion related; namely, fossil fuels for transportation, solid fuels for industrial and residential uses, and open burning of biomass. Total global emissions of black carbon using bottom-up inventory methods are 7500 Gg yr-1 in the year 2000 with an uncertainty range of 2000 to 29000. This assessment provides an evaluation of black-carbon climate forcing that is comprehensive in its inclusion of all known and relevant processes and that is quantitative in providing best estimates and uncertainties of the main forcing terms: direct solar absorption; influence on liquid, mixed-phase, and ice clouds; and deposition on snow and ice. These effects are calculated with models, but when possible, they are evaluated with both microphysical measurements and field observations. Global atmospheric absorption attributable to black carbon is too low in many models and should be increased by about 60%. After this scaling, the best estimate for the industrial-era (1750 to 2005) direct radiative forcing of black carbon is +0.43 W m-2 with 90% uncertainty bounds of (+0.17, +0.68) W m-2. Total direct forcing by all black carbon sources in the present day is estimated as +0.49 (+0.20, +0.76) W m-2. Direct radiative forcing alone does not capture important rapid adjustment mechanisms. A framework is described and used for quantifying climate forcings and their rapid responses and feedbacks. The best estimate of industrial-era (1750 to 2005) climate forcing of black carbon through all forcing mechanisms is +0.77 W m-2 with 90% uncertainty bounds of +0.06 to +1.53 W m-2. Thus, there is a 96% probability that black carbon emissions, independent of co-emitted species, have a positive forcing and warm the climate. 
With a value of +0.77 W m-2, black carbon is likely the second most important individual climate-forcing agent in the industrial era, following carbon dioxide. Sources that emit black carbon also emit other short-lived species that may either cool or warm climate. Climate forcings from co-emitted species are estimated and used in the framework described herein. When the principal effects of co-emissions, including cooling agents such as sulfur dioxide, are included in net forcing, energy-related sources (fossil-fuel and biofuel) have a net climate forcing of +0.004 (-0.62 to +0.57) W m-2 during the first year after emission. For a few of these sources, such as diesel engines and possibly residential biofuels, warming is strong enough that eliminating all emissions from these sources would reduce net climate forcing (i.e., produce cooling). When open burning emissions, which emit high levels of organic matter, are included in the total, the best estimate of net industrial-era climate forcing by all black-carbon-rich sources becomes slightly negative (-0.08 W m-2 with 90% uncertainty bounds of -1.23 to +0.81 W m-2). The uncertainties in net climate forcing from black-carbon-rich sources are substantial, largely due to lack of knowledge about cloud interactions with both black carbon and co-emitted organic carbon. In prioritizing potential black-carbon mitigation actions, non-science factors, such as technical feasibility, costs, policy design, and implementation feasibility play important roles. The major sources of black carbon are presently in different stages with regard to the feasibility for near-term mitigation. This assessment, by evaluating the large number and complexity of the associated physical and radiative processes in black-carbon climate forcing, sets a baseline from which to improve future climate forcing estimates.
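The quoted "96% probability" of positive forcing can be sanity-checked arithmetically, assuming (as a rough approximation) a Gaussian forcing distribution and taking the 90% bounds as (+0.06, +1.53) W m-2 around the +0.77 W m-2 best estimate:

```python
import math

# Back out a Gaussian sigma from the two-sided 90% interval and compute
# the implied probability that the forcing is positive.
lo, hi = 0.06, 1.53
mean = 0.77
z90 = 1.6449                         # half-width of a two-sided 90% interval, in sigmas
sigma = (hi - lo) / (2 * z90)

def gauss_cdf(x, mu, s):
    return 0.5 * (1 + math.erf((x - mu) / (s * math.sqrt(2))))

p_positive = 1 - gauss_cdf(0.0, mean, sigma)
print(f"P(forcing > 0) = {p_positive:.2f}")   # close to 0.96
```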

  18. Optimal Estimation of the Carbonyl Sulfide Surface Flux Through Inverse Modeling of TES Observations

    NASA Astrophysics Data System (ADS)

    Kuai, L.; Worden, J.; Lee, M.; Campbell, J. E.; Kulawik, S. S.; Weidner, R. J.

    2014-12-01

    Carbonyl sulfide (OCS) is the most abundant sulfur gas in the troposphere, with a global average mixing ratio of about 500 parts per trillion (ppt). The ocean is the primary source of OCS, emitting OCS directly or its precursors, carbon disulfide and dimethyl sulfide. The most important atmospheric sink of OCS is uptake by terrestrial plants via photosynthesis. Although the global budget of atmospheric OCS has been studied, the globally integrated OCS fluxes have large uncertainties; e.g., the uncertainties of the ocean fluxes are as large as 100% or more, and a large missing ocean source is required to balance the global budget. A first tropical-ocean map of free-tropospheric OCS has been developed using retrievals from radiance measurements from the AURA Tropospheric Emission Spectrometer (TES). The monthly mean ocean data have been evaluated against aircraft profiles from the HIPPO campaign and ground data from the NOAA Mauna Loa site to estimate the biases and uncertainties in the TES OCS. We found the TES OCS data to be consistent (within the calculated uncertainties) with NOAA ground observations and HIPPO aircraft measurements, and they captured the seasonal and latitudinal variations observed by these in situ data within the estimated uncertainties. In this study, we first update the bottom-up estimate of global sources and sinks of atmospheric OCS. Global forward simulations of atmospheric OCS using the updated bottom-up fluxes with GEOS-Chem show improvement in the seasonal variation over multiple NOAA ground stations in both the northern and southern hemispheres. Inverse analysis of surface fluxes from TES OCS data will provide further constraints to estimate the missing ocean source and to understand the enhanced OCS over eastern Asia and the west Pacific, which could be driven by wind, Asian outflow, an unknown process, or a combination of all of the above. 
The investigation will provide the fundamental measurements and analysis needed to estimate the missing source in the sulfur cycle and provide the framework for extending the TES algorithm to land retrievals, which can be used directly in studies of carbon-climate feedbacks.

  19. Emission Patterns of Solar Type III Radio Bursts: Stereoscopic Observations

    NASA Technical Reports Server (NTRS)

    Thejappa, G.; MacDowall, R.; Bergamo, M.

    2012-01-01

    Simultaneous observations of solar type III radio bursts obtained by the STEREO A, B, and WIND spacecraft at low frequencies from different vantage points in the ecliptic plane are used to determine their directivity. The heliolongitudes of the sources of these bursts, estimated at different frequencies by assuming that they are located on the Parker spiral magnetic field lines emerging from the associated active regions into the spherically symmetric solar atmosphere, and the heliolongitudes of the spacecraft are used to estimate the viewing angle, which is the angle between the direction of the magnetic field at the source and the line connecting the source to the spacecraft. The normalized peak intensities at each spacecraft, Rj = Ij/ΣIj (the subscript j corresponds to the spacecraft STEREO A, B, and WIND), which are defined as the directivity factors, are determined using the time profiles of the type III bursts. It is shown that the distribution of the viewing angles divides the type III bursts into (1) bursts emitting into a very narrow cone centered around the tangent to the magnetic field, with an angular width of approximately 2 deg, and (2) bursts emitting into a wider cone with an angular width spanning from approximately -100 deg to approximately 100 deg. The plots of the directivity factors versus the viewing angles of the sources from all three spacecraft indicate that the type III emissions are very intense along the tangent to the spiral magnetic field lines at the source and steadily fall as the viewing angles increase to higher values. The comparison of these emission patterns with the computed distributions of the ray trajectories indicates that the intense bursts visible in a narrow range of angles around the magnetic field directions are probably emitted in the fundamental mode, whereas the relatively weaker bursts visible over a wide range of angles are probably emitted in the harmonic mode.
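The directivity factors defined above are simply peak intensities normalized across the three spacecraft, Rj = Ij/ΣIj. A one-line sketch with illustrative (not measured) intensity values:

```python
import numpy as np

# Normalized peak intensities across the three spacecraft; the intensity
# values here are hypothetical placeholders, not observed ones.
I = np.array([4.2e-14, 1.1e-14, 0.4e-14])   # peak intensities: STEREO A, B, WIND
R = I / I.sum()                             # directivity factors, sum to 1
print("directivity factors:", np.round(R, 3))
```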

  20. Communication Breakdown: DHS Operations During a Cyber Attack

    DTIC Science & Technology

    2010-12-01

    Presidential Directive, Malware, National Exercise, Quadrennial Homeland Security Review, Trusted Internet Connections, Zero-Day Exploits

  1. Joint Direct Attack Munition (JDAM)

    DTIC Science & Technology

    2013-12-01


  2. Effects of atmospheric variations on acoustic system performance

    NASA Technical Reports Server (NTRS)

    Nation, Robert; Lang, Stephen; Olsen, Robert; Chintawongvanich, Prasan

    1993-01-01

    Acoustic propagation over medium to long ranges in the atmosphere is subject to many complex, interacting effects. Of particular interest at this point is modeling low frequency (less than 500 Hz) propagation for the purpose of predicting ranges and bearing accuracies at which acoustic sources can be detected. A simple means of estimating how much of the received signal power propagated directly from the source to the receiver and how much was received by turbulent scattering was developed. The correlations between the propagation mechanism and detection thresholds, beamformer bearing estimation accuracies, and beamformer processing gain of passive acoustic signal detection systems were explored.

  3. Tracing methamphetamine and amphetamine sources in wastewater and receiving waters via concentration and enantiomeric profiling.

    PubMed

    Xu, Zeqiong; Du, Peng; Li, Kaiyang; Gao, Tingting; Wang, Zhenglu; Fu, Xiaofang; Li, Xiqing

    2017-12-01

    Wastewater analysis is a promising approach to monitoring illicit drug abuse in a community. However, drug use estimation via wastewater analysis may be biased by sources other than abuse. This is especially true for methamphetamine and amphetamine, as their presence in wastewater may come from many sources, such as direct disposal or excretion following administration of prescription drugs. Here we traced methamphetamine and amphetamine sources via concentration and enantiomeric profiling of the two compounds from the black market to receiving waters. Methamphetamine in wastewater was found to arise predominantly from abuse, proving the feasibility of using wastewater analysis for estimating its consumption in China. Amphetamine abuse was previously considered negligible in East and Southeast Asia. However, we found that amphetamine was abused considerably (up to 90.7 mg/1000 inh/day) in a significant number (>20%) of major cities in China. Combined concentration and enantiomeric profiling also revealed direct disposal into receiving waters of methamphetamine manufactured by different processes. These findings have important implications for monitoring of and law enforcement against methamphetamine/amphetamine abuse and related crimes in China and abroad. Copyright © 2017. Published by Elsevier B.V.

  4. Sources of international migration statistics in Africa.

    PubMed

    1984-01-01

    The sources of international migration data for Africa may be classified into 2 main categories: 1) administrative records and 2) censuses and survey data. Both categories are sources for the direct measurement of migration, but the 2nd category can also be used for the indirect estimation of net international migration. The administrative records from which data on international migration may be derived include 1) entry/departure cards or forms completed at international borders, 2) residence/work permits issued to aliens, and 3) general population registers and registers of aliens. The statistics derived from the entry/departure cards may be described as 1) land frontier control statistics and 2) port control statistics. The former refer to data derived from movements across land borders, and the latter refer to information collected at international airports and seaports. Other administrative records which are potential sources of statistics on international migration in some African countries include some limited population registers, records of the registration of aliens, and particulars of residence/work permits issued to aliens. Although frontier control data are considered the most important source of international migration statistics, in many African countries these data are too deficient to provide a satisfactory indication of the level of international migration. Thus decennial population censuses and/or sample surveys are the major sources of the available statistics on the stock and characteristics of international migrants. Indirect methods can be used to supplement census data with intercensal estimates of net migration using census data on the total population. 
This indirect method of obtaining information on migration can be used to evaluate estimates derived from frontier control records, and it also offers the means of obtaining alternative information on international migration in African countries which have not directly investigated migration topics in their censuses or surveys.
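The indirect estimation mentioned above amounts to the demographic balancing equation: net intercensal migration is whatever population change remains after natural increase (births minus deaths) is removed. A minimal sketch with entirely hypothetical figures:

```python
# Residual (indirect) method for net intercensal migration:
# net migration = (P2 - P1) - (births - deaths). Figures are hypothetical.
p1 = 5_000_000       # population at first census
p2 = 5_600_000       # population at second census
births = 900_000     # intercensal births
deaths = 450_000     # intercensal deaths

natural_increase = births - deaths
net_migration = (p2 - p1) - natural_increase
print(f"estimated net migration: {net_migration:+,}")   # +150,000
```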

  5. To compute lightness, illumination is not estimated, it is held constant.

    PubMed

    Gilchrist, Alan L

    2018-05-03

    The light reaching the eye from a surface does not by itself indicate the black-gray-white shade of the surface (called lightness), because the effects of illumination level are confounded with the reflectance of the surface. Rotating a gray paper relative to a light source alters its luminance (the intensity of light reaching the eye), but the lightness of the paper remains relatively constant. Recent publications have argued, as had Helmholtz (1866/1924), that the visual system unconsciously estimates the direction and intensity of the light source. We report experiments in which this theory was pitted against an alternative theory according to which illumination level and surface reflectance are disentangled by comparing only those surfaces that are equally illuminated, in other words, by holding illumination level constant. A 3-dimensional scene was created within which a rotated target surface would be expected to become darker gray according to the lighting-estimation theory, but lighter gray according to the equi-illumination-comparison theory, with results clearly favoring the latter. In a further experiment, cues held to indicate light source direction (cast shadows, attached shadows, and glossy highlights) were completely eliminated, and yet this had no effect on the results. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  6. Household Transmission of Vibrio cholerae in Bangladesh

    PubMed Central

    Sugimoto, Jonathan D.; Koepke, Amanda A.; Kenah, Eben E.; Halloran, M. Elizabeth; Chowdhury, Fahima; Khan, Ashraful I.; LaRocque, Regina C.; Yang, Yang; Ryan, Edward T.; Qadri, Firdausi; Calderwood, Stephen B.; Harris, Jason B.; Longini, Ira M.

    2014-01-01

    Background Vibrio cholerae infections cluster in households. This study's objective was to quantify the relative contribution of direct, within-household exposure (for example, via contamination of household food, water, or surfaces) to endemic cholera transmission. Quantifying the relative contribution of direct exposure is important for planning effective prevention and control measures. Methodology/Principal Findings Symptom histories and multiple blood and fecal specimens were prospectively collected from household members of hospital-ascertained cholera cases in Bangladesh from 2001 to 2006. We estimated the probabilities of cholera transmission through 1) direct exposure within the household and 2) contact with community-based sources of infection. The natural history of cholera infection and covariate effects on transmission were considered. Significant direct transmission (p-value<0.0001) occurred among 1414 members of 364 households. Fecal shedding of O1 El Tor Ogawa was associated with a 4.9% (95% confidence interval: 0.9%-22.8%) risk of infection among household contacts through direct exposure during an 11-day infectious period (mean length). The estimated 11-day risk of O1 El Tor Ogawa infection through exposure to community-based sources was 2.5% (0.8%-8.0%). The corresponding estimated risks for O1 El Tor Inaba and O139 infection were 3.7% (0.7%-16.6%) and 8.2% (2.1%-27.1%) through direct exposure, and 3.4% (1.7%-6.7%) and 2.0% (0.5%-7.3%) through community-based exposure. Children under 5 years old were at elevated risk of infection. Limitations of the study may have led to an underestimation of the true risk of cholera infection. For instance, available covariate data may have incompletely characterized levels of pre-existing immunity to cholera infection. Transmission via direct exposure occurring outside of the household was not considered. 
Conclusions Direct exposure contributes substantially to endemic transmission of symptomatic cholera in an urban setting. We provide the first estimate of the transmissibility of endemic cholera within prospectively-followed members of households. The role of direct transmission must be considered when planning cholera control activities. PMID:25411971

  7. Bounding the role of black carbon in the climate system: A scientific assessment

    NASA Astrophysics Data System (ADS)

    Bond, T. C.; Doherty, S. J.; Fahey, D. W.; Forster, P. M.; Berntsen, T.; DeAngelo, B. J.; Flanner, M. G.; Ghan, S.; Kärcher, B.; Koch, D.; Kinne, S.; Kondo, Y.; Quinn, P. K.; Sarofim, M. C.; Schultz, M. G.; Schulz, M.; Venkataraman, C.; Zhang, H.; Zhang, S.; Bellouin, N.; Guttikunda, S. K.; Hopke, P. K.; Jacobson, M. Z.; Kaiser, J. W.; Klimont, Z.; Lohmann, U.; Schwarz, J. P.; Shindell, D.; Storelvmo, T.; Warren, S. G.; Zender, C. S.

    2013-06-01

    Black carbon aerosol plays a unique and important role in Earth's climate system. Black carbon is a type of carbonaceous material with a unique combination of physical properties. This assessment provides an evaluation of black-carbon climate forcing that is comprehensive in its inclusion of all known and relevant processes and that is quantitative in providing best estimates and uncertainties of the main forcing terms: direct solar absorption; influence on liquid, mixed phase, and ice clouds; and deposition on snow and ice. These effects are calculated with climate models, but when possible, they are evaluated with both microphysical measurements and field observations. Predominant sources are combustion related, namely, fossil fuels for transportation, solid fuels for industrial and residential uses, and open burning of biomass. Total global emissions of black carbon using bottom-up inventory methods are 7500 Gg yr-1 in the year 2000 with an uncertainty range of 2000 to 29000. However, global atmospheric absorption attributable to black carbon is too low in many models and should be increased by a factor of almost 3. After this scaling, the best estimate for the industrial-era (1750 to 2005) direct radiative forcing of atmospheric black carbon is +0.71 W m-2 with 90% uncertainty bounds of (+0.08, +1.27) W m-2. Total direct forcing by all black carbon sources, without subtracting the preindustrial background, is estimated as +0.88 (+0.17, +1.48) W m-2. Direct radiative forcing alone does not capture important rapid adjustment mechanisms. A framework is described and used for quantifying climate forcings, including rapid adjustments. The best estimate of industrial-era climate forcing of black carbon through all forcing mechanisms, including clouds and cryosphere forcing, is +1.1 W m-2 with 90% uncertainty bounds of +0.17 to +2.1 W m-2. 
Thus, there is a very high probability that black carbon emissions, independent of co-emitted species, have a positive forcing and warm the climate. We estimate that black carbon, with a total climate forcing of +1.1 W m-2, is the second most important human emission in terms of its climate forcing in the present-day atmosphere; only carbon dioxide is estimated to have a greater forcing. Sources that emit black carbon also emit other short-lived species that may either cool or warm climate. Climate forcings from co-emitted species are estimated and used in the framework described herein. When the principal effects of short-lived co-emissions, including cooling agents such as sulfur dioxide, are included in net forcing, energy-related sources (fossil fuel and biofuel) have an industrial-era climate forcing of +0.22 (-0.50 to +1.08) W m-2 during the first year after emission. For a few of these sources, such as diesel engines and possibly residential biofuels, warming is strong enough that eliminating all short-lived emissions from these sources would reduce net climate forcing (i.e., produce cooling). When open burning emissions, which emit high levels of organic matter, are included in the total, the best estimate of net industrial-era climate forcing by all short-lived species from black-carbon-rich sources becomes slightly negative (-0.06 W m-2 with 90% uncertainty bounds of -1.45 to +1.29 W m-2). The uncertainties in net climate forcing from black-carbon-rich sources are substantial, largely due to lack of knowledge about cloud interactions with both black carbon and co-emitted organic carbon. In prioritizing potential black-carbon mitigation actions, non-science factors, such as technical feasibility, costs, policy design, and implementation feasibility play important roles. The major sources of black carbon are presently in different stages with regard to the feasibility for near-term mitigation. 
This assessment, by evaluating the large number and complexity of the associated physical and radiative processes in black-carbon climate forcing, sets a baseline from which to improve future climate forcing estimates.
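The direct-forcing figures quoted above are internally consistent: subtracting the industrial-era direct forcing from the total direct forcing (which includes the preindustrial background) implies the background contribution. A trivial arithmetic check, with the values transcribed from the abstract:

```python
# Worked check of the direct-forcing arithmetic reported above (all W m-2).
total_direct = 0.88        # all black-carbon sources, background included
industrial_direct = 0.71   # industrial-era (1750 to 2005) direct forcing
background = round(total_direct - industrial_direct, 2)
print(background)          # 0.17, the implied preindustrial background
```

The 90% uncertainty bounds quoted for each term do not combine this simply; they come from the assessment's own uncertainty analysis.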

  8. Bounding the Role of Black Carbon in the Climate System: a Scientific Assessment

    NASA Technical Reports Server (NTRS)

Bond, T. C.; Doherty, S. J.; Fahey, D. W.; Forster, P. M.; Berntsen, T.; DeAngelo, B. J.; Flanner, M. G.; Ghan, S.; Kärcher, B.; Koch, D.; et al.

    2013-01-01

    Black carbon aerosol plays a unique and important role in Earth's climate system. Black carbon is a type of carbonaceous material with a unique combination of physical properties. This assessment provides an evaluation of black-carbon climate forcing that is comprehensive in its inclusion of all known and relevant processes and that is quantitative in providing best estimates and uncertainties of the main forcing terms: direct solar absorption; influence on liquid, mixed phase, and ice clouds; and deposition on snow and ice. These effects are calculated with climate models, but when possible, they are evaluated with both microphysical measurements and field observations. Predominant sources are combustion related, namely, fossil fuels for transportation, solid fuels for industrial and residential uses, and open burning of biomass. Total global emissions of black carbon using bottom-up inventory methods are 7500 Gg/yr in the year 2000 with an uncertainty range of 2000 to 29000. However, global atmospheric absorption attributable to black carbon is too low in many models and should be increased by a factor of almost 3. After this scaling, the best estimate for the industrial-era (1750 to 2005) direct radiative forcing of atmospheric black carbon is +0.71 W/sq m with 90% uncertainty bounds of (+0.08, +1.27)W/sq m. Total direct forcing by all black carbon sources, without subtracting the preindustrial background, is estimated as +0.88 (+0.17, +1.48) W/sq m. Direct radiative forcing alone does not capture important rapid adjustment mechanisms. A framework is described and used for quantifying climate forcings, including rapid adjustments. The best estimate of industrial-era climate forcing of black carbon through all forcing mechanisms, including clouds and cryosphere forcing, is +1.1 W/sq m with 90% uncertainty bounds of +0.17 to +2.1 W/sq m. 
Thus, there is a very high probability that black carbon emissions, independent of co-emitted species, have a positive forcing and warm the climate. We estimate that black carbon, with a total climate forcing of +1.1 W/sq m, is the second most important human emission in terms of its climate forcing in the present-day atmosphere; only carbon dioxide is estimated to have a greater forcing. Sources that emit black carbon also emit other short-lived species that may either cool or warm climate. Climate forcings from co-emitted species are estimated and used in the framework described herein. When the principal effects of short-lived co-emissions, including cooling agents such as sulfur dioxide, are included in net forcing, energy-related sources (fossil fuel and biofuel) have an industrial-era climate forcing of +0.22 (-0.50 to +1.08) W/sq m during the first year after emission. For a few of these sources, such as diesel engines and possibly residential biofuels, warming is strong enough that eliminating all short-lived emissions from these sources would reduce net climate forcing (i.e., produce cooling). When open burning emissions, which emit high levels of organic matter, are included in the total, the best estimate of net industrial-era climate forcing by all short-lived species from black-carbon-rich sources becomes slightly negative (-0.06 W/sq m with 90% uncertainty bounds of -1.45 to +1.29 W/sq m). The uncertainties in net climate forcing from black-carbon-rich sources are substantial, largely due to lack of knowledge about cloud interactions with both black carbon and co-emitted organic carbon. In prioritizing potential black-carbon mitigation actions, non-science factors, such as technical feasibility, costs, policy design, and implementation feasibility play important roles. The major sources of black carbon are presently in different stages with regard to the feasibility for near-term mitigation. 
This assessment, by evaluating the large number and complexity of the associated physical and radiative processes in black-carbon climate forcing, sets a baseline from which to improve future climate forcing estimates.

  9. Hanford Environmental Dose Reconstruction Project

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cannon, S.D.; Finch, S.M.

    1992-10-01

The objective of the Hanford Environmental Dose Reconstruction (HEDR) Project is to estimate the radiation doses that individuals and populations could have received from nuclear operations at Hanford since 1944. The independent Technical Steering Panel (TSP) provides technical direction. The project is divided into the following technical tasks, which correspond to the path radionuclides followed from release to impact on humans (dose estimates): Source Terms; Environmental Transport; Environmental Monitoring Data; Demography, Food Consumption, and Agriculture; and Environmental Pathways and Dose Estimates.

  10. Effects of Directed Energy Weapons

    DTIC Science & Technology

    1994-01-01

them, and led to the law of conservation of energy. 2. The estimate of the energy it takes to brew a cup of coffee assumes that it is a 6 oz cup...the thermal diffusivity of the target material (see Figure 1–5). We can use this result to estimate the threshold for melting. A laser of intensity S...is estimated to average 1 hour per response, including the time for reviewing instructions, searching existing data sources, gathering and

  11. Economic Impact of Cystic Echinococcosis in Peru

    PubMed Central

    Moro, Pedro L.; Budke, Christine M.; Schantz, Peter M.; Vasquez, Julio; Santivañez, Saul J.; Villavicencio, Jaime

    2011-01-01

Background Cystic echinococcosis (CE) constitutes an important public health problem in Peru. However, no studies have attempted to estimate the monetary and non-monetary impact of CE on Peruvian society. Methods We used official and published sources of epidemiological and economic information to estimate direct and indirect costs associated with livestock production losses and human disease, in addition to surgical CE-associated disability adjusted life years (DALYs) lost. Findings The total estimated cost of human CE in Peru was U.S.$2,420,348 (95% CI:1,118,384–4,812,722) per year. Total estimated livestock-associated costs due to CE ranged from U.S.$196,681 (95% CI:141,641–251,629) if only direct losses (i.e., cattle and sheep liver destruction) were taken into consideration to U.S.$3,846,754 (95% CI:2,676,181–4,911,383) if additional production losses (liver condemnation, decreased carcass weight, wool losses, decreased milk production) were accounted for. An estimated 1,139 (95% CI: 861–1,489) DALYs were also lost due to surgical cases of CE. Conclusions This preliminary and conservative assessment of the socio-economic impact of CE on Peru, which is based largely on official sources of information, very likely underestimates the true extent of the problem. Nevertheless, these estimates illustrate the negative economic impact of CE in Peru. PMID:21629731

  12. Multiple sound source localization using gammatone auditory filtering and direct sound componence detection

    NASA Astrophysics Data System (ADS)

    Chen, Huaiyu; Cao, Li

    2017-06-01

To study multiple sound source localization under room reverberation and background noise, we analyze the shortcomings of traditional broadband MUSIC and of ordinary auditory-filtering-based broadband MUSIC, and then propose a new broadband MUSIC algorithm with gammatone auditory filtering, frequency-component selection control, and detection of the ascending segment of the direct-sound component. The proposed algorithm restricts processing to frequency components within the band of interest at the multichannel bandpass-filtering stage. Detecting the direct-sound component of the source suppresses room-reverberation interference; its merits are fast computation and avoidance of more complex de-reverberation algorithms. In addition, the pseudo-spectra of the different frequency channels are weighted by their maximum amplitudes for every speech frame. In both simulations and experiments in a real reverberant room, the proposed method performs well. Dynamic multiple-sound-source localization results indicate that the average absolute azimuth error of the proposed algorithm is smaller and its histogram shows higher angular resolution.
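The core of any broadband MUSIC variant is the narrowband pseudo-spectrum computed from the noise subspace of the spatial covariance matrix. A minimal sketch for a uniform linear array; the gammatone filter bank, frequency-selection control, and direct-sound detection described above are omitted, and the array geometry and parameters are assumptions for illustration:

```python
import numpy as np

# Narrowband MUSIC pseudo-spectrum for a uniform linear array (ULA).
def music_spectrum(X, n_sources, d_over_lambda=0.5,
                   angles=np.linspace(-90.0, 90.0, 181)):
    """X: (n_mics, n_snapshots) complex snapshots at one frequency bin."""
    n_mics = X.shape[0]
    R = X @ X.conj().T / X.shape[1]           # spatial covariance estimate
    _, vecs = np.linalg.eigh(R)               # eigenvalues in ascending order
    En = vecs[:, :n_mics - n_sources]         # noise-subspace eigenvectors
    spec = []
    for theta in np.deg2rad(angles):
        a = np.exp(-2j * np.pi * d_over_lambda
                   * np.arange(n_mics) * np.sin(theta))
        spec.append(1.0 / np.real(a.conj() @ En @ En.conj().T @ a))
    return angles, np.asarray(spec)

# Synthetic check: one source at +30 degrees, 8 mics, light noise.
rng = np.random.default_rng(0)
n_mics, snaps = 8, 200
a0 = np.exp(-2j * np.pi * 0.5 * np.arange(n_mics)
            * np.sin(np.deg2rad(30.0)))
s = rng.standard_normal(snaps) + 1j * rng.standard_normal(snaps)
X = np.outer(a0, s) + 0.05 * (rng.standard_normal((n_mics, snaps))
                              + 1j * rng.standard_normal((n_mics, snaps)))
angles, spec = music_spectrum(X, n_sources=1)
print(angles[np.argmax(spec)])  # peak near 30
```

The broadband variants discussed in the paper evaluate such spectra per frequency channel and combine them, here with per-channel amplitude weighting.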

  13. A probabilistic framework for single-sensor acoustic emission source localization in thin metallic plates

    NASA Astrophysics Data System (ADS)

    Ebrahimkhanlou, Arvin; Salamone, Salvatore

    2017-09-01

Tracking edge-reflected acoustic emission (AE) waves can allow the localization of their sources. Specifically, in bounded isotropic plate structures, a single sensor may be used to perform these source localizations. The primary goal of this paper is to develop a three-step probabilistic framework to quantify the uncertainties associated with such single-sensor localizations. According to this framework, a probabilistic approach is first used to estimate the direct distances between AE sources and the sensor. Then, an analytical model is used to reconstruct the envelopes of edge-reflected AE signals based on the source-to-sensor distance estimates and their first arrivals. Finally, the correlation between the probabilistically reconstructed envelopes and the recorded AE signals is used to estimate confidence contours for the locations of AE sources. To validate the proposed framework, Hsu-Nielsen pencil lead break (PLB) tests were performed on the surface as well as the edges of an aluminum plate. The localization results show that the estimated confidence contours surround the actual source locations. In addition, the performance of the framework was tested in a noisy environment simulated with two dummy transducers and an arbitrary wave generator. The results show that in low-noise environments, the shape and size of the confidence contours depend on the sources and their locations; in highly noisy environments, however, the size of the confidence contours increases monotonically with the noise floor. These results suggest that the proposed framework provides more comprehensive information regarding the location of AE sources.
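Two of the framework's steps, distance estimation from the first arrival and envelope correlation, can be illustrated with a toy sketch. The group velocity, arrival time, and exponential envelope model below are illustrative assumptions, not the paper's analytical model:

```python
import numpy as np

fs = 1e6                      # sampling rate (Hz), assumed
v_group = 5100.0              # assumed guided-wave group velocity in aluminum, m/s
t_first = 39.2e-6             # detected first-arrival time (s), illustrative
distance = v_group * t_first  # ~0.2 m source-to-sensor distance estimate

t = np.arange(0.0, 1e-3, 1.0 / fs)

def model_envelope(d):
    """Toy decaying envelope delayed by the travel time for distance d."""
    tau = d / v_group
    return np.where(t >= tau, np.exp(-(t - tau) / 2e-4), 0.0)

# "Recorded" envelope: same model at 0.20 m plus measurement noise.
rng = np.random.default_rng(1)
recorded = model_envelope(0.20) + 0.05 * rng.standard_normal(t.size)

# Correlate the reconstructed envelope with the recorded one; a high value
# supports the candidate source distance.
corr = np.corrcoef(model_envelope(distance), recorded)[0, 1]
```

In the actual framework the envelope model includes edge reflections, and the correlation is mapped over candidate locations to form confidence contours.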

  14. Antidepressant direct-to-consumer advertising and social perception of the prevalence of depression: application of the availability heuristic.

    PubMed

    An, Soontae

    2008-11-01

    This study examined the effect of antidepressant direct-to-consumer advertising (DTCA) on perceived prevalence of depression. A survey of Midwestern residents showed that those with high recall for antidepressant DTCA tended to estimate the prevalence of depression higher than those with low ad recall. However, with a source-priming cue before their estimation, the significant association was eliminated. Results indicate that people use antidepressant DTCA as a basis for their judgment of the prevalence of depression in normal situations where the veracity of information is not highlighted.

  15. Matched Bearing Processing for Airborne Source Localization by an Underwater Horizontal Line Array

    NASA Astrophysics Data System (ADS)

    Peng, Zhao-Hui; Li, Zheng-Lin; Wang, Guang-Xu

    2010-11-01

The location of an airborne source is estimated from signals measured by a horizontal line array (HLA), based on the fact that a signal transmitted by an airborne source reaches an underwater hydrophone in several ways: via a direct refracted path, via one or more bottom and surface reflections, and via the so-called lateral wave. As a result, when an HLA near the airborne source is used for beamforming, several peaks at different bearing angles appear. By matching the experimental beamforming outputs with the predicted outputs for all candidate source locations, the most likely location is the one that gives the minimum difference. An experiment on airborne source localization was conducted in the Yellow Sea in October 2008. An HLA was laid on the sea bottom at a depth of 30 m. A high-power loudspeaker hung from a research ship floating near the HLA sent out LFM pulses. The estimated location of the loudspeaker agrees well with the GPS measurements.
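The matching step amounts to a grid search: predict the beamformer peak bearings for each candidate source location and keep the candidate that minimizes the mismatch with the measured peaks. A hedged sketch with a toy two-path bearing model; a real implementation would predict the peaks with an underwater acoustic propagation model:

```python
import numpy as np

def predicted_bearings(x, y):
    """Toy two-path prediction: direct bearing plus a range-dependent offset
    standing in for a surface/bottom-reflected arrival (not a physical model)."""
    direct = np.degrees(np.arctan2(y, x))
    offset = np.degrees(np.arctan2(50.0, np.hypot(x, y)))
    return np.array([direct, direct + offset])

# Pretend these peak bearings were picked from the HLA beamformer output
# for a source at (300 m, 250 m).
measured = predicted_bearings(300.0, 250.0)

# Grid search over candidate source positions; keep the best match.
best, best_err = None, np.inf
for x in np.arange(100.0, 500.0, 10.0):
    for y in np.arange(100.0, 500.0, 10.0):
        err = np.sum((predicted_bearings(x, y) - measured) ** 2)
        if err < best_err:
            best, best_err = (x, y), err
print(best)  # (300.0, 250.0)
```

The range-dependent second path is what breaks the ambiguity along a single bearing ray; with only the direct bearing, all candidates on that ray would match equally well.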

  16. DOA-informed source extraction in the presence of competing talkers and background noise

    NASA Astrophysics Data System (ADS)

    Taseska, Maja; Habets, Emanuël A. P.

    2017-12-01

A desired speech signal in hands-free communication systems is often degraded by noise and interfering speech. Even though the number and locations of the interferers are often unknown in practice, it is justified to assume in certain applications that the direction-of-arrival (DOA) of the desired source is approximately known. Using the known DOA, fixed spatial filters such as the delay-and-sum beamformer can be steered to extract the desired source. However, it is well-known that fixed data-independent spatial filters do not provide sufficient reduction of directional interferers. Instead, the DOA information can be used to estimate the statistics of the desired and the undesired signals and to compute optimal data-dependent spatial filters. One way the DOA is exploited for optimal spatial filtering in the literature is by designing DOA-based narrowband detectors to determine whether a desired or an undesired signal is dominant at each time-frequency (TF) bin. Subsequently, the statistics of the desired and the undesired signals can be estimated during the TF bins where the respective signal is dominant. In a similar manner, a Gaussian signal model-based detector which does not incorporate DOA information has been used in scenarios where the undesired signal consists of stationary background noise. However, when the undesired signal is non-stationary, resulting for example from interfering speakers, such a Gaussian signal model-based detector is unable to robustly distinguish desired from undesired speech. To this end, we propose a DOA model-based detector to determine the dominant source at each TF bin and estimate the desired and undesired signal statistics. We demonstrate that data-dependent spatial filters that use the statistics estimated by the proposed framework achieve very good undesired signal reduction, even when using only three microphones.
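The narrowband DOA-based detection idea can be sketched for a two-microphone array: estimate a DOA per TF bin from the inter-microphone phase difference and flag bins whose DOA falls near the known target direction. The geometry, spacing, and threshold below are assumptions for illustration, not the authors' detector:

```python
import numpy as np

c, d, fs = 343.0, 0.02, 16000   # speed of sound, mic spacing (m), sample rate
nfft = 512
freqs = np.fft.rfftfreq(nfft, 1.0 / fs)

def doa_per_bin(X1, X2):
    """Per-bin DOA (degrees) from the phase difference of two STFT frames.
    The 2 cm spacing avoids spatial aliasing up to fs/2."""
    phase = np.angle(X1 * np.conj(X2))
    with np.errstate(divide="ignore", invalid="ignore"):
        sin_theta = phase * c / (2.0 * np.pi * freqs * d)
    return np.degrees(np.arcsin(np.clip(sin_theta, -1.0, 1.0)))

# Synthetic single frame: a source at 30 degrees delays mic 2 by tau.
tau = d * np.sin(np.radians(30.0)) / c
X1 = np.ones(freqs.size, dtype=complex)
X2 = np.exp(-2j * np.pi * freqs * tau)
doa = doa_per_bin(X1, X2)

# Bins whose DOA lies near the known target direction count as "desired";
# their statistics would feed the data-dependent spatial filter.
mask = np.abs(doa - 30.0) < 10.0
```

In practice the per-bin DOA estimates of real mixtures are noisy, which is why the detection is cast probabilistically rather than as a hard angular gate.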

  17. Noise Source Identification in a Reverberant Field Using Spherical Beamforming

    NASA Astrophysics Data System (ADS)

    Choi, Young-Chul; Park, Jin-Ho; Yoon, Doo-Byung; Kwon, Hyu-Sang

The identification of noise sources, their locations and strengths, has received great attention. Methods that identify noise sources normally assume that the sources are located in a free field. However, the sound in a reverberant field consists of sound coming directly from the source plus sound reflected or scattered by the walls or objects in the field. In contrast to an exterior sound field, reflections are added to the sound field. Therefore, the source location estimated by conventional methods may have unacceptable error. In this paper, we explain the effects of a reverberant field on the interior source identification process and propose a method that can identify noise sources in the reverberant field.

  18. A novel approach to neutron dosimetry.

    PubMed

    Balmer, Matthew J I; Gamage, Kelum A A; Taylor, Graeme C

    2016-11-01

After being overlooked for many years, the directional distribution of neutron workplace fields is now starting to be taken into account. Existing neutron dosimetry instrumentation does not account for this directional distribution, resulting in conservative estimates of dose in neutron workplace fields (by around a factor of 2, although this is heavily dependent on the type of field). This conservatism could influence epidemiological studies on the health effects of radiation exposure. This paper reports on the development of an instrument which can estimate the effective dose of a neutron field, accounting for both the direction and the energy distribution. A ⁶Li-loaded scintillator was used to perform neutron assays at a number of locations in a 20 × 20 × 17.5 cm³ water phantom. The variation in thermal and fast neutron response to different energies and field directions was exploited. The modeled response of the instrument to various neutron fields was used to train an artificial neural network (ANN) to learn the effective dose and ambient dose equivalent of these fields. All experimental data published in this work were measured at the National Physical Laboratory (UK). Experimental results were obtained for a number of radionuclide-source-based neutron fields to test the performance of the system. The results of experimental neutron assays at 25 locations in a water phantom were fed into the trained ANN. A correlation between neutron counting rates in the phantom and neutron fluence rates was experimentally found to provide dose rate estimates. A radionuclide source behind a shadow cone was used to create a more complex field in terms of energy and direction. For all fields, the resulting estimates of effective dose rate were within 45% of their calculated values, regardless of energy distribution or direction, for measurement times greater than 25 min. This work presents a novel, real-time approach to workplace neutron dosimetry. 
We believe that in the research presented in this paper, for the first time, a single instrument has been able to estimate effective dose.
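The count-rate-to-dose-rate calibration mentioned above can be illustrated as a simple linear fit applied to a new measurement. All numbers below are synthetic placeholders, not the paper's measurements:

```python
import numpy as np

# Hedged sketch: fit a linear relation between measured counting rate and a
# reference dose rate, then apply it to a new reading.
counts = np.array([120.0, 260.0, 390.0, 540.0, 660.0])  # counts per second
dose = np.array([10.0, 21.0, 31.0, 44.0, 53.0])         # reference dose rate
slope, intercept = np.polyfit(counts, dose, 1)          # least-squares line
estimate = slope * 480.0 + intercept                    # dose for a new reading
```

The instrument itself goes further, using an ANN over multi-position assays so that the energy and directional dependence of the response is learned rather than assumed linear.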

  19. Component costs of foodborne illness: a scoping review

    PubMed Central

    2014-01-01

    Background Governments require high-quality scientific evidence to prioritize resource allocation and the cost-of-illness (COI) methodology is one technique used to estimate the economic burden of a disease. However, variable cost inventories make it difficult to interpret and compare costs across multiple studies. Methods A scoping review was conducted to identify the component costs and the respective data sources used for estimating the cost of foodborne illnesses in a population. This review was accomplished by: (1) identifying the research question and relevant literature, (2) selecting the literature, (3) charting, collating, and summarizing the results. All pertinent data were extracted at the level of detail reported in a study, and the component cost and source data were subsequently grouped into themes. Results Eighty-four studies were identified that described the cost of foodborne illness in humans. Most studies (80%) were published in the last two decades (1992–2012) in North America and Europe. The 10 most frequently estimated costs were due to illnesses caused by bacterial foodborne pathogens, with non-typhoidal Salmonella spp. being the most commonly studied. Forty studies described both individual (direct and indirect) and societal level costs. The direct individual level component costs most often included were hospital services, physician personnel, and drug costs. The most commonly reported indirect individual level component cost was productivity losses due to sick leave from work. Prior estimates published in the literature were the most commonly used source of component cost data. Data sources were not provided or specifically linked to component costs in several studies. Conclusions The results illustrated a highly variable depth and breadth of individual and societal level component costs, and a wide range of data sources being used. 
This scoping review can be used as evidence that there is a lack of standardization in cost inventories in the cost of foodborne illness literature, and to promote greater transparency and detail of data source reporting. By conforming to a more standardized cost inventory, and by reporting data sources in more detail, there will be an increase in cost of foodborne illness research that can be interpreted and compared in a meaningful way. PMID:24885154

  20. The effect of carrier gas flow rate and source cell temperature on low pressure organic vapor phase deposition simulation by direct simulation Monte Carlo method

    PubMed Central

    Wada, Takao; Ueda, Noriaki

    2013-01-01

The process of low pressure organic vapor phase deposition (LP-OVPD) controls the growth of amorphous organic thin films, where the source gases (Alq3 molecules, etc.) are introduced into a hot wall reactor via an injection barrel using an inert carrier gas (N2). This makes it possible to control substrate properties such as dopant concentration, deposition rate, and thickness uniformity of the thin film. In this paper, we present LP-OVPD simulation results using direct simulation Monte Carlo-Neutrals (Particle-PLUS neutral module), commercial software adopting the direct simulation Monte Carlo method. By properly estimating the evaporation rate with experimental vaporization enthalpies, the calculated deposition rates on the substrate agree well with the experimental results, which depend on carrier gas flow rate and source cell temperature. PMID:23674843
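The way a vaporization enthalpy fixes the temperature dependence of the evaporation rate can be sketched with the Clausius-Clapeyron scaling of vapor pressure between two source-cell temperatures. The enthalpy and reference point below are placeholders, not measured Alq3 values:

```python
import math

R = 8.314                  # gas constant, J mol^-1 K^-1
dH_vap = 120e3             # assumed vaporization enthalpy, J/mol (placeholder)
T_ref, p_ref = 550.0, 1.0  # reference temperature (K) and vapor pressure (arb.)

def vapor_pressure(T):
    """Clausius-Clapeyron scaling of vapor pressure relative to T_ref."""
    return p_ref * math.exp(-dH_vap / R * (1.0 / T - 1.0 / T_ref))

# A 10 K increase in source cell temperature raises the vapor pressure, and
# hence the evaporation rate feeding the carrier gas, by roughly 60% here.
ratio = vapor_pressure(560.0) / vapor_pressure(550.0)
```

This strong exponential sensitivity is why the deposition rate in such simulations depends so sharply on source cell temperature compared with carrier gas flow rate.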

  1. Leveraging FIA data for analysis beyond forest reports: examples from the world of carbon

    Treesearch

    Brian F. Walters; Grant M. Domke; Christopher W. Woodall

    2015-01-01

    The Forest Inventory and Analysis program of the USDA Forest Service is the go-to source for data to estimate carbon stocks and stock changes for the annual national greenhouse gas inventory (NGHGI) of the United States. However, the different pools of forest carbon have not always been estimated directly from FIA measurements. As part of the new forest carbon...

  2. Regional Development Impacts Multi-Regional - Multi-Industry Model (MRMI) Users Manual,

    DTIC Science & Technology

    1982-09-01

indicators, described in Chapter 2, are estimated as well. Finally, MRMI is flexible, as it can incorporate alternative macroeconomic, national inter...national and regional economic contexts and data sources for estimating macroeconomic and direct impacts data. Considerations for ensuring consistency...Chapter 4 is devoted to model execution and the interpretation of its output. As MRMI forecasts are based upon macroeconomic, national inter-industry

  3. Hanford Environmental Dose Reconstruction Project. Monthly report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cannon, S.D.; Finch, S.M.

    1992-10-01

The objective of the Hanford Environmental Dose Reconstruction (HEDR) Project is to estimate the radiation doses that individuals and populations could have received from nuclear operations at Hanford since 1944. The independent Technical Steering Panel (TSP) provides technical direction. The project is divided into the following technical tasks, which correspond to the path radionuclides followed from release to impact on humans (dose estimates): Source Terms; Environmental Transport; Environmental Monitoring Data; Demography, Food Consumption, and Agriculture; and Environmental Pathways and Dose Estimates.

  4. Effects of tag loss on direct estimates of population growth rate

    USGS Publications Warehouse

    Rotella, J.J.; Hines, J.E.

    2005-01-01

The temporal symmetry approach of R. Pradel can be used with capture-recapture data to produce retrospective estimates of a population's growth rate, lambda(i), and the relative contributions to lambda(i) from different components of the population. Direct estimation of lambda(i) provides an alternative to using population projection matrices to estimate asymptotic lambda and is seeing increased use. However, the robustness of direct estimates of lambda(i) to violations of several key assumptions has not yet been investigated. Here, we consider tag loss as a possible source of bias for scenarios in which the rate of tag loss is (1) the same for all marked animals in the population and (2) a function of tag age. We computed analytic approximations of the expected values for each of the parameter estimators involved in direct estimation and used those values to calculate bias and precision for each parameter estimator. Estimates of lambda(i) were robust to homogeneous rates of tag loss. When tag loss rates varied by tag age, bias occurred for some of the sampling situations evaluated, especially those with low capture probability, a high rate of tag loss, or both. For situations with low rates of tag loss and high capture probability, bias was low and often negligible. Estimates of contributions of demographic components to lambda(i) were not robust to tag loss. Tag loss reduced the precision of all estimates because tag loss results in fewer marked animals remaining available for estimation. Clearly tag loss should be prevented if possible, and should be considered in analyses of lambda(i), but tag loss does not necessarily preclude unbiased estimation of lambda(i).

  5. Photochemical grid model performance with varying horizontal grid resolution and sub-grid plume treatment for the Martins Creek near-field SO2 study

    NASA Astrophysics Data System (ADS)

    Baker, Kirk R.; Hawkins, Andy; Kelly, James T.

    2014-12-01

    Near source modeling is needed to assess primary and secondary pollutant impacts from single sources and single source complexes. Source-receptor relationships need to be resolved from tens of meters to tens of kilometers. Dispersion models are typically applied for near-source primary pollutant impacts but lack complex photochemistry. Photochemical models provide a realistic chemical environment but are typically applied using grid cell sizes that may be larger than the distance between sources and receptors. It is important to understand the impacts of grid resolution and sub-grid plume treatments on photochemical modeling of near-source primary pollution gradients. Here, the CAMx photochemical grid model is applied using multiple grid resolutions and sub-grid plume treatment for SO2 and compared with a receptor mesonet largely impacted by nearby sources approximately 3-17 km away in a complex terrain environment. Measurements are compared with model estimates of SO2 at 4- and 1-km resolution, both with and without sub-grid plume treatment and inclusion of finer two-way grid nests. Annual average estimated SO2 mixing ratios are highest nearest the sources and decrease as distance from the sources increase. In general, CAMx estimates of SO2 do not compare well with the near-source observations when paired in space and time. Given the proximity of these sources and receptors, accuracy in wind vector estimation is critical for applications that pair pollutant predictions and observations in time and space. In typical permit applications, predictions and observations are not paired in time and space and the entire distributions of each are directly compared. Using this approach, model estimates using 1-km grid resolution best match the distribution of observations and are most comparable to similar studies that used dispersion and Lagrangian modeling systems. Model-estimated SO2 increases as grid cell size decreases from 4 km to 250 m. 
However, it is notable that the 1-km model estimates using 1-km meteorological model input are higher than the 1-km model simulation that used interpolated 4-km meteorology. The inclusion of sub-grid plume treatment did not improve model skill in predicting SO2 in time and space and generally acts to keep emitted mass aloft.

  6. The 2006 Java Earthquake revealed by the broadband seismograph network in Indonesia

    NASA Astrophysics Data System (ADS)

    Nakano, M.; Kumagai, H.; Miyakawa, K.; Yamashina, T.; Inoue, H.; Ishida, M.; Aoi, S.; Morikawa, N.; Harjadi, P.

    2006-12-01

On May 27, 2006, local time, a moderate-size earthquake (Mw = 6.4) occurred in central Java. This earthquake caused severe damage near Yogyakarta City and killed more than 5700 people. To estimate the source mechanism and location of this earthquake, we performed a waveform inversion of the broadband seismograms recorded by a nationwide seismic network in Indonesia (Realtime-JISNET). Realtime-JISNET is part of the broadband seismograph network developed through international cooperation among Indonesia, Germany, China, and Japan, aiming at improving the capabilities to monitor seismic activity and tsunami generation in Indonesia. Twelve stations in Realtime-JISNET were in operation when the earthquake occurred. We used the three-component seismograms from the two closest stations, which were located about 100 and 300 km from the source. In our analysis, we assumed a pure double couple as the source mechanism, thus reducing the number of free parameters in the waveform inversion. Therefore we could stably estimate the source mechanism using the signals observed by a small number of seismic stations. We carried out a grid search with respect to strike, dip, and rake angles to investigate fault orientation and slip direction. We determined source-time functions of the moment-tensor components in the frequency domain for each set of strike, dip, and rake angles. We also conducted a spatial grid search to find the best-fit source location. The best-fit source was approximately 12 km SSE of Yogyakarta at a depth of 10 km below sea level, immediately below the area of extensive damage. The focal mechanism indicates that this earthquake was caused by compressive stress in the NS direction, and strike-slip motion was dominant. The moment magnitude (Mw) was 6.4. We estimated the seismic intensity in the areas of severe damage using the source parameters and an empirical attenuation relation for the averaged peak ground velocity (PGV) of horizontal seismic motion. 
We then calculated the instrumental modified Mercalli intensity (Imm) from the estimated PGV values. Our result indicates that strong ground motion with Imm of 7 or more occurred within 10 km of the earthquake fault, although the actual seismic intensity can be affected by shallow structural heterogeneity. We therefore conclude that the severe damage from the Java earthquake is attributable to strong ground motion, primarily caused by a source located immediately below the populated areas.
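The strike/dip/rake grid search described in this record can be illustrated with a minimal sketch. The double-couple moment-tensor formulas below are the standard Aki and Richards expressions (north-east-down axes); the "observed" tensor and the 40°/60°/110° angles are hypothetical, and no waveform modeling is included.

```python
import numpy as np

def dc_moment_tensor(strike, dip, rake):
    """Double-couple moment tensor (unit M0, NED axes) from fault angles in degrees."""
    p, d, l = np.radians([strike, dip, rake])
    m11 = -(np.sin(d) * np.cos(l) * np.sin(2 * p) + np.sin(2 * d) * np.sin(l) * np.sin(p) ** 2)
    m12 = np.sin(d) * np.cos(l) * np.cos(2 * p) + 0.5 * np.sin(2 * d) * np.sin(l) * np.sin(2 * p)
    m13 = -(np.cos(d) * np.cos(l) * np.cos(p) + np.cos(2 * d) * np.sin(l) * np.sin(p))
    m22 = np.sin(d) * np.cos(l) * np.sin(2 * p) - np.sin(2 * d) * np.sin(l) * np.cos(p) ** 2
    m23 = -(np.cos(d) * np.cos(l) * np.sin(p) - np.cos(2 * d) * np.sin(l) * np.cos(p))
    m33 = np.sin(2 * d) * np.sin(l)
    return np.array([m11, m12, m13, m22, m23, m33])

# "Observed" tensor built from hypothetical true angles, then a coarse grid
# search over strike, dip, and rake to recover a matching mechanism.
target = dc_moment_tensor(40.0, 60.0, 110.0)
best, best_misfit = None, np.inf
for strike in range(0, 360, 10):
    for dip in range(10, 91, 10):
        for rake in range(-180, 180, 10):
            misfit = np.sum((dc_moment_tensor(strike, dip, rake) - target) ** 2)
            if misfit < best_misfit:
                best, best_misfit = (strike, dip, rake), misfit
```

A real inversion would compare synthetic and observed waveforms rather than moment tensors directly, but the search structure is the same; note a double couple is described equally well by its auxiliary plane, so two angle triples can give identical tensors.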

  7. The cost of vision loss in Canada. 1. Methodology.

    PubMed

    Gordon, Keith D; Cruess, Alan F; Bellan, Lorne; Mitchell, Scott; Pezzullo, M Lynne

    2011-08-01

This paper outlines the methodology used to estimate the cost of vision loss in Canada. The results of this study will be presented in a second paper. The cost of vision loss (VL) in Canada was estimated using a prevalence-based approach. This was done by estimating the number of people with VL in a base period (2007) and the costs associated with treating them. The cost estimates included direct health system expenditures on eye conditions that cause VL, as well as other indirect financial costs such as productivity losses. Estimates were also made of the value of the loss of healthy life, measured in Disability-Adjusted Life Years or DALYs. To estimate the number of cases of VL in the population, epidemiological data on prevalence rates were applied to population data. The number of cases of VL was stratified by gender, age, ethnicity, severity and cause. The following sources were used for estimating prevalence: population-based eye studies, Canadian surveys, Canadian journal articles and research studies, and international population-based eye studies. Direct health costs were obtained primarily from Health Canada and Canadian Institute for Health Information (CIHI) sources, while costs associated with productivity losses were based on employment information compiled by Statistics Canada and on the economic theory of productivity loss. Costs related to vision rehabilitation (VR) were obtained from Canadian VR organizations. This study shows that it is possible to estimate the costs of VL for a country in the absence of ongoing local epidemiological studies. Copyright © 2011 Canadian Ophthalmological Society. Published by Elsevier Inc. All rights reserved.
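A prevalence-based cost estimate of the kind outlined above reduces to multiplying stratified prevalence rates by population counts and unit costs. The sketch below uses entirely hypothetical population, prevalence, and cost figures, not the study's Canadian data.

```python
# Prevalence-based costing sketch (hypothetical numbers, not the study's data):
# cases = prevalence rate x population, stratified by age group;
# total cost = cases x (direct + indirect cost per case).
population = {"40-64": 10_000_000, "65+": 4_500_000}   # persons per stratum
prevalence = {"40-64": 0.02, "65+": 0.08}              # fraction with vision loss
direct_cost_per_case = 3_000.0                         # health-system $ per year
indirect_cost_per_case = 2_200.0                       # productivity losses etc.

cases = {g: population[g] * prevalence[g] for g in population}
total_cases = sum(cases.values())
total_cost = total_cases * (direct_cost_per_case + indirect_cost_per_case)
```

The same structure extends to the study's finer stratification (gender, ethnicity, severity, cause) by adding keys, and to DALYs by multiplying cases by a disability weight.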

  8. Design of Small MEMS Microphone Array Systems for Direction Finding of Outdoors Moving Vehicles

    PubMed Central

    Zhang, Xin; Huang, Jingchang; Song, Enliang; Liu, Huawei; Li, Baoqing; Yuan, Xiaobing

    2014-01-01

In this paper, a MEMS microphone array system is proposed that implements real-time direction of arrival (DOA) estimation for moving vehicles. Wind noise is the primary source of unwanted noise on outdoor microphones. A multiple signal classification (MUSIC) algorithm is used for direction finding, combined with a spatial-coherence measure to discriminate between wind noise and the acoustic signal of a vehicle. The method is implemented on a SHARC DSP processor, and the real-time estimated DOA is uploaded through Bluetooth or a UART module. Experimental results at different sites show the validity of the system; the deviation is no greater than 6° in the presence of wind noise. PMID:24603636

  9. Design of small MEMS microphone array systems for direction finding of outdoors moving vehicles.

    PubMed

    Zhang, Xin; Huang, Jingchang; Song, Enliang; Liu, Huawei; Li, Baoqing; Yuan, Xiaobing

    2014-03-05

In this paper, a MEMS microphone array system is proposed that implements real-time direction of arrival (DOA) estimation for moving vehicles. Wind noise is the primary source of unwanted noise on outdoor microphones. A multiple signal classification (MUSIC) algorithm is used for direction finding, combined with a spatial-coherence measure to discriminate between wind noise and the acoustic signal of a vehicle. The method is implemented on a SHARC DSP processor, and the real-time estimated DOA is uploaded through Bluetooth or a UART module. Experimental results at different sites show the validity of the system; the deviation is no greater than 6° in the presence of wind noise.
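A minimal narrowband MUSIC sketch for a uniform linear array (a simplification: the paper's system also uses spatial coherence for wind-noise rejection, which is omitted here). The array geometry, source angle, and noise level are all assumed for illustration.

```python
import numpy as np

def music_doa(snapshots, n_sources, d=0.5, angles=np.linspace(-90, 90, 361)):
    """Estimate DOAs (degrees) via MUSIC for a uniform linear array.

    snapshots: (n_mics, n_snapshots) complex baseband samples.
    d: element spacing in wavelengths.
    """
    R = snapshots @ snapshots.conj().T / snapshots.shape[1]  # sample covariance
    _, vecs = np.linalg.eigh(R)                              # eigenvalues ascending
    En = vecs[:, :-n_sources]                                # noise subspace
    n = np.arange(snapshots.shape[0])
    spectrum = []
    for theta in angles:
        a = np.exp(2j * np.pi * d * n * np.sin(np.radians(theta)))  # steering vector
        spectrum.append(1.0 / np.linalg.norm(En.conj().T @ a) ** 2)
    spectrum = np.asarray(spectrum)
    peaks = angles[np.argsort(spectrum)[-n_sources:]]
    return np.sort(peaks), spectrum

# Simulate one source at +20 degrees on an 8-element array and recover it.
rng = np.random.default_rng(0)
n_mics, n_snap, theta_true = 8, 200, 20.0
a_true = np.exp(2j * np.pi * 0.5 * np.arange(n_mics) * np.sin(np.radians(theta_true)))
s = rng.standard_normal(n_snap) + 1j * rng.standard_normal(n_snap)
x = np.outer(a_true, s) + 0.05 * (rng.standard_normal((n_mics, n_snap))
                                  + 1j * rng.standard_normal((n_mics, n_snap)))
doas, _ = music_doa(x, n_sources=1)
```

On an embedded DSP, the covariance, eigendecomposition, and spectrum scan above would be the fixed-point workhorse loop; the pseudospectrum peaks give the vehicle bearings.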

  10. Deep-water measurements of container ship radiated noise signatures and directionality.

    PubMed

    Gassmann, Martin; Wiggins, Sean M; Hildebrand, John A

    2017-09-01

Underwater radiated noise from merchant ships was measured opportunistically from multiple spatial aspects to estimate signature source levels and directionality. Transiting ships were tracked via the Automatic Identification System in a shipping lane while acoustic pressure was measured at the ships' keel and beam aspects. Port and starboard beam aspects were 15°, 30°, and 45°, in compliance with ship noise measurement standards [ANSI/ASA S12.64 (2009) and ISO 17208-1 (2016)]. Additional recordings were made at a 10° starboard aspect. Source levels were derived with a spherical propagation model (surface-affected) or a modified Lloyd's mirror model to account for interference from surface reflections (surface-corrected). Ship source depths were estimated from spectral differences between measurements at different beam aspects. Results were exemplified with a 4870 and a 10,036 twenty-foot equivalent unit container ship at 40%-56% and 87% of service speeds, respectively. For the larger ship, opportunistic ANSI/ISO broadband levels were 195 (surface-affected) and 209 (surface-corrected) dB re 1 μPa² at 1 m. Directionality at a propeller blade rate of 8 Hz exhibited asymmetries in the stern-bow (<6 dB) and port-starboard (<9 dB) directions. Previously reported broadband levels at 10° aspect from McKenna, Ross, Wiggins, and Hildebrand [(2012b). J. Acoust. Soc. Am. 131, 92-103] may be ∼12 dB lower than the corresponding surface-affected ANSI/ISO standard derived levels.
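The surface-affected source-level derivation reduces to a spherical-spreading back-propagation, SL = RL + 20·log10(r). The range and received level below are hypothetical, and the paper's modified Lloyd's mirror correction for surface-reflection interference is omitted.

```python
import math

# Spherical-spreading back-propagation from a received level to a source
# level referenced to 1 m ("surface-affected" in the paper's terminology).
# Both numbers are hypothetical, chosen only to illustrate the arithmetic.
r = 400.0                        # slant range, hydrophone to ship (m)
rl = 143.0                       # received broadband level (dB re 1 uPa)
tl = 20.0 * math.log10(r)        # transmission loss for spherical spreading
sl = rl + tl                     # source level (dB re 1 uPa^2 at 1 m)
```

The surface-corrected levels in the paper additionally model the phase-coherent sum of direct and surface-reflected paths, which is why they come out markedly higher than the simple spreading estimate.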

  11. Inventory of Data Sources for Estimating Health Care Costs in the United States

    PubMed Central

    Lund, Jennifer L.; Yabroff, K. Robin; Ibuka, Yoko; Russell, Louise B.; Barnett, Paul G.; Lipscomb, Joseph; Lawrence, William F.; Brown, Martin L.

    2011-01-01

Objective To develop an inventory of data sources for estimating health care costs in the United States and provide information to aid researchers in identifying appropriate data sources for their specific research questions. Methods We identified data sources for estimating health care costs using 3 approaches: (1) a review of the 18 articles included in this supplement, (2) an evaluation of websites of federal government agencies, nonprofit foundations, and related societies that support health care research or provide health care services, and (3) a systematic review of the recently published literature. Descriptive information was abstracted from each data source, including sponsor, website, lowest level of data aggregation, type of data source, population included, cross-sectional or longitudinal data capture, source of diagnosis information, and cost of obtaining the data source. Details about the cost elements available in each data source were also abstracted. Results We identified 88 data sources that can be used to estimate health care costs in the United States. Most data sources were sponsored by government agencies, national or nationally representative, and cross-sectional. About 40% were surveys, followed by administrative or linked administrative data, fee or cost schedules, discharges, and other types of data. Diagnosis information was available in most data sources through procedure or diagnosis codes, self-report, registry, or chart review. Cost elements included inpatient hospitalizations (42.0%), physician and other outpatient services (45.5%), outpatient pharmacy or laboratory (28.4%), out-of-pocket (22.7%), patient time and other direct nonmedical costs (35.2%), and wages (13.6%). About half were freely available for downloading or available for a nominal fee, and the cost of obtaining the remaining data sources varied by the scope of the project. 
Conclusions Available data sources vary in population included, type of data source, scope, and accessibility, and have different strengths and weaknesses for specific research questions. PMID:19536009

  12. Watershed nitrogen and phosphorus balance: The upper Potomac River basin

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jaworski, N.A.; Groffman, P.M.; Keller, A.A.

    1992-01-01

Nitrogen and phosphorus mass balances were estimated for the portion of the Potomac River basin watershed located above Washington, D.C. The total nitrogen (N) balance included seven input source terms, six sinks, and one 'change-in-storage' term, but was simplified to five input terms and three output terms. The phosphorus (P) balance had four input and three output terms. The estimated balances are based on watershed data from seven information sources. Major sources of nitrogen are animal waste and atmospheric deposition. The major sources of phosphorus are animal waste and fertilizer. The major sink for nitrogen is combined denitrification, volatilization, and change-in-storage. The major sink for phosphorus is change-in-storage. River exports of N and P were 17% and 8%, respectively, of the total N and P inputs. Over 60% of the N and P were volatilized or stored. The major input and output terms in the budget are estimated from direct measurements, but the change-in-storage term is calculated by difference. The factors regulating retention and storage processes are discussed and research needs are identified.
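The budget closure described above, estimating most terms directly and obtaining the combined denitrification/volatilization/change-in-storage term by difference, can be sketched with hypothetical fluxes (not the paper's Potomac data):

```python
# Watershed nutrient budget sketch with hypothetical fluxes (kg/ha/yr).
# Directly estimated inputs and outputs; the residual closes the budget.
n_inputs = {"animal_waste": 30.0, "atmospheric_deposition": 20.0,
            "fertilizer": 15.0, "biological_fixation": 8.0, "point_sources": 5.0}
n_outputs = {"river_export": 13.3, "crop_harvest": 18.0}

total_in = sum(n_inputs.values())
total_out = sum(n_outputs.values())
# Calculated by difference: denitrification + volatilization + change-in-storage.
residual = total_in - total_out
export_fraction = n_outputs["river_export"] / total_in
```

With these illustrative numbers the river exports about 17% of total inputs, mirroring the structure (though not the values) of the paper's nitrogen budget.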

  13. Comparison of different Kalman filter approaches in deriving time varying connectivity from EEG data.

    PubMed

    Ghumare, Eshwar; Schrooten, Maarten; Vandenberghe, Rik; Dupont, Patrick

    2015-08-01

Kalman filter approaches are widely applied to derive time-varying effective connectivity from electroencephalographic (EEG) data. For multi-trial data, a classical Kalman filter (CKF), designed for the estimation of single-trial data, can be implemented by trial-averaging the data or by averaging single-trial estimates. A general linear Kalman filter (GLKF) provides an extension for multi-trial data. In this work, we studied the performance of the different Kalman filtering approaches for different values of signal-to-noise ratio (SNR), number of trials, and number of EEG channels. We used a simulated model from which we calculated scalp recordings. From these recordings, we estimated cortical sources. Multivariate autoregressive model parameters and partial directed coherence were calculated for these estimated sources and compared with the ground truth. The results showed an overall superior performance of the GLKF except for low levels of SNR and numbers of trials.
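The core of such time-varying estimation is a Kalman filter whose observation "matrix" is built from past samples. A minimal scalar sketch, tracking a drifting AR(1) coefficient, serves as a stand-in for the multivariate, multi-trial EEG case; all parameters are assumed.

```python
import numpy as np

# Track a slowly drifting AR(1) coefficient with a scalar Kalman filter.
# State: a_t, modeled as a random walk; observation: y_t = a_t * y_{t-1} + e_t,
# so the time-varying observation "matrix" is H_t = y_{t-1}.
rng = np.random.default_rng(1)
T = 2000
a_true = 0.5 + 0.3 * np.sin(np.linspace(0.0, 2.0 * np.pi, T))  # drifting coefficient
y = np.zeros(T)
for t in range(1, T):
    y[t] = a_true[t] * y[t - 1] + 0.1 * rng.standard_normal()

q, r = 1e-4, 0.01       # random-walk (state) and observation noise variances
a_hat, p = 0.0, 1.0     # coefficient estimate and its variance
estimates = np.zeros(T)
for t in range(1, T):
    p += q                              # predict: coefficient random walk
    h = y[t - 1]                        # observation matrix for this step
    k = p * h / (h * p * h + r)         # Kalman gain
    a_hat += k * (y[t] - h * a_hat)     # measurement update
    p *= (1.0 - k * h)
    estimates[t] = a_hat

rmse = float(np.sqrt(np.mean((estimates[500:] - a_true[500:]) ** 2)))
```

The CKF and GLKF of the paper generalize this to matrix-valued MVAR coefficients and, for the GLKF, stack multiple trials into one observation equation; the predict/update cycle is identical.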

  14. Processing and interpretation of aeromagnetic data for the Santa Cruz Basin - Patagonia Mountains area, south-central Arizona

    USGS Publications Warehouse

    Phillips, Jeffrey D.

    2002-01-01

    In 1997, the U.S. Geological Survey (USGS) contracted with Sial Geosciences Inc. for a detailed aeromagnetic survey of the Santa Cruz basin and Patagonia Mountains area of south-central Arizona. The contractor's Operational Report is included as an Appendix in this report. This section describes the data processing performed by the USGS on the digital aeromagnetic data received from the contractor. This processing was required in order to remove flight line noise, estimate the depths to the magnetic sources, and estimate the locations of the magnetic contacts. Three methods were used for estimating source depths and contact locations: the horizontal gradient method, the analytic signal method, and the local wavenumber method. The depth estimates resulting from each method are compared, and the contact locations are combined into an interpretative map showing the dip direction for some contacts.
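Of the three depth/contact methods mentioned, the horizontal gradient method is the simplest to sketch: for an idealized RTP anomaly of a vertical contact, f(x) = arctan(x/h), the horizontal gradient h/(x² + h²) peaks over the contact and its half-width at half-maximum equals the depth h. The profile below is synthetic, not the survey's data.

```python
import numpy as np

# Horizontal gradient method on an idealized contact anomaly:
# the gradient maximum locates the contact; the peak half-width gives depth.
h_true = 2.0                       # depth to top of the contact (km), assumed
x = np.linspace(-20.0, 20.0, 4001) # profile coordinate (km)
field = np.arctan(x / h_true)      # idealized RTP anomaly of a vertical contact

grad = np.gradient(field, x)       # horizontal gradient of the field
i_peak = int(np.argmax(grad))
contact_x = x[i_peak]              # contact location: gradient maximum

# Depth estimate: half-width at half-maximum of the gradient peak.
half_max = grad[i_peak] / 2.0
above = np.where(grad >= half_max)[0]
depth_est = (x[above[-1]] - x[above[0]]) / 2.0
```

The analytic signal and local wavenumber methods used in the report follow the same pattern (locate an extremum, relate its shape to depth) but build different derivative combinations of the field.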

  15. Oxygen, Neon, and Iron X-Ray Absorption in the Local Interstellar Medium

    NASA Technical Reports Server (NTRS)

    Gatuzz, Efrain; Garcia, Javier; Kallman, Timothy R.; Mendoza, Claudio

    2016-01-01

We present a detailed study of X-ray absorption in the local interstellar medium by analyzing the X-ray spectra of 24 galactic sources obtained with the Chandra High Energy Transmission Grating Spectrometer and the XMM-Newton Reflection Grating Spectrometer. Methods. By modeling the continuum with a simple broken power law and by implementing the new ISMabs X-ray absorption model, we have estimated the total H, O, Ne, and Fe column densities towards the observed sources. Results. We have determined the absorbing material distribution as a function of source distance and galactic latitude and longitude. Conclusions. Direct estimates of the fractions of neutral, singly ionized, and doubly ionized species of O, Ne, and Fe reveal the dominance of the cold component, thus indicating an overall low degree of ionization. Our results are expected to be sensitive to the model used to describe the continuum in all sources.

  16. An approach for estimating the magnetization direction of magnetic anomalies

    NASA Astrophysics Data System (ADS)

    Li, Jinpeng; Zhang, Yingtang; Yin, Gang; Fan, Hongbo; Li, Zhining

    2017-02-01

    An approach for estimating the magnetization direction of magnetic anomalies in the presence of remanent magnetization through correlation between normalized source strength (NSS) and reduced-to-the-pole (RTP) is proposed. The observation region was divided into several calculation areas and the RTP field was transformed using different assumed values of the magnetization directions. Following this, the cross-correlation between NSS and RTP field was calculated, and it was found that the correct magnetization direction was that corresponding to the maximum cross-correlation value. The approach was tested on both simulated and real magnetic data. The results showed that the approach was effective in a variety of situations and considerably reduced the effect of remanent magnetization. Thus, the method using NSS and RTP is more effective compared to other methods such as using the total magnitude anomaly and RTP.

  17. New constraints on the rupture process of the 1999 August 17 Izmit earthquake deduced from estimates of stress glut rate moments

    NASA Astrophysics Data System (ADS)

    Clévédé, E.; Bouin, M.-P.; Bukchin, B.; Mostinskiy, A.; Patau, G.

    2004-12-01

This paper illustrates the use of integral estimates given by the stress glut rate moments of total degree 2 for constraining the rupture scenario of a large earthquake in the particular case of the 1999 Izmit mainshock. We determine the integral estimates of the geometry, source duration and rupture propagation given by the stress glut rate moments of total degree 2 by inverting long-period surface wave (LPSW) amplitude spectra. Kinematic and static models of the Izmit earthquake published in the literature are quite different from one another. In order to extract the characteristic features of this event, we calculate the same integral estimates directly from those models and compare them with those deduced from our inversion. While the equivalent rupture zone and the eastward directivity are consistent among all models, the LPSW solution displays a strong unilateral character of the rupture associated with a short rupture duration that is not compatible with the solutions deduced from the published models. With the aim of understanding this discrepancy, we use simple equivalent kinematic models to reproduce the integral estimates of the considered rupture processes (including ours) by adjusting a few free parameters controlling the western and eastern parts of the rupture. We show that the joint analysis of the LPSW solution and source tomographies allows us to elucidate the scatter among the source processes published for this earthquake and to discriminate between the models. Our results strongly suggest that (1) there was significant moment release on the eastern segment of the activated fault system during the Izmit earthquake, and (2) the apparent rupture velocity decreased on this segment.

  18. Influence of Gridded Standoff Measurement Resolution on Numerical Bathymetric Inversion

    NASA Astrophysics Data System (ADS)

    Hesser, T.; Farthing, M. W.; Brodie, K.

    2016-02-01

The bathymetry from the surfzone to the shoreline undergoes frequent, active change due to wave energy interacting with the seafloor. Methodologies to measure bathymetry range from point-source in-situ instruments, vessel-mounted single-beam or multi-beam sonar surveys, and airborne bathymetric lidar, to inversion techniques based on standoff measurements of wave processes from video or radar imagery. Each type of measurement has unique sources of error and spatial and temporal resolution and availability. Numerical bathymetry estimation frameworks can use these disparate data types in combination with model-based inversion techniques to produce a "best estimate of bathymetry" at a given time. Understanding how the sources of error and varying spatial or temporal resolution of each data type affect the end result is critical for determining best practices and, in turn, increasing the accuracy of bathymetry estimation techniques. In this work, we consider an initial step in the development of a complete framework for estimating bathymetry in the nearshore by focusing on gridded standoff measurements and in-situ point observations in model-based inversion at the U.S. Army Corps of Engineers Field Research Facility in Duck, NC. The standoff measurement methods return wave parameters computed from the direct measurements using linear wave theory. These gridded datasets can vary in temporal and spatial resolution in ways that do not match the desired model parameters and therefore could reduce the accuracy of these methods. Specifically, we investigate the effect of numerical resolution on the accuracy of an Ensemble Kalman Filter bathymetric inversion technique in relation to the spatial and temporal resolution of the gridded standoff measurements. The accuracies of the bathymetric estimates are compared with both high-resolution Real Time Kinematic (RTK) single-beam surveys and alternative direct in-situ measurements from sonic altimeters.
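The Ensemble Kalman Filter update at the heart of such inversions can be sketched at a single grid point, with depth as the state and shallow-water wave celerity c = sqrt(g·h) as the (nonlinear) observation operator. All values are assumed; the real framework assimilates gridded wave parameters over a 2-D domain.

```python
import numpy as np

# Single-point stochastic EnKF update: correct a prior depth ensemble with
# one noisy wave-celerity observation via the shallow-water relation.
rng = np.random.default_rng(2)
g, h_true = 9.81, 4.0
c_obs = float(np.sqrt(g * h_true) + 0.05 * rng.standard_normal())  # observed celerity (m/s)
obs_var = 0.05 ** 2

# Prior ensemble: the uncertain "first guess" depth at one grid point.
ens = np.clip(rng.normal(6.0, 1.5, size=200), 0.1, None)

pred = np.sqrt(g * ens)                     # forecast observations, c = sqrt(g h)
cov_hc = np.cov(ens, pred)[0, 1]            # state-observation sample covariance
gain = cov_hc / (np.var(pred, ddof=1) + obs_var)
obs_pert = c_obs + np.sqrt(obs_var) * rng.standard_normal(ens.size)  # perturbed obs
ens_post = ens + gain * (obs_pert - pred)   # stochastic EnKF analysis step
h_est = float(ens_post.mean())
```

The ensemble statistics stand in for the linearized observation operator, which is what lets the update handle the nonlinear depth-celerity relation without explicit derivatives.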

  19. Extended reactance domain algorithms for DoA estimation onto an ESPAR antennas

    NASA Astrophysics Data System (ADS)

    Harabi, F.; Akkar, S.; Gharsallah, A.

    2016-07-01

Based on an extended reactance domain (RD) covariance matrix, this article proposes new alternatives for direction-of-arrival (DoA) estimation of narrowband sources through electronically steerable parasitic array radiator (ESPAR) antennas. Because of the centro-symmetry of the classic ESPAR antenna, a unitary transformation is applied to the collected data, which allows an important reduction in both computational cost and processing time as well as an enhancement of the resolution capabilities of the proposed algorithms. Moreover, this article proposes a new approach for eigenvalue estimation through only a few linear operations. The DoA estimation algorithms based on this new approach exhibit good behaviour with lower calculation cost and processing time compared to other schemes based on the classic eigenvalue approach. The conducted simulations demonstrate that high-precision and high-resolution DoA estimation can be achieved, especially for very closely spaced sources and low source power, as compared to the RD-MUSIC and RD-PM algorithms. The asymptotic behaviours of the proposed DoA estimators are analysed in various scenarios and compared with the Cramer-Rao bound (CRB). The conducted simulations attest to the high resolution of the developed algorithms and prove the efficiency of the proposed approach.

  20. Multiscale estimation of excess mass from gravity data

    NASA Astrophysics Data System (ADS)

    Castaldo, Raffaele; Fedi, Maurizio; Florio, Giovanni

    2014-06-01

We describe a multiscale method to estimate the excess mass of gravity anomaly sources, based on the theory of source moments. Using a multipole expansion of the potential field and considering only the data along the vertical direction, a system of linear equations is obtained. The choice of inverting data along a vertical profile can help us to reduce the interference effects due to nearby anomalies and allows a local estimate of the source parameters. A criterion is established for selecting the optimal highest altitude of the vertical profile data and the truncation order of the series expansion. The inversion provides an estimate of the total anomalous mass and of the depth to the centre of mass. The method has several advantages with respect to classical methods, such as the Gauss method: (i) we need just a 1-D inversion to obtain our estimates, the inverted data being sampled along a single vertical profile; (ii) the resolution may be straightforwardly enhanced by using vertical derivatives; (iii) the centre of mass is also estimated, besides the excess mass; (iv) the method is very robust versus noise; (v) the profile may be chosen in such a way as to minimize the effects of interfering anomalies or side effects due to a limited area extension. The multiscale estimation of excess mass method can be successfully used in various fields of application. Here, we analyse the gravity anomaly generated by a sulphide body in the Skelleftea ore district, North Sweden, obtaining source mass and volume estimates in agreement with the known information. We show also that these estimates are substantially improved with respect to those obtained with the classical approach.
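The classical Gauss method that this multiscale approach improves upon estimates the excess mass from the surface integral of the gravity anomaly, M = (1/2πG)∫gz dA, independent of source depth. A synthetic point-mass check, with hypothetical mass and depth, shows the idea and its main weakness (truncation of the survey area):

```python
import numpy as np

# Gauss-method excess-mass estimate on a synthetic point-mass anomaly.
G = 6.674e-11                        # gravitational constant (m^3 kg^-1 s^-2)
M_true = 1.0e9                       # excess mass (kg), hypothetical compact body
depth = 500.0                        # source depth (m), hypothetical

# Vertical gravity anomaly of a point mass on a finite measurement grid.
x = np.linspace(-10000.0, 10000.0, 401)
X, Y = np.meshgrid(x, x)
r2 = X ** 2 + Y ** 2 + depth ** 2
gz = G * M_true * depth / r2 ** 1.5  # vertical attraction (m/s^2)

# Surface integral over the grid; exact only for an infinite plane, so the
# finite survey area biases the estimate slightly low.
dA = (x[1] - x[0]) ** 2
M_est = float(gz.sum() * dA / (2.0 * np.pi * G))
```

The few-percent underestimate from the truncated integration area is exactly the kind of "limited area extension" side effect the paper's profile-based moment inversion is designed to mitigate.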

  1. Assessment of the Accountability of Night Vision Devices Provided to the Security Forces of Iraq

    DTIC Science & Technology

    2009-03-17

    of information is estimated to average 1 hour per response, including the time for reviewing instructions, searching existing data sources, gathering...and maintaining the data needed, and completing and reviewing the collection of information. Send comments regarding this burden estimate or any other... data in this project. The qualitative data consisted of individual interviews, direct observation, and written documents. Quantitative data

  2. Quantitative estimation of source complexity in tsunami-source inversion

    NASA Astrophysics Data System (ADS)

    Dettmer, Jan; Cummins, Phil R.; Hawkins, Rhys; Jakir Hossen, M.

    2016-04-01

    This work analyses tsunami waveforms to infer the spatiotemporal evolution of sea-surface displacement (the tsunami source) caused by earthquakes or other sources. Since the method considers sea-surface displacement directly, no assumptions about the fault or seafloor deformation are required. While this approach has no ability to study seismic aspects of rupture, it greatly simplifies the tsunami source estimation, making it much less dependent on subjective fault and deformation assumptions. This results in a more accurate sea-surface displacement evolution in the source region. The spatial discretization is by wavelet decomposition represented by a trans-D Bayesian tree structure. Wavelet coefficients are sampled by a reversible jump algorithm and additional coefficients are only included when required by the data. Therefore, source complexity is consistent with data information (parsimonious) and the method can adapt locally in both time and space. Since the source complexity is unknown and locally adapts, no regularization is required, resulting in more meaningful displacement magnitudes. By estimating displacement uncertainties in a Bayesian framework we can study the effect of parametrization choice on the source estimate. Uncertainty arises from observation errors and limitations in the parametrization to fully explain the observations. As a result, parametrization choice is closely related to uncertainty estimation and profoundly affects inversion results. Therefore, parametrization selection should be included in the inference process. Our inversion method is based on Bayesian model selection, a process which includes the choice of parametrization in the inference process and makes it data driven. A trans-dimensional (trans-D) model for the spatio-temporal discretization is applied here to include model selection naturally and efficiently in the inference by sampling probabilistically over parameterizations. 
The trans-D process results in better uncertainty estimates since the parametrization adapts parsimoniously (in both time and space) according to the local data resolving power, and the uncertainty about the parametrization choice is included in the uncertainty estimates. We apply the method to the tsunami waveforms recorded for the great 2011 Japan tsunami. All data are recorded on high-quality sensors (ocean-bottom pressure sensors, GPS gauges, and DART buoys). The sea-surface Green's functions are computed by JAGURS and include linear dispersion effects. By treating the noise level at each gauge as unknown, individual gauge contributions to the source estimate are appropriately and objectively weighted. The results show previously unreported detail of the source, quantify uncertainty spatially, and produce excellent data fits. The source estimate shows an elongated peak trench-ward from the hypocentre that closely follows the trench, indicating significant sea-floor deformation near the trench. Also notable is a bi-modal (negative to positive) displacement feature in the northern part of the source near the trench. The feature has ~2 m amplitude and is clearly resolved by the data with low uncertainties.

  3. Earthquake doublet that occurred in a pull-apart basin along the Sumatran fault and its seismotectonic implication

    NASA Astrophysics Data System (ADS)

    Nakano, M.; Kumagai, H.; Yamashina, T.; Inoue, H.; Toda, S.

    2007-12-01

On March 6, 2007, an earthquake doublet occurred around Lake Singkarak, central Sumatra, Indonesia. An earthquake with magnitude (Mw) 6.4 at 03:49 was followed two hours later (05:49) by a similar-size event (Mw 6.3). Lake Singkarak is located between the Sianok and Sumani fault segments of the Sumatran fault system and is a pull-apart basin formed at the segment boundary. We investigate the source processes of the earthquakes using waveform data obtained from JISNET, a broadband seismograph network in Indonesia. We first estimate the centroid source locations and focal mechanisms by waveform inversion carried out in the frequency domain. Since the stations are distributed almost linearly in the NW-SE direction, coincident with the Sumatran fault strike, the estimated centroid locations are not well resolved, especially in the direction orthogonal to the NW-SE direction. If we assume that these earthquakes occurred along the Sumatran fault, the first earthquake is located on the Sumani segment below Lake Singkarak and the second event is located a few tens of kilometers north of the first event on the Sianok segment. The focal mechanisms of both events point to almost identical right-lateral strike-slip vertical faulting, which is consistent with the geometry of the Sumatran fault system. We next investigate the rupture initiation points using the particle motions of the P-waves of these earthquakes observed at station PPI, which is located about 20 km north of Lake Singkarak. The initiation point of the first event is estimated to be north of the lake, which corresponds to the northern end of the Sumani segment. The initiation point of the second event is estimated at the southern end of the Sianok segment. The observed maximum amplitudes at stations located SE of the source region are larger for the first event than for the second one. 
On the other hand, the amplitudes at station BSI, located NW of the source region, are larger for the second event than for the first one. Since the magnitudes, focal mechanisms, and source locations are almost identical for the two events, the larger amplitudes for the second event at BSI may be due to the effect of rupture directivity. Accordingly, we obtain the following image of the source processes of the earthquake doublet: the first event initiated at the segment boundary and its rupture propagated along the Sumani segment in the SW direction. The second event, which may have been triggered by the first, initiated at a location close to the hypocenter of the first event, but its rupture propagated along the Sianok segment in the NE direction, opposite to the first event. The previous significant seismic activity along the Sianok and Sumani segments occurred in 1926 and was also an earthquake doublet with magnitudes similar to those in 2007. If we assume that the time interval between the earthquake doublets in 1926 and 2007 represents the average recurrence interval and that the typical slip in the individual earthquakes is 1 m, we obtain approximately 1 cm/year for the slip rate of the fault segments. Geological features indicate that Lake Singkarak is no more than a few million years old (Sieh and Natawidjaja, 2000, JGR). If the pull-apart basin has been created over a few million years at the estimated slip rate of the segments, we obtain roughly 20 km of total offset on the Sianok and Sumani segments, which is consistent with the observed offset. Our study supports the model of Sieh and Natawidjaja (2000) that the basin continues to be created by dextral slip on the en echelon Sumani and Sianok segments.

  4. Contaminant point source localization error estimates as functions of data quantity and model quality

    NASA Astrophysics Data System (ADS)

    Hansen, Scott K.; Vesselinov, Velimir V.

    2016-10-01

    We develop empirically-grounded error envelopes for localization of a point contamination release event in the saturated zone of a previously uncharacterized heterogeneous aquifer into which a number of plume-intercepting wells have been drilled. We assume that flow direction in the aquifer is known exactly and velocity is known to within a factor of two of our best guess from well observations prior to source identification. Other aquifer and source parameters must be estimated by interpretation of well breakthrough data via the advection-dispersion equation. We employ high performance computing to generate numerous random realizations of aquifer parameters and well locations, simulate well breakthrough data, and then employ unsupervised machine optimization techniques to estimate the most likely spatial (or space-time) location of the source. Tabulating the accuracy of these estimates from the multiple realizations, we relate the size of 90% and 95% confidence envelopes to the data quantity (number of wells) and model quality (fidelity of ADE interpretation model to actual concentrations in a heterogeneous aquifer with channelized flow). We find that for purely spatial localization of the contaminant source, increased data quantities can make up for reduced model quality. For space-time localization, we find similar qualitative behavior, but significantly degraded spatial localization reliability and less improvement from extra data collection. Since the space-time source localization problem is much more challenging, we also tried a multiple-initial-guess optimization strategy. This greatly enhanced performance, but gains from additional data collection remained limited.
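The estimation step, fitting an advection-dispersion model to well breakthrough data to locate the source, can be sketched in 1-D with a grid search in place of the paper's machine-optimization techniques. All parameter values are illustrative, and the heterogeneity central to the paper is ignored.

```python
import numpy as np

def ade(x, t, x0, v=1.0, D=0.5, M=1.0):
    """1-D ADE solution: concentration from an instantaneous point release at x0."""
    return M / np.sqrt(4.0 * np.pi * D * t) * np.exp(-(x - x0 - v * t) ** 2 / (4.0 * D * t))

# Synthetic breakthrough curves at three hypothetical monitoring wells.
rng = np.random.default_rng(3)
x0_true = 12.0
wells = np.array([20.0, 30.0, 45.0])
times = np.linspace(1.0, 60.0, 60)
obs = np.array([ade(w, times, x0_true) for w in wells])
obs += 0.002 * rng.standard_normal(obs.shape)   # measurement noise

# Spatial source localization: grid search minimizing the misfit to the
# breakthrough data (velocity and dispersion assumed known here).
candidates = np.linspace(0.0, 19.0, 381)
sse = [np.sum((np.array([ade(w, times, c) for w in wells]) - obs) ** 2)
       for c in candidates]
x0_est = float(candidates[int(np.argmin(sse))])
```

The paper's space-time problem adds the release time as a second unknown, which is what degrades localization reliability and motivates the multiple-initial-guess strategy.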

  5. Maximizing the spatial representativeness of NO2 monitoring data using a combination of local wind-based sectoral division and seasonal and diurnal correction factors.

    PubMed

    Donnelly, Aoife; Naughton, Owen; Misstear, Bruce; Broderick, Brian

    2016-10-14

This article describes a new methodology for increasing the spatial representativeness of individual monitoring sites. Air pollution levels at a given point are influenced by emission sources in the immediate vicinity. Since emission sources are rarely uniformly distributed around a site, concentration levels will inevitably be most affected by the sources in the prevailing upwind direction. The methodology provides a means of capturing this effect and providing additional information regarding source/pollution relationships. The methodology allows for the division of the air quality data from a given monitoring site into a number of sectors or wedges based on wind direction and estimation of annual mean values for each sector, thus optimising the information that can be obtained from a single monitoring station. The method corrects short-term data for diurnal and seasonal variations in concentrations (which can produce uneven weighting of data within each sector) and for the uneven frequency of wind directions. Significant improvements in correlations between the air quality data and the spatial air quality indicators were obtained after application of the correction factors. This suggests the application of these techniques would be of significant benefit in land-use regression modelling studies. Furthermore, the method was found to be very useful for estimating long-term mean values and wind direction sector values using only short-term monitoring data. The methods presented in this article can result in cost savings through minimising the number of monitoring sites required for air quality studies while also capturing a greater degree of variability in spatial characteristics. In this way, more reliable, but also more expensive, monitoring techniques can be used in preference to a higher number of low-cost but less reliable techniques.
The methods described in this article have applications in local air quality management, source receptor analysis, land-use regression mapping and modelling and population exposure studies.
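As a rough illustration of the sectoral-division idea (a simplified sketch, not the article's actual algorithm), the code below splits hourly concentrations into wind-direction sectors and applies a simple diurnal correction factor before averaging; the function name, the eight-sector choice, and the synthetic data are all assumptions.

```python
import numpy as np

def sector_means(conc, wind_dir, hour, n_sectors=8):
    """Directional mean concentrations with a simple diurnal correction.

    conc     : pollutant concentrations, one value per sample
    wind_dir : wind direction in degrees [0, 360)
    hour     : hour of day (0-23) for each sample
    """
    # Diurnal correction factor: overall mean / hourly mean, so samples
    # from systematically high (or low) hours are down- (or up-)weighted.
    overall = conc.mean()
    hourly = np.array([conc[hour == h].mean() for h in range(24)])
    corrected = conc * (overall / hourly)[hour]

    # Assign each sample to a 360/n_sectors-degree wind sector and average.
    edges = np.linspace(0.0, 360.0, n_sectors + 1)
    sector = np.digitize(wind_dir % 360.0, edges) - 1
    return np.array([corrected[sector == s].mean() for s in range(n_sectors)])

# Synthetic year of hourly data with a source due east of the monitor.
rng = np.random.default_rng(1)
hour = np.tile(np.arange(24), 365)
wind = rng.uniform(0.0, 360.0, hour.size)
east = np.minimum(np.abs(wind - 90.0), 360.0 - np.abs(wind - 90.0))
conc = (20.0 + 10.0 * np.sin(2 * np.pi * hour / 24)   # diurnal cycle
        + 15.0 * np.exp(-east**2 / (2 * 30.0**2))     # easterly source
        + rng.normal(0.0, 2.0, hour.size))
means = sector_means(conc, wind, hour)
```

With eight 45° sectors, the two sectors straddling due east (90°) should carry the highest corrected means, mirroring how the article's sector estimates point back toward upwind sources.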

  6. Comparative assessment of the global fate and transport pathways of long-chain perfluorocarboxylic acids (PFCAs) and perfluorocarboxylates (PFCs) emitted from direct sources.

    PubMed

    Armitage, James M; Macleod, Matthew; Cousins, Ian T

    2009-08-01

    A global-scale multispecies mass balance model was used to simulate the long-term fate and transport of perfluorocarboxylic acids (PFCAs) with eight to thirteen carbons (C8-C13) and their conjugate bases, the perfluorocarboxylates (PFCs). The main purpose of this study was to assess the relative long-range transport (LRT) potential of each conjugate pair, collectively termed PFC(A)s, considering emissions from direct sources (i.e., manufacturing and use) only. Overall LRT potential (atmospheric + oceanic) varied as a function of chain length and depended on assumptions regarding pKa and mode of entry. Atmospheric transport makes a relatively higher contribution to overall LRT potential for PFC(A)s with longer chain length, which reflects the increasing trend in the air-water partition coefficient (K(AW)) of the neutral PFCA species with chain length. Model scenarios using estimated direct emissions of the C8, C9, and C11 PFC(A)s indicate that the mass fluxes to the Arctic marine environment associated with oceanic transport are in excess of mass fluxes from indirect sources (i.e., atmospheric transport of precursor substances such as fluorotelomer alcohols and subsequent degradation to PFCAs). Modeled concentrations of C8 and C9 in the abiotic environment are broadly consistent with available monitoring data in surface ocean waters. Furthermore, the modeled concentration ratios of C8 to C9 are reconcilable with the homologue pattern frequently observed in biota, assuming a positive correlation between bioaccumulation potential and chain length. Modeled concentration ratios of C11 to C10 are more difficult to reconcile with monitoring data in both source and remote regions. Our model results for C11 and C10 therefore imply that either (i) indirect sources are dominant or (ii) estimates of direct emission are not accurate for these homologues.

  7. Anthropogenic combustion iron as a complex climate forcer.

    PubMed

    Matsui, Hitoshi; Mahowald, Natalie M; Moteki, Nobuhiro; Hamilton, Douglas S; Ohata, Sho; Yoshida, Atsushi; Koike, Makoto; Scanza, Rachel A; Flanner, Mark G

    2018-04-23

Atmospheric iron affects the global carbon cycle by modulating ocean biogeochemistry through the deposition of soluble iron to the ocean. Iron emitted by anthropogenic (fossil fuel) combustion is a source of soluble iron that is currently considered less important than other soluble iron sources, such as mineral dust and biomass burning. Here we show that the atmospheric burden of anthropogenic combustion iron is 8 times greater than previous estimates by incorporating recent measurements of anthropogenic magnetite into a global aerosol model. This new estimation increases the total deposition flux of soluble iron to southern oceans (30–90° S) by 52%, with a larger contribution of anthropogenic combustion iron than dust and biomass burning sources. The direct radiative forcing of anthropogenic magnetite is estimated to be 0.021 W m⁻² globally and 0.22 W m⁻² over East Asia. Our results demonstrate that anthropogenic combustion iron is a larger and more complex climate forcer than previously thought, and therefore plays a key role in the Earth system.

  8. Estimator banks: a new tool for direction-of-arrival estimation

    NASA Astrophysics Data System (ADS)

    Gershman, Alex B.; Boehme, Johann F.

    1997-10-01

A new powerful tool for improving the threshold performance of direction-of-arrival (DOA) estimation is considered. The essence of our approach is to reduce the number of outliers in the threshold domain using a so-called estimator bank containing multiple 'parallel' underlying DOA estimators, which are based on pseudorandom resampling of the MUSIC spatial spectrum for a given data batch or sample covariance matrix. To improve the threshold performance relative to conventional MUSIC, evolutionary principles are used, i.e., only 'successful' underlying estimators (having no failure in the preliminarily estimated source localization sectors) are exploited in the final estimate. An efficient beamspace root implementation of the estimator bank approach is developed, combined with the array interpolation technique, which enables application to arbitrary arrays. A higher-order extension of our approach is also presented, where the cumulant-based MUSIC estimator is exploited as a basic technique for spatial spectrum resampling. Simulations and experimental data processing show that our algorithm performs well below the MUSIC threshold, namely, it has threshold performance similar to that of the stochastic ML method. At the same time, the computational cost of our algorithm is much lower than that of stochastic ML because no multidimensional optimization is involved.
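The underlying MUSIC step that the estimator bank resamples can be sketched in a few lines. This is a standard textbook implementation for a uniform linear array, not the authors' beamspace-root or cumulant variants; the array size, element spacing, and two-source scenario are illustrative assumptions.

```python
import numpy as np

def music_spectrum(R, n_sources, grid_deg, d=0.5):
    """MUSIC pseudospectrum for an M-element uniform linear array.

    R         : (M, M) sample covariance matrix of the array output
    n_sources : assumed number of sources
    grid_deg  : candidate DOAs in degrees from broadside
    d         : element spacing in wavelengths
    """
    M = R.shape[0]
    # Noise subspace: eigenvectors of the M - n_sources smallest eigenvalues.
    _, eigvecs = np.linalg.eigh(R)
    En = eigvecs[:, : M - n_sources]
    theta = np.deg2rad(grid_deg)
    # Steering matrix, one column per candidate angle.
    steer = np.exp(2j * np.pi * d * np.outer(np.arange(M), np.sin(theta)))
    # Pseudospectrum: reciprocal of the projection onto the noise subspace.
    return 1.0 / np.linalg.norm(En.conj().T @ steer, axis=0) ** 2

# Two sources at -20 and 25 degrees, 8-element half-wavelength-spaced ULA.
rng = np.random.default_rng(0)
M, N = 8, 500
doas = np.deg2rad([-20.0, 25.0])
A = np.exp(2j * np.pi * 0.5 * np.outer(np.arange(M), np.sin(doas)))
S = rng.standard_normal((2, N)) + 1j * rng.standard_normal((2, N))
noise = 0.1 * (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N)))
X = A @ S + noise
R = X @ X.conj().T / N

grid = np.arange(-90.0, 90.5, 0.5)
P = music_spectrum(R, n_sources=2, grid_deg=grid)
# Keep the two strongest local maxima of the pseudospectrum as DOA estimates.
locs = np.where((P[1:-1] > P[:-2]) & (P[1:-1] > P[2:]))[0] + 1
est = np.sort(grid[locs[np.argsort(P[locs])[-2:]]])
```

The estimator bank idea then amounts to perturbing this pseudospectrum pseudorandomly many times and retaining only the runs whose peaks stay inside preliminarily estimated localization sectors.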

  9. Measuring cosmic shear and birefringence using resolved radio sources

    NASA Astrophysics Data System (ADS)

    Whittaker, Lee; Battye, Richard A.; Brown, Michael L.

    2018-02-01

We develop a new method of extracting simultaneous measurements of weak lensing shear and a local rotation of the plane of polarization using observations of resolved radio sources. The basis of the method is an assumption that the direction of the polarization is statistically linked with that of the gradient of the total intensity field. Using a number of sources spread over the sky, this method will allow constraints to be placed on cosmic shear and birefringence, and it can be applied to any resolved radio sources for which such a correlation exists. Assuming that the rotation and shear are constant across the source, we use this relationship to construct a quadratic estimator and investigate its properties using simulated observations. We develop a calibration scheme using simulations based on the observed images to mitigate a bias which occurs in the presence of measurement errors and an astrophysical scatter on the polarization. The method is applied directly to archival data of radio galaxies, where we measure a mean rotation signal of $\omega = -2.02^{\circ} \pm 0.75^{\circ}$ and an average shear compatible with zero using 30 reliable sources. This level of constraint on an overall rotation is comparable with current leading constraints from CMB experiments and is expected to improve by at least an order of magnitude with future high-precision radio surveys, such as those performed by the SKA. We also measure the shear and rotation two-point correlation functions and estimate the number of sources required to detect shear and rotation correlations in future surveys.

  10. Data Sources for Prioritizing Human Exposure to Chemicals

    EPA Science Inventory

    Humans may be exposed to thousands of chemicals through contact in the workplace, home, and via air, water, food, and soil. A major challenge is estimating chemical exposures, which requires understanding potential exposure pathways directly related to how chemicals are used. Wit...

  11. Multiple Source DF (Direction Finding) Signal Processing: An Experimental System,

    DTIC Science & Technology

    The MUltiple SIgnal Characterization (MUSIC) algorithm is an implementation of the Signal Subspace Approach to provide parameter estimates of... the signal subspace (obtained from the received data) and the array manifold (obtained via array calibration). The MUSIC algorithm has been

  12. A Direction Finding Method with A 3-D Array Based on Aperture Synthesis

    NASA Astrophysics Data System (ADS)

    Li, Shiwen; Chen, Liangbing; Gao, Zhaozhao; Ma, Wenfeng

    2018-01-01

    Direction finding for electronic warfare applications should provide as wide a field of view as possible, but the maximum unambiguous field of view of conventional direction-finding methods is a hemisphere: they cannot distinguish the direction of arrival of signals from the back lobe of the array. In this paper, a full 3-D direction finding method based on aperture synthesis radiometry is proposed. The model of the direction finding system is illustrated, and the fundamentals are presented. The relationship between the measurement outputs of a 3-D array and the 3-D power distribution of the point sources can be represented by a 3-D Fourier transform, so the 3-D power distribution of the point sources can be reconstructed by an inverse 3-D Fourier transform. To display the 3-D power distribution conveniently, the whole spherical distribution is represented by two 2-D circular images, one for the upper hemisphere and one for the lower hemisphere. A numerical simulation is designed and conducted to demonstrate the feasibility of the method. The results show that the method correctly estimates arbitrary directions of arrival of signals in 3-D space.
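The Fourier relationship the method relies on can be demonstrated with a toy inversion. The grid size and source placements below are arbitrary assumptions, and a real aperture-synthesis system samples the transform on a finite, irregular set of baselines rather than a complete grid.

```python
import numpy as np

# Toy illustration: treat the correlation outputs of a 3-D array as
# samples of the 3-D Fourier transform of the source power distribution,
# then recover that distribution with an inverse 3-D FFT.
N = 16
power = np.zeros((N, N, N))
power[3, 12, 7] = 1.0    # point source 1 (arbitrary position and strength)
power[10, 2, 14] = 0.5   # point source 2, weaker

visibilities = np.fft.fftn(power)          # the "measured" correlations
reconstruction = np.fft.ifftn(visibilities).real

# The strongest reconstructed voxel recovers the dominant source.
peak = np.unravel_index(reconstruction.argmax(), reconstruction.shape)
```

With incomplete baseline coverage the reconstruction would instead be a "dirty" image requiring deconvolution, which is where the paper's two-hemisphere display of the recovered spherical power distribution comes in.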

  13. Direct medical cost of overweight and obesity in the United States: a quantitative systematic review

    PubMed Central

    Tsai, Adam Gilden; Williamson, David F.; Glick, Henry A.

    2010-01-01

    Objectives To estimate per-person and aggregate direct medical costs of overweight and obesity and to examine the effect of study design factors. Methods PubMed (1968–2009), EconLit (1969–2009), and Business Source Premier (1995–2009) were searched for original studies. Results were standardized to compute the incremental cost per overweight person and per obese person, and to compute the national aggregate cost. Results A total of 33 U.S. studies met review criteria. Among the 4 highest-quality studies, the 2008 per-person direct medical cost of overweight was $266 and of obesity was $1723. The aggregate national cost of overweight and obesity combined was $113.9 billion. Study design factors that affected cost estimates included: use of national samples versus more selected populations; age groups examined; inclusion of all medical costs versus obesity-related costs only; and BMI cutoffs for defining overweight and obesity. Conclusions Depending on the source of total national health care expenditures used, the direct medical cost of overweight and obesity combined is approximately 5.0% to 10% of U.S. health care spending. Future studies should include nationally representative samples, evaluate adults of all ages, report all medical costs, and use standard BMI cutoffs. PMID:20059703

  14. Exploring the observational constraints on the simulation of brown carbon

    NASA Astrophysics Data System (ADS)

    Wang, X.; Heald, C. L.; Liu, J.; Weber, R. J.; Campuzano-Jost, P.; Jimenez, J. L.; Schwarz, J. P.; Perring, A. E.

    2017-12-01

    Brown carbon (BrC) is the component of organic aerosols (OA) which strongly absorbs solar radiation in the near-UV range of the spectrum. However, the sources, evolution, and optical properties of BrC remain highly uncertain, and therefore constitute a large source of uncertainty in estimating the global direct radiative effect (DRE) of aerosols. Previous modeling studies of BrC optical properties and DRE have been unable to fully evaluate the skill of their simulations, given the lack of direct measurements of organic aerosol absorption. In this study, we develop a global model simulation (GEOS-Chem) of BrC and test it against BrC absorption measurements from two aircraft campaigns in the U.S. (SEAC4RS and DC3). To our knowledge, this is the first study to compare simulated BrC absorption with direct, continuous ambient measurements. We show that the laboratory-based BrC absorption properties from biomass burning overestimate the aircraft measurements of ambient BrC. In addition, applying a photochemical whitening scheme to the simulated BrC better reproduces the observed BrC absorption. These observations are consistent with a mass absorption coefficient (MAC) of freshly emitted biomass burning OA of 0.57 m² g⁻¹. Using the RRTMG model integrated with GEOS-Chem, we estimate that the all-sky top-of-atmosphere DRE of OA is −0.350 W m⁻², 10% higher than that without consideration of BrC absorption. Therefore, our best estimate of the absorption DRE of BrC is +0.042 W m⁻². We suggest that the DRE of BrC has been overestimated previously due to the lack of observational constraints from direct measurements as well as neglect of the effects of photochemical whitening.

  15. Direction dependent Love and Rayleigh wave noise characteristics using multiple arrays across Europe

    NASA Astrophysics Data System (ADS)

    Juretzek, Carina; Perleth, Magdalena; Hadziioannou, Celine

    2016-04-01

    Seismic noise has become an important signal source for tomography and monitoring purposes. Better understanding of the noise field characteristics is crucial to further improve noise applications. Our knowledge about common and different origins of Love and Rayleigh waves in the microseism band is still limited. This applies in particular to constraints on source locations and source mechanisms of Love waves. Here, 3-component beamforming is used to distinguish between the differently polarized wave types in the primary and secondary microseism noise field recorded at several arrays across Europe. We compare characteristics of Love and Rayleigh wave noise, such as source directions and frequency content. Further, Love-to-Rayleigh wave ratios are measured and a dependence on direction is found, especially in the primary microseism band. Estimates of the kinetic energy density ratios suggest a dominance of coherent Love waves in the primary, but not in the secondary, microseism band. The seasonality of the noise field characteristics is examined using a full year of data in 2013 and is found to be stable.

  16. Azimuthally Differential Pion Femtoscopy in Pb-Pb Collisions at √s_NN = 2.76 TeV

    NASA Astrophysics Data System (ADS)

    Adamová, D.; Aggarwal, M. M.; Aglieri Rinella, G.; Agnello, M.; Agrawal, N.; Ahammed, Z.; Ahmad, S.; Ahn, S. U.; Aiola, S.; Akindinov, A.; Alam, S. N.; Albuquerque, D. S. D.; Aleksandrov, D.; Alessandro, B.; Alexandre, D.; Alfaro Molina, R.; Alici, A.; Alkin, A.; Alme, J.; Alt, T.; Altinpinar, S.; Altsybeev, I.; Alves Garcia Prado, C.; An, M.; Andrei, C.; Andrews, H. A.; Andronic, A.; Anguelov, V.; Anson, C.; Antičić, T.; Antinori, F.; Antonioli, P.; Anwar, R.; Aphecetche, L.; Appelshäuser, H.; Arcelli, S.; Arnaldi, R.; Arnold, O. W.; Arsene, I. C.; Arslandok, M.; Audurier, B.; Augustinus, A.; Averbeck, R.; Azmi, M. D.; Badalà, A.; Baek, Y. W.; Bagnasco, S.; Bailhache, R.; Bala, R.; Baldisseri, A.; Ball, M.; Baral, R. C.; Barbano, A. M.; Barbera, R.; Barile, F.; Barioglio, L.; Barnaföldi, G. G.; Barnby, L. S.; Barret, V.; Bartalini, P.; Barth, K.; Bartke, J.; Bartsch, E.; Basile, M.; Bastid, N.; Basu, S.; Bathen, B.; Batigne, G.; Batista Camejo, A.; Batyunya, B.; Batzing, P. C.; Bearden, I. G.; Beck, H.; Bedda, C.; Behera, N. K.; Belikov, I.; Bellini, F.; Bello Martinez, H.; Bellwied, R.; Beltran, L. G. E.; Belyaev, V.; Bencedi, G.; Beole, S.; Bercuci, A.; Berdnikov, Y.; Berenyi, D.; Bertens, R. A.; Berzano, D.; Betev, L.; Bhasin, A.; Bhat, I. R.; Bhati, A. K.; Bhattacharjee, B.; Bhom, J.; Bianchi, L.; Bianchi, N.; Bianchin, C.; Bielčík, J.; Bielčíková, J.; Bilandzic, A.; Biro, G.; Biswas, R.; Biswas, S.; Blair, J. T.; Blau, D.; Blume, C.; Boca, G.; Bock, F.; Bogdanov, A.; Boldizsár, L.; Bombara, M.; Bonomi, G.; Bonora, M.; Book, J.; Borel, H.; Borissov, A.; Borri, M.; Botta, E.; Bourjau, C.; Braun-Munzinger, P.; Bregant, M.; Broker, T. A.; Browning, T. A.; Broz, M.; Brucken, E. J.; Bruna, E.; Bruno, G. E.; Budnikov, D.; Buesching, H.; Bufalino, S.; Buhler, P.; Buitron, S. A. I.; Buncic, P.; Busch, O.; Buthelezi, Z.; Butt, J. B.; Buxton, J. T.; Cabala, J.; Caffarri, D.; Caines, H.; Caliva, A.; Calvo Villar, E.; Camerini, P.; Capon, A. 
A.; Carena, F.; Carena, W.; Carnesecchi, F.; Castillo Castellanos, J.; Castro, A. J.; Casula, E. A. R.; Ceballos Sanchez, C.; Cerello, P.; Chang, B.; Chapeland, S.; Chartier, M.; Charvet, J. L.; Chattopadhyay, S.; Chattopadhyay, S.; Chauvin, A.; Cherney, M.; Cheshkov, C.; Cheynis, B.; Chibante Barroso, V.; Chinellato, D. D.; Cho, S.; Chochula, P.; Choi, K.; Chojnacki, M.; Choudhury, S.; Christakoglou, P.; Christensen, C. H.; Christiansen, P.; Chujo, T.; Chung, S. U.; Cicalo, C.; Cifarelli, L.; Cindolo, F.; Cleymans, J.; Colamaria, F.; Colella, D.; Collu, A.; Colocci, M.; Conesa Balbastre, G.; Conesa Del Valle, Z.; Connors, M. E.; Contreras, J. G.; Cormier, T. M.; Corrales Morales, Y.; Cortés Maldonado, I.; Cortese, P.; Cosentino, M. R.; Costa, F.; Costanza, S.; Crkovská, J.; Crochet, P.; Cuautle, E.; Cunqueiro, L.; Dahms, T.; Dainese, A.; Danisch, M. C.; Danu, A.; Das, D.; Das, I.; Das, S.; Dash, A.; Dash, S.; de, S.; de Caro, A.; de Cataldo, G.; de Conti, C.; de Cuveland, J.; de Falco, A.; de Gruttola, D.; De Marco, N.; de Pasquale, S.; de Souza, R. D.; Degenhardt, H. F.; Deisting, A.; Deloff, A.; Deplano, C.; Dhankher, P.; di Bari, D.; di Mauro, A.; di Nezza, P.; di Ruzza, B.; Diaz Corchero, M. A.; Dietel, T.; Dillenseger, P.; Divià, R.; Djuvsland, Ø.; Dobrin, A.; Domenicis Gimenez, D.; Dönigus, B.; Dordic, O.; Drozhzhova, T.; Dubey, A. K.; Dubla, A.; Ducroux, L.; Duggal, A. K.; Dupieux, P.; Ehlers, R. J.; Elia, D.; Endress, E.; Engel, H.; Epple, E.; Erazmus, B.; Erhardt, F.; Espagnon, B.; Esumi, S.; Eulisse, G.; Eum, J.; Evans, D.; Evdokimov, S.; Fabbietti, L.; Fabris, D.; Faivre, J.; Fantoni, A.; Fasel, M.; Feldkamp, L.; Feliciello, A.; Feofilov, G.; Ferencei, J.; Fernández Téllez, A.; Ferreiro, E. G.; Ferretti, A.; Festanti, A.; Feuillard, V. J. G.; Figiel, J.; Figueredo, M. A. S.; Filchagin, S.; Finogeev, D.; Fionda, F. M.; Fiore, E. M.; Floris, M.; Foertsch, S.; Foka, P.; Fokin, S.; Fragiacomo, E.; Francescon, A.; Francisco, A.; Frankenfeld, U.; Fronze, G. 
G.; Fuchs, U.; Furget, C.; Furs, A.; Fusco Girard, M.; Gaardhøje, J. J.; Gagliardi, M.; Gago, A. M.; Gajdosova, K.; Gallio, M.; Galvan, C. D.; Gangadharan, D. R.; Ganoti, P.; Gao, C.; Garabatos, C.; Garcia-Solis, E.; Garg, K.; Garg, P.; Gargiulo, C.; Gasik, P.; Gauger, E. F.; Gay Ducati, M. B.; Germain, M.; Ghosh, P.; Ghosh, S. K.; Gianotti, P.; Giubellino, P.; Giubilato, P.; Gladysz-Dziadus, E.; Glässel, P.; Goméz Coral, D. M.; Gomez Ramirez, A.; Gonzalez, A. S.; Gonzalez, V.; González-Zamora, P.; Gorbunov, S.; Görlich, L.; Gotovac, S.; Grabski, V.; Graczykowski, L. K.; Graham, K. L.; Gramling, J. L.; Greiner, L.; Grelli, A.; Grigoras, C.; Grigoriev, V.; Grigoryan, A.; Grigoryan, S.; Grion, N.; Gronefeld, J. M.; Grosa, F.; Grosse-Oetringhaus, J. F.; Grosso, R.; Gruber, L.; Grull, F. R.; Guber, F.; Guernane, R.; Guerzoni, B.; Gulbrandsen, K.; Gunji, T.; Gupta, A.; Gupta, R.; Guzman, I. B.; Haake, R.; Hadjidakis, C.; Hamagaki, H.; Hamar, G.; Hamon, J. C.; Harris, J. W.; Harton, A.; Hatzifotiadou, D.; Hayashi, S.; Heckel, S. T.; Hellbär, E.; Helstrup, H.; Herghelegiu, A.; Herrera Corral, G.; Herrmann, F.; Hess, B. A.; Hetland, K. F.; Hillemanns, H.; Hippolyte, B.; Hladky, J.; Horak, D.; Hosokawa, R.; Hristov, P.; Hughes, C.; Humanic, T. J.; Hussain, N.; Hussain, T.; Hutter, D.; Hwang, D. S.; Ilkaev, R.; Inaba, M.; Ippolitov, M.; Irfan, M.; Isakov, V.; Islam, M. S.; Ivanov, M.; Ivanov, V.; Izucheev, V.; Jacak, B.; Jacazio, N.; Jacobs, P. M.; Jadhav, M. B.; Jadlovska, S.; Jadlovsky, J.; Jahnke, C.; Jakubowska, M. J.; Janik, M. A.; Jayarathna, P. H. S. Y.; Jena, C.; Jena, S.; Jercic, M.; Jimenez Bustamante, R. T.; Jones, P. G.; Jusko, A.; Kalinak, P.; Kalweit, A.; Kang, J. H.; Kaplin, V.; Kar, S.; Karasu Uysal, A.; Karavichev, O.; Karavicheva, T.; Karayan, L.; Karpechev, E.; Kebschull, U.; Keidel, R.; Keijdener, D. L. D.; Keil, M.; Ketzer, B.; Mohisin Khan, M.; Khan, P.; Khan, S. A.; Khanzadeev, A.; Kharlov, Y.; Khatun, A.; Khuntia, A.; Kielbowicz, M. 
M.; Kileng, B.; Kim, D. W.; Kim, D. J.; Kim, D.; Kim, H.; Kim, J. S.; Kim, J.; Kim, M.; Kim, M.; Kim, S.; Kim, T.; Kirsch, S.; Kisel, I.; Kiselev, S.; Kisiel, A.; Kiss, G.; Klay, J. L.; Klein, C.; Klein, J.; Klein-Bösing, C.; Klewin, S.; Kluge, A.; Knichel, M. L.; Knospe, A. G.; Kobdaj, C.; Kofarago, M.; Kollegger, T.; Kolojvari, A.; Kondratiev, V.; Kondratyeva, N.; Kondratyuk, E.; Konevskikh, A.; Kopcik, M.; Kour, M.; Kouzinopoulos, C.; Kovalenko, O.; Kovalenko, V.; Kowalski, M.; Koyithatta Meethaleveedu, G.; Králik, I.; Kravčáková, A.; Krivda, M.; Krizek, F.; Kryshen, E.; Krzewicki, M.; Kubera, A. M.; Kučera, V.; Kuhn, C.; Kuijer, P. G.; Kumar, A.; Kumar, J.; Kumar, L.; Kumar, S.; Kundu, S.; Kurashvili, P.; Kurepin, A.; Kurepin, A. B.; Kuryakin, A.; Kushpil, S.; Kweon, M. J.; Kwon, Y.; La Pointe, S. L.; La Rocca, P.; Lagana Fernandes, C.; Lakomov, I.; Langoy, R.; Lapidus, K.; Lara, C.; Lardeux, A.; Lattuca, A.; Laudi, E.; Lavicka, R.; Lazaridis, L.; Lea, R.; Leardini, L.; Lee, S.; Lehas, F.; Lehner, S.; Lehrbach, J.; Lemmon, R. C.; Lenti, V.; Leogrande, E.; León Monzón, I.; Lévai, P.; Li, S.; Li, X.; Lien, J.; Lietava, R.; Lindal, S.; Lindenstruth, V.; Lippmann, C.; Lisa, M. A.; Litichevskyi, V.; Ljunggren, H. M.; Llope, W. J.; Lodato, D. F.; Loggins, V. R.; Loenne, P. I.; Loginov, V.; Loizides, C.; Loncar, P.; Lopez, X.; López Torres, E.; Lowe, A.; Luettig, P.; Lunardon, M.; Luparello, G.; Lupi, M.; Lutz, T. H.; Maevskaya, A.; Mager, M.; Mahajan, S.; Mahmood, S. M.; Maire, A.; Majka, R. D.; Malaev, M.; Maldonado Cervantes, I.; Malinina, L.; Mal'Kevich, D.; Malzacher, P.; Mamonov, A.; Manko, V.; Manso, F.; Manzari, V.; Mao, Y.; Marchisone, M.; Mareš, J.; Margagliotti, G. V.; Margotti, A.; Margutti, J.; Marín, A.; Markert, C.; Marquard, M.; Martin, N. A.; Martinengo, P.; Martinez, J. A. L.; Martínez, M. I.; Martínez García, G.; Martinez Pedreira, M.; Mas, A.; Masciocchi, S.; Masera, M.; Masoni, A.; Mastroserio, A.; Mathis, A. 
M.; Matyja, A.; Mayer, C.; Mazer, J.; Mazzilli, M.; Mazzoni, M. A.; Meddi, F.; Melikyan, Y.; Menchaca-Rocha, A.; Meninno, E.; Mercado Pérez, J.; Meres, M.; Mhlanga, S.; Miake, Y.; Mieskolainen, M. M.; Mihaylov, D.; Mikhaylov, K.; Milano, L.; Milosevic, J.; Mischke, A.; Mishra, A. N.; Miśkowiec, D.; Mitra, J.; Mitu, C. M.; Mohammadi, N.; Mohanty, B.; Montes, E.; Moreira de Godoy, D. A.; Moreno, L. A. P.; Moretto, S.; Morreale, A.; Morsch, A.; Muccifora, V.; Mudnic, E.; Mühlheim, D.; Muhuri, S.; Mukherjee, M.; Mulligan, J. D.; Munhoz, M. G.; Münning, K.; Munzer, R. H.; Murakami, H.; Murray, S.; Musa, L.; Musinsky, J.; Myers, C. J.; Naik, B.; Nair, R.; Nandi, B. K.; Nania, R.; Nappi, E.; Naru, M. U.; Natal da Luz, H.; Nattrass, C.; Navarro, S. R.; Nayak, K.; Nayak, R.; Nayak, T. K.; Nazarenko, S.; Nedosekin, A.; Negrao de Oliveira, R. A.; Nellen, L.; Nesbo, S. V.; Ng, F.; Nicassio, M.; Niculescu, M.; Niedziela, J.; Nielsen, B. S.; Nikolaev, S.; Nikulin, S.; Nikulin, V.; Noferini, F.; Nomokonov, P.; Nooren, G.; Noris, J. C. C.; Norman, J.; Nyanin, A.; Nystrand, J.; Oeschler, H.; Oh, S.; Ohlson, A.; Okubo, T.; Olah, L.; Oleniacz, J.; Oliveira da Silva, A. C.; Oliver, M. H.; Onderwaater, J.; Oppedisano, C.; Orava, R.; Oravec, M.; Ortiz Velasquez, A.; Oskarsson, A.; Otwinowski, J.; Oyama, K.; Ozdemir, M.; Pachmayer, Y.; Pacik, V.; Pagano, D.; Pagano, P.; Paić, G.; Pal, S. K.; Palni, P.; Pan, J.; Pandey, A. K.; Panebianco, S.; Papikyan, V.; Pappalardo, G. S.; Pareek, P.; Park, J.; Park, W. J.; Parmar, S.; Passfeld, A.; Pathak, S. P.; Paticchio, V.; Patra, R. N.; Paul, B.; Pei, H.; Peitzmann, T.; Peng, X.; Pereira, L. G.; Pereira da Costa, H.; Peresunko, D.; Perez Lezama, E.; Peskov, V.; Pestov, Y.; Petráček, V.; Petrov, V.; Petrovici, M.; Petta, C.; Pezzi, R. P.; Piano, S.; Pikna, M.; Pillot, P.; Pimentel, L. O. D. L.; Pinazza, O.; Pinsky, L.; Piyarathna, D. B.; Płoskoń, M.; Planinic, M.; Pluta, J.; Pochybova, S.; Podesta-Lerma, P. L. M.; Poghosyan, M. 
G.; Polichtchouk, B.; Poljak, N.; Poonsawat, W.; Pop, A.; Poppenborg, H.; Porteboeuf-Houssais, S.; Porter, J.; Pospisil, J.; Pozdniakov, V.; Prasad, S. K.; Preghenella, R.; Prino, F.; Pruneau, C. A.; Pshenichnov, I.; Puccio, M.; Puddu, G.; Pujahari, P.; Punin, V.; Putschke, J.; Qvigstad, H.; Rachevski, A.; Raha, S.; Rajput, S.; Rak, J.; Rakotozafindrabe, A.; Ramello, L.; Rami, F.; Rana, D. B.; Raniwala, R.; Raniwala, S.; Räsänen, S. S.; Rascanu, B. T.; Rathee, D.; Ratza, V.; Ravasenga, I.; Read, K. F.; Redlich, K.; Rehman, A.; Reichelt, P.; Reidt, F.; Ren, X.; Renfordt, R.; Reolon, A. R.; Reshetin, A.; Reygers, K.; Riabov, V.; Ricci, R. A.; Richert, T.; Richter, M.; Riedler, P.; Riegler, W.; Riggi, F.; Ristea, C.; Rodríguez Cahuantzi, M.; Røed, K.; Rogochaya, E.; Rohr, D.; Röhrich, D.; Rokita, P. S.; Ronchetti, F.; Ronflette, L.; Rosnet, P.; Rossi, A.; Rotondi, A.; Roukoutakis, F.; Roy, A.; Roy, C.; Roy, P.; Rubio Montero, A. J.; Rui, R.; Russo, R.; Rustamov, A.; Ryabinkin, E.; Ryabov, Y.; Rybicki, A.; Saarinen, S.; Sadhu, S.; Sadovsky, S.; Šafařík, K.; Saha, S. K.; Sahlmuller, B.; Sahoo, B.; Sahoo, P.; Sahoo, R.; Sahoo, S.; Sahu, P. K.; Saini, J.; Sakai, S.; Saleh, M. A.; Salzwedel, J.; Sambyal, S.; Samsonov, V.; Sandoval, A.; Sarkar, D.; Sarkar, N.; Sarma, P.; Sas, M. H. P.; Scapparone, E.; Scarlassara, F.; Scharenberg, R. P.; Scheid, H. S.; Schiaua, C.; Schicker, R.; Schmidt, C.; Schmidt, H. R.; Schmidt, M. O.; Schmidt, M.; Schukraft, J.; Schutz, Y.; Schwarz, K.; Schweda, K.; Scioli, G.; Scomparin, E.; Scott, R.; Šefčík, M.; Seger, J. E.; Sekiguchi, Y.; Sekihata, D.; Selyuzhenkov, I.; Senosi, K.; Senyukov, S.; Serradilla, E.; Sett, P.; Sevcenco, A.; Shabanov, A.; Shabetai, A.; Shadura, O.; Shahoyan, R.; Shangaraev, A.; Sharma, A.; Sharma, A.; Sharma, M.; Sharma, M.; Sharma, N.; Sheikh, A. I.; Shigaki, K.; Shou, Q.; Shtejer, K.; Sibiriak, Y.; Siddhanta, S.; Sielewicz, K. 
M.; Siemiarczuk, T.; Silvermyr, D.; Silvestre, C.; Simatovic, G.; Simonetti, G.; Singaraju, R.; Singh, R.; Singhal, V.; Sinha, T.; Sitar, B.; Sitta, M.; Skaali, T. B.; Slupecki, M.; Smirnov, N.; Snellings, R. J. M.; Snellman, T. W.; Song, J.; Song, M.; Soramel, F.; Sorensen, S.; Sozzi, F.; Spiriti, E.; Sputowska, I.; Srivastava, B. K.; Stachel, J.; Stan, I.; Stankus, P.; Stenlund, E.; Stiller, J. H.; Stocco, D.; Strmen, P.; Suaide, A. A. P.; Sugitate, T.; Suire, C.; Suleymanov, M.; Suljic, M.; Sultanov, R.; Šumbera, M.; Sumowidagdo, S.; Suzuki, K.; Swain, S.; Szabo, A.; Szarka, I.; Szczepankiewicz, A.; Szymanski, M.; Tabassam, U.; Takahashi, J.; Tambave, G. J.; Tanaka, N.; Tarhini, M.; Tariq, M.; Tarzila, M. G.; Tauro, A.; Tejeda Muñoz, G.; Telesca, A.; Terasaki, K.; Terrevoli, C.; Teyssier, B.; Thakur, D.; Thakur, S.; Thomas, D.; Tieulent, R.; Tikhonov, A.; Timmins, A. R.; Toia, A.; Tripathy, S.; Trogolo, S.; Trombetta, G.; Trubnikov, V.; Trzaska, W. H.; Trzeciak, B. A.; Tsuji, T.; Tumkin, A.; Turrisi, R.; Tveter, T. S.; Ullaland, K.; Umaka, E. N.; Uras, A.; Usai, G. L.; Utrobicic, A.; Vala, M.; van der Maarel, J.; van Hoorne, J. W.; van Leeuwen, M.; Vanat, T.; Vande Vyvre, P.; Varga, D.; Vargas, A.; Vargyas, M.; Varma, R.; Vasileiou, M.; Vasiliev, A.; Vauthier, A.; Vázquez Doce, O.; Vechernin, V.; Veen, A. M.; Velure, A.; Vercellin, E.; Vergara Limón, S.; Vernet, R.; Vértesi, R.; Vickovic, L.; Vigolo, S.; Viinikainen, J.; Vilakazi, Z.; Villalobos Baillie, O.; Villatoro Tello, A.; Vinogradov, A.; Vinogradov, L.; Virgili, T.; Vislavicius, V.; Vodopyanov, A.; Völkl, M. A.; Voloshin, K.; Voloshin, S. A.; Volpe, G.; von Haller, B.; Vorobyev, I.; Voscek, D.; Vranic, D.; Vrláková, J.; Wagner, B.; Wagner, J.; Wang, H.; Wang, M.; Watanabe, D.; Watanabe, Y.; Weber, M.; Weber, S. G.; Weiser, D. F.; Wessels, J. P.; Westerhoff, U.; Whitehead, A. M.; Wiechula, J.; Wikne, J.; Wilk, G.; Wilkinson, J.; Willems, G. A.; Williams, M. C. S.; Windelband, B.; Witt, W. 
E.; Yalcin, S.; Yang, P.; Yano, S.; Yin, Z.; Yokoyama, H.; Yoo, I.-K.; Yoon, J. H.; Yurchenko, V.; Zaccolo, V.; Zaman, A.; Zampolli, C.; Zanoli, H. J. C.; Zaporozhets, S.; Zardoshti, N.; Zarochentsev, A.; Závada, P.; Zaviyalov, N.; Zbroszczyk, H.; Zhalov, M.; Zhang, H.; Zhang, X.; Zhang, Y.; Zhang, C.; Zhang, Z.; Zhao, C.; Zhigareva, N.; Zhou, D.; Zhou, Y.; Zhou, Z.; Zhu, H.; Zhu, J.; Zhu, X.; Zichichi, A.; Zimmermann, A.; Zimmermann, M. B.; Zimmermann, S.; Zinovjev, G.; Zmeskal, J.; Alice Collaboration

    2017-06-01

    We present the first azimuthally differential measurements of the pion source size relative to the second harmonic event plane in Pb-Pb collisions at a center-of-mass energy per nucleon-nucleon pair of √s_NN = 2.76 TeV. The measurements have been performed in the centrality range 0%-50% and for pion pair transverse momenta 0.2

  17. Azimuthally Differential Pion Femtoscopy in Pb-Pb Collisions at √s_NN = 2.76 TeV.

    PubMed

    Adamová, D; Aggarwal, M M; Aglieri Rinella, G; Agnello, M; Agrawal, N; Ahammed, Z; Ahmad, S; Ahn, S U; Aiola, S; Akindinov, A; Alam, S N; Albuquerque, D S D; Aleksandrov, D; Alessandro, B; Alexandre, D; Alfaro Molina, R; Alici, A; Alkin, A; Alme, J; Alt, T; Altinpinar, S; Altsybeev, I; Alves Garcia Prado, C; An, M; Andrei, C; Andrews, H A; Andronic, A; Anguelov, V; Anson, C; Antičić, T; Antinori, F; Antonioli, P; Anwar, R; Aphecetche, L; Appelshäuser, H; Arcelli, S; Arnaldi, R; Arnold, O W; Arsene, I C; Arslandok, M; Audurier, B; Augustinus, A; Averbeck, R; Azmi, M D; Badalà, A; Baek, Y W; Bagnasco, S; Bailhache, R; Bala, R; Baldisseri, A; Ball, M; Baral, R C; Barbano, A M; Barbera, R; Barile, F; Barioglio, L; Barnaföldi, G G; Barnby, L S; Barret, V; Bartalini, P; Barth, K; Bartke, J; Bartsch, E; Basile, M; Bastid, N; Basu, S; Bathen, B; Batigne, G; Batista Camejo, A; Batyunya, B; Batzing, P C; Bearden, I G; Beck, H; Bedda, C; Behera, N K; Belikov, I; Bellini, F; Bello Martinez, H; Bellwied, R; Beltran, L G E; Belyaev, V; Bencedi, G; Beole, S; Bercuci, A; Berdnikov, Y; Berenyi, D; Bertens, R A; Berzano, D; Betev, L; Bhasin, A; Bhat, I R; Bhati, A K; Bhattacharjee, B; Bhom, J; Bianchi, L; Bianchi, N; Bianchin, C; Bielčík, J; Bielčíková, J; Bilandzic, A; Biro, G; Biswas, R; Biswas, S; Blair, J T; Blau, D; Blume, C; Boca, G; Bock, F; Bogdanov, A; Boldizsár, L; Bombara, M; Bonomi, G; Bonora, M; Book, J; Borel, H; Borissov, A; Borri, M; Botta, E; Bourjau, C; Braun-Munzinger, P; Bregant, M; Broker, T A; Browning, T A; Broz, M; Brucken, E J; Bruna, E; Bruno, G E; Budnikov, D; Buesching, H; Bufalino, S; Buhler, P; Buitron, S A I; Buncic, P; Busch, O; Buthelezi, Z; Butt, J B; Buxton, J T; Cabala, J; Caffarri, D; Caines, H; Caliva, A; Calvo Villar, E; Camerini, P; Capon, A A; Carena, F; Carena, W; Carnesecchi, F; Castillo Castellanos, J; Castro, A J; Casula, E A R; Ceballos Sanchez, C; Cerello, P; Chang, B; Chapeland, S; Chartier, M; Charvet, J L; Chattopadhyay, S; 
Chattopadhyay, S; Chauvin, A; Cherney, M; Cheshkov, C; Cheynis, B; Chibante Barroso, V; Chinellato, D D; Cho, S; Chochula, P; Choi, K; Chojnacki, M; Choudhury, S; Christakoglou, P; Christensen, C H; Christiansen, P; Chujo, T; Chung, S U; Cicalo, C; Cifarelli, L; Cindolo, F; Cleymans, J; Colamaria, F; Colella, D; Collu, A; Colocci, M; Conesa Balbastre, G; Conesa Del Valle, Z; Connors, M E; Contreras, J G; Cormier, T M; Corrales Morales, Y; Cortés Maldonado, I; Cortese, P; Cosentino, M R; Costa, F; Costanza, S; Crkovská, J; Crochet, P; Cuautle, E; Cunqueiro, L; Dahms, T; Dainese, A; Danisch, M C; Danu, A; Das, D; Das, I; Das, S; Dash, A; Dash, S; De, S; De Caro, A; de Cataldo, G; de Conti, C; de Cuveland, J; De Falco, A; De Gruttola, D; De Marco, N; De Pasquale, S; De Souza, R D; Degenhardt, H F; Deisting, A; Deloff, A; Deplano, C; Dhankher, P; Di Bari, D; Di Mauro, A; Di Nezza, P; Di Ruzza, B; Diaz Corchero, M A; Dietel, T; Dillenseger, P; Divià, R; Djuvsland, Ø; Dobrin, A; Domenicis Gimenez, D; Dönigus, B; Dordic, O; Drozhzhova, T; Dubey, A K; Dubla, A; Ducroux, L; Duggal, A K; Dupieux, P; Ehlers, R J; Elia, D; Endress, E; Engel, H; Epple, E; Erazmus, B; Erhardt, F; Espagnon, B; Esumi, S; Eulisse, G; Eum, J; Evans, D; Evdokimov, S; Fabbietti, L; Fabris, D; Faivre, J; Fantoni, A; Fasel, M; Feldkamp, L; Feliciello, A; Feofilov, G; Ferencei, J; Fernández Téllez, A; Ferreiro, E G; Ferretti, A; Festanti, A; Feuillard, V J G; Figiel, J; Figueredo, M A S; Filchagin, S; Finogeev, D; Fionda, F M; Fiore, E M; Floris, M; Foertsch, S; Foka, P; Fokin, S; Fragiacomo, E; Francescon, A; Francisco, A; Frankenfeld, U; Fronze, G G; Fuchs, U; Furget, C; Furs, A; Fusco Girard, M; Gaardhøje, J J; Gagliardi, M; Gago, A M; Gajdosova, K; Gallio, M; Galvan, C D; Gangadharan, D R; Ganoti, P; Gao, C; Garabatos, C; Garcia-Solis, E; Garg, K; Garg, P; Gargiulo, C; Gasik, P; Gauger, E F; Gay Ducati, M B; Germain, M; Ghosh, P; Ghosh, S K; Gianotti, P; Giubellino, P; Giubilato, P; Gladysz-Dziadus, 
E; Glässel, P; Goméz Coral, D M; Gomez Ramirez, A; Gonzalez, A S; Gonzalez, V; González-Zamora, P; Gorbunov, S; Görlich, L; Gotovac, S; Grabski, V; Graczykowski, L K; Graham, K L; Gramling, J L; Greiner, L; Grelli, A; Grigoras, C; Grigoriev, V; Grigoryan, A; Grigoryan, S; Grion, N; Gronefeld, J M; Grosa, F; Grosse-Oetringhaus, J F; Grosso, R; Gruber, L; Grull, F R; Guber, F; Guernane, R; Guerzoni, B; Gulbrandsen, K; Gunji, T; Gupta, A; Gupta, R; Guzman, I B; Haake, R; Hadjidakis, C; Hamagaki, H; Hamar, G; Hamon, J C; Harris, J W; Harton, A; Hatzifotiadou, D; Hayashi, S; Heckel, S T; Hellbär, E; Helstrup, H; Herghelegiu, A; Herrera Corral, G; Herrmann, F; Hess, B A; Hetland, K F; Hillemanns, H; Hippolyte, B; Hladky, J; Horak, D; Hosokawa, R; Hristov, P; Hughes, C; Humanic, T J; Hussain, N; Hussain, T; Hutter, D; Hwang, D S; Ilkaev, R; Inaba, M; Ippolitov, M; Irfan, M; Isakov, V; Islam, M S; Ivanov, M; Ivanov, V; Izucheev, V; Jacak, B; Jacazio, N; Jacobs, P M; Jadhav, M B; Jadlovska, S; Jadlovsky, J; Jahnke, C; Jakubowska, M J; Janik, M A; Jayarathna, P H S Y; Jena, C; Jena, S; Jercic, M; Jimenez Bustamante, R T; Jones, P G; Jusko, A; Kalinak, P; Kalweit, A; Kang, J H; Kaplin, V; Kar, S; Karasu Uysal, A; Karavichev, O; Karavicheva, T; Karayan, L; Karpechev, E; Kebschull, U; Keidel, R; Keijdener, D L D; Keil, M; Ketzer, B; Mohisin Khan, M; Khan, P; Khan, S A; Khanzadeev, A; Kharlov, Y; Khatun, A; Khuntia, A; Kielbowicz, M M; Kileng, B; Kim, D W; Kim, D J; Kim, D; Kim, H; Kim, J S; Kim, J; Kim, M; Kim, M; Kim, S; Kim, T; Kirsch, S; Kisel, I; Kiselev, S; Kisiel, A; Kiss, G; Klay, J L; Klein, C; Klein, J; Klein-Bösing, C; Klewin, S; Kluge, A; Knichel, M L; Knospe, A G; Kobdaj, C; Kofarago, M; Kollegger, T; Kolojvari, A; Kondratiev, V; Kondratyeva, N; Kondratyuk, E; Konevskikh, A; Kopcik, M; Kour, M; Kouzinopoulos, C; Kovalenko, O; Kovalenko, V; Kowalski, M; Koyithatta Meethaleveedu, G; Králik, I; Kravčáková, A; Krivda, M; Krizek, F; Kryshen, E; Krzewicki, M; Kubera, A M; 
Kučera, V; Kuhn, C; Kuijer, P G; Kumar, A; Kumar, J; Kumar, L; Kumar, S; Kundu, S; Kurashvili, P; Kurepin, A; Kurepin, A B; Kuryakin, A; Kushpil, S; Kweon, M J; Kwon, Y; La Pointe, S L; La Rocca, P; Lagana Fernandes, C; Lakomov, I; Langoy, R; Lapidus, K; Lara, C; Lardeux, A; Lattuca, A; Laudi, E; Lavicka, R; Lazaridis, L; Lea, R; Leardini, L; Lee, S; Lehas, F; Lehner, S; Lehrbach, J; Lemmon, R C; Lenti, V; Leogrande, E; León Monzón, I; Lévai, P; Li, S; Li, X; Lien, J; Lietava, R; Lindal, S; Lindenstruth, V; Lippmann, C; Lisa, M A; Litichevskyi, V; Ljunggren, H M; Llope, W J; Lodato, D F; Loggins, V R; Loenne, P I; Loginov, V; Loizides, C; Loncar, P; Lopez, X; López Torres, E; Lowe, A; Luettig, P; Lunardon, M; Luparello, G; Lupi, M; Lutz, T H; Maevskaya, A; Mager, M; Mahajan, S; Mahmood, S M; Maire, A; Majka, R D; Malaev, M; Maldonado Cervantes, I; Malinina, L; Mal'Kevich, D; Malzacher, P; Mamonov, A; Manko, V; Manso, F; Manzari, V; Mao, Y; Marchisone, M; Mareš, J; Margagliotti, G V; Margotti, A; Margutti, J; Marín, A; Markert, C; Marquard, M; Martin, N A; Martinengo, P; Martinez, J A L; Martínez, M I; Martínez García, G; Martinez Pedreira, M; Mas, A; Masciocchi, S; Masera, M; Masoni, A; Mastroserio, A; Mathis, A M; Matyja, A; Mayer, C; Mazer, J; Mazzilli, M; Mazzoni, M A; Meddi, F; Melikyan, Y; Menchaca-Rocha, A; Meninno, E; Mercado Pérez, J; Meres, M; Mhlanga, S; Miake, Y; Mieskolainen, M M; Mihaylov, D; Mikhaylov, K; Milano, L; Milosevic, J; Mischke, A; Mishra, A N; Miśkowiec, D; Mitra, J; Mitu, C M; Mohammadi, N; Mohanty, B; Montes, E; Moreira De Godoy, D A; Moreno, L A P; Moretto, S; Morreale, A; Morsch, A; Muccifora, V; Mudnic, E; Mühlheim, D; Muhuri, S; Mukherjee, M; Mulligan, J D; Munhoz, M G; Münning, K; Munzer, R H; Murakami, H; Murray, S; Musa, L; Musinsky, J; Myers, C J; Naik, B; Nair, R; Nandi, B K; Nania, R; Nappi, E; Naru, M U; Natal da Luz, H; Nattrass, C; Navarro, S R; Nayak, K; Nayak, R; Nayak, T K; Nazarenko, S; Nedosekin, A; Negrao De Oliveira, R 
A; Nellen, L; Nesbo, S V; Ng, F; Nicassio, M; Niculescu, M; Niedziela, J; Nielsen, B S; Nikolaev, S; Nikulin, S; Nikulin, V; Noferini, F; Nomokonov, P; Nooren, G; Noris, J C C; Norman, J; Nyanin, A; Nystrand, J; Oeschler, H; Oh, S; Ohlson, A; Okubo, T; Olah, L; Oleniacz, J; Oliveira Da Silva, A C; Oliver, M H; Onderwaater, J; Oppedisano, C; Orava, R; Oravec, M; Ortiz Velasquez, A; Oskarsson, A; Otwinowski, J; Oyama, K; Ozdemir, M; Pachmayer, Y; Pacik, V; Pagano, D; Pagano, P; Paić, G; Pal, S K; Palni, P; Pan, J; Pandey, A K; Panebianco, S; Papikyan, V; Pappalardo, G S; Pareek, P; Park, J; Park, W J; Parmar, S; Passfeld, A; Pathak, S P; Paticchio, V; Patra, R N; Paul, B; Pei, H; Peitzmann, T; Peng, X; Pereira, L G; Pereira Da Costa, H; Peresunko, D; Perez Lezama, E; Peskov, V; Pestov, Y; Petráček, V; Petrov, V; Petrovici, M; Petta, C; Pezzi, R P; Piano, S; Pikna, M; Pillot, P; Pimentel, L O D L; Pinazza, O; Pinsky, L; Piyarathna, D B; Płoskoń, M; Planinic, M; Pluta, J; Pochybova, S; Podesta-Lerma, P L M; Poghosyan, M G; Polichtchouk, B; Poljak, N; Poonsawat, W; Pop, A; Poppenborg, H; Porteboeuf-Houssais, S; Porter, J; Pospisil, J; Pozdniakov, V; Prasad, S K; Preghenella, R; Prino, F; Pruneau, C A; Pshenichnov, I; Puccio, M; Puddu, G; Pujahari, P; Punin, V; Putschke, J; Qvigstad, H; Rachevski, A; Raha, S; Rajput, S; Rak, J; Rakotozafindrabe, A; Ramello, L; Rami, F; Rana, D B; Raniwala, R; Raniwala, S; Räsänen, S S; Rascanu, B T; Rathee, D; Ratza, V; Ravasenga, I; Read, K F; Redlich, K; Rehman, A; Reichelt, P; Reidt, F; Ren, X; Renfordt, R; Reolon, A R; Reshetin, A; Reygers, K; Riabov, V; Ricci, R A; Richert, T; Richter, M; Riedler, P; Riegler, W; Riggi, F; Ristea, C; Rodríguez Cahuantzi, M; Røed, K; Rogochaya, E; Rohr, D; Röhrich, D; Rokita, P S; Ronchetti, F; Ronflette, L; Rosnet, P; Rossi, A; Rotondi, A; Roukoutakis, F; Roy, A; Roy, C; Roy, P; Rubio Montero, A J; Rui, R; Russo, R; Rustamov, A; Ryabinkin, E; Ryabov, Y; Rybicki, A; Saarinen, S; Sadhu, S; Sadovsky, S; 
Šafařík, K; Saha, S K; Sahlmuller, B; Sahoo, B; Sahoo, P; Sahoo, R; Sahoo, S; Sahu, P K; Saini, J; Sakai, S; Saleh, M A; Salzwedel, J; Sambyal, S; Samsonov, V; Sandoval, A; Sarkar, D; Sarkar, N; Sarma, P; Sas, M H P; Scapparone, E; Scarlassara, F; Scharenberg, R P; Scheid, H S; Schiaua, C; Schicker, R; Schmidt, C; Schmidt, H R; Schmidt, M O; Schmidt, M; Schukraft, J; Schutz, Y; Schwarz, K; Schweda, K; Scioli, G; Scomparin, E; Scott, R; Šefčík, M; Seger, J E; Sekiguchi, Y; Sekihata, D; Selyuzhenkov, I; Senosi, K; Senyukov, S; Serradilla, E; Sett, P; Sevcenco, A; Shabanov, A; Shabetai, A; Shadura, O; Shahoyan, R; Shangaraev, A; Sharma, A; Sharma, A; Sharma, M; Sharma, M; Sharma, N; Sheikh, A I; Shigaki, K; Shou, Q; Shtejer, K; Sibiriak, Y; Siddhanta, S; Sielewicz, K M; Siemiarczuk, T; Silvermyr, D; Silvestre, C; Simatovic, G; Simonetti, G; Singaraju, R; Singh, R; Singhal, V; Sinha, T; Sitar, B; Sitta, M; Skaali, T B; Slupecki, M; Smirnov, N; Snellings, R J M; Snellman, T W; Song, J; Song, M; Soramel, F; Sorensen, S; Sozzi, F; Spiriti, E; Sputowska, I; Srivastava, B K; Stachel, J; Stan, I; Stankus, P; Stenlund, E; Stiller, J H; Stocco, D; Strmen, P; Suaide, A A P; Sugitate, T; Suire, C; Suleymanov, M; Suljic, M; Sultanov, R; Šumbera, M; Sumowidagdo, S; Suzuki, K; Swain, S; Szabo, A; Szarka, I; Szczepankiewicz, A; Szymanski, M; Tabassam, U; Takahashi, J; Tambave, G J; Tanaka, N; Tarhini, M; Tariq, M; Tarzila, M G; Tauro, A; Tejeda Muñoz, G; Telesca, A; Terasaki, K; Terrevoli, C; Teyssier, B; Thakur, D; Thakur, S; Thomas, D; Tieulent, R; Tikhonov, A; Timmins, A R; Toia, A; Tripathy, S; Trogolo, S; Trombetta, G; Trubnikov, V; Trzaska, W H; Trzeciak, B A; Tsuji, T; Tumkin, A; Turrisi, R; Tveter, T S; Ullaland, K; Umaka, E N; Uras, A; Usai, G L; Utrobicic, A; Vala, M; Van Der Maarel, J; Van Hoorne, J W; van Leeuwen, M; Vanat, T; Vande Vyvre, P; Varga, D; Vargas, A; Vargyas, M; Varma, R; Vasileiou, M; Vasiliev, A; Vauthier, A; Vázquez Doce, O; Vechernin, V; Veen, A M; 
Velure, A; Vercellin, E; Vergara Limón, S; Vernet, R; Vértesi, R; Vickovic, L; Vigolo, S; Viinikainen, J; Vilakazi, Z; Villalobos Baillie, O; Villatoro Tello, A; Vinogradov, A; Vinogradov, L; Virgili, T; Vislavicius, V; Vodopyanov, A; Völkl, M A; Voloshin, K; Voloshin, S A; Volpe, G; von Haller, B; Vorobyev, I; Voscek, D; Vranic, D; Vrláková, J; Wagner, B; Wagner, J; Wang, H; Wang, M; Watanabe, D; Watanabe, Y; Weber, M; Weber, S G; Weiser, D F; Wessels, J P; Westerhoff, U; Whitehead, A M; Wiechula, J; Wikne, J; Wilk, G; Wilkinson, J; Willems, G A; Williams, M C S; Windelband, B; Witt, W E; Yalcin, S; Yang, P; Yano, S; Yin, Z; Yokoyama, H; Yoo, I-K; Yoon, J H; Yurchenko, V; Zaccolo, V; Zaman, A; Zampolli, C; Zanoli, H J C; Zaporozhets, S; Zardoshti, N; Zarochentsev, A; Závada, P; Zaviyalov, N; Zbroszczyk, H; Zhalov, M; Zhang, H; Zhang, X; Zhang, Y; Zhang, C; Zhang, Z; Zhao, C; Zhigareva, N; Zhou, D; Zhou, Y; Zhou, Z; Zhu, H; Zhu, J; Zhu, X; Zichichi, A; Zimmermann, A; Zimmermann, M B; Zimmermann, S; Zinovjev, G; Zmeskal, J

    2017-06-02

    We present the first azimuthally differential measurements of the pion source size relative to the second harmonic event plane in Pb-Pb collisions at a center-of-mass energy per nucleon-nucleon pair of √s_NN = 2.76 TeV. The measurements have been performed in the centrality range 0%-50% and for pion pair transverse momenta 0.2
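
    The kind of extraction this record describes, the second-harmonic oscillation of a squared source radius versus the pair emission angle relative to the event plane, can be sketched numerically. The bin values below are illustrative, not ALICE data; the decomposition R²(Δφ) = R²₀ + 2R²₂cos(2Δφ) is the standard form for such analyses.

```python
import numpy as np

# Hypothetical squared side radii (fm^2) in bins of the pair emission
# angle relative to the second-harmonic event plane, Psi_2.
dphi = np.array([0.0, np.pi/4, np.pi/2, 3*np.pi/4])   # bin centres (rad)
r2   = np.array([18.5, 16.0, 13.4, 16.1])             # R_side^2 per bin

# Decompose R^2(dphi) = R2_0 + 2*R2_2*cos(2*dphi) using discrete
# estimators; cos(2*dphi) sums to zero over these bins, so the constant
# and the second harmonic are orthogonal.
r2_0 = np.mean(r2)
r2_2 = np.mean(r2 * np.cos(2 * dphi)) / np.mean(np.cos(2 * dphi)**2) / 2

print(r2_0, r2_2)
```

    A positive R²₂ for the sideward radius, as in this toy input, is the signature of an out-of-plane extended source.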

  18. And the first one now will later be last: Time-reversal in Cormack-Jolly-Seber models

    USGS Publications Warehouse

    Nichols, James D.

    2016-01-01

    The models of Cormack, Jolly and Seber (CJS) are remarkable in providing a rich set of inferences about population survival, recruitment, abundance and even sampling probabilities from a seemingly limited data source: a matrix of 1's and 0's reflecting animal captures and recaptures at multiple sampling occasions. Survival and sampling probabilities are estimated directly in CJS models, whereas estimators for recruitment and abundance were initially obtained as derived quantities. Various investigators have noted that just as standard modeling provides direct inferences about survival, reversing the time order of capture history data permits direct modeling and inference about recruitment. Here we review the development of reverse-time modeling efforts, emphasizing the kinds of inferences and questions to which they seem well suited.
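
    The time-reversal idea the record reviews can be sketched in a few lines: reversing each capture history turns a forward-time survival analysis into a reverse-time recruitment analysis. The capture histories below are illustrative rows of 1s and 0s over five sampling occasions, not data from the paper.

```python
# Illustrative capture-history matrix: one row per animal, one column
# per sampling occasion (1 = captured, 0 = not captured).
histories = [
    [1, 0, 1, 1, 0],
    [0, 1, 1, 0, 0],
    [0, 0, 1, 1, 1],
]

# Reverse the time order of every capture history.
reversed_histories = [row[::-1] for row in histories]

# Forward-time CJS conditions on each animal's first capture; after
# reversal, the "first capture" is the animal's last capture in real
# time, so the same machinery now models entry (recruitment) instead
# of survival.
first_capture_fwd = [row.index(1) for row in histories]
first_capture_rev = [row.index(1) for row in reversed_histories]

print(reversed_histories[0], first_capture_fwd[0], first_capture_rev[0])
```

    The reversed matrix can then be fed to any standard CJS fitting routine; the "survival" probabilities it returns are interpreted as seniority parameters related to recruitment.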

  19. Direct and Indirect Costs of Asthma Management in Greece: An Expert Panel Approach.

    PubMed

    Souliotis, Kyriakos; Kousoulakou, Hara; Hillas, Georgios; Bakakos, Petros; Toumbis, Michalis; Loukides, Stelios; Vassilakopoulos, Theodoros

    2017-01-01

    Asthma is a major cause of morbidity and mortality and is associated with significant economic burden worldwide. The objectives of this study were to map current resource use associated with the management of the disease and to estimate the annual direct and indirect costs per adult patient with asthma. A Delphi panel with seven leading pulmonologists was conducted. A semistructured questionnaire was developed to elicit data on resource use and treatment patterns. Unit costs from official, published sources were subsequently assigned to resource use to estimate direct medical costs. Indirect costs were estimated as the number of work days lost. The cost base year was 2015, and the perspectives adopted were that of the National Organization of Health Care Services Provision and the societal one. Patients with asthma are mainly managed by pulmonologists (71.4%) and secondarily by general practitioners and internists (28.6%). The annual cost of managing exacerbations was estimated at €273.1, while maintenance costs were estimated at €1,100.2 per year. Total costs of managing asthma per patient per year were estimated at €2,281.8, 64.4% of which represented direct medical costs. Of the direct costs, pharmaceutical treatment was the key driver, accounting for 63.9 and 41.2% of direct and total costs, respectively. Direct non-medical costs (patient travel and waiting time) were estimated at €152.3. Indirect costs accounted for 28.9% of total costs. Asthma is a chronic condition whose management constrains the already limited Greek health care resources. The increasing prevalence of the disease raises concerns, as it could turn per-patient costs into a significant burden for the Greek health care system. Thus, the prevention, self-management, and improved quality of care for asthma should find a place in the health policy agenda in Greece.
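
    As a quick arithmetic cross-check, the cost components quoted in the abstract can be recomposed; this sketch uses only the figures reported there (EUR, base year 2015) and confirms they are internally consistent to rounding.

```python
# Figures quoted in the abstract (per patient per year, EUR).
total = 2281.8
direct_medical_share = 0.644   # 64.4% of total
indirect_share = 0.289         # 28.9% of total
direct_nonmedical = 152.3      # patient travel and waiting time

direct_medical = direct_medical_share * total   # ~1469.5
indirect = indirect_share * total               # ~659.4

# The three components should approximately sum back to the total.
recomposed = direct_medical + direct_nonmedical + indirect
print(round(direct_medical, 1), round(indirect, 1), round(recomposed, 1))
```

    The recomposed total differs from the reported €2,281.8 only by rounding of the quoted percentages.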

  20. Earthquake Directivity, Orientation, and Stress Drop Within the Subducting Plate at the Hikurangi Margin, New Zealand

    NASA Astrophysics Data System (ADS)

    Abercrombie, Rachel E.; Poli, Piero; Bannister, Stephen

    2017-12-01

    We develop an approach to calculate earthquake source directivity and rupture velocity for small earthquakes, using the whole source time function rather than just an estimate of the duration. We apply the method to an aftershock sequence within the subducting plate beneath North Island, New Zealand, and investigate its resolution. We use closely located, highly correlated empirical Green's function (EGF) events to obtain source time functions (STFs) for this well-recorded sequence. We stack the STFs from multiple EGFs at each station, to improve the stability of the STFs. Eleven earthquakes (M 3.3-4.5) have sufficient azimuthal coverage, and both P and S STFs, to investigate directivity. The time axis of each STF in turn is stretched to find the maximum correlation between all pairs of stations. We then invert for the orientation and rupture velocity of both unilateral and bilateral line sources that best match the observations. We determine whether they are distinguishable and investigate the effects of limited frequency bandwidth. Rupture orientations are resolvable for eight earthquakes, seven of which are predominantly unilateral, and all are consistent with rupture on planes similar to the main shock fault plane. Purely unilateral rupture is rarely distinguishable from asymmetric bilateral rupture, despite a good station distribution. Synthetic testing shows that rupture velocity is the least well-resolved parameter; estimates decrease with loss of high-frequency energy, and measurements are best considered minimum values. We see no correlation between rupture velocity and stress drop, and spatial stress drop variation cannot be explained as an artifact of varying rupture velocity.
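
    The inversion step described above can be illustrated with the standard far-field relation for a unilateral line source, where the apparent source duration at azimuth θ is τ(θ) = τ_r(1 − (v_r/c)cos(θ − θ₀)), with τ_r = L/v_r. The grid search and the station data below are a synthetic sketch, not the authors' implementation.

```python
import numpy as np

# Synthetic apparent durations at 12 azimuths for a unilateral rupture
# toward 60 degrees with Mach ratio v_r/c = 0.7 and tau_r = 0.20 s.
theta = np.deg2rad(np.arange(0, 360, 30))
obs = 0.20 * (1 - 0.7 * np.cos(theta - np.deg2rad(60)))

# Brute-force grid search over rupture azimuth, Mach ratio, and tau_r.
best = None
for th0 in np.deg2rad(np.arange(0, 360, 5)):
    for mach in np.arange(0.1, 0.95, 0.05):
        for tau_r in np.arange(0.05, 0.5, 0.01):
            pred = tau_r * (1 - mach * np.cos(theta - th0))
            misfit = np.sum((obs - pred) ** 2)
            if best is None or misfit < best[0]:
                best = (misfit, np.rad2deg(th0), mach, tau_r)

misfit, th0_deg, mach, tau_r = best
print(round(th0_deg), round(mach, 2), round(tau_r, 2))
```

    With noise-free data the search recovers the input parameters; the paper's observation that rupture velocity is the least well-resolved parameter corresponds to the shallow misfit valley along the Mach-ratio axis once noise and bandwidth limits are added.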

  1. Projecting the potential evapotranspiration by coupling different formulations and input data reliabilities: The possible uncertainty source for climate change impacts on hydrological regime

    NASA Astrophysics Data System (ADS)

    Wang, Weiguang; Li, Changni; Xing, Wanqiu; Fu, Jianyu

    2017-12-01

    Representing the atmospheric evaporating capability of a hypothetical reference surface, potential evapotranspiration (PET) sets the upper limit of actual evapotranspiration and is an important input to hydrological models. Because present climate models do not give direct estimates of PET when the hydrological response to future climate change is simulated, PET must be estimated first and is subject to uncertainty arising from the many existing formulae and the differing reliabilities of their input data. Using four PET estimation approaches, i.e., the more physically based Penman (PN) equation with less reliable input variables, the more empirical, radiation-based Priestley-Taylor (PT) equation with relatively dependable downscaled data, the simplest, temperature-based Hamon (HM) equation with the most reliable downscaled variable, and PET downscaled directly by a statistical downscaling model, this paper investigated the differences in runoff projections caused by the alternative PET methods using a well-calibrated abcd monthly hydrological model. Three catchments, i.e., the Luanhe River Basin, the Source Region of the Yellow River and the Ganjiang River Basin, representing a large climatic diversity, were chosen as examples to illustrate this issue. The results indicated that although the four methods gave similar monthly patterns of PET over the period 2021-2050 for each catchment, the magnitudes of PET still differed slightly, especially in spring and summer months in the Luanhe River Basin and the Source Region of the Yellow River, which have relatively dry climates. The apparent discrepancies in the magnitude of change in future runoff, and even in the direction of change for summer months in the Luanhe River Basin and spring months in the Source Region of the Yellow River, indicated that PET-method-related uncertainty occurred, especially in the Luanhe River Basin and the Source Region of the Yellow River, which have smaller aridity indices. 
Moreover, the possible reasons for the discrepancies in uncertainty among the three catchments were discussed quantitatively through a contribution analysis based on the climatic elasticity method. This study can provide a useful reference for comprehensively understanding the impacts of climate change on the hydrological regime and thus for improving regional strategies for future water resource management.
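
    Of the four approaches the record compares, the temperature-based Hamon equation is simple enough to sketch directly. The coefficients below follow one widely used parameterisation (saturated vapour density from air temperature, scaled by daylength); the paper does not state which exact variant it used, so treat this as an assumption.

```python
import math

def hamon_pet(temp_c, daylength_h):
    """One common form of the temperature-based Hamon PET (mm/day).
    Coefficients are a widely used parameterisation, assumed here."""
    # Saturation vapour pressure (hPa) via a Magnus-type formula.
    esat = 6.108 * math.exp(17.26939 * temp_c / (temp_c + 237.3))
    # Saturated vapour density (g/m^3).
    rhosat = 216.7 * esat / (temp_c + 273.3)
    # Daylength expressed in units of 12 hours.
    ld = daylength_h / 12.0
    return 0.1651 * ld * rhosat

print(round(hamon_pet(20.0, 12.0), 2))   # ~2.85 mm/day at 20 C, 12 h daylight
```

    The appeal noted in the abstract is visible here: only temperature (and astronomically known daylength) is required, so the input is the most reliably downscaled variable, at the price of a much more empirical formulation than Penman.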

  2. Design optimization for a wearable, gamma-ray and neutron sensitive, detector array with directionality estimation

    NASA Astrophysics Data System (ADS)

    Ayaz-Maierhafer, Birsen; Britt, Carl G.; August, Andrew J.; Qi, Hairong; Seifert, Carolyn E.; Hayward, Jason P.

    2017-10-01

    In this study, we report on a constrained optimization and tradeoff study of a hybrid, wearable detector array having directional sensing based upon gamma-ray occlusion. One resulting design uses CLYC detectors while the second feasibility design involves the coupling of gamma-ray-sensitive CsI scintillators and a rubber LiCaAlF6 (LiCAF) neutron detector. The detector systems' responses were investigated through simulation as a function of angle in a two-dimensional plane. The expected total counts, peak-to-total ratio, directionality performance, and detection of 40K for accurate gain stabilization were considered in the optimization. Source directionality estimation was investigated using Bayesian algorithms. Gamma-ray energies of 122 keV, 662 keV, and 1332 keV were considered. The equivalent neutron capture response compared with 3He was also investigated for both designs.
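
    The occlusion-based Bayesian direction estimate can be sketched as a posterior over candidate source angles given Poisson counts in a ring of detectors: detectors shadowed by the wearer's body see a lower expected rate. The response model and counts below are illustrative assumptions, not the simulated responses from the paper.

```python
import math

# Candidate source directions (deg) and detector positions on the body.
angles = list(range(0, 360, 45))
detectors = [0, 90, 180, 270]
observed = [100, 100, 30, 30]      # counts; consistent with a source near 45 deg

def expected_counts(src_angle, det_angle):
    # Toy occlusion model: full rate when the detector faces the source,
    # attenuated when the body shadows it.
    diff = abs((src_angle - det_angle + 180) % 360 - 180)
    return 100.0 if diff <= 90 else 30.0

def log_poisson(k, lam):
    return k * math.log(lam) - lam - math.lgamma(k + 1)

# Flat prior over candidate angles, so the posterior mode is the
# maximum-likelihood direction.
post = []
for a in angles:
    ll = sum(log_poisson(k, expected_counts(a, d))
             for k, d in zip(observed, detectors))
    post.append((ll, a))

best_angle = max(post)[1]
print(best_angle)
```

    A real system would replace the step-function response with the simulated angular response of the array and update the posterior sequentially as counts accumulate.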

  3. Estimation of Aerosol Direct Radiative Effects from Satellite and In Situ Measurements

    NASA Technical Reports Server (NTRS)

    Bergstrom, Robert W.; Russell, Philip B.; Schmid, Beat; Redemann, Jens; McIntosh, Dawn

    2000-01-01

    Ames researchers have combined measurements from satellite, aircraft, and the surface to estimate the effect of airborne particles (aerosols) on the solar radiation over the North Atlantic region. These aerosols (which come from both natural and pollution sources) can reflect solar radiation, causing a cooling effect that opposes the warming caused by carbon dioxide. Recently, increased attention has been paid to aerosol effects to better understand the Earth climate system.

  4. Direct measurement of health care costs.

    PubMed

    Smith, Mark W; Barnett, Paul G

    2003-09-01

    Cost identification is fundamental to many economic analyses of health care. Health care costs are often derived from administrative databases. Unit costs may also be obtained from published studies. When these sources will not suffice (e.g., in evaluating interventions or programs), data may be gathered directly through observation and surveys. This article describes how to use direct measurement to estimate the cost of an intervention. The authors review the elements of cost determination, including study perspective, the range of elements to measure, and short-run versus long-run costs. They then discuss the advantages and drawbacks of alternative direct measurement methods such as time-and-motion studies, activity logs, and surveys of patients and managers. A parsimonious data collection effort is desirable, although study hypotheses and perspective should guide the endeavor. Special reference is made to data sources within the Department of Veterans Affairs (VA) health care system.
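
    The direct-measurement workflow the article describes (time-and-motion observation plus unit costs) reduces to a micro-costing sum. The activities, minutes, and rates below are illustrative assumptions, not figures from the article.

```python
# Micro-costing sketch: observed staff time per encounter multiplied by
# wage rates, plus consumables. All numbers are illustrative.
activities = [
    ("nurse intake",      15, 0.80),   # (task, minutes, cost per minute)
    ("physician consult", 20, 2.50),
    ("phlebotomy",         5, 0.80),
]
supplies = 12.50                        # per-encounter consumables

labor = sum(minutes * rate for _, minutes, rate in activities)
encounter_cost = labor + supplies
print(encounter_cost)
```

    Study perspective then determines what enters the sum: a societal perspective would add patient time and travel, while a payer perspective would not.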

  5. OpCost: an open-source system for estimating costs of stand-level forest operations

    Treesearch

    Conor K. Bell; Robert F. Keefe; Jeremy S. Fried

    2017-01-01

    This report describes and documents the OpCost forest operations cost model, a key component of the BioSum analysis framework. OpCost is available in two editions: as a callable module for use with BioSum, and in a stand-alone edition that can be run directly from R. OpCost model logic and assumptions for this open-source tool are explained, references to the...

  6. Infrared emission associated with chemical reactions on Shuttle and SIRTF surfaces

    NASA Technical Reports Server (NTRS)

    Hollenbach, D. J.; Tielens, Alexander G. G. M.

    1984-01-01

    The infrared intensities which would be observed by the Shuttle Infrared Telescope Facility (SIRTF), and which are produced by surface chemistry following atmospheric impact on SIRTF and the shuttle, are estimated. Three possible sources of reactants are analyzed: (1) direct atmospheric and scattered contaminant fluxes onto the shuttle's surface; (2) direct atmospheric and scattered contaminant fluxes onto the SIRTF sunshade; and (3) scattered fluxes onto the cold SIRTF mirror. The chemical reactions are primarily initiated by the dominant flux of reactive atomic oxygen on the surfaces. Using observations of the optical glow to constrain theoretical parameters, it is estimated for source (1) that the infrared glow on the SIRTF mirror will be comparable to the zodiacal background between 1 and 10 micron wavelengths. It is speculated that oxygen reacts with the atoms and the radicals bound in the organic molecules that reside on the shuttle and the Explorer surfaces. It is concluded for source (2) that, with suitable construction, a warm sunshade will produce insignificant infrared glow. It is noted that the atomic oxygen flux on the cold SIRTF mirror (3) is insufficient to produce significant infrared glow. Infrared absorption by the ice buildup on the mirror is also small.

  7. Modular neuron-based body estimation: maintaining consistency over different limbs, modalities, and frames of reference

    PubMed Central

    Ehrenfeld, Stephan; Herbort, Oliver; Butz, Martin V.

    2013-01-01

    This paper addresses the question of how the brain maintains a probabilistic body state estimate over time from a modeling perspective. The neural Modular Modality Frame (nMMF) model simulates such a body state estimation process by continuously integrating redundant, multimodal body state information sources. The body state estimate itself is distributed over separate, but bidirectionally interacting modules. nMMF compares the incoming sensory and present body state information across the interacting modules and fuses the information sources accordingly. At the same time, nMMF enforces body state estimation consistency across the modules. nMMF is able to detect conflicting sensory information and to consequently decrease the influence of implausible sensor sources on the fly. In contrast to the previously published Modular Modality Frame (MMF) model, nMMF offers a biologically plausible neural implementation based on distributed, probabilistic population codes. Besides its neural plausibility, the neural encoding has the advantage of enabling (a) additional probabilistic information flow across the separate body state estimation modules and (b) the representation of arbitrary probability distributions of a body state. The results show that the neural estimates can detect and decrease the impact of false sensory information, can propagate conflicting information across modules, and can improve overall estimation accuracy due to additional module interactions. Even bodily illusions, such as the rubber hand illusion, can be simulated with nMMF. We conclude with an outlook on the potential of modeling human data and of invoking goal-directed behavioral control. PMID:24191151
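
    The fusion step nMMF performs can be sketched, at a much coarser level than its population-code implementation, as precision-weighted combination of redundant estimates, with an implausible sensor down-weighted once a conflict is detected. The numbers are illustrative.

```python
import numpy as np

# Three modality estimates of the same body-state variable, with
# nominal sensor noise. The third estimate is deliberately implausible.
means = np.array([0.10, 0.12, 0.90])
sigmas = np.array([0.05, 0.05, 0.05])

# Conflict detection: an estimate far from the consensus of the others
# has its assumed variance inflated before fusion.
consensus = np.median(means)
conflict = np.abs(means - consensus) > 3 * sigmas
sigmas = np.where(conflict, sigmas * 10, sigmas)

# Precision-weighted (inverse-variance) fusion.
weights = 1.0 / sigmas**2
fused = np.sum(weights * means) / np.sum(weights)
print(round(fused, 3))
```

    The fused value stays close to the two consistent sensors, mirroring the model's reported ability to decrease the influence of false sensory information on the fly.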

  8. Methods for Estimating the Uncertainty in Emergy Table-Form Models

    EPA Science Inventory

    Emergy studies have suffered criticism due to the lack of uncertainty analysis and this shortcoming may have directly hindered the wider application and acceptance of this methodology. Recently, to fill this gap, the sources of uncertainty in emergy analysis were described and an...

  9. 78 FR 32377 - Draft 2012 Marine Mammal Stock Assessment Reports

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-05-30

    ... stranding mortalities into broad diagnoses such as disease, human-interaction, mass-stranding, etc. [[Page... and trends, estimates of annual human-caused mortality and serious injury from all sources... direct human-caused mortality exceeds the potential biological removal level; (B) which, based on the...

  10. Methods for determining remanent and total magnetisations of magnetic sources - a review

    NASA Astrophysics Data System (ADS)

    Clark, David A.

    2014-07-01

    Assuming without evidence that magnetic sources are magnetised parallel to the geomagnetic field can seriously mislead interpretation and can result in drill holes missing their targets. This article reviews methods that are available for estimating, directly or indirectly, the natural remanent magnetisation (NRM) and total magnetisation of magnetic sources, noting the strengths and weaknesses of each approach. These methods are: (i) magnetic property measurements of samples; (ii) borehole magnetic measurements; (iii) inference of properties from petrographic/petrological information, supplemented by palaeomagnetic databases; (iv) constrained modelling/inversion of magnetic sources; (v) direct inversions of measured or calculated vector and gradient tensor data for simple sources; (vi) retrospective inference of magnetisation of a mined deposit by comparing magnetic data acquired pre- and post-mining; (vii) combined analysis of magnetic and gravity anomalies using Poisson's theorem; (viii) using a controlled magnetic source to probe the susceptibility distribution of the subsurface; (ix) Helbig-type analysis of gridded vector components, gradient tensor elements, and tensor invariants; (x) methods based on reduction to the pole and related transforms; and (xi) remote in situ determination of NRM direction, total magnetisation direction and Koenigsberger ratio by deploying dual vector magnetometers or a single combined gradiometer/magnetometer to monitor local perturbation of natural geomagnetic variations, operating in base station mode within a magnetic anomaly of interest. Characterising the total and remanent magnetisations of sources is important for several reasons. Knowledge of total magnetisation is often critical for accurate determination of source geometry and position. 
Knowledge of magnetic properties such as magnetisation intensity and Koenigsberger ratio constrains the likely magnetic mineralogy (composition and grain size) of a source, which gives an indication of its geological nature. Determining the direction of a stable ancient remanence gives an indication of the age of magnetisation, which provides useful information about the geological history of the source and its environs.
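
    The quantities the review centres on relate simply: total magnetisation is the vector sum of the induced part (susceptibility times the inducing field) and the remanent part, and the Koenigsberger ratio Q compares their magnitudes. The values below are illustrative (SI units), not from any case study in the review.

```python
import numpy as np

H = 40.0                                    # geomagnetic field strength, A/m (~50 uT)
field_dir = np.array([0.0, 0.5, -0.866])    # unit vector of the inducing field
k = 0.05                                    # volume magnetic susceptibility (SI)

J_induced = k * H * field_dir                 # induced magnetisation, A/m
J_nrm = 3.0 * np.array([0.0, -0.7, -0.714])   # remanent magnetisation, A/m

# Total magnetisation is the vector sum; its direction differs from the
# field direction whenever the remanence is significant.
J_total = J_induced + J_nrm

# Koenigsberger ratio: remanent over induced magnitude.
Q = np.linalg.norm(J_nrm) / (k * H)
print(round(Q, 2))
```

    Here Q is about 1.5, so assuming magnetisation parallel to the geomagnetic field would misplace the modelled source, which is exactly the interpretation risk the review opens with.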

  11. Associations between Source-Specific Fine Particulate Matter and Emergency Department Visits for Respiratory Disease in Four U.S. Cities

    PubMed Central

    Krall, Jenna R.; Mulholland, James A.; Russell, Armistead G.; Balachandran, Sivaraman; Winquist, Andrea; Tolbert, Paige E.; Waller, Lance A.; Sarnat, Stefanie Ebelt

    2016-01-01

    Background: Short-term exposure to ambient fine particulate matter (PM2.5) concentrations has been associated with increased mortality and morbidity. Determining which sources of PM2.5 are most toxic can help guide targeted reduction of PM2.5. However, conducting multicity epidemiologic studies of sources is difficult because source-specific PM2.5 is not directly measured, and source chemical compositions can vary between cities. Objectives: We determined how the chemical composition of primary ambient PM2.5 sources varies across cities. We estimated associations between source-specific PM2.5 and respiratory disease emergency department (ED) visits and examined between-city heterogeneity in estimated associations. Methods: We used source apportionment to estimate daily concentrations of primary source-specific PM2.5 for four U.S. cities. For sources with similar chemical compositions between cities, we applied Poisson time-series regression models to estimate associations between source-specific PM2.5 and respiratory disease ED visits. Results: We found that PM2.5 from biomass burning, diesel vehicle, gasoline vehicle, and dust sources was similar in chemical composition between cities, but PM2.5 from coal combustion and metal sources varied across cities. We found some evidence of positive associations of respiratory disease ED visits with biomass burning PM2.5; associations with diesel and gasoline PM2.5 were frequently imprecise or consistent with the null. We found little evidence of associations with dust PM2.5. Conclusions: We introduced an approach for comparing the chemical compositions of PM2.5 sources across cities and conducted one of the first multicity studies of source-specific PM2.5 and ED visits. Across four U.S. cities, among the primary PM2.5 sources assessed, biomass burning PM2.5 was most strongly associated with respiratory health. Citation: Krall JR, Mulholland JA, Russell AG, Balachandran S, Winquist A, Tolbert PE, Waller LA, Sarnat SE. 
2017. Associations between source-specific fine particulate matter and emergency department visits for respiratory disease in four U.S. cities. Environ Health Perspect 125:97–103; http://dx.doi.org/10.1289/EHP271 PMID:27315241
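
    The core analysis in this record, Poisson time-series regression of daily ED counts on source-specific PM2.5, can be sketched on simulated data. The fitter below is a bare-bones IRLS standing in for the fuller models (which also adjust for weather, trends, and day of week); the concentrations, counts, and effect size are simulated assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
pm = rng.gamma(shape=4.0, scale=2.0, size=n)     # daily source-specific PM2.5, ug/m3
beta_true = np.array([2.0, 0.02])                # log baseline rate, log-RR per ug/m3
X = np.column_stack([np.ones(n), pm])
y = rng.poisson(np.exp(X @ beta_true))           # daily respiratory ED visit counts

# Poisson regression with log link via iteratively reweighted least squares.
beta = np.zeros(2)
for _ in range(25):
    mu = np.exp(X @ beta)
    W = mu                                       # Poisson working weights
    z = X @ beta + (y - mu) / mu                 # working response
    beta = np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (W * z))

# Report the conventional epidemiologic summary: rate ratio per 10 ug/m3.
rr_per_10 = np.exp(10 * beta[1])
print(round(rr_per_10, 2))
```

    With 2,000 simulated days the fit recovers a rate ratio close to the simulated exp(0.2) ≈ 1.22 per 10 µg/m³; in practice one would use a GLM library and propagate the source-apportionment uncertainty the review of these methods emphasises.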

  12. Associations between Source-Specific Fine Particulate Matter and Emergency Department Visits for Respiratory Disease in Four U.S. Cities.

    PubMed

    Krall, Jenna R; Mulholland, James A; Russell, Armistead G; Balachandran, Sivaraman; Winquist, Andrea; Tolbert, Paige E; Waller, Lance A; Sarnat, Stefanie Ebelt

    2017-01-01

    Short-term exposure to ambient fine particulate matter (PM2.5) concentrations has been associated with increased mortality and morbidity. Determining which sources of PM2.5 are most toxic can help guide targeted reduction of PM2.5. However, conducting multicity epidemiologic studies of sources is difficult because source-specific PM2.5 is not directly measured, and source chemical compositions can vary between cities. We determined how the chemical composition of primary ambient PM2.5 sources varies across cities. We estimated associations between source-specific PM2.5 and respiratory disease emergency department (ED) visits and examined between-city heterogeneity in estimated associations. We used source apportionment to estimate daily concentrations of primary source-specific PM2.5 for four U.S. cities. For sources with similar chemical compositions between cities, we applied Poisson time-series regression models to estimate associations between source-specific PM2.5 and respiratory disease ED visits. We found that PM2.5 from biomass burning, diesel vehicle, gasoline vehicle, and dust sources was similar in chemical composition between cities, but PM2.5 from coal combustion and metal sources varied across cities. We found some evidence of positive associations of respiratory disease ED visits with biomass burning PM2.5; associations with diesel and gasoline PM2.5 were frequently imprecise or consistent with the null. We found little evidence of associations with dust PM2.5. We introduced an approach for comparing the chemical compositions of PM2.5 sources across cities and conducted one of the first multicity studies of source-specific PM2.5 and ED visits. Across four U.S. cities, among the primary PM2.5 sources assessed, biomass burning PM2.5 was most strongly associated with respiratory health. Citation: Krall JR, Mulholland JA, Russell AG, Balachandran S, Winquist A, Tolbert PE, Waller LA, Sarnat SE. 2017. 
Associations between source-specific fine particulate matter and emergency department visits for respiratory disease in four U.S. cities. Environ Health Perspect 125:97-103; http://dx.doi.org/10.1289/EHP271.
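The Poisson time-series regressions described above relate daily ED visit counts to source-specific PM2.5 concentrations. As a minimal illustration (not the authors' model, which also adjusts for covariates such as time trends and weather), a Poisson log-linear regression can be fitted by Newton's method (iteratively reweighted least squares) with NumPy alone; all data below are synthetic.

```python
import numpy as np

def fit_poisson(X, y, iters=25):
    """Fit a Poisson log-linear model E[y] = exp(X @ beta) by Newton's method."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        mu = np.exp(X @ beta)
        # Newton step: Hessian = X' diag(mu) X, score = X' (y - mu)
        beta += np.linalg.solve(X.T @ (mu[:, None] * X), X.T @ (y - mu))
    return beta

rng = np.random.default_rng(0)
n = 2000
pm = rng.normal(0.0, 1.0, n)             # standardized source-specific PM2.5 series (synthetic)
X = np.column_stack([np.ones(n), pm])
true_beta = np.array([1.0, 0.1])         # baseline log rate and log relative rate per unit PM
y = rng.poisson(np.exp(X @ true_beta))   # simulated daily ED visit counts
beta_hat = fit_poisson(X, y)
```

The fitted coefficient on the PM term recovers the simulated log relative rate; in practice `exp(beta)` is reported as a relative rate per unit increase in source-specific PM2.5.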

  13. Using LiF:Mg,Cu,P TLDs to estimate the absorbed dose to water in liquid water around an 192Ir brachytherapy source.

    PubMed

    Lucas, P Avilés; Aubineau-Lanièce, I; Lourenço, V; Vermesse, D; Cutarella, D

    2014-01-01

    The absorbed dose to water is the fundamental reference quantity for brachytherapy treatment planning systems, and thermoluminescence dosimeters (TLDs) are recognized as the most thoroughly validated detectors for measuring this dosimetric quantity. Estimating the absorbed dose requires accounting for the detector response over a wide energy spectrum, such as that of a 192Ir brachytherapy source, as well as for the specific measurement medium surrounding the TLD. This paper develops a methodology based on highly sensitive LiF:Mg,Cu,P TLDs to directly estimate the absorbed dose to water in liquid water around a high-dose-rate (HDR) 192Ir brachytherapy source. Different experimental designs in liquid water and air were constructed to study the response of LiF:Mg,Cu,P TLDs irradiated in several standard photon beams of the LNE-LNHB (the French national metrology laboratory for ionizing radiation). Measurement strategies and Monte Carlo techniques were developed to calibrate the LiF:Mg,Cu,P detectors over the energy interval characteristic of TLDs immersed in water around a 192Ir source. Finally, an experimental system was designed to irradiate TLDs at different angles, between 1 and 11 cm from a 192Ir source in liquid water. Monte Carlo simulations were performed to correct the measured results and provide estimates of the absorbed dose to water in water around the 192Ir source. The dependence of the LiF:Mg,Cu,P dose response on the linear energy transfer of secondary electrons followed the same variations as published results. The calibration strategy, which used TLDs in air exposed to a standard N-250 ISO x-ray beam and TLDs in water irradiated with a standard 137Cs beam, provided an estimated mean uncertainty of 2.8% (k = 1) in the TLD calibration coefficient for irradiations by the 192Ir source in water. 
    The 3D TLD measurements performed in liquid water were obtained with a maximum uncertainty of 11% (k = 1), found at 1 cm from the source. Radial dose values in water were compared against published results of the American Association of Physicists in Medicine and the European Society for Radiotherapy and Oncology; no significant differences (maximum 3.1%) were found within uncertainties, except for one position at 9 cm (5.8%). At this location the TLD signal is comparatively small relative to the background, and an unexpected experimental fluctuation in the background estimate may have caused such a large discrepancy. This paper shows that reliable TLD measurements in complex energy spectra require a study of the detector dose response as a function of radiation quality, together with calibration methodologies that accurately model the experimental conditions in which the detectors will be used. The authors have developed and studied a method with highly sensitive TLDs and contributed to its validation by comparison with results from the literature. This methodology can be used to provide direct estimates of the absorbed dose rate to water for irradiations with HDR 192Ir brachytherapy sources.
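Combined standard uncertainties such as the 2.8% (k = 1) calibration figure quoted above are conventionally obtained by combining uncorrelated relative uncertainty components in quadrature (the GUM approach). A generic sketch with hypothetical component values, not those of the paper:

```python
import math

# Hypothetical relative standard uncertainties (k = 1) for a TLD calibration chain;
# the component names and values are illustrative only.
components = {
    "reference beam calibration": 0.015,
    "TLD reader repeatability": 0.018,
    "energy-response correction (Monte Carlo)": 0.012,
    "positioning": 0.008,
}

# Uncorrelated components combine in quadrature to the combined standard uncertainty
u_combined = math.sqrt(sum(u ** 2 for u in components.values()))
```

An expanded uncertainty at k = 2 would simply be `2 * u_combined`.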

  14. A hierarchical modeling approach to estimate regional acute health effects of particulate matter sources

    PubMed Central

    Krall, J. R.; Hackstadt, A. J.; Peng, R. D.

    2017-01-01

    Exposure to particulate matter (PM) air pollution has been associated with a range of adverse health outcomes, including cardiovascular disease (CVD) hospitalizations and other clinical parameters. Determining which sources of PM, such as traffic or industry, are most associated with adverse health outcomes could help guide future recommendations aimed at reducing harmful pollution exposure for susceptible individuals. Information obtained from multisite studies, which is generally more precise than information from a single location, is critical to understanding how PM impacts health and to informing local strategies for reducing individual-level PM exposure. However, few methods exist to perform multisite studies of PM sources, which are not generally directly observed, and adverse health outcomes. We developed SHARE, a hierarchical modeling approach that facilitates reproducible, multisite epidemiologic studies of PM sources. SHARE is a two-stage approach that first summarizes information about PM sources across multiple sites. This information is then used to determine how community-level (i.e., county- or city-level) health effects of PM sources should be pooled to estimate regional-level health effects. SHARE is a type of population value decomposition that aims to separate regional-level features from site-level data. Unlike previous approaches to multisite epidemiologic studies of PM sources, the SHARE approach allows the specific PM sources identified to vary by site. Using data from 2000–2010 for 63 northeastern US counties, we estimated regional-level health effects associated with short-term exposure to major types of PM sources. We found that PM from secondary sulfate, traffic, and metals sources was most strongly associated with CVD hospitalizations. PMID:28098412
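The second-stage pooling of community-level effects into a regional effect can be illustrated in its simplest form, fixed-effect inverse-variance weighting (SHARE itself is a more elaborate hierarchical model); the county-level estimates below are hypothetical.

```python
import numpy as np

def pool_inverse_variance(betas, ses):
    """Precision-weighted (fixed-effect) pooling of community-level effect estimates."""
    w = 1.0 / np.asarray(ses, float) ** 2
    beta_pooled = np.sum(w * np.asarray(betas, float)) / np.sum(w)
    se_pooled = np.sqrt(1.0 / np.sum(w))
    return beta_pooled, se_pooled

# Hypothetical county-level log relative rates for one PM source, with standard errors
betas = np.array([0.012, 0.020, 0.005, 0.016])
ses = np.array([0.008, 0.010, 0.006, 0.012])
b, se = pool_inverse_variance(betas, ses)
```

The pooled estimate lies between the county estimates and is more precise than any single county's, which is the motivation for multisite pooling stated in the abstract.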

  15. A Comparison of Two Methods for Initiating Air Mass Back Trajectories

    NASA Astrophysics Data System (ADS)

    Putman, A.; Posmentier, E. S.; Faiia, A. M.; Sonder, L. J.; Feng, X.

    2014-12-01

    Lagrangian air-mass tracking programs run in back-cast mode are a powerful tool for estimating the water vapor source of precipitation events. The altitudes above the precipitation site at which particles' back trajectories begin influence the source estimation. We assume that precipitation comes from water vapor in condensing regions of the air column, so particles are placed in proportion to an estimated condensation profile. We compare two methods for estimating where condensation occurs, and the resulting evaporation sites, for 63 events at Barrow, AK. The first method (M1) uses measurements from a 35 GHz vertically resolved cloud radar (MMCR) and the algorithms of Zhao and Garrett (2008) to calculate precipitation rate. The second method (M2) uses Global Data Assimilation System reanalysis data in a lofting model. We assess how accurately M2, developed for global coverage, performs in the absence of direct cloud observations. Results from the two methods are statistically similar. The mean particle height estimated by M2 is, on average, 695 m (s.d. = 1800 m) higher than M1's. The corresponding average vapor source estimated by M2 is 1.5° (s.d. = 5.4°) south of M1's. In addition, vapor sources for M2 relative to M1 have ocean surface temperatures averaging 1.1°C (s.d. = 3.5°C) warmer, and reported ocean surface relative humidities 0.31% (s.d. = 6.1%) drier. All biases except the last are statistically significant (p = 0.02 for each). Results were skewed by events for which M2 estimated very high condensation altitudes. When M2 produced an average particle height below 5000 m (89% of events), M2 estimated mean particle heights 76 m (s.d. = 741 m) higher than M1, corresponding to a vapor source 0.54° (s.d. = 4.2°) south of M1. The ocean surface at the vapor source was an average of 0.35°C (s.d. = 2.35°C) warmer, and ocean surface relative humidities were 0.02% (s.d. = 5.5%) wetter. None of these biases was statistically significant. 
If the vapor source meteorology estimated by M2 is used to determine vapor isotopic properties, it would produce results similar to M1 in all cases except the occasional very high cloud. Both methods strive to balance a sufficient number of tracked air masses for meaningful vapor source estimation against computational time. Zhao, C. and Garrett, T. J., 2008, J. Geophys. Res.
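The M1 vs. M2 bias statements above are paired, event-level comparisons. A minimal sketch of the paired t statistic behind such tests (the heights below are synthetic numbers loosely mirroring the reported mean and spread, not the study's data; converting t to a p-value would additionally need a t-distribution CDF, e.g. `scipy.stats`):

```python
import numpy as np

def paired_t_stat(a, b):
    """t statistic and degrees of freedom for the mean of paired differences a - b."""
    d = np.asarray(a, float) - np.asarray(b, float)
    n = d.size
    return d.mean() / (d.std(ddof=1) / np.sqrt(n)), n - 1

rng = np.random.default_rng(1)
n = 63                                                 # one pair per precipitation event
m1_height = rng.normal(3000.0, 1500.0, n)              # hypothetical M1 mean particle heights (m)
m2_height = m1_height + rng.normal(695.0, 1800.0, n)   # M2 biased high, per the reported figures
t, dof = paired_t_stat(m2_height, m1_height)
```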

  16. Considerations in Phase Estimation and Event Location Using Small-aperture Regional Seismic Arrays

    NASA Astrophysics Data System (ADS)

    Gibbons, Steven J.; Kværna, Tormod; Ringdal, Frode

    2010-05-01

    The global monitoring of earthquakes and explosions at decreasing magnitudes necessitates the fully automatic detection, location and classification of an ever increasing number of seismic events. Many seismic stations of the International Monitoring System are small-aperture arrays designed to optimize the detection and measurement of regional phases. Collaboration with operators of mines within regional distances of the ARCES array, together with waveform correlation techniques, has provided an unparalleled opportunity to assess the ability of a small-aperture array to provide robust and accurate direction and slowness estimates for phase arrivals resulting from well-constrained events at sites of repeating seismicity. A significant source of inaccuracy in current fully automatic event location estimates is the use of f-k slowness estimates measured in variable frequency bands. The variability of slowness and azimuth measurements for a given phase from a given source region is reduced by applying almost any fixed frequency band. However, the frequency band yielding the most stable estimates varies greatly from site to site. Situations are observed in which regional P arrivals from two sites, far closer together than the theoretical resolution of the array, produce highly distinct populations in slowness space. This means that the f-k estimates, even at relatively low frequencies, can be sensitive to source- and path-specific characteristics of the wavefield and should be treated with caution when inferring a geographical backazimuth under the assumption of a planar wavefront arriving along the great-circle path. Moreover, different frequency bands are associated with different biases, meaning that slowness and azimuth station corrections (commonly denoted SASCs) cannot be calibrated, and should not be used, without reference to the frequency band employed. 
We demonstrate an example in which fully automatic locations based on a source-region-specific fixed-parameter template are more stable than the corresponding analyst-reviewed estimates. The reason is that the analyst selects the frequency band and analysis window that appear optimal for each event. In this case, the frequency band producing the most consistent direction estimates has neither the best SNR nor the greatest beam gain, and is therefore unlikely to be chosen by an analyst without calibration data.
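The f-k analysis discussed above amounts to finding the slowness vector that maximizes array beam power in a chosen frequency band. A frequency-domain delay-and-sum sketch with a synthetic five-element array (the geometry, signal, frequency content, and slowness grid are all illustrative, not ARCES parameters):

```python
import numpy as np

fs = 100.0                                    # sampling rate, Hz
t = np.arange(0, 2.0, 1.0 / fs)
wavelet = np.exp(-((t - 1.0) / 0.05) ** 2)    # band-limited pulse centered at 1 s

# Small-aperture array coordinates (km) and a true slowness vector (s/km)
xy = np.array([[0.0, 0.0], [1.0, 0.2], [-0.6, 0.9], [0.4, -1.1], [-0.9, -0.5]])
p_true = np.array([0.10, 0.05])

# Plane wave: each sensor records the wavelet delayed by x . p (applied as a phase shift)
freqs = np.fft.rfftfreq(t.size, 1.0 / fs)
W = np.fft.rfft(wavelet)
delays = xy @ p_true
spectra = W[None, :] * np.exp(-2j * np.pi * freqs[None, :] * delays[:, None])

# Grid search: beam power is maximal when the trial slowness re-aligns the traces
best_power, p_hat = -1.0, None
for sx in np.arange(-0.2, 0.201, 0.01):
    for sy in np.arange(-0.2, 0.201, 0.01):
        d = xy @ np.array([sx, sy])
        beam = np.sum(spectra * np.exp(2j * np.pi * freqs[None, :] * d[:, None]), axis=0)
        power = np.sum(np.abs(beam) ** 2)
        if power > best_power:
            best_power, p_hat = power, np.array([sx, sy])
```

The backazimuth follows from the direction of `p_hat`; the abstract's point is that on real data the recovered vector depends on the frequency band in which this maximization is performed.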

  17. Does Ocean Color Data Assimilation Improve Estimates of Global Ocean Inorganic Carbon?

    NASA Technical Reports Server (NTRS)

    Gregg, Watson

    2012-01-01

    Ocean color data assimilation has been shown to dramatically improve chlorophyll abundances and distributions globally and regionally in the oceans. Chlorophyll is a proxy for phytoplankton biomass (which is explicitly defined in a model) and is related to the inorganic carbon cycle through interactions with organic carbon (particulate and dissolved) and through primary production, where inorganic carbon is taken directly out of the system. Does ocean color data assimilation, whose effects on chlorophyll estimates are demonstrable, trickle through the simulated ocean carbon system to produce improved estimates of inorganic carbon? Our emphasis here is dissolved inorganic carbon, pCO2, and the air-sea flux. We use a sequential data assimilation method that assimilates chlorophyll directly and indirectly changes nutrient concentrations in a multivariate approach. The results are decidedly mixed. Dissolved inorganic carbon estimates from the assimilation model are not meaningfully different from free-run, or unassimilated, results, and comparisons with in situ data are similar. pCO2 estimates are generally worse after data assimilation, with global estimates diverging 6.4% from in situ data, while free-run estimates are only 4.7% higher. Basin correlations are, however, slightly improved: r increases from 0.78 to 0.79, with the slope closer to unity (0.94 compared to 0.86). In contrast, the air-sea flux of CO2 is noticeably improved after data assimilation. Global differences decline from -0.635 mol/m2/y (a stronger model sink from the atmosphere) to -0.202 mol/m2/y. Basin correlations are slightly improved from r = 0.77 to r = 0.78, with the slope closer to unity (from 0.93 to 0.99). The Equatorial Atlantic appears as a slight sink in the free run but is correctly represented as a moderate source in the assimilation model. 
However, the assimilation model shows the Antarctic to be a source rather than a modest sink, and the North Indian basin is incorrectly represented as a sink rather than the source indicated by the free-run model and data estimates.
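The basin statistics quoted above (correlation r, regression slope, mean difference) are standard assimilation skill metrics computed from paired model and in situ values; a minimal sketch with hypothetical basin means:

```python
import numpy as np

def skill(model, obs):
    """Correlation, OLS slope of model on observations, and mean bias."""
    model, obs = np.asarray(model, float), np.asarray(obs, float)
    r = np.corrcoef(model, obs)[0, 1]
    slope = np.polyfit(obs, model, 1)[0]
    bias = model.mean() - obs.mean()
    return r, slope, bias

# Hypothetical basin-mean pCO2 values (micro-atm), free-run model vs. in situ data
obs = np.array([350.0, 365.0, 372.0, 380.0, 395.0, 410.0])
model = np.array([348.0, 360.0, 375.0, 383.0, 390.0, 415.0])
r, slope, bias = skill(model, obs)
```

A slope near unity with high r indicates the model reproduces the across-basin contrast, which is what the abstract's "slope closer to unity" comparisons assess.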

  18. Exposure assessment in investigations of waterborne illness: a quantitative estimate of measurement error

    PubMed Central

    Jones, Andria Q; Dewey, Catherine E; Doré, Kathryn; Majowicz, Shannon E; McEwen, Scott A; Waltner-Toews, David

    2006-01-01

    Background Exposure assessment is typically the greatest weakness of epidemiologic studies of disinfection by-products (DBPs) in drinking water, which largely stems from the difficulty of obtaining accurate data on individual-level water consumption patterns and activity. Thus, surrogate measures for such waterborne exposures are commonly used. Little attention, however, has been directed toward formal validation of these measures. Methods We conducted a study in the City of Hamilton, Ontario (Canada) in 2001–2002 to assess the accuracy of two surrogate measures of home water source: (a) urban/rural status as assigned using residential postal codes, and (b) mapping of residential postal codes to municipal water systems within a Geographic Information System (GIS). We then assessed the accuracy of a commonly used surrogate measure of an individual's actual drinking water source, namely, their home water source. Results The surrogates for home water source provided good classification of residents served by municipal water systems (approximately 98% predictive value), but did not perform well in classifying those served by private water systems (average: 63.5% predictive value). More importantly, we found that home water source was a poor surrogate measure of individuals' actual drinking water source(s), being associated with high misclassification errors. Conclusion This study demonstrated substantial misclassification errors associated with a surrogate measure commonly used in studies of drinking water disinfection by-products. Further, the limited accuracy of two surrogate measures of an individual's home water source warrants caution in their use for exposure classification. While these surrogates are inexpensive and convenient, they should not be substituted for direct collection of accurate data on subjects' waterborne exposures. 
In instances where such surrogates must be used, estimation of the misclassification and its subsequent effects are recommended for the interpretation and communication of results. Our results also lend support for further investigation into the quantification of the exposure misclassification associated with these surrogate measures, which would provide useful estimates for consideration in interpretation of waterborne disease studies. PMID:16729887
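The predictive values quoted above (98%, 63.5%) come from cross-classifying the surrogate against the actual water source in a 2x2 table. A sketch with hypothetical counts (not the study's data):

```python
def predictive_values(tp, fp, fn, tn):
    """Accuracy metrics of a surrogate classification vs. the true classification."""
    ppv = tp / (tp + fp)    # positive predictive value
    npv = tn / (tn + fn)    # negative predictive value
    sens = tp / (tp + fn)   # sensitivity
    spec = tn / (tn + fp)   # specificity
    return ppv, npv, sens, spec

# Hypothetical: "positive" = surrogate assigns municipal supply; tp/fp/fn/tn are counts
ppv, npv, sens, spec = predictive_values(tp=490, fp=10, fn=30, tn=70)
```

With these illustrative counts the surrogate classifies municipal users well (PPV 0.98) but private-well users poorly (NPV 0.70), the same qualitative pattern the study reports.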

  19. Mass balance assessment for mercury in Lake Champlain

    USGS Publications Warehouse

    Gao, N.; Armatas, N.G.; Shanley, J.B.; Kamman, N.C.; Miller, E.K.; Keeler, G.J.; Scherbatskoy, T.; Holsen, T.M.; Young, T.; McIlroy, L.; Drake, S.; Olsen, Bill; Cady, C.

    2006-01-01

    A mass balance model for mercury in Lake Champlain was developed in an effort to understand the sources, inventories, concentrations, and effects of mercury (Hg) contamination in the lake ecosystem. To construct the mass balance model, air, water, and sediment were sampled as a part of this project and other research/monitoring projects in the Lake Champlain Basin. This project produced a STELLA-based computer model and quantitative apportionments of the principal input and output pathways of Hg for each of 13 segments in the lake. The model Hg concentrations in the lake were consistent with measured concentrations. Specifically, the modeling identified surface water inflows as the largest direct contributor of Hg into the lake. Direct wet deposition to the lake was the second largest source of Hg followed by direct dry deposition. Volatilization and sedimentation losses were identified as the two major removal mechanisms. This study significantly improves previous estimates of the relative importance of Hg input pathways and of wet and dry deposition fluxes of Hg into Lake Champlain. It also provides new estimates of volatilization fluxes across different lake segments and sedimentation loss in the lake. © 2006 American Chemical Society.
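A STELLA-style segment model of this kind balances inputs (inflow, wet and dry deposition) against removal (volatilization, sedimentation). A single-box sketch with first-order removal and hypothetical rates (the numbers are illustrative, not the Lake Champlain budget):

```python
# Hypothetical annual Hg budget for one lake segment: inputs in kg/yr,
# removal as first-order rate constants in 1/yr.
inputs = {"tributary inflow": 60.0, "wet deposition": 25.0, "dry deposition": 15.0}
k_volatilization = 0.8
k_sedimentation = 1.2

total_in = sum(inputs.values())

# Steady state: dM/dt = total_in - (k_v + k_s) * M = 0
M_steady = total_in / (k_volatilization + k_sedimentation)

# Forward-Euler integration of the same box model converges to that value
M, dt = 0.0, 0.01
for _ in range(3000):
    M += dt * (total_in - (k_volatilization + k_sedimentation) * M)
```

A full lake model chains such boxes, one per segment, with inter-segment exchange terms added to each balance.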

  20. New Methods For Interpretation Of Magnetic Gradient Tensor Data Using Eigenanalysis And The Normalized Source Strength

    NASA Astrophysics Data System (ADS)

    Clark, D.

    2012-12-01

    In the future, acquisition of magnetic gradient tensor data is likely to become routine. New methods developed for analysis of magnetic gradient tensor data can also be applied to high quality conventional TMI surveys that have been processed using Fourier filtering techniques, or otherwise, to calculate magnetic vector and tensor components. This approach is, in fact, the only practical way at present to analyze vector component data, as measurements of vector components are seriously afflicted by motion noise, which is not as serious a problem for gradient components. In many circumstances, an optimal approach to extracting maximum information from magnetic surveys would be to combine analysis of measured gradient tensor data with vector components calculated from TMI measurements. New methods for inverting gradient tensor surveys to obtain source parameters have been developed for a number of elementary, but useful, models. These include point dipole (sphere), vertical line of dipoles (narrow vertical pipe), line of dipoles (horizontal cylinder), thin dipping sheet, horizontal line current and contact models. A key simplification is the use of eigenvalues and associated eigenvectors of the tensor. The normalized source strength (NSS), calculated from the eigenvalues, is a particularly useful rotational invariant that peaks directly over 3D compact sources, 2D compact sources, thin sheets and contacts, and is independent of magnetization direction for these sources (and only very weakly dependent on magnetization direction in general). In combination the NSS and its vector gradient enable estimation of the Euler structural index, thereby constraining source geometry, and determine source locations uniquely. NSS analysis can be extended to other useful models, such as vertical pipes, by calculating eigenvalues of the vertical derivative of the gradient tensor. 
Once source locations are determined, information on source magnetizations can be obtained by simple linear inversion of measured or calculated vector and/or tensor data. Inversions based on the vector gradient of the NSS over the Tallawang magnetite deposit in central New South Wales yielded good agreement between the inferred geometry of the tabular magnetite skarn body and drill-hole intersections. Inverted magnetizations are consistent with magnetic property measurements on drill core samples from this deposit. Similarly, inversions of calculated tensor data over the Mount Leyshon gold-mineralized porphyry system in Queensland yield good estimates of the centroid location, total magnetic moment, and magnetization direction of the magnetite-bearing potassic alteration zone that are consistent with geological and petrophysical information.
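A common formulation in the gradient-tensor literature computes the NSS from the ordered eigenvalues l1 >= l2 >= l3 of the symmetric, traceless tensor as NSS = sqrt(-l2^2 - l1*l3). A sketch of that calculation, also illustrating the rotational invariance noted above (the tensor is illustrative):

```python
import numpy as np

def normalized_source_strength(G):
    """NSS from the eigenvalues of a symmetric, traceless magnetic gradient tensor.
    With eigenvalues ordered l1 >= l2 >= l3, NSS = sqrt(-l2**2 - l1*l3)."""
    l3, l2, l1 = np.linalg.eigvalsh(G)    # eigvalsh returns ascending order
    return np.sqrt(max(-l2 * l2 - l1 * l3, 0.0))

# Illustrative traceless tensor with eigenvalues (2, -1, -1)
G = np.diag([2.0, -1.0, -1.0])
nss = normalized_source_strength(G)

# Rotational invariance: the NSS is unchanged by any rotation of the tensor
Q, _ = np.linalg.qr(np.random.default_rng(0).normal(size=(3, 3)))
nss_rotated = normalized_source_strength(Q @ G @ Q.T)
```

Because it is built from eigenvalues only, the quantity is invariant to sensor orientation, which is what makes it attractive for the peak-over-source mapping described in the abstract.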

  1. Real-Time Localization of Moving Dipole Sources for Tracking Multiple Free-Swimming Weakly Electric Fish

    PubMed Central

    Jun, James Jaeyoon; Longtin, André; Maler, Leonard

    2013-01-01

    In order to survive, animals must quickly and accurately locate prey, predators, and conspecifics using the signals they generate. The signal source location can be estimated using multiple detectors and the inverse relationship between the received signal intensity (RSI) and distance, but the difficulty of source localization increases if there is an additional dependence on the orientation of the signal source. In such cases, the signal source can be approximated as an ideal dipole for simplification. Based on a theoretical model, the RSI can be directly predicted from a known dipole location, but estimating a dipole location from RSIs has no direct analytical solution. Here, we propose an efficient solution to the dipole localization problem by using a lookup table (LUT) to store RSIs predicted by our theoretically derived dipole model at many possible dipole positions and orientations. For a given set of RSIs measured at multiple detectors, our algorithm finds the dipole location with the closest matching normalized RSIs in the LUT, and further refines the location at higher resolution. Studying the natural behavior of weakly electric fish (WEF) requires efficiently computing their location and the temporal pattern of their electric signals over extended periods. Our dipole localization method was successfully applied to track single or multiple freely swimming WEF in shallow water in real time, as each fish can be closely approximated by an ideal current dipole in two dimensions. Our optimized search algorithm found the animals' positions, orientations, and tail-bending angles quickly and accurately under various conditions, without the need to calibrate individual-specific parameters. Our dipole localization method is directly applicable to studying the role of active sensing during spatial navigation, or social interactions between multiple WEF. 
Furthermore, our method could be extended to other application areas involving dipole source localization. PMID:23805244
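The lookup-table idea can be sketched in a few lines: precompute normalized RSI patterns over a grid of candidate dipole positions and orientations, then return the grid entry closest to the measured (normalized) pattern. The 2-D point-dipole model, detector layout, and grid below are illustrative, not the authors' calibrated setup; the paper additionally refines the best match at higher resolution.

```python
import numpy as np

detectors = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])

def dipole_rsi(pos, theta):
    """Signal at each detector from a 2-D dipole (point-dipole potential model)."""
    p = np.array([np.cos(theta), np.sin(theta)])   # dipole orientation
    r = detectors - pos
    dist = np.linalg.norm(r, axis=1)
    return (r @ p) / dist ** 3

# Build the LUT over a coarse grid of positions and orientations
xs = np.linspace(0.1, 0.9, 9)
thetas = np.arange(0.0, 2 * np.pi, 0.5)
entries, keys = [], []
for x in xs:
    for y in xs:
        for th in thetas:
            v = dipole_rsi(np.array([x, y]), th)
            entries.append(v / np.linalg.norm(v))   # normalize: scale-free pattern
            keys.append((x, y, th))
lut = np.array(entries)

def locate(rsi):
    """Return the LUT entry whose normalized RSI pattern best matches the measurement."""
    v = rsi / np.linalg.norm(rsi)
    return keys[int(np.argmin(np.linalg.norm(lut - v, axis=1)))]
```

Normalizing both the measurement and the table entries makes the match insensitive to overall signal amplitude, so only position and orientation determine the pattern.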

  2. Comparison of methane emission estimates from multiple measurement techniques at natural gas production pads

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bell, Clay Samuel; Vaughn, Timothy L.; Zimmerle, Daniel

    This study presents the results of a campaign that estimated methane emissions at 268 gas production facilities in the Fayetteville shale gas play using onsite measurements (261 facilities) and two downwind methods - the dual tracer flux ratio method (Tracer Facility Estimate - TFE, 17 facilities) and the EPA Other Test Method 33a (OTM33A Facility Estimate - OFE, 50 facilities). A study onsite estimate (SOE) for each facility was developed by combining direct measurements and simulation of unmeasured emission sources, using operator activity data and emission data from literature. The SOE spans 0-403 kg/h and simulated methane emissions from liquid unloadings account for 88% of total emissions estimated by the SOE, with 76% (95% CI [51%-92%]) contributed by liquid unloading at two facilities. TFE and SOE show overlapping 95% CI between individual estimates at 15 of 16 (94%) facilities where the measurements were paired, while OFE and SOE show overlapping 95% CI between individual estimates at 28 of 43 (65%) facilities. However, variance-weighted least-squares (VWLS) regressions performed on sets of paired estimates indicate statistically significant differences between methods. The SOE represents a lower bound of emissions at facilities where onsite direct measurements of continuously emitting sources are the primary contributor to the SOE, a sub-selection of facilities which minimizes expected inter-method differences for intermittent pneumatic controllers and the impact of episodically-emitting unloadings. At 9 such facilities, VWLS indicates that TFE estimates systematically higher emissions than SOE (TFE-to-SOE ratio = 1.6, 95% CI [1.2 to 2.1]). At 20 such facilities, VWLS indicates that OFE estimates systematically lower emissions than SOE (OFE-to-SOE ratio of 0.41 [0.26 to 0.90]). 
Given that SOE at these facilities is a lower limit on emissions, these results indicate that OFE is likely a less accurate method than SOE or TFE for this type of facility.

  3. Comparison of methane emission estimates from multiple measurement techniques at natural gas production pads

    DOE PAGES

    Bell, Clay Samuel; Vaughn, Timothy L.; Zimmerle, Daniel; ...

    2017-02-09

    This study presents the results of a campaign that estimated methane emissions at 268 gas production facilities in the Fayetteville shale gas play using onsite measurements (261 facilities) and two downwind methods - the dual tracer flux ratio method (Tracer Facility Estimate - TFE, 17 facilities) and the EPA Other Test Method 33a (OTM33A Facility Estimate - OFE, 50 facilities). A study onsite estimate (SOE) for each facility was developed by combining direct measurements and simulation of unmeasured emission sources, using operator activity data and emission data from literature. The SOE spans 0-403 kg/h and simulated methane emissions from liquid unloadings account for 88% of total emissions estimated by the SOE, with 76% (95% CI [51%-92%]) contributed by liquid unloading at two facilities. TFE and SOE show overlapping 95% CI between individual estimates at 15 of 16 (94%) facilities where the measurements were paired, while OFE and SOE show overlapping 95% CI between individual estimates at 28 of 43 (65%) facilities. However, variance-weighted least-squares (VWLS) regressions performed on sets of paired estimates indicate statistically significant differences between methods. The SOE represents a lower bound of emissions at facilities where onsite direct measurements of continuously emitting sources are the primary contributor to the SOE, a sub-selection of facilities which minimizes expected inter-method differences for intermittent pneumatic controllers and the impact of episodically-emitting unloadings. At 9 such facilities, VWLS indicates that TFE estimates systematically higher emissions than SOE (TFE-to-SOE ratio = 1.6, 95% CI [1.2 to 2.1]). At 20 such facilities, VWLS indicates that OFE estimates systematically lower emissions than SOE (OFE-to-SOE ratio of 0.41 [0.26 to 0.90]). 
Given that SOE at these facilities is a lower limit on emissions, these results indicate that OFE is likely a less accurate method than SOE or TFE for this type of facility.
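The TFE-to-SOE and OFE-to-SOE ratios above come from variance-weighted least-squares fits through paired facility estimates. A through-origin ratio model is the simplest version of such a regression (the study's exact specification may differ); the data below are synthetic:

```python
import numpy as np

def vwls_ratio(x, y, var_y):
    """Variance-weighted least squares for a through-origin ratio model y = b * x."""
    w = 1.0 / np.asarray(var_y, float)
    b = np.sum(w * x * y) / np.sum(w * x * x)
    se = np.sqrt(1.0 / np.sum(w * x * x))
    return b, se

rng = np.random.default_rng(7)
soe = rng.uniform(1.0, 20.0, 9)           # onsite estimates, kg/h (synthetic, 9 facilities)
sd = 0.15 * soe + 0.5                     # per-facility measurement sd (synthetic)
tfe = 1.6 * soe + rng.normal(0.0, sd)     # downwind estimates biased high by the ratio
b, se = vwls_ratio(soe, tfe, sd ** 2)
```

Weighting by inverse variance prevents the noisiest facility estimates from dominating the fitted ratio, which is the rationale for using VWLS on paired estimates.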

  4. Luminance-based specular gloss characterization.

    PubMed

    Leloup, Frédéric B; Pointer, Michael R; Dutré, Philip; Hanselaer, Peter

    2011-06-01

    Gloss is a feature of visual appearance that arises from the directionally selective reflection of light incident on a surface. Especially when a distinct reflected image is perceptible, the luminance distribution of the illumination scene above the sample can strongly influence gloss perception. For this reason, industrial glossmeters do not provide a satisfactory gloss estimation of high-gloss surfaces. In this study, the influence of the illumination conditions on specular gloss perception was examined through a magnitude estimation experiment in which 10 observers took part. A light booth with two light sources was used, with the mirror image of only one source visible in reflection to the observer. The luminance of both the reflected image and the adjacent sample surface could be varied independently by separate adjustment of the intensity of the two light sources. A psychophysical scaling function was derived, relating the visual gloss estimates to the measured luminance of both the reflected image and the off-specular sample background. The generalization error of the model was estimated through a validation experiment performed by 10 other observers. The result is a metric that includes both surface and illumination properties, on the basis of which improved gloss evaluation methods and instruments could be developed.
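Magnitude-estimation data of this kind are often summarized by a Stevens-type power law fitted in log-log coordinates. The sketch below is purely illustrative and is not the scaling function derived in the paper; the "intensity" variable stands in for a luminance-based quantity such as the reflected-image/background contrast, and all numbers are synthetic.

```python
import numpy as np

rng = np.random.default_rng(3)
intensity = np.logspace(-1, 1, 30)    # hypothetical luminance-based stimulus variable

# Synthetic magnitude estimates following G = a * I**b with multiplicative noise
gloss = 10.0 * intensity ** 0.6 * np.exp(rng.normal(0.0, 0.05, intensity.size))

# Fit log G = log a + b log I by least squares; b is the power-law exponent
b_exp, log_a = np.polyfit(np.log(intensity), np.log(gloss), 1)
a_coef = np.exp(log_a)
```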

  5. Investigations of potential bias in the estimation of lambda using Pradel's (1996) model for capture-recapture data

    USGS Publications Warehouse

    Hines, James E.; Nichols, James D.

    2002-01-01

    Pradel's (1996) temporal symmetry model, permitting direct estimation and modelling of the population growth rate λi, provides a potentially useful tool for the study of population dynamics using marked animals. Because of its recent publication date, the approach has not seen much use, and there have been virtually no investigations directed at the robustness of the resulting estimators. Here we consider several potential sources of bias, all motivated by specific uses of this estimation approach. We consider sampling situations in which the study area expands with time and present an analytic expression for the bias in λi. We next consider trap response in capture probabilities and heterogeneous capture probabilities, and compute large-sample and simulation-based approximations of the resulting bias in λi. These approximations indicate that trap response is an especially important assumption violation that can produce substantial bias. Finally, we consider losses on capture and emphasize the importance of selecting the estimator for λi that is appropriate to the question being addressed. For studies based on only sighting and resighting data, Pradel's (1996) λi′ is the appropriate estimator.

  6. Modeling and Forecasting Influenza-like Illness (ILI) in Houston, Texas Using Three Surveillance Data Capture Mechanisms.

    PubMed

    Paul, Susannah; Mgbere, Osaro; Arafat, Raouf; Yang, Biru; Santos, Eunice

    2017-01-01

    Objective: The objective was to forecast and validate prediction estimates of influenza activity in Houston, TX using four years of historical influenza-like illness (ILI) data from three surveillance data capture mechanisms. Background: Using novel surveillance methods and historical data to estimate future trends of influenza-like illness can lead to early detection of increases and decreases in influenza activity. Anticipating surges gives public health professionals more time to prepare and increase prevention efforts. Methods: Data were obtained from three surveillance systems, Flu Near You, ILINet, and hospital emergency center (EC) visits, with diverse data capture mechanisms. Autoregressive integrated moving average (ARIMA) models were fitted to data from each source for week 27 of 2012 through week 26 of 2016 and used to forecast influenza-like activity for the subsequent 10 weeks. Estimates were then compared to actual ILI percentages for the same period. Results: Forecasted estimates had wide confidence intervals that crossed zero. The forecasted trend direction differed by data source, resulting in a lack of consensus about future influenza activity. ILINet forecasted estimates and actual percentages had the smallest differences; ILINet performed best when forecasting influenza activity in Houston, TX. Conclusion: Though the three forecasted estimates did not agree on trend direction, and thus were considered imprecise predictors of long-term ILI activity based on existing data, pooling predictions and careful interpretation may be helpful for short-term intervention efforts. Further work is needed to improve forecast accuracy, considering the promise forecasting holds for seasonal influenza prevention and control and pandemic preparedness.
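    As a toy illustration of the forecasting step, the sketch below fits a plain AR(1) model by least squares and iterates it forward. The study's ARIMA models are richer (differencing and moving-average terms), so this is only a minimal stand-in, and the function names are invented:

    ```python
    def fit_ar1(series):
        # ordinary least-squares fit of x_t = c + phi * x_{t-1}
        xs, ys = series[:-1], series[1:]
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        phi = (sum((a - mx) * (b - my) for a, b in zip(xs, ys))
               / sum((a - mx) ** 2 for a in xs))
        c = my - phi * mx
        return c, phi

    def forecast_ar1(series, steps):
        # iterate the fitted recursion to forecast `steps` periods ahead
        c, phi = fit_ar1(series)
        out, last = [], series[-1]
        for _ in range(steps):
            last = c + phi * last
            out.append(last)
        return out
    ```

    On a series that exactly follows x_t = 1 + 0.5 x_{t-1}, the fit recovers the coefficients and the 10-step forecast converges toward the fixed point at 2.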

  7. An Empirical State Error Covariance Matrix for the Weighted Least Squares Estimation Method

    NASA Technical Reports Server (NTRS)

    Frisbee, Joseph H., Jr.

    2011-01-01

    State estimation techniques effectively provide mean state estimates. However, the theoretical state error covariance matrices provided as part of these techniques often suffer from a lack of confidence in their ability to describe the uncertainty in the estimated states. By a reinterpretation of the equations involved in the weighted least squares algorithm, it is possible to directly arrive at an empirical state error covariance matrix. This proposed empirical state error covariance matrix will contain the effect of all error sources, known or not. Results based on the proposed technique will be presented for a simple, two-observer, measurement-error-only problem.
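    The abstract does not spell out the reinterpretation, but the machinery it builds on can be sketched: a standard weighted least-squares solve yielding the formal covariance (AᵀWA)⁻¹, followed by a residual-based rescaling as one simple, assumption-laden stand-in for an "empirical" covariance that reflects all observed error sources. The two-parameter restriction and function names are invented for illustration:

    ```python
    def wls_fit_2param(A, y, w):
        # weighted least squares via the normal equations (A^T W A) x = A^T W y
        # A: list of [a0, a1] rows; y: observations; w: weights
        n = len(y)
        m = [[sum(w[k] * A[k][i] * A[k][j] for k in range(n)) for j in range(2)]
             for i in range(2)]
        b = [sum(w[k] * A[k][i] * y[k] for k in range(n)) for i in range(2)]
        det = m[0][0] * m[1][1] - m[0][1] * m[1][0]
        formal = [[m[1][1] / det, -m[0][1] / det],
                  [-m[1][0] / det, m[0][0] / det]]  # (A^T W A)^-1
        x = [formal[i][0] * b[0] + formal[i][1] * b[1] for i in range(2)]
        return x, formal

    def empirical_cov(A, y, w, x, formal):
        # rescale the formal covariance by the reduced chi-square of the
        # residuals, so the observed scatter (from all error sources,
        # modeled or not) enters the uncertainty estimate
        n = len(y)
        r = [y[k] - (A[k][0] * x[0] + A[k][1] * x[1]) for k in range(n)]
        chi2 = sum(w[k] * r[k] ** 2 for k in range(n))
        scale = chi2 / (n - 2)
        return [[scale * formal[i][j] for j in range(2)] for i in range(2)]
    ```

    With noise-free data the residuals vanish and the empirical covariance collapses to zero, while mis-modeled noise inflates it above the formal value.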

  8. Use of ultrasonic array method for positioning multiple partial discharge sources in transformer oil.

    PubMed

    Xie, Qing; Tao, Junhan; Wang, Yongqiang; Geng, Jianghai; Cheng, Shuyi; Lü, Fangcheng

    2014-08-01

    Fast and accurate positioning of partial discharge (PD) sources in transformer oil is very important for the safe, stable operation of power systems because it allows timely elimination of insulation faults. There is usually more than one PD source once an insulation fault occurs in the transformer oil. This study, which has both theoretical and practical significance, proposes a method of identifying multiple PD sources in transformer oil. The method combines the two-sided correlation transformation algorithm for broadband signal focusing with the modified Gerschgorin disk estimator, and uses the multiple signal classification (MUSIC) method to determine the directions of arrival of signals from multiple PD sources. The ultrasonic array positioning method is based on multi-platform direction finding and global optimization searching. Both a 4 × 4 square planar ultrasonic sensor array and an ultrasonic array detection platform were built to test the method of identifying and positioning multiple PD sources. The obtained results verify the validity and the engineering practicability of this method.
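    A minimal single-source MUSIC sketch on a uniform linear array gives the flavor of the direction-of-arrival step: estimate the sample covariance, extract the signal subspace (here via power iteration, since with one source it is just the dominant eigenvector), and scan steering vectors for the angle most orthogonal to the noise subspace. The broadband focusing and Gerschgorin source-number estimation from the paper are not reproduced; all parameters below are invented:

    ```python
    import cmath
    import math
    import random

    def steering(theta, m, spacing=0.5):
        # plane-wave steering vector for an m-element uniform linear array
        # (element spacing in wavelengths, theta in radians from broadside)
        return [cmath.exp(-2j * math.pi * spacing * i * math.sin(theta))
                for i in range(m)]

    def simulate_snapshots(theta, m, n, noise=0.01, seed=0):
        # synthetic array snapshots: one narrowband source plus sensor noise
        random.seed(seed)
        a = steering(theta, m)
        snaps = []
        for _ in range(n):
            s = cmath.exp(2j * math.pi * random.random())  # random source phase
            snaps.append([a[i] * s
                          + noise * complex(random.gauss(0, 1), random.gauss(0, 1))
                          for i in range(m)])
        return snaps

    def music_single_source(snapshots, angles):
        m = len(snapshots[0])
        n = len(snapshots)
        # sample covariance R = (1/N) sum x x^H
        R = [[sum(x[i] * x[j].conjugate() for x in snapshots) / n
              for j in range(m)] for i in range(m)]
        # power iteration for the dominant (signal-subspace) eigenvector
        v = [complex(1.0, 0.0)] * m
        for _ in range(200):
            w = [sum(R[i][j] * v[j] for j in range(m)) for i in range(m)]
            norm = math.sqrt(sum(abs(c) ** 2 for c in w))
            v = [c / norm for c in w]
        # MUSIC pseudo-spectrum: large where a(theta) has no component
        # in the noise subspace, i.e. where a - v (v^H a) is tiny
        best, best_p = angles[0], -1.0
        for th in angles:
            a = steering(th, m)
            vha = sum(v[i].conjugate() * a[i] for i in range(m))
            resid = sum(abs(a[i] - v[i] * vha) ** 2 for i in range(m))
            p = 1.0 / max(resid, 1e-12)
            if p > best_p:
                best, best_p = th, p
        return best
    ```

    With several sources, the signal subspace spans as many eigenvectors as there are sources, which is exactly where a source-number estimator such as the Gerschgorin disk method comes in.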

  9. Contaminant point source localization error estimates as functions of data quantity and model quality

    DOE PAGES

    Hansen, Scott K.; Vesselinov, Velimir Valentinov

    2016-10-01

    We develop empirically-grounded error envelopes for localization of a point contamination release event in the saturated zone of a previously uncharacterized heterogeneous aquifer into which a number of plume-intercepting wells have been drilled. We assume that flow direction in the aquifer is known exactly and velocity is known to within a factor of two of our best guess from well observations prior to source identification. Other aquifer and source parameters must be estimated by interpretation of well breakthrough data via the advection-dispersion equation. We employ high performance computing to generate numerous random realizations of aquifer parameters and well locations, simulate well breakthrough data, and then employ unsupervised machine optimization techniques to estimate the most likely spatial (or space-time) location of the source. Tabulating the accuracy of these estimates from the multiple realizations, we relate the size of 90% and 95% confidence envelopes to the data quantity (number of wells) and model quality (fidelity of the ADE interpretation model to actual concentrations in a heterogeneous aquifer with channelized flow). We find that for purely spatial localization of the contaminant source, increased data quantities can make up for reduced model quality. For space-time localization, we find similar qualitative behavior, but significantly degraded spatial localization reliability and less improvement from extra data collection. Since the space-time source localization problem is much more challenging, we also tried a multiple-initial-guess optimization strategy. This greatly enhanced performance, but gains from additional data collection remained limited.
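    The inversion loop can be caricatured in one dimension: generate breakthrough data from the analytic advection-dispersion solution for an instantaneous release, then grid-search candidate release locations by least squares. The study uses heterogeneous 3-D realizations and machine optimization; everything here (homogeneous 1-D medium, known velocity and dispersion, grid search) is an invented toy:

    ```python
    import math

    def ade_conc(x, t, x0, v, D, mass=1.0):
        # 1-D advection-dispersion solution for an instantaneous release
        # of `mass` at location x0 at time 0 (velocity v, dispersion D)
        if t <= 0:
            return 0.0
        return (mass / math.sqrt(4.0 * math.pi * D * t)
                * math.exp(-((x - x0 - v * t) ** 2) / (4.0 * D * t)))

    def locate_source(wells, times, data, v, D, candidates):
        # least-squares grid search over candidate release locations,
        # comparing predicted and observed breakthrough curves at all wells
        def misfit(x0):
            return sum((ade_conc(x, t, x0, v, D) - data[(x, t)]) ** 2
                       for x in wells for t in times)
        return min(candidates, key=misfit)
    ```

    With noise-free synthetic data the true release location has zero misfit and is recovered exactly; adding noise or a mis-specified velocity broadens the misfit minimum, which is the effect the paper's error envelopes quantify.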

  10. Source phase shift - A new phenomenon in wave propagation due to anelasticity. [in free oscillations of earth model

    NASA Technical Reports Server (NTRS)

    Buland, R.; Yuen, D. A.; Konstanty, K.; Widmer, R.

    1985-01-01

    The free oscillations of an anelastic earth model due to earthquakes were calculated directly by means of the correspondence principle from wave propagation theory. The formulation made it possible to find the source phase which is not predictable using first order perturbation theory. The predicted source phase was largest for toroidal modes with source components proportional to the radial strain scalar instead of the radial displacement scalar. The source phase increased in relation to the overtone number. In addition, large relative differences were found in the excitation modulus and the phase when the elastic excitation was small. The effect was sufficient to bias estimates of source properties and elastic structure.

  11. The Chandra Source Catalog

    NASA Astrophysics Data System (ADS)

    Evans, Ian N.; Primini, Francis A.; Glotfelty, Kenny J.; Anderson, Craig S.; Bonaventura, Nina R.; Chen, Judy C.; Davis, John E.; Doe, Stephen M.; Evans, Janet D.; Fabbiano, Giuseppina; Galle, Elizabeth C.; Gibbs, Danny G., II; Grier, John D.; Hain, Roger M.; Hall, Diane M.; Harbo, Peter N.; He, Xiangqun Helen; Houck, John C.; Karovska, Margarita; Kashyap, Vinay L.; Lauer, Jennifer; McCollough, Michael L.; McDowell, Jonathan C.; Miller, Joseph B.; Mitschang, Arik W.; Morgan, Douglas L.; Mossman, Amy E.; Nichols, Joy S.; Nowak, Michael A.; Plummer, David A.; Refsdal, Brian L.; Rots, Arnold H.; Siemiginowska, Aneta; Sundheim, Beth A.; Tibbetts, Michael S.; Van Stone, David W.; Winkelman, Sherry L.; Zografou, Panagoula

    2010-07-01

    The Chandra Source Catalog (CSC) is a general purpose virtual X-ray astrophysics facility that provides access to a carefully selected set of generally useful quantities for individual X-ray sources, and is designed to satisfy the needs of a broad-based group of scientists, including those who may be less familiar with astronomical data analysis in the X-ray regime. The first release of the CSC includes information about 94,676 distinct X-ray sources detected in a subset of public Advanced CCD Imaging Spectrometer imaging observations from roughly the first eight years of the Chandra mission. This release of the catalog includes point and compact sources with observed spatial extents ≲30″. The catalog (1) provides access to the best estimates of the X-ray source properties for detected sources, with good scientific fidelity, and directly supports scientific analysis using the individual source data; (2) facilitates analysis of a wide range of statistical properties for classes of X-ray sources; and (3) provides efficient access to calibrated observational data and ancillary data products for individual X-ray sources, so that users can perform detailed further analysis using existing tools. The catalog includes real X-ray sources detected with flux estimates that are at least 3 times their estimated 1σ uncertainties in at least one energy band, while maintaining the number of spurious sources at a level of ≲1 false source per field for a 100 ks observation. For each detected source, the CSC provides commonly tabulated quantities, including source position, extent, multi-band fluxes, hardness ratios, and variability statistics, derived from the observations in which the source is detected.
In addition to these traditional catalog elements, for each X-ray source the CSC includes an extensive set of file-based data products that can be manipulated interactively, including source images, event lists, light curves, and spectra from each observation in which a source is detected.
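    The detection criterion quoted above (flux at least 3 times its 1σ uncertainty in at least one energy band) reduces to a simple per-source filter. The function below is a schematic sketch with invented argument names, not CSC pipeline code:

    ```python
    def passes_detection_cut(band_fluxes, band_sigmas, ratio=3.0):
        # keep a source if its flux estimate is at least `ratio` times its
        # 1-sigma uncertainty in at least one energy band
        return any(f >= ratio * s for f, s in zip(band_fluxes, band_sigmas))
    ```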

  12. Cramer-Rao Bound, MUSIC, and Maximum Likelihood. Effects of Temporal Phase Difference

    DTIC Science & Technology

    1990-11-01

    Technical Report 1373, November 1990. C. V. Tran. Compares the Cramer-Rao bound with the MUSIC and Maximum Likelihood (ML) asymptotic variances for two-source direction-of-arrival estimation, where the sources were modeled with a temporal phase difference; results include MUSIC for two equipowered signals impinging on a 5-element ULA at |ρ| = 0.50 and |ρ| = 1.00, SNR = 20 dB.

  13. Church Anchor Explosive Sources (SUS) Propagation Measurements

    DTIC Science & Technology

    1974-12-01

    execution of the exercise were carried out by an even larger group. Contributors from ARL/UT Analysis and Interpretation: A. L. Anderson, S. K. Mitchell, T. D...ACODACs) located at sites A, C, and D, and a Multielement Super-directive Array (MESA) located at site E. The primary source track for the data, as shown...estimates, the narrowband spectra for the SUS signals detected in those ranges are inspected to determine the quality of the data. Presumed signals

  14. Overview of the Global Nitrous Oxide Budget: The More We Think We Know, the Less We Really Know

    NASA Astrophysics Data System (ADS)

    Davidson, E. A.

    2016-12-01

    The N2O budget is balanced in the real world, but our ability to account for past and present sources and sinks remains poor. This is true for both top-down atmospheric inversion models and bottom-up compilations of emission estimates by geographic region, economic sector, land use, and land management. Narrowing uncertainties would improve confidence in budgets and improve targeting of climate change mitigation. Estimates of the atmospheric lifetime of N2O range from 104 to 152 years, resulting in an uncertainty of nearly 5 Tg N2O-N/yr in atmospheric model inversion estimates of global sources. Top-down source estimates are also sensitive to the assumed pre-industrial, quasi-steady-state N2O concentration. However, land-use change and natural climatic variation in the centuries preceding the industrial revolution add uncertainty. While there is agreement that agricultural soils are now the largest single source of anthropogenic N2O emissions, recent estimates of direct emissions from fertilizer and manure application to soils range from 0.66 to 2.5 Tg N2O-N/yr. These discrepancies are due to differences in estimated activity data (application rates), in disaggregation of data by region and crop type, and in linear or nonlinear assumptions for estimating emission factors. Indirect N2O emissions (those occurring in downstream or downwind ecosystems receiving runoff or deposition derived from agricultural sources) have always been poorly constrained and difficult to estimate. It is unclear, for example, whether recent estimates of enhanced N2O emissions from oceans due to N inputs from land are already adequately accounted for by indirect emission estimates or are a previously underestimated source. Tropical deforestation generally results in a brief (months to years) increase in soil N2O emissions, followed by emissions from degraded lands that are lower than those of the original forest. 
The effect globally is probably a net reduction of soil emissions that should be included in global budgets, but that is poorly quantified and often ignored. Where land use change and management includes fire, pyrogenic emissions are important but still uncertain. N2O soil sinks are small globally, but present an interesting conundrum for our understanding of underlying processes of N2O consumption.

  15. Improvement of a plasma uniformity of the 2nd ion source of KSTAR neutral beam injector.

    PubMed

    Jeong, S H; Kim, T S; Lee, K W; Chang, D H; In, S R; Bae, Y S

    2014-02-01

    The 2nd ion source of the KSTAR (Korea Superconducting Tokamak Advanced Research) NBI (Neutral Beam Injector) has been developed and operated since last year. A calorimetric analysis revealed that the heat load on the back plate of this ion source is relatively higher than that of the 1st ion source of the KSTAR NBI, and its spatial plasma uniformity is poor. We therefore sought to identify the factors affecting the uniformity of the plasma density and to improve it. We estimated the effects of the direction of the filament current and of the magnetic field configuration of the plasma generator on the plasma uniformity. We also verified that the operating conditions of an ion source can change the uniformity of its plasma density.

  16. Using Satellite Observations to Evaluate the AeroCOM Volcanic Emissions Inventory and the Dispersal of Volcanic SO2 Clouds in MERRA

    NASA Technical Reports Server (NTRS)

    Hughes, Eric J.; Krotkov, Nickolay; da Silva, Arlindo; Colarco, Peter

    2015-01-01

    Simulation of volcanic emissions in climate models requires information that describes the eruption of the emissions into the atmosphere. While the total amount of gases and aerosols released from a volcanic eruption can be readily estimated from satellite observations, information about the source parameters, like injection altitude, eruption time, and duration, is often not directly known. The AeroCOM volcanic emissions inventory provides estimates of eruption source parameters and has been used to initialize volcanic emissions in reanalysis projects, like MERRA. The AeroCOM volcanic emission inventory provides an eruption's daily SO2 flux and plume top altitude, yet an eruption can be very short lived, lasting only a few hours, and emit clouds at multiple altitudes. Case studies comparing the satellite-observed dispersal of volcanic SO2 clouds to simulations in MERRA have shown mixed results. Some cases show good agreement with observations (Okmok, 2008), while for other eruptions the observed initial SO2 mass is half of that in the simulations (Sierra Negra, 2005). In other cases, the initial SO2 amount agrees with the observations but the dispersal rates differ markedly (Soufriere Hills, 2006). In the aviation hazards community, deriving accurate source terms is crucial for monitoring and short-term (24-h) forecasting of volcanic clouds. Back trajectory methods have been developed which use satellite observations and transport models to estimate the injection altitude, eruption time, and eruption duration of observed volcanic clouds. These methods can provide eruption timing estimates at a 2-hour temporal resolution and estimate the altitude and depth of a volcanic cloud. To better understand the differences between MERRA simulations and volcanic SO2 observations, back trajectory methods are used to estimate the source term parameters for a few volcanic eruptions and compared to their corresponding entry in the AeroCOM volcanic emission inventory.
The nature of these mixed results is discussed with respect to the source term estimates.

  17. Estimation of depth to magnetic source using maximum entropy power spectra, with application to the Peru-Chile Trench

    USGS Publications Warehouse

    Blakely, Richard J.

    1981-01-01

    Estimations of the depth to magnetic sources using the power spectrum of magnetic anomalies generally require long magnetic profiles. The method developed here uses the maximum entropy power spectrum (MEPS) to calculate depth to source on short windows of magnetic data; resolution is thereby improved. The method operates by dividing a profile into overlapping windows, calculating a maximum entropy power spectrum for each window, linearizing the spectra, and calculating with least squares the various depth estimates. The assumptions of the method are that the source is two dimensional and that the intensity of magnetization includes random noise; knowledge of the direction of magnetization is not required. The method is applied to synthetic data and to observed marine anomalies over the Peru-Chile Trench. The analyses indicate a continuous magnetic basement extending from the eastern margin of the Nazca plate and into the subduction zone. The computed basement depths agree with acoustic basement seaward of the trench axis, but deepen as the plate approaches the inner trench wall. This apparent increase in the computed depths may result from the deterioration of magnetization in the upper part of the ocean crust, possibly caused by compressional disruption of the basaltic layer. Landward of the trench axis, the depth estimates indicate possible thrusting of the oceanic material into the lower slope of the continental margin.
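    The underlying slope relation, common to spectral depth-to-source methods, can be sketched directly: for a source ensemble at depth d, the anomaly power spectrum decays roughly as e^(-2kd), so a linear fit of ln P(k) against wavenumber k yields the depth from the slope. This illustrates only the depth-from-slope step, not the maximum entropy spectrum estimation or windowing scheme of the paper:

    ```python
    import math

    def depth_from_spectrum(wavenumbers, powers):
        # For magnetic sources at depth d, the anomaly power spectrum decays
        # approximately as P(k) ~ A * exp(-2 k d), so ln P(k) is linear in k
        # with slope -2d; fit the line and return depth = -slope / 2.
        logs = [math.log(p) for p in powers]
        n = len(wavenumbers)
        mk = sum(wavenumbers) / n
        ml = sum(logs) / n
        slope = (sum((k - mk) * (l - ml) for k, l in zip(wavenumbers, logs))
                 / sum((k - mk) ** 2 for k in wavenumbers))
        return -slope / 2.0
    ```

    The paper's contribution is obtaining a usable spectrum from short data windows via maximum entropy, so that this fit can be applied locally along a profile.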

  18. Optimal wavefront estimation of incoherent sources

    NASA Astrophysics Data System (ADS)

    Riggs, A. J. Eldorado; Kasdin, N. Jeremy; Groff, Tyler

    2014-08-01

    Direct imaging is in general necessary to characterize exoplanets and disks. A coronagraph is an instrument used to create a dim (high-contrast) region in a star's PSF where faint companions can be detected. All coronagraphic high-contrast imaging systems use one or more deformable mirrors (DMs) to correct quasi-static aberrations and recover contrast in the focal plane. Simulations show that existing wavefront control algorithms can correct for diffracted starlight in just a few iterations, but in practice tens or hundreds of control iterations are needed to achieve high contrast. The discrepancy largely arises from the fact that simulations have perfect knowledge of the wavefront and DM actuation. Thus, wavefront correction algorithms are currently limited by the quality and speed of wavefront estimates. Exposures in space will take orders of magnitude more time than any calculations, so a nonlinear estimation method that needs fewer images but more computational time would be advantageous. In addition, current wavefront correction routines seek only to reduce diffracted starlight. Here we present nonlinear estimation algorithms that include optimal estimation of sources incoherent with a star such as exoplanets and debris disks.

  19. Regional pollution potential in the northwestern United States.

    Treesearch

    Sue A. Ferguson; Miriam L. Rorig

    2003-01-01

    The potential for air pollution from industrial sources to reach wilderness areas throughout the Northwestern United States is approximated from monthly mean emissions, along with wind speeds and directions. A simple index is derived to estimate downwind concentration. Maps of pollution potential were generated for each pollution component (particulates, sulfur oxides...

  20. Bias Corrections for Regional Estimates of the Time-averaged Geomagnetic Field

    NASA Astrophysics Data System (ADS)

    Constable, C.; Johnson, C. L.

    2009-05-01

    We assess two sources of bias in the time-averaged geomagnetic field (TAF) and paleosecular variation (PSV): inadequate temporal sampling, and the use of unit vectors in deriving temporal averages of the regional geomagnetic field. For the temporal sampling question we use statistical resampling of existing data sets to minimize and correct for bias arising from uneven temporal sampling in TAF and PSV studies. The techniques are illustrated using data derived from Hawaiian lava flows for 0-5 Ma: directional observations are an updated version of a previously published compilation of paleomagnetic directional data centered on ±20° latitude by Lawrence et al. (2006); intensity data are drawn from Tauxe & Yamazaki (2007). We conclude that poor temporal sampling can produce biased estimates of TAF and PSV, and that resampling to an appropriate statistical distribution of ages reduces this bias. We suggest that similar resampling should be attempted as a bias correction for all regional paleomagnetic data to be used in TAF and PSV modeling. The second potential source of bias is the use of directional data in place of full vector data to estimate the average field. This is investigated for the full vector subset of the updated Hawaiian data set. Lawrence, K.P., C.G. Constable, and C.L. Johnson, 2006, Geochem. Geophys. Geosyst., 7, Q07007, DOI 10.1029/2005GC001181. Tauxe, L., & Yamazaki, T., 2007, Treatise on Geophysics, 5, Geomagnetism, Elsevier, Amsterdam, Chapter 13, p. 509.
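    One simple way to "resample to an appropriate statistical distribution of ages" is an inverse-crowding weighted bootstrap: draw records with probability inversely proportional to how many records share their age bin, so that over-sampled eruptive episodes stop dominating the average. This sketch is an illustrative guess at that idea, not the authors' procedure:

    ```python
    import random

    def age_balanced_resample(records, bin_width, n_draws, seed=0):
        # records: list of (age, datum) pairs; draw with weights inversely
        # proportional to the crowding of each record's age bin, so the
        # resampled set approaches a uniform age distribution over
        # occupied bins
        random.seed(seed)
        counts = {}
        for age, _ in records:
            b = int(age // bin_width)
            counts[b] = counts.get(b, 0) + 1
        weights = [1.0 / counts[int(age // bin_width)] for age, _ in records]
        total = sum(weights)
        cdf, acc = [], 0.0
        for w in weights:
            acc += w / total
            cdf.append(acc)
        cdf[-1] = 1.0  # guard against floating-point shortfall
        out = []
        for _ in range(n_draws):
            u = random.random()
            out.append(records[next(i for i, c in enumerate(cdf) if u <= c)])
        return out
    ```

    If 90% of records fall in one age bin and 10% in another, the resampled set is close to 50/50, which is the bias-reduction effect the abstract describes.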

  1. Beyond seismic interferometry: imaging the earth's interior with virtual sources and receivers inside the earth

    NASA Astrophysics Data System (ADS)

    Wapenaar, C. P. A.; Van der Neut, J.; Thorbecke, J.; Broggini, F.; Slob, E. C.; Snieder, R.

    2015-12-01

    Imagine one could place seismic sources and receivers at any desired position inside the earth. Since the receivers would record the full wave field (direct waves, up- and downward reflections, multiples, etc.), this would give a wealth of information about the local structures, material properties and processes in the earth's interior. Although in reality one cannot place sources and receivers anywhere inside the earth, it appears to be possible to create virtual sources and receivers at any desired position, which accurately mimics the desired situation. The underlying method involves some major steps beyond standard seismic interferometry. With seismic interferometry, virtual sources can be created at the positions of physical receivers, assuming these receivers are illuminated isotropically. Our proposed method does not need physical receivers at the positions of the virtual sources; moreover, it does not require isotropic illumination. To create virtual sources and receivers anywhere inside the earth, it suffices to record the reflection response with physical sources and receivers at the earth's surface. We do not need detailed information about the medium parameters; it suffices to have an estimate of the direct waves between the virtual-source positions and the acquisition surface. With these prerequisites, our method can create virtual sources and receivers, anywhere inside the earth, which record the full wave field. The up- and downward reflections, multiples, etc. in the virtual responses are extracted directly from the reflection response at the surface. The retrieved virtual responses form an ideal starting point for accurate seismic imaging, characterization and monitoring.

  2. Anthropogenic combustion iron as a complex climate forcer

    DOE PAGES

    Matsui, Hitoshi; Mahowald, Natalie M.; Moteki, Nobuhiro; ...

    2018-04-23

    Atmospheric iron affects the global carbon cycle by modulating ocean biogeochemistry through the deposition of soluble iron to the ocean. Iron emitted by anthropogenic (fossil fuel) combustion is a source of soluble iron that is currently considered less important than other soluble iron sources, such as mineral dust and biomass burning. Here we show that the atmospheric burden of anthropogenic combustion iron is 8 times greater than previous estimates by incorporating recent measurements of anthropogenic magnetite into a global aerosol model. This new estimation increases the total deposition flux of soluble iron to southern oceans (30–90 °S) by 52%, with a larger contribution of anthropogenic combustion iron than dust and biomass burning sources. The direct radiative forcing of anthropogenic magnetite is estimated to be 0.021 W m⁻² globally and 0.22 W m⁻² over East Asia. In conclusion, our results demonstrate that anthropogenic combustion iron is a larger and more complex climate forcer than previously thought, and therefore plays a key role in the Earth system.

  3. Efficient Bayesian experimental design for contaminant source identification

    NASA Astrophysics Data System (ADS)

    Zhang, Jiangjiang; Zeng, Lingzao; Chen, Cheng; Chen, Dingjiang; Wu, Laosheng

    2015-01-01

    In this study, an efficient full Bayesian approach is developed for the optimal sampling well location design and source parameters identification of groundwater contaminants. An information measure, i.e., the relative entropy, is employed to quantify the information gain from concentration measurements in identifying unknown parameters. In this approach, the sampling locations that give the maximum expected relative entropy are selected as the optimal design. After the sampling locations are determined, a Bayesian approach based on Markov Chain Monte Carlo (MCMC) is used to estimate unknown parameters. In both the design and estimation, the contaminant transport equation is required to be solved many times to evaluate the likelihood. To reduce the computational burden, an interpolation method based on the adaptive sparse grid is utilized to construct a surrogate for the contaminant transport equation. The approximated likelihood can be evaluated directly from the surrogate, which greatly accelerates the design and estimation process. The accuracy and efficiency of our approach are demonstrated through numerical case studies. It is shown that the methods can be used to assist in both single sampling location and monitoring network design for contaminant source identifications in groundwater.
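    The estimation stage can be illustrated with a one-parameter Metropolis sampler against an invented exponential-decay plume model standing in for the contaminant transport solver; the paper's relative-entropy design stage and sparse-grid surrogate are not reproduced, and all model choices below are assumptions for the sketch:

    ```python
    import math
    import random

    def plume_model(x, x0):
        # toy steady-state plume: concentration decays exponentially with
        # distance from the source at x0 (stand-in for the transport solver)
        return math.exp(-abs(x - x0))

    def log_likelihood(x0, wells, data, sigma=0.05):
        # Gaussian measurement-error likelihood of the observed concentrations
        return (-sum((plume_model(w, x0) - data[w]) ** 2 for w in wells)
                / (2.0 * sigma ** 2))

    def metropolis(wells, data, n_iter=5000, step=0.3, start=0.0, seed=1):
        # random-walk Metropolis sampler for the source location x0
        random.seed(seed)
        x = start
        lp = log_likelihood(x, wells, data)
        samples = []
        for _ in range(n_iter):
            prop = x + random.gauss(0.0, step)
            lpp = log_likelihood(prop, wells, data)
            if math.log(random.random()) < lpp - lp:  # accept/reject
                x, lp = prop, lpp
            samples.append(x)
        return samples
    ```

    In the full approach each likelihood evaluation would require a transport-equation solve, which is exactly why the authors replace it with a sparse-grid surrogate.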

  5. Characterization of a gamma-ray source based on a laser-plasma accelerator with applications to radiography

    NASA Astrophysics Data System (ADS)

    Edwards, R. D.; Sinclair, M. A.; Goldsack, T. J.; Krushelnick, K.; Beg, F. N.; Clark, E. L.; Dangor, A. E.; Najmudin, Z.; Tatarakis, M.; Walton, B.; Zepf, M.; Ledingham, K. W. D.; Spencer, I.; Norreys, P. A.; Clarke, R. J.; Kodama, R.; Toyama, Y.; Tampo, M.

    2002-03-01

    The application of high intensity laser-produced gamma rays is discussed with regard to picosecond resolution deep-penetration radiography. The spectrum and angular distribution of these gamma rays are measured using an array of thermoluminescent detectors for both an underdense (gas) target and an overdense (solid) target. It is found that the use of an underdense target in a laser plasma accelerator configuration produces a much more intense and directional source. The peak dose is also increased significantly. Radiography is demonstrated in these experiments and the source size is also estimated.

  6. Sound source tracking device for telematic spatial sound field reproduction

    NASA Astrophysics Data System (ADS)

    Cardenas, Bruno

    This research describes an algorithm that localizes sound sources for use in telematic applications. The localization algorithm is based on amplitude differences between various channels of a microphone array of directional shotgun microphones. The amplitude differences will be used to locate multiple performers and reproduce their voices, which were recorded at close distance with lavalier microphones, spatially corrected using a loudspeaker rendering system. In order to track multiple sound sources in parallel the information gained from the lavalier microphones will be utilized to estimate the signal-to-noise ratio between each performer and the concurrent performers.
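    The amplitude-difference idea can be sketched with idealized cardioid microphones and a single source: scan candidate bearings and keep the one whose predicted (normalized) amplitude pattern best matches the observations. The directivity model, grid scan, and function names are all invented, since the abstract does not specify the algorithm's internals:

    ```python
    import math

    def cardioid_gain(theta, aim):
        # idealized cardioid directivity of a directional microphone
        # aimed at angle `aim` (both angles in radians)
        return (1.0 + math.cos(theta - aim)) / 2.0

    def estimate_bearing(mic_aims, amplitudes):
        # normalize observed channel amplitudes so the unknown source level
        # cancels, then scan candidate bearings for the best pattern match
        total = sum(amplitudes)
        obs = [a / total for a in amplitudes]
        best, best_err = 0.0, float("inf")
        for deg in range(-180, 180):
            th = math.radians(deg)
            g = [cardioid_gain(th, aim) for aim in mic_aims]
            gs = sum(g)
            err = sum((obs[i] - g[i] / gs) ** 2 for i in range(len(obs)))
            if err < best_err:
                best, best_err = th, err
        return best
    ```

    Because only amplitude ratios are used, the estimate is independent of the performer's loudness, which is what lets the lavalier signal-to-noise information disambiguate concurrent performers.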

  7. Part 2. Development of Enhanced Statistical Methods for Assessing Health Effects Associated with an Unknown Number of Major Sources of Multiple Air Pollutants.

    PubMed

    Park, Eun Sug; Symanski, Elaine; Han, Daikwon; Spiegelman, Clifford

    2015-06-01

    A major difficulty with assessing source-specific health effects is that source-specific exposures cannot be measured directly; rather, they need to be estimated by a source-apportionment method such as multivariate receptor modeling. The uncertainty in source apportionment (uncertainty in source-specific exposure estimates and model uncertainty due to the unknown number of sources and identifiability conditions) has been largely ignored in previous studies. Also, spatial dependence of multipollutant data collected from multiple monitoring sites has not yet been incorporated into multivariate receptor modeling. The objectives of this project are (1) to develop a multipollutant approach that incorporates both sources of uncertainty in source-apportionment into the assessment of source-specific health effects and (2) to develop enhanced multivariate receptor models that can account for spatial correlations in the multipollutant data collected from multiple sites. We employed a Bayesian hierarchical modeling framework consisting of multivariate receptor models, health-effects models, and a hierarchical model on latent source contributions. For the health model, we focused on the time-series design in this project. Each combination of number of sources and identifiability conditions (additional constraints on model parameters) defines a different model. We built a set of plausible models with extensive exploratory data analyses and with information from previous studies, and then computed posterior model probability to estimate model uncertainty. Parameter estimation and model uncertainty estimation were implemented simultaneously by Markov chain Monte Carlo (MCMC) methods. We validated the methods using simulated data. We illustrated the methods using PM2.5 (particulate matter ≤ 2.5 μm in aerodynamic diameter) speciation data and mortality data from Phoenix, Arizona, and Houston, Texas.
The Phoenix data included counts of cardiovascular deaths and daily PM2.5 speciation data from 1995-1997. The Houston data included respiratory mortality data and 24-hour PM2.5 speciation data sampled every six days from a region near the Houston Ship Channel in years 2002-2005. We also developed a Bayesian spatial multivariate receptor modeling approach that, while simultaneously dealing with the unknown number of sources and identifiability conditions, incorporated spatial correlations in the multipollutant data collected from multiple sites into the estimation of source profiles and contributions based on the discrete process convolution model for multivariate spatial processes. This new modeling approach was applied to 24-hour ambient air concentrations of 17 volatile organic compounds (VOCs) measured at nine monitoring sites in Harris County, Texas, during years 2000 to 2005. Simulation results indicated that our methods were accurate in identifying the true model and estimated parameters were close to the true values. The results from our methods agreed in general with previous studies on the source apportionment of the Phoenix data in terms of estimated source profiles and contributions. However, we had a greater number of statistically insignificant findings, which was likely a natural consequence of incorporating uncertainty in the estimated source contributions into the health-effects parameter estimation. For the Houston data, a model with five sources (that seemed to be Sulfate-Rich Secondary Aerosol, Motor Vehicles, Industrial Combustion, Soil/Crustal Matter, and Sea Salt) showed the highest posterior model probability among the candidate models considered when fitted simultaneously to the PM2.5 and mortality data. There was a statistically significant positive association between respiratory mortality and same-day PM2.5 concentrations attributed to one of the sources (probably industrial combustion). 
    The Bayesian spatial multivariate receptor modeling approach applied to the VOC data led to the highest posterior model probability for a model with five sources (that seemed to be refinery, petrochemical production, gasoline evaporation, natural gas, and vehicular exhaust) among several candidate models, with the number of sources varying between three and seven and with different identifiability conditions. Our multipollutant approach to assessing source-specific health effects is more advantageous than a single-pollutant approach in that it can estimate total health effects from multiple pollutants and can also identify emission sources that are responsible for adverse health effects. Our Bayesian approach can incorporate not only uncertainty in the estimated source contributions, but also model uncertainty that has not been addressed in previous studies on assessing source-specific health effects. The new Bayesian spatial multivariate receptor modeling approach enables predictions of source contributions at unmonitored sites, minimizing exposure misclassification and providing improved exposure estimates along with their uncertainty estimates, as well as accounting for uncertainty in the number of sources and identifiability conditions.
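    The model-selection step described in this record (scoring each candidate number of sources by a posterior model probability) can be sketched with a toy example. This is not the study's receptor model: a probabilistic-PCA likelihood and a BIC approximation to the marginal likelihood stand in for the full Bayesian hierarchy, and the synthetic data, `bic_for_q`, and all parameter values are hypothetical.

```python
import numpy as np

# Simulate multipollutant data X = contributions @ profiles.T + noise
rng = np.random.default_rng(0)
n, p, true_q = 500, 10, 3
A = rng.normal(size=(p, true_q))           # hypothetical source profiles
G = rng.lognormal(size=(n, true_q))        # hypothetical source contributions
X = G @ A.T + rng.normal(scale=0.5, size=(n, p))

def bic_for_q(X, q):
    # BIC of a q-factor probabilistic-PCA model (Gaussian profile likelihood)
    n, p = X.shape
    eig = np.sort(np.linalg.eigvalsh(np.cov(X, rowvar=False)))[::-1]
    noise = eig[q:].mean()                 # averaged residual eigenvalues
    ll = -0.5 * n * (np.sum(np.log(eig[:q])) + (p - q) * np.log(noise) + p)
    k = p * q + 1                          # crude parameter count
    return -2.0 * ll + k * np.log(n)

bics = np.array([bic_for_q(X, q) for q in range(1, 7)])
w = np.exp(-0.5 * (bics - bics.min()))     # BIC weights
post = w / w.sum()                         # approximate posterior model probs
best_q = 1 + int(np.argmax(post))
```

In the study these probabilities come from MCMC over the full model set rather than a BIC shortcut, but the interpretation is the same: each candidate (number of sources, identifiability conditions) gets a probability, and uncertainty about the winner propagates into the health-effect estimates.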

  8. Focal mechanism of the seismic series prior to the 2011 El Hierro eruption

    NASA Astrophysics Data System (ADS)

    del Fresno, C.; Buforn, E.; Cesca, S.; Domínguez Cerdeña, I.

    2015-12-01

    The onset of the submarine eruption of El Hierro (10 October 2011) was preceded by three months of low-magnitude seismicity (Mw<4.0) characterized by a well-documented hypocenter migration from the center to the south of the island. The seismic sources of this series were studied to understand the physical process of magma migration, using different methodologies to obtain the focal mechanisms of the largest shocks. First, we estimated joint fault plane solutions for 727 shocks using first-motion P polarities to infer the stress pattern of the sequence and the time evolution of the principal axes orientation. Results show almost vertical T-axes during the first two months of the series and horizontal P-axes in the N-S direction coinciding with the migration. Second, a point-source moment tensor inversion was performed with data from the 21 largest earthquakes of the series (M>3.5), fitting amplitude spectra at local distances (<20 km). The reliability and stability of the results were evaluated with synthetic data. Results show a change in the focal mechanism pattern within the first days of October, from complex sources with higher non-double-couple components before that date to a simpler strike-slip mechanism with horizontal tension axes in the E-W direction during the week prior to the eruption onset. A detailed study was carried out for the 8 October 2011 earthquake (Mw=4.0), whose focal mechanism was retrieved using moment tensor inversion at regional and local distances. Results indicate a significant strike-slip component and a null isotropic component. The stress pattern obtained corresponds to horizontal compression in a NNW-SSE direction, parallel to the southern ridge of the island, and quasi-horizontal extension in an E-W direction. Finally, a simple source time function of 0.3 s duration was estimated for this shock using the empirical Green's function methodology.

  9. Female genital mutilation/cutting in Italy: an enhanced estimation for first generation migrant women based on 2016 survey data.

    PubMed

    Ortensi, Livia Elisa; Farina, Patrizia; Leye, Els

    2018-01-12

    Migration flows of women from countries practicing Female Genital Mutilation/Cutting have generated a need for data on women potentially affected by it. This paper presents enhanced estimates for foreign-born women and asylum seekers in Italy in 2016, with the aim of supporting resource planning and policy making and advancing the methodological debate on estimation methods. The estimates build on the most recent methodological developments in direct and indirect estimation of Female Genital Mutilation/Cutting for non-practicing countries. Direct estimation of prevalence was performed for 9 communities using the results of the FGM-Prev survey, held in Italy in 2016. Prevalence for communities not covered by the FGM-Prev survey was estimated using the 'extrapolation of FGM/C countries' prevalence data' method, with corrections according to the selection hypothesis. It is estimated that 60 to 80 thousand foreign-born women aged 15 and over with Female Genital Mutilation/Cutting were present in Italy in 2016. We also estimated around 11 to 13 thousand cut women aged 15 and over among asylum seekers to Italy in 2014-2016. Due to the long-established presence of female migrants from some practicing communities, Female Genital Mutilation/Cutting is also emerging as an issue among women aged 60 and over from selected communities. Female Genital Mutilation/Cutting is an additional source of concern for slightly more than 60% of women seeking asylum. Reliable estimates of Female Genital Mutilation/Cutting at the country level are important for evidence-based policy making and service planning. This study suggests that indirect estimation cannot fully replace direct estimation, even if corrections for migrant socioeconomic selection can be implemented to reduce the bias.

  10. The impact of agricultural soil erosion on the global carbon cycle

    USGS Publications Warehouse

    Van Oost, Kristof; Quine, T.A.; Govers, G.; De Gryze, S.; Six, J.; Harden, J.W.; Ritchie, J.C.; McCarty, G.W.; Heckrath, G.; Kosmas, C.; Giraldez, J.V.; Marques Da Silva, J.R.; Merckx, R.

    2007-01-01

    Agricultural soil erosion is thought to perturb the global carbon cycle, but estimates of its effect range from a source of 1 petagram of carbon per year to a sink of the same magnitude. By using caesium-137 and carbon inventory measurements from a large-scale survey, we found consistent evidence for an erosion-induced sink of atmospheric carbon equivalent to approximately 26% of the carbon transported by erosion. Based on this relationship, we estimated a global carbon sink of 0.12 (range 0.06 to 0.27) petagrams of carbon per year resulting from erosion in the world's agricultural landscapes. Our analysis directly challenges the view that agricultural erosion represents an important source or sink for atmospheric CO2.

  11. Source parameter estimates of echolocation clicks from wild pygmy killer whales (Feresa attenuata) (L)

    NASA Astrophysics Data System (ADS)

    Madsen, P. T.; Kerr, I.; Payne, R.

    2004-10-01

    Pods of the little-known pygmy killer whale (Feresa attenuata) in the northern Indian Ocean were recorded with a vertical hydrophone array connected to a digital recorder sampling at 320 kHz. Recorded clicks were directional, short (25 μs) transients with estimated source levels between 197 and 223 dB re 1 μPa (pp). Spectra of clicks recorded close to or on the acoustic axis were bimodal, with peak frequencies between 45 and 117 kHz and centroid frequencies between 70 and 85 kHz. The clicks share characteristics of echolocation clicks from similar-sized, whistling delphinids and have properties suited for the detection and classification of prey targeted by this odontocete.

  12. Tsunami Simulation Method Assimilating Ocean Bottom Pressure Data Near a Tsunami Source Region

    NASA Astrophysics Data System (ADS)

    Tanioka, Yuichiro

    2018-02-01

    A new method was developed to reproduce the tsunami height distribution in and around the source area, at a given time, from a large number of ocean bottom pressure sensors, without information on the earthquake source. A dense cabled observation network called S-NET, which consists of 150 ocean bottom pressure sensors, was recently installed along a wide portion of the seafloor off Kanto, Tohoku, and Hokkaido in Japan. In the source area, however, ocean bottom pressure sensors cannot directly observe the initial ocean surface displacement, which motivated the new method. The method was tested and functioned well for a synthetic tsunami from a simple rectangular fault with an ocean bottom pressure sensor network at 10 arc-min (about 20 km) intervals. For a more realistic test case, sensors at 15 arc-min intervals along the north-south direction and 30 arc-min intervals along the east-west direction were used. In this test case, the method also functioned well enough to reproduce the tsunami height field in general. These results indicate that the method could be used for tsunami early warning by estimating the tsunami height field just after a great earthquake without the need for earthquake source information.
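    The idea of recovering a height field directly from a dense pressure-sensor network can be illustrated with a deliberately simplified sketch. This is not Tanioka's assimilation scheme: a known Gaussian "tsunami", synthetic sensors on a 20 km grid (echoing the 10 arc-min test spacing), and plain Gaussian-kernel smoothing stand in for the actual method and S-NET geometry.

```python
import numpy as np

rng = np.random.default_rng(1)
grid = np.linspace(0.0, 100.0, 101)                 # km
gx, gy = np.meshgrid(grid, grid)
# "True" tsunami height field: a Gaussian bump (illustrative only)
true = np.exp(-((gx - 40) ** 2 + (gy - 60) ** 2) / (2 * 15 ** 2))

# Synthetic bottom-pressure sensors every 20 km, reading the local height
sx, sy = np.meshgrid(np.arange(0, 101, 20), np.arange(0, 101, 20))
sx, sy = sx.ravel().astype(float), sy.ravel().astype(float)
obs = np.exp(-((sx - 40) ** 2 + (sy - 60) ** 2) / (2 * 15 ** 2))
obs = obs + rng.normal(scale=0.01, size=obs.shape)  # sensor noise

def reconstruct(gx, gy, sx, sy, obs, scale=10.0):
    # Nadaraya-Watson (Gaussian-kernel) smoothing of the sensor readings
    d2 = (gx[..., None] - sx) ** 2 + (gy[..., None] - sy) ** 2
    w = np.exp(-d2 / (2 * scale ** 2))
    return (w * obs).sum(axis=-1) / w.sum(axis=-1)

est = reconstruct(gx, gy, sx, sy, obs)
rmse = np.sqrt(np.mean((est - true) ** 2))
```

The point the record makes survives even in this toy: with sensors dense relative to the wavelength of the height field, the field is recoverable from the readings alone, with no fault model anywhere in the loop.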

  13. Measurements of methane emissions at natural gas production sites in the United States.

    PubMed

    Allen, David T; Torres, Vincent M; Thomas, James; Sullivan, David W; Harrison, Matthew; Hendler, Al; Herndon, Scott C; Kolb, Charles E; Fraser, Matthew P; Hill, A Daniel; Lamb, Brian K; Miskimins, Jennifer; Sawyer, Robert F; Seinfeld, John H

    2013-10-29

    Engineering estimates of methane emissions from natural gas production have led to varied projections of national emissions. This work reports direct measurements of methane emissions at 190 onshore natural gas sites in the United States (150 production sites, 27 well completion flowbacks, 9 well unloadings, and 4 workovers). For well completion flowbacks, which clear fractured wells of liquid to allow gas production, methane emissions ranged from 0.01 Mg to 17 Mg (mean = 1.7 Mg; 95% confidence bounds of 0.67-3.3 Mg), compared with an average of 81 Mg per event in the 2011 EPA national emission inventory from April 2013. Emission factors for pneumatic pumps and controllers as well as equipment leaks were both comparable to and higher than estimates in the national inventory. Overall, if emission factors from this work for completion flowbacks, equipment leaks, and pneumatic pumps and controllers are assumed to be representative of national populations and are used to estimate national emissions, total annual emissions from these source categories are calculated to be 957 Gg of methane (with sampling and measurement uncertainties estimated at ± 200 Gg). The estimate for comparable source categories in the EPA national inventory is ~1,200 Gg. Additional measurements of unloadings and workovers are needed to produce national emission estimates for these source categories. The 957 Gg in emissions for completion flowbacks, pneumatics, and equipment leaks, coupled with EPA national inventory estimates for other categories, leads to an estimated 2,300 Gg of methane emissions from natural gas production (0.42% of gross gas production).

  14. Measurements of methane emissions at natural gas production sites in the United States

    PubMed Central

    Allen, David T.; Torres, Vincent M.; Thomas, James; Sullivan, David W.; Harrison, Matthew; Hendler, Al; Herndon, Scott C.; Kolb, Charles E.; Fraser, Matthew P.; Hill, A. Daniel; Lamb, Brian K.; Miskimins, Jennifer; Sawyer, Robert F.; Seinfeld, John H.

    2013-01-01

    Engineering estimates of methane emissions from natural gas production have led to varied projections of national emissions. This work reports direct measurements of methane emissions at 190 onshore natural gas sites in the United States (150 production sites, 27 well completion flowbacks, 9 well unloadings, and 4 workovers). For well completion flowbacks, which clear fractured wells of liquid to allow gas production, methane emissions ranged from 0.01 Mg to 17 Mg (mean = 1.7 Mg; 95% confidence bounds of 0.67–3.3 Mg), compared with an average of 81 Mg per event in the 2011 EPA national emission inventory from April 2013. Emission factors for pneumatic pumps and controllers as well as equipment leaks were both comparable to and higher than estimates in the national inventory. Overall, if emission factors from this work for completion flowbacks, equipment leaks, and pneumatic pumps and controllers are assumed to be representative of national populations and are used to estimate national emissions, total annual emissions from these source categories are calculated to be 957 Gg of methane (with sampling and measurement uncertainties estimated at ±200 Gg). The estimate for comparable source categories in the EPA national inventory is ∼1,200 Gg. Additional measurements of unloadings and workovers are needed to produce national emission estimates for these source categories. The 957 Gg in emissions for completion flowbacks, pneumatics, and equipment leaks, coupled with EPA national inventory estimates for other categories, leads to an estimated 2,300 Gg of methane emissions from natural gas production (0.42% of gross gas production). PMID:24043804

  15. A novel approach for direct estimation of fresh groundwater discharge to an estuary

    USGS Publications Warehouse

    Ganju, Neil K.

    2011-01-01

    Coastal groundwater discharge is an important source of freshwater and nutrients to coastal and estuarine systems. Directly quantifying the spatially integrated discharge of fresh groundwater over a coastline is difficult due to spatial variability and limited observational methods. In this study, I applied a novel approach to estimate net freshwater discharge from a groundwater-fed tidal creek over a spring-neap cycle, with high temporal resolution. Acoustic velocity instruments measured tidal water fluxes while other sensors measured vertical and lateral salinity to estimate cross-sectionally averaged salinity. These measurements were used in a time-dependent version of Knudsen's salt balance calculation to estimate the fresh groundwater contribution to the tidal creek. The resulting time series shows the dependence of fresh groundwater discharge on tidal pumping, and the large difference between monthly mean discharge and instantaneous discharge over shorter timescales. The approach developed here can be implemented over timescales from days to years, in any size estuary with dominant groundwater inputs and well-defined cross-sections, and directly links delivery of groundwater from the watershed with fluxes to the coastal environment. Copyright. Published in 2011 by the American Geophysical Union.
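    The salt balance underlying this record can be sketched with the classic steady-state Knudsen relations (the study used a time-dependent version; this minimal form, with illustrative salinities, only shows how fluxes and salinities constrain the freshwater input):

```python
def knudsen_exchange(S_in, S_out, Q_f):
    # Steady Knudsen relations: salt balance  Q_in * S_in = Q_out * S_out
    # plus volume balance  Q_out = Q_in + Q_f  (Q_f = net freshwater input).
    Q_in = Q_f * S_out / (S_in - S_out)
    Q_out = Q_in + Q_f
    return Q_in, Q_out

def freshwater_from_fluxes(Q_in, Q_out):
    # Inverted use, as in the creek study: measured inflow and outflow
    # through a cross-section give the net fresh (groundwater) contribution.
    return Q_out - Q_in

# Illustrative values: ocean-side salinity 30, creek outflow salinity 28,
# 1 unit of freshwater input.
Q_in, Q_out = knudsen_exchange(S_in=30.0, S_out=28.0, Q_f=1.0)
```

Because the exchange flows (here 14 and 15 units for 1 unit of freshwater) dwarf the fresh fraction, the salt balance is what makes the small groundwater signal measurable from large tidal fluxes.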

  16. Wideband Direction of Arrival Estimation in the Presence of Unknown Mutual Coupling

    PubMed Central

    Li, Weixing; Zhang, Yue; Lin, Jianzhi; Guo, Rui; Chen, Zengping

    2017-01-01

    This paper investigates a subarray-based algorithm for direction of arrival (DOA) estimation with a wideband uniform linear array (ULA) in the presence of frequency-dependent mutual coupling effects. Based on the Toeplitz structure of the mutual coupling matrices, the whole array is divided into a middle subarray and an auxiliary subarray. A two-sided correlation transformation is then applied to the correlation matrix of the middle subarray instead of the whole array; in this way, the mutual coupling effects are eliminated. Finally, the multiple signal classification (MUSIC) method is used to derive the DOAs. When blind angles exist, we refine the DOA estimation using a simple approach based on the frequency-dependent mutual coupling matrices (MCMs). The proposed method achieves high estimation accuracy without any calibration sources, and it has low computational complexity because no iterative processing is required. Simulation results validate the effectiveness and feasibility of the proposed algorithm. PMID:28178177
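    For reference, the final MUSIC step is easy to sketch for a plain narrowband ULA without mutual coupling; the array size, SNR, and angles below are illustrative, and the paper's subarray and correlation-transformation machinery is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)
M, N, d = 8, 200, 0.5              # sensors, snapshots, spacing (wavelengths)
doas = np.deg2rad([-20.0, 30.0])   # true arrival angles (illustrative)

def steering(theta, M, d):
    # M x K matrix of ULA steering vectors for angles theta (radians)
    return np.exp(-2j * np.pi * d * np.arange(M)[:, None] * np.sin(theta))

A = steering(doas, M, d)
S = (rng.normal(size=(2, N)) + 1j * rng.normal(size=(2, N))) / np.sqrt(2)
X = A @ S + 0.05 * (rng.normal(size=(M, N)) + 1j * rng.normal(size=(M, N)))

R = X @ X.conj().T / N                      # sample correlation matrix
eigval, eigvec = np.linalg.eigh(R)          # ascending eigenvalues
En = eigvec[:, :-2]                         # noise subspace (2 sources known)

grid = np.deg2rad(np.linspace(-90.0, 90.0, 1801))
a = steering(grid, M, d)
proj = En @ En.conj().T
# MUSIC pseudospectrum: 1 / (a^H En En^H a), peaks at the DOAs
p_music = 1.0 / np.einsum('mk,mn,nk->k', a.conj(), proj, a).real

# pick the two highest local maxima
peaks = np.where((p_music[1:-1] > p_music[:-2]) &
                 (p_music[1:-1] > p_music[2:]))[0] + 1
top2 = peaks[np.argsort(p_music[peaks])[-2:]]
est = np.sort(np.rad2deg(grid[top2]))
```

The paper's contribution sits upstream of this step: the two-sided correlation transformation produces a coupling-free correlation matrix so that this standard subspace scan can be applied.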

  17. On the angular error of intensity vector based direction of arrival estimation in reverberant sound fields.

    PubMed

    Levin, Dovid; Habets, Emanuël A P; Gannot, Sharon

    2010-10-01

    An acoustic vector sensor provides measurements of both the pressure and the particle velocity of the sound field in which it is placed. These measurements are vectorial in nature and can be used for source localization. A straightforward approach to determining the direction of arrival (DOA) utilizes the acoustic intensity vector, the product of pressure and particle velocity. The accuracy of an intensity-vector-based DOA estimator in the presence of noise has been analyzed previously. In this paper, the effects of reverberation upon the accuracy of such a DOA estimator are examined. It is shown that particular realizations of reverberation differ from an ideal isotropically diffuse field and induce an estimation bias that depends on the room impulse responses (RIRs). The limited knowledge available pertaining to the RIRs is expressed statistically by employing the diffuse qualities of reverberation to extend Polack's statistical RIR model. Expressions for evaluating the typical bias magnitude as well as its probability distribution are derived.
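    The intensity-vector DOA estimator the paper analyzes can be sketched in a few lines. This is a minimal anechoic-case toy (no reverberation, ρc normalized to 1, illustrative noise level), not the paper's bias analysis: for a single plane wave the time-averaged intensity I = ⟨p·v⟩ points along the propagation direction, so the DOA is its opposite.

```python
import numpy as np

rng = np.random.default_rng(0)
theta = np.deg2rad(40.0)                       # true source azimuth (DOA)
s = rng.normal(size=10000)                     # pressure signal at the sensor
u = -np.array([np.cos(theta), np.sin(theta)])  # propagation direction
p = s
# particle velocity of the plane wave (rho * c = 1) plus sensor noise
v = np.outer(u, s) + 0.1 * rng.normal(size=(2, s.size))
I = (p * v).mean(axis=1)                       # time-averaged intensity vector
doa = np.degrees(np.arctan2(-I[1], -I[0]))     # look opposite to propagation
```

In a reverberant room the reflections contribute their own intensity components, which is exactly the RIR-dependent bias the paper quantifies.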

  18. Using a pseudo-dynamic source inversion approach to improve earthquake source imaging

    NASA Astrophysics Data System (ADS)

    Zhang, Y.; Song, S. G.; Dalguer, L. A.; Clinton, J. F.

    2014-12-01

    Imaging a high-resolution spatio-temporal slip distribution of an earthquake rupture is a core research goal in seismology. In general, we expect to obtain a higher-quality source image by improving the observational input data (e.g., using more high-quality near-source stations). However, recent studies show that increasing the surface station density alone does not significantly improve source inversion results (Custodio et al. 2005; Zhang et al. 2014). We introduce correlation structures between the kinematic source parameters slip, rupture velocity, and peak slip velocity (Song et al. 2009; Song and Dalguer 2013) in the non-linear source inversion. The correlation structures are physical constraints derived from rupture dynamics that effectively regularize the model space and may improve source imaging. We name this approach pseudo-dynamic source inversion. We investigate its effectiveness by inverting low-frequency velocity waveforms from a synthetic dynamic rupture model of a buried vertical strike-slip event (Mw 6.5) in a homogeneous half space. In the inversion, we use a genetic algorithm in a Bayesian framework (Moneli et al. 2008) and a dynamically consistent regularized Yoffe function (Tinti et al. 2005) as a single-window slip velocity function. We search for the local rupture velocity directly in the inversion and calculate the rupture time using a ray-tracing technique. We implement both auto- and cross-correlation of slip, rupture velocity, and peak slip velocity in the prior distribution. Our results suggest that the kinematic source model estimates capture the major features of the target dynamic model. The estimated rupture velocity closely matches the target distribution from the dynamic rupture model, and the derived rupture time is smoother than the one we searched directly. 
By implementing both auto- and cross-correlation of the kinematic source parameters, in comparison to traditional smoothing constraints, we are in effect regularizing the model space in a more physics-based manner without losing resolution of the source image. Further investigation is needed to tune the related parameters of pseudo-dynamic source inversion and the relative weighting between the prior and the likelihood function in the Bayesian inversion.

  19. Efficient Convex Optimization for Energy-Based Acoustic Sensor Self-Localization and Source Localization in Sensor Networks.

    PubMed

    Yan, Yongsheng; Wang, Haiyan; Shen, Xiaohong; Leng, Bing; Li, Shuangquan

    2018-05-21

    Energy readings are an efficient and attractive measure for collaborative acoustic source localization in practice because they save both energy and computational capacity. Maximum likelihood problems that fuse received acoustic energy readings transmitted from local sensors are derived. To efficiently solve the nonconvex objective of the optimization problem, we present an approximate estimator of the original problem. Then a direct norm relaxation and a semidefinite relaxation, respectively, are used to derive second-order cone programming, semidefinite programming, or mixed formulations for both sensor self-localization and source localization. Furthermore, by taking colored energy-reading noise into account, several minimax optimization problems are formulated, which are likewise relaxed via the direct norm relaxation and semidefinite relaxation into convex optimization problems. Performance comparison with existing acoustic energy-based source localization methods shows the validity of the proposed methods.

  20. Efficient Convex Optimization for Energy-Based Acoustic Sensor Self-Localization and Source Localization in Sensor Networks

    PubMed Central

    Yan, Yongsheng; Wang, Haiyan; Shen, Xiaohong; Leng, Bing; Li, Shuangquan

    2018-01-01

    Energy readings are an efficient and attractive measure for collaborative acoustic source localization in practice because they save both energy and computational capacity. Maximum likelihood problems that fuse received acoustic energy readings transmitted from local sensors are derived. To efficiently solve the nonconvex objective of the optimization problem, we present an approximate estimator of the original problem. Then a direct norm relaxation and a semidefinite relaxation, respectively, are used to derive second-order cone programming, semidefinite programming, or mixed formulations for both sensor self-localization and source localization. Furthermore, by taking colored energy-reading noise into account, several minimax optimization problems are formulated, which are likewise relaxed via the direct norm relaxation and semidefinite relaxation into convex optimization problems. Performance comparison with existing acoustic energy-based source localization methods shows the validity of the proposed methods. PMID:29883410
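    The energy-based localization idea behind these two records can be sketched without the convex relaxations the papers develop. This toy uses a least-squares grid search under an assumed decay model E_i = S / d_i^α (geometry, decay exponent, and noise level are all made up); for i.i.d. Gaussian energy noise the least-squares fit coincides with maximum likelihood.

```python
import numpy as np

rng = np.random.default_rng(0)
sensors = np.array([[0, 0], [10, 0], [0, 10], [10, 10], [5, -3]], float)
src, S, alpha = np.array([6.0, 4.0]), 100.0, 2.0   # hypothetical truth

d = np.linalg.norm(sensors - src, axis=1)
E = S / d ** alpha + rng.normal(scale=0.05, size=d.size)  # energy readings

xs = np.linspace(0.0, 10.0, 201)
best, best_cost = None, np.inf
for x in xs:
    for y in xs:
        dd = np.maximum(np.linalg.norm(sensors - [x, y], axis=1), 1e-6)
        g = 1.0 / dd ** alpha
        s_hat = (g @ E) / (g @ g)          # closed-form source power estimate
        cost = np.sum((E - s_hat * g) ** 2)
        if cost < best_cost:
            best, best_cost = (x, y), cost
est = np.array(best)
```

The papers' contribution is to replace this exhaustive (and poorly scaling) search over a nonconvex cost with norm/semidefinite relaxations that solve a convex surrogate instead.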

  1. Rapid estimate of earthquake source duration: application to tsunami warning.

    NASA Astrophysics Data System (ADS)

    Reymond, Dominique; Jamelot, Anthony; Hyvernaud, Olivier

    2016-04-01

    We present a method for estimating the source duration of the fault rupture, based on the high-frequency envelope of teleseismic P-waves, inspired by the original work of Ni et al. (2005). This parameter is chiefly of interest for detecting the abnormally slow ruptures characteristic of so-called 'tsunami earthquakes' (Kanamori, 1972). Source durations estimated by this method are validated against two independent methods: the duration obtained by W-phase inversion (Kanamori and Rivera, 2008; Duputel et al., 2012) and the duration calculated by the SCARDEC process, which determines the source time function (Vallée et al., 2011). The estimated source duration is also confronted with the slowness discriminant defined by Newman and Okal (1998), which is calculated routinely for all earthquakes detected by our tsunami warning process (PDFM2, Preliminary Determination of Focal Mechanism; Clément and Reymond, 2014). From the standpoint of operational tsunami warning, numerical tsunami simulations depend heavily on the source estimate: the better the source estimate, the better the tsunami forecast. The source duration is not directly injected into the numerical tsunami simulations, because the kinematics of the source are presently ignored (Jamelot and Reymond, 2015). But in the case of a tsunami earthquake occurring in the shallower part of a subduction zone, we have to consider a source in a medium of low rigidity modulus; consequently, for a given seismic moment, the source dimensions decrease while the slip increases, like a 'compact' source (Okal and Hébert, 2007). Conversely, a rapid 'snappy' earthquake with poor tsunami excitation power is characterized by a higher rigidity modulus and produces weaker displacement and smaller source dimensions than a 'normal' earthquake. 
References: Clément, J. and Reymond, D. (2014). New tsunami forecast tools for the French Polynesia tsunami warning system. Pure Appl. Geophys. 171. Duputel, Z., Rivera, L., Kanamori, H. and Hayes, G. (2012). W phase source inversion for moderate to large earthquakes. Geophys. J. Int. 189, 1125-1147. Kanamori, H. (1972). Mechanism of tsunami earthquakes. Phys. Earth Planet. Inter. 6, 246-259. Kanamori, H. and Rivera, L. (2008). Source inversion of W phase: speeding up seismic tsunami warning. Geophys. J. Int. 175, 222-238. Newman, A. and Okal, E. (1998). Teleseismic estimates of radiated seismic energy: the E/M0 discriminant for tsunami earthquakes. J. Geophys. Res. 103, 26885-26898. Ni, S., Kanamori, H. and Helmberger, D. (2005). Energy radiation from the Sumatra earthquake. Nature 434, 582. Okal, E.A. and Hébert, H. (2007). Far-field modeling of the 1946 Aleutian tsunami. Geophys. J. Int. 169, 1229-1238. Vallée, M., Charléty, J., Ferreira, A.M.G., Delouis, B. and Vergoz, J. (2011). SCARDEC: a new technique for the rapid determination of seismic moment magnitude, focal mechanism and source time functions for large earthquakes using body wave deconvolution. Geophys. J. Int. 184, 338-358.
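    The envelope-duration idea can be sketched with a toy signal. This is a hedged illustration (not the authors' processing chain, and no real seismograms): band-limited noise modulated by a boxcar stands in for the high-frequency P-wave coda, and the duration is read from where the smoothed squared envelope exceeds a fraction of its peak; all values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 20.0                                   # Hz, sampling rate
t = np.arange(0.0, 200.0, 1.0 / fs)
T_true = 60.0                               # s, assumed source duration
window = ((t > 50.0) & (t < 50.0 + T_true)).astype(float)
x = window * rng.normal(size=t.size)        # stand-in for HF P-wave coda

# smoothed squared-amplitude envelope (5 s moving average)
k = int(5 * fs)
env = np.convolve(x ** 2, np.ones(k) / k, mode='same')

# duration = span where the envelope exceeds 25% of its peak
above = env > 0.25 * env.max()
first = int(above.argmax())
last = len(above) - int(above[::-1].argmax())
est_duration = (last - first) / fs
```

An anomalously long envelope duration relative to the seismic moment is the signature of the slow "tsunami earthquakes" the warning process is designed to flag.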

  2. Lesion contrast and detection using sonoelastographic shear velocity imaging: preliminary results

    NASA Astrophysics Data System (ADS)

    Hoyt, Kenneth; Parker, Kevin J.

    2007-03-01

    This paper assesses lesion contrast and detection using sonoelastographic shear velocity imaging. Shear wave interference patterns, termed crawling waves, were simulated for a two-phase medium assuming plane wave conditions. Shear velocity estimates were computed using a spatial autocorrelation algorithm that operates in the direction of shear wave propagation for a given kernel size. Contrast was determined by analyzing the shear velocity estimate transition between media. Experimental results were obtained using heterogeneous phantoms with spherical inclusions (5 or 10 mm in diameter) characterized by elevated shear velocities. Two vibration sources were applied to opposing phantom edges and scanned (orthogonal to shear wave propagation) with an ultrasound scanner equipped for sonoelastography. Demodulated data were saved and transferred to an external computer for processing shear velocity images. Simulation results demonstrate that the shear velocity transition between contrasting media is governed by both the estimator kernel size and the source vibration frequency. Experimental results from phantoms further indicate that decreasing the estimator kernel size produces a corresponding decrease in the shear velocity estimate transition between background and inclusion material, albeit with an increase in estimator noise. Overall, the results demonstrate the ability to generate high-contrast shear velocity images using sonoelastographic techniques and to detect millimeter-sized lesions.

  3. Wireless Sensor Array Network DoA Estimation from Compressed Array Data via Joint Sparse Representation.

    PubMed

    Yu, Kai; Yin, Ming; Luo, Ji-An; Wang, Yingguan; Bao, Ming; Hu, Yu-Hen; Wang, Zhi

    2016-05-23

    A compressive sensing joint sparse representation direction of arrival estimation (CSJSR-DoA) approach is proposed for wireless sensor array networks (WSAN). By exploiting the joint spatial and spectral correlations of acoustic sensor array data, the CSJSR-DoA approach provides reliable DoA estimation using randomly-sampled acoustic sensor data. Since random sampling is performed at remote sensor arrays, less data need to be transmitted over lossy wireless channels to the fusion center (FC), and the expensive source coding operation at sensor nodes can be avoided. To investigate the spatial sparsity, an upper bound of the coherence of incoming sensor signals is derived assuming a linear sensor array configuration. This bound provides a theoretical constraint on the angular separation of acoustic sources to ensure the spatial sparsity of the received acoustic sensor array signals. The Cramér-Rao bound of the CSJSR-DoA estimator that quantifies the theoretical DoA estimation performance is also derived. The potential performance of the CSJSR-DoA approach is validated using both simulations and field experiments on a prototype WSAN platform. Compared to existing compressive sensing-based DoA estimation methods, the CSJSR-DoA approach shows significant performance improvement.

  4. [Estimation of desert vegetation coverage based on multi-source remote sensing data].

    PubMed

    Wan, Hong-Mei; Li, Xia; Dong, Dao-Rui

    2012-12-01

    Taking the lower reaches of the Tarim River in Xinjiang, Northwest China as the study area, and based on ground investigation and multi-source remote sensing data of different resolutions, estimation models for desert vegetation coverage were built, and the precisions of the different estimation methods and models were compared. The results showed that the precision of the estimation models increased with the spatial resolution of the remote sensing data. The estimation precision of the models based on high, middle-high, and middle-low resolution remote sensing data was 89.5%, 87.0%, and 84.56%, respectively, and the precisions of the remote sensing models were higher than that of the vegetation index method. This study revealed how the estimation precision of desert vegetation coverage changes with the spatial resolution of remote sensing data, and realized the quantitative conversion of parameters and scales among high, middle, and low spatial resolution remote sensing data of desert vegetation coverage, which provides direct evidence for establishing and implementing a comprehensive remote sensing monitoring scheme for ecological restoration in the study area.

  5. Planets as background noise sources in free space optical communications

    NASA Technical Reports Server (NTRS)

    Katz, J.

    1986-01-01

Background noise generated by planets is the dominant noise source in most deep space direct detection optical communications systems. Earlier approximate analyses of this problem are based on simplified blackbody calculations and can yield results that are inaccurate by up to an order of magnitude. Various other factors that must be taken into consideration to obtain a more accurate estimate of the noise magnitude, such as the phase angle and the actual spectral dependence of the planet albedo, are examined.
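
The blackbody baseline that such analyses start from is Planck's law; a minimal sketch (SI constants, spectral radiance in W m⁻² sr⁻¹ m⁻¹), before phase-angle and albedo corrections are layered on:

```python
import numpy as np

# Physical constants (SI).
H = 6.62607015e-34   # Planck constant, J s
C = 2.99792458e8     # speed of light, m/s
KB = 1.380649e-23    # Boltzmann constant, J/K

def planck_radiance(wavelength_m, temp_k):
    """Blackbody spectral radiance B(lambda, T) in W m^-2 sr^-1 m^-1."""
    x = H * C / (wavelength_m * KB * temp_k)
    # expm1 keeps the denominator accurate when x is small.
    return (2.0 * H * C**2 / wavelength_m**5) / np.expm1(x)
```

For a solar-temperature source (~5800 K) the radiance peaks near 500 nm, consistent with Wien's displacement law; a cool planet's thermal emission peaks far into the infrared instead.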

  6. S-wave refraction survey of alluvial aggregate

    USGS Publications Warehouse

    Ellefsen, Karl J.; Tuttle, Gary J.; Williams, Jackie M.; Lucius, Jeffrey E.

    2005-01-01

An S-wave refraction survey was conducted in the Yampa River valley near Steamboat Springs, Colo., to determine how well this method could map alluvium, a major source of construction aggregate. At the field site, about 1 m of soil overlaid 8 m of alluvium that, in turn, overlaid sedimentary bedrock. The traveltimes of the direct and refracted S-waves were used to construct velocity cross sections whose various regions were directly related to the soil, alluvium, and bedrock. The cross sections were constrained to match geologic logs that were developed from drill-hole data. This constraint minimized the ambiguity in estimates of the thickness and the velocity of the alluvium, an ambiguity that is inherent to the S-wave refraction method. In the cross sections, the estimated S-wave velocity of the alluvium changed in the horizontal direction, and these changes were attributed to changes in composition of the alluvium. The estimated S-wave velocity of the alluvium was practically constant in the vertical direction, indicating that the fine layering observed in the geologic logs could not be detected. The S-wave refraction survey, in conjunction with independent information such as geologic logs, was found to be suitable for mapping the thickness of the alluvium.
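
The direct and refracted traveltimes behind such a survey come from the standard one-layer-over-half-space model; a sketch with illustrative velocities (not the paper's values):

```python
import math

def direct_time(x, v1):
    """Direct-wave traveltime at offset x in the upper layer (velocity v1)."""
    return x / v1

def refracted_time(x, v1, v2, h):
    """Head-wave traveltime for a layer of thickness h over a half-space (v2 > v1)."""
    return x / v2 + 2.0 * h * math.sqrt(v2**2 - v1**2) / (v1 * v2)

def crossover_distance(v1, v2, h):
    """Offset beyond which the refracted arrival overtakes the direct wave."""
    return 2.0 * h * math.sqrt((v2 + v1) / (v2 - v1))
```

Picking both traveltime branches constrains layer thickness and velocity together, which is exactly the ambiguity the abstract says was resolved by tying the inversion to geologic logs.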

  7. Attenuation Tomography of Northern California and the Yellow Sea / Korean Peninsula from Coda-source Normalized and Direct Lg Amplitudes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ford, S R; Dreger, D S; Phillips, W S

    2008-07-16

Inversions for regional attenuation (1/Q) of Lg are performed in two different regions. The path attenuation component of the Lg spectrum is isolated using the coda-source normalization method, which corrects the Lg spectral amplitude for the source using the stable, coda-derived source spectra. Tomographic images of Northern California agree well with one-dimensional (1-D) Lg Q estimated from five different methods. We note there is some tendency for tomographic smoothing to increase Q relative to targeted 1-D methods. For example, in the San Francisco Bay Area, which has high attenuation relative to the rest of its region, Q is overestimated by ~30. Coda-source normalized attenuation tomography is also carried out for the Yellow Sea/Korean Peninsula (YSKP), where output parameters (site, source, and path terms) are compared with those from the amplitude tomography method of Phillips et al. (2005) as well as a new method that ties the source term to the MDAC formulation (Walter and Taylor, 2001). The source terms show similar scatter between the coda-source corrected and MDAC source perturbation methods, whereas the amplitude method has the greatest correlation with estimated true source magnitude. The coda-source method better represents the source spectra compared to the estimated magnitude, which could be the cause of the scatter. The similarity in the source terms between the coda-source and MDAC-linked methods shows that the latter may approximate the effect of the former, and therefore could be useful in regions without coda-derived sources. The site terms from the MDAC-linked method correlate slightly with global Vs30 measurements. While the coda-source and amplitude ratio methods do not correlate with Vs30 measurements, they do correlate with one another, which provides confidence that the two methods are consistent. The path Q⁻¹ values are very similar between the coda-source and amplitude ratio methods except for small differences in the Daxing'anling Mountains, in the northern YSKP. However, there is one large difference between the MDAC-linked method and the others in the region near stations TJN and INCN, which points to site effects as the cause.
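
After the coda-source correction removes the source term, path Q follows from interstation amplitude decay; a two-station sketch under an assumed geometrical-spreading exponent and Lg group velocity (all values illustrative, not the tomography itself):

```python
import math

def lg_q_two_station(a1, a2, r1, r2, freq, v=3.5, spread=0.5):
    """Estimate Lg Q from source-corrected spectral amplitudes a1, a2 of
    the same event at distances r1 < r2 (km), at frequency freq (Hz).

    Assumed model: A(r) = S * r**(-spread) * exp(-pi * freq * r / (Q * v)),
    with group velocity v in km/s.
    """
    # Remove geometrical spreading, then invert the anelastic decay term.
    decay = math.log((a1 * r1**spread) / (a2 * r2**spread))
    return math.pi * freq * (r2 - r1) / (v * decay)
```

A tomographic inversion does the same algebra simultaneously over many paths, which is where the smoothing bias mentioned in the abstract can enter.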

  8. A Bayesian Machine Learning Model for Estimating Building Occupancy from Open Source Data

    DOE PAGES

    Stewart, Robert N.; Urban, Marie L.; Duchscherer, Samantha E.; ...

    2016-01-01

Understanding building occupancy is critical to a wide array of applications including natural hazards loss analysis, green building technologies, and population distribution modeling. Due to the expense of directly monitoring buildings, scientists rely in addition on a wide and disparate array of ancillary and open source information including subject matter expertise, survey data, and remote sensing information. These data are fused using data harmonization methods, which refer to a loose collection of formal and informal techniques for fusing data together to create viable content for building occupancy estimation. In this paper, we add to the current state of the art by introducing the Population Data Tables (PDT), a Bayesian model and informatics system for systematically arranging data and harmonization techniques into a consistent, transparent, knowledge learning framework that retains in the final estimation the uncertainty emerging from data, expert judgment, and model parameterization. PDT probabilistically estimates ambient occupancy in units of people/1000 ft2 for over 50 building types at the national and sub-national level with the goal of providing global coverage. The challenge of global coverage led to the development of an interdisciplinary geospatial informatics tool that provides the framework for capturing, storing, and managing open source data, handling subject matter expertise, and carrying out Bayesian analytics, as well as visualizing and exporting occupancy estimation results. We present the PDT project, situate the work within the larger community, and report on the progress of this multi-year project.

  9. Direct measurements show decreasing methane emissions from natural gas local distribution systems in the United States.

    PubMed

    Lamb, Brian K; Edburg, Steven L; Ferrara, Thomas W; Howard, Touché; Harrison, Matthew R; Kolb, Charles E; Townsend-Small, Amy; Dyck, Wesley; Possolo, Antonio; Whetstone, James R

    2015-04-21

Fugitive losses from natural gas distribution systems are a significant source of anthropogenic methane. Here, we report on a national sampling program to measure methane emissions from 13 urban distribution systems across the U.S. Emission factors were derived from direct measurements at 230 underground pipeline leaks and 229 metering and regulating facilities using stratified random sampling. When these new emission factors are combined with estimates for customer meters, maintenance, and upsets, and current pipeline miles and numbers of facilities, the total estimate is 393 Gg/yr with a 95% upper confidence limit of 854 Gg/yr (0.10% to 0.22% of the methane delivered nationwide). This fraction includes emissions from city gates to the customer meter, but does not include other urban sources or those downstream of customer meters. The upper confidence limit accounts for the skewed distribution of measurements, where a few large emitters accounted for most of the emissions. This emission estimate is 36% to 70% less than the 2011 EPA inventory (based largely on 1990s emission data), and reflects significant upgrades at metering and regulating stations, improvements in leak detection and maintenance activities, as well as potential effects from differences in methodologies between the two studies.
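
A skew-aware upper confidence limit of the kind described can be sketched with a percentile bootstrap on a toy sample, where one large emitter dominates the total (illustrative numbers only; the study used stratified sampling and a more careful estimator):

```python
import random
import statistics

def bootstrap_total_ucl(measurements, n_sites, n_boot=5000, q=0.95, seed=1):
    """Scale a skewed sample of per-leak emission rates to a population
    total; return (point estimate, upper confidence limit at level q).

    The percentile bootstrap keeps the heavy right tail that a normal
    approximation would understate.
    """
    rng = random.Random(seed)
    totals = []
    for _ in range(n_boot):
        resample = [rng.choice(measurements) for _ in measurements]
        totals.append(statistics.mean(resample) * n_sites)
    totals.sort()
    estimate = statistics.mean(measurements) * n_sites
    return estimate, totals[int(q * n_boot) - 1]

# One "super-emitter" among small leaks, scaled to 1000 sites.
est, ucl = bootstrap_total_ucl([1.0, 1.0, 2.0, 2.0, 3.0, 100.0], 1000)
```

The gap between `est` and `ucl` mirrors the 393 vs. 854 Gg/yr spread in the abstract: the confidence limit is far above the mean because the total is driven by rare large emitters.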

  10. Uncertainty Propagation for Terrestrial Mobile Laser Scanner

    NASA Astrophysics Data System (ADS)

    Mezian, c.; Vallet, Bruno; Soheilian, Bahman; Paparoditis, Nicolas

    2016-06-01

Laser scanners are used more and more in mobile mapping systems. They provide 3D point clouds that are used for object reconstruction and registration of the system. For both of these applications, uncertainty analysis of the 3D points is of great interest but rarely investigated in the literature. In this paper we present a complete pipeline that takes into account all the sources of uncertainty and allows a covariance matrix to be computed for each 3D point. The sources of uncertainty are the laser scanner, the calibration of the scanner with respect to the vehicle, and the direct georeferencing system. We assume that all the uncertainties follow a Gaussian distribution. The variances of the laser scanner measurements (two angles and one distance) are usually specified by the manufacturer. This is also the case for integrated direct georeferencing devices. Residuals of the calibration process were used to estimate the covariance matrix of the 6D transformation between the laser scanner and the vehicle frame. Knowing the variances of all sources of uncertainty, we applied uncertainty propagation to compute the variance-covariance matrix of every 3D point. Such an uncertainty analysis makes it possible to estimate the impact of different laser scanners and georeferencing devices on the quality of the obtained 3D points. The obtained uncertainty values were illustrated using error ellipsoids on different datasets.
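
The per-point covariance step can be sketched with first-order (Jacobian) propagation through the polar-to-Cartesian conversion of a single laser return; the spherical convention and variances below are illustrative, not the paper's full sensor-to-vehicle-to-world chain:

```python
import numpy as np

def point_covariance(r, theta, phi, var_r, var_theta, var_phi):
    """First-order propagation of range/angle variances to a 3x3
    Cartesian covariance for one return.

    Assumed convention: x = r sin(theta) cos(phi), y = r sin(theta) sin(phi),
    z = r cos(theta); inputs independent and Gaussian.
    """
    st, ct = np.sin(theta), np.cos(theta)
    sp, cp = np.sin(phi), np.cos(phi)
    # Jacobian of (x, y, z) with respect to (r, theta, phi).
    J = np.array([
        [st * cp, r * ct * cp, -r * st * sp],
        [st * sp, r * ct * sp,  r * st * cp],
        [ct,     -r * st,       0.0],
    ])
    sigma = np.diag([var_r, var_theta, var_phi])
    return J @ sigma @ J.T   # covariance of the Cartesian point
```

For a point on the x-axis, the range variance maps straight onto x while each angular variance scales with r², which is why error ellipsoids elongate tangentially with distance.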

  11. Beyond Hammers and Nails: Mitigating and Verifying Greenhouse Gas Emissions

    NASA Astrophysics Data System (ADS)

    Gurney, Kevin Robert

    2013-05-01

    One of the biggest challenges to future international agreements on climate change is an independent, science-driven method of verifying reductions in greenhouse gas emissions (GHG) [Niederberger and Kimble, 2011]. The scientific community has thus far emphasized atmospheric measurements to assess changes in emissions. An alternative is direct measurement or estimation of fluxes at the source. Given the many challenges facing the approach that uses "top-down" atmospheric measurements and recent advances in "bottom-up" estimation methods, I challenge the current doctrine, which has the atmospheric measurement approach "validating" bottom-up, "good-faith" emissions estimation [Balter, 2012] or which holds that the use of bottom-up estimation is like "dieting without weighing oneself" [Nisbet and Weiss, 2010].

  12. Linear Vector Quantisation and Uniform Circular Arrays based decoupled two-dimensional angle of arrival estimation

    NASA Astrophysics Data System (ADS)

    Ndaw, Joseph D.; Faye, Andre; Maïga, Amadou S.

    2017-05-01

Artificial neural network (ANN)-based models are an efficient way of performing source localisation. However, very large training sets are needed to precisely estimate two-dimensional direction of arrival (2D-DOA) with ANN models. In this paper we present a fast artificial neural network approach for 2D-DOA estimation with reduced training set sizes. We exploit the symmetry properties of Uniform Circular Arrays (UCA) to build two different datasets for elevation and azimuth angles. Linear Vector Quantisation (LVQ) neural networks are then sequentially trained on each dataset to separately estimate elevation and azimuth angles. A multilevel training process is applied to further reduce the training set sizes.
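
A minimal sketch of the LVQ1 update rule that such networks are built on (the generic textbook rule, not the authors' multilevel training procedure; prototypes and samples are invented):

```python
def lvq1_step(prototypes, labels, x, y, lr=0.05):
    """One LVQ1 update: the nearest prototype moves toward sample x if its
    label matches y, and away from it otherwise. Returns the winner index."""
    dists = [sum((p - q) ** 2 for p, q in zip(proto, x))
             for proto in prototypes]
    winner = dists.index(min(dists))
    sign = 1.0 if labels[winner] == y else -1.0
    prototypes[winner] = [p + sign * lr * (q - p)
                          for p, q in zip(prototypes[winner], x)]
    return winner

# Two prototypes for two angle classes; a class-0 sample near prototype 0.
protos = [[0.0, 0.0], [1.0, 1.0]]
w = lvq1_step(protos, [0, 1], [0.2, 0.0], 0)
```

Training elevation and azimuth quantisers separately, as the abstract describes, keeps each codebook small, which is where the reduction in training set size comes from.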

  13. SISSY: An efficient and automatic algorithm for the analysis of EEG sources based on structured sparsity.

    PubMed

    Becker, H; Albera, L; Comon, P; Nunes, J-C; Gribonval, R; Fleureau, J; Guillotel, P; Merlet, I

    2017-08-15

Over the past decades, a multitude of different brain source imaging algorithms have been developed to identify the neural generators underlying surface electroencephalography measurements. While most of these techniques focus on determining the source positions, only a small number of recently developed algorithms provide an indication of the spatial extent of the distributed sources. In a recent comparison of brain source imaging approaches, the VB-SCCD algorithm was shown to be one of the most promising of these methods. However, this technique suffers from several problems: it leads to amplitude-biased source estimates, it has difficulties in separating close sources, and it has a high computational complexity due to its implementation using second order cone programming. To overcome these problems, we propose to include an additional regularization term that imposes sparsity in the original source domain and to solve the resulting optimization problem using the alternating direction method of multipliers. Furthermore, we show that the algorithm yields more robust solutions by taking into account the temporal structure of the data. We also propose a new method to automatically threshold the estimated source distribution, which makes it possible to delineate the active brain regions. The new algorithm, called Source Imaging based on Structured Sparsity (SISSY), is analyzed by means of realistic computer simulations and is validated on the clinical data of four patients.
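
The sparsity term is handled through soft-thresholding, the proximal operator of the l1 norm. A minimal sketch using ISTA in place of the paper's ADMM solver (the same thresholding step inside a simpler iteration; the leadfield `L` and weight `lam` below are illustrative):

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal operator of the l1 norm: shrink each entry toward zero by t."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(L, b, lam, n_iter=500):
    """Minimize ||L x - b||^2 / 2 + lam * ||x||_1 (sparse source estimate).

    ISTA stands in for the ADMM used by SISSY; both alternate a data-fit
    step with the soft-thresholding step above.
    """
    step = 1.0 / np.linalg.norm(L, 2) ** 2   # 1 / Lipschitz constant
    x = np.zeros(L.shape[1])
    for _ in range(n_iter):
        grad = L.T @ (L @ x - b)
        x = soft_threshold(x - step * grad, step * lam)
    return x
```

With an identity leadfield the solution is just the soft-thresholded data, which makes the amplitude bias of pure l1 penalties (each kept coefficient is shrunk by `lam`) easy to see.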

  14. A revised ground-motion and intensity interpolation scheme for shakemap

    USGS Publications Warehouse

    Worden, C.B.; Wald, D.J.; Allen, T.I.; Lin, K.; Garcia, D.; Cua, G.

    2010-01-01

We describe a weighted-average approach for incorporating various types of data (observed peak ground motions and intensities, and estimates from ground-motion prediction equations) into the ShakeMap ground motion and intensity mapping framework. This approach represents a fundamental revision of our existing ShakeMap methodology. In addition, the increased availability of near-real-time macroseismic intensity data, the development of new relationships between intensity and peak ground motions, and new relationships to directly predict intensity from earthquake source information have facilitated the inclusion of intensity measurements directly into ShakeMap computations. Our approach allows for the combination of (1) direct observations (ground-motion measurements or reported intensities), (2) observations converted from intensity to ground motion (or vice versa), and (3) estimated ground motions and intensities from prediction equations or numerical models. Critically, each of the aforementioned data types must include an estimate of its uncertainties, including those caused by scaling the influence of observations to surrounding grid points and those associated with estimates given an unknown fault geometry. The ShakeMap ground-motion and intensity estimates are an uncertainty-weighted combination of these various data and estimates. A natural by-product of this interpolation process is an estimate of total uncertainty at each point on the map, which can be vital for comprehensive inventory loss calculations. We perform a number of tests to validate this new methodology and find that it produces a substantial improvement in the accuracy of ground-motion predictions over empirical prediction equations alone.
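
At a single grid point, an uncertainty-weighted combination of this kind reduces to an inverse-variance average, whose output variance is the "by-product" uncertainty the abstract mentions; a minimal sketch with invented values:

```python
def combine(values, variances):
    """Inverse-variance (uncertainty-weighted) average of several
    ground-motion estimates; returns the combined value and its variance."""
    weights = [1.0 / v for v in variances]
    total = sum(weights)
    mean = sum(w * x for w, x in zip(weights, values)) / total
    return mean, 1.0 / total

# A precise station observation (variance 0.1) dominates a loose
# prediction-equation estimate (variance 10.0).
mean, var = combine([1.0, 3.0], [0.1, 10.0])
```

Note that the combined variance `1/total` is always smaller than the smallest input variance, so adding even a poor estimate never degrades the map point.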

  15. Comparing U.S. Injury Death Estimates from GBD 2015 and CDC WONDER.

    PubMed

    Wu, Yue; Cheng, Xunjie; Ning, Peishan; Cheng, Peixia; Schwebel, David C; Hu, Guoqing

    2018-01-07

Objective: The purpose of the present study was to examine the consistency of injury death statistics from the United States CDC Wide-ranging Online Data for Epidemiologic Research (CDC WONDER) with GBD 2015 estimates. Methods: Differences in deaths and the percent difference in deaths between GBD 2015 and CDC WONDER were assessed, as were changes in deaths between 2000 and 2015 for the two datasets. Results: From 2000 to 2015, GBD 2015 estimates for U.S. injury deaths were somewhat higher than CDC WONDER estimates in most categories, with the exception of deaths from falls and from forces of nature, war, and legal intervention in 2015. Encouragingly, the difference in total injury deaths between the two data sources narrowed from 44,897 (percent difference in deaths = 41%) in 2000 to 34,877 (percent difference in deaths = 25%) in 2015. Differences in deaths and percent differences in deaths between the two data sources varied greatly across injury causes and over the assessment years. The two data sources show consistent directions of change from 2000 to 2015 for all injury causes except forces of nature, war, and legal intervention, and adverse effects of medical treatment. Conclusions: We conclude that further studies are warranted to interpret the inconsistencies in the data and to develop estimation approaches that increase the consistency of the two datasets.
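
The comparison metric is simple arithmetic; a sketch of the percent-difference convention assumed here (the abstract does not state the baseline, so taking CDC WONDER as the denominator is an assumption):

```python
def percent_difference(gbd_deaths, wonder_deaths):
    """Percent difference in deaths, taken relative to the CDC WONDER count."""
    return 100.0 * (gbd_deaths - wonder_deaths) / wonder_deaths
```

Under this convention a positive value means GBD 2015 exceeds CDC WONDER, matching the abstract's statement that GBD estimates were higher in most categories.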

  16. Bayesian estimation of source parameters and associated Coulomb failure stress changes for the 2005 Fukuoka (Japan) Earthquake

    NASA Astrophysics Data System (ADS)

    Dutta, Rishabh; Jónsson, Sigurjón; Wang, Teng; Vasyura-Bathke, Hannes

    2018-04-01

    Several researchers have studied the source parameters of the 2005 Fukuoka (northwestern Kyushu Island, Japan) earthquake (Mw 6.6) using teleseismic, strong motion and geodetic data. However, in all previous studies, errors of the estimated fault solutions have been neglected, making it impossible to assess the reliability of the reported solutions. We use Bayesian inference to estimate the location, geometry and slip parameters of the fault and their uncertainties using Interferometric Synthetic Aperture Radar and Global Positioning System data. The offshore location of the earthquake makes the fault parameter estimation challenging, with geodetic data coverage mostly to the southeast of the earthquake. To constrain the fault parameters, we use a priori constraints on the magnitude of the earthquake and the location of the fault with respect to the aftershock distribution and find that the estimated fault slip ranges from 1.5 to 2.5 m with decreasing probability. The marginal distributions of the source parameters show that the location of the western end of the fault is poorly constrained by the data whereas that of the eastern end, located closer to the shore, is better resolved. We propagate the uncertainties of the fault model and calculate the variability of Coulomb failure stress changes for the nearby Kego fault, located directly below Fukuoka city, showing that the main shock increased stress on the fault and brought it closer to failure.
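
Bayesian source-parameter estimation of this kind can be miniaturized to a one-parameter Metropolis sampler; the sketch below infers a single slip value from a toy linear forward model with a flat prior (the real problem has many fault parameters, full InSAR/GPS forward models, and informative priors):

```python
import math
import random
import statistics

def metropolis_slip(data, g, sigma, n_samples=20000, seed=0):
    """1-D Metropolis sampler for fault slip s (m), given observations
    d_i = g * s + noise. Flat prior on s in [0, 5] m; Gaussian
    likelihood with standard deviation sigma.
    """
    rng = random.Random(seed)

    def log_post(s):
        if not 0.0 <= s <= 5.0:
            return float("-inf")            # outside the prior support
        return -sum((d - g * s) ** 2 for d in data) / (2.0 * sigma**2)

    s = 2.0
    lp = log_post(s)
    samples = []
    for _ in range(n_samples):
        prop = s + rng.gauss(0.0, 0.2)      # random-walk proposal
        lp_prop = log_post(prop)
        if rng.random() < math.exp(min(0.0, lp_prop - lp)):
            s, lp = prop, lp_prop           # accept
        samples.append(s)
    return samples

# Noise-free synthetic data with true slip 1.8 m.
samples = metropolis_slip([1.8] * 20, g=1.0, sigma=0.1)
```

The retained samples approximate the posterior, so marginal histograms and derived quantities (such as Coulomb stress changes computed per sample) carry the parameter uncertainty forward, as the paper does.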

  17. Aerosol Direct Radiative Effects and Heating in the New Era of Active Satellite Observations

    NASA Astrophysics Data System (ADS)

    Matus, Alexander V.

Atmospheric aerosols impact the global energy budget by scattering and absorbing solar radiation. Despite their impacts, aerosols remain a significant source of uncertainty in our ability to predict future climate. Multi-sensor observations from the A-Train satellite constellation provide valuable observational constraints necessary to reduce uncertainties in model simulations of aerosol direct effects. This study will discuss recent efforts to quantify aerosol direct effects globally and regionally using CloudSat's radiative fluxes and heating rates product. Improving upon previous techniques, this approach leverages the capability of CloudSat and CALIPSO to retrieve vertically resolved estimates of cloud and aerosol properties critical for accurately evaluating the radiative impacts of aerosols. We estimate the global annual mean aerosol direct effect to be -1.9 +/- 0.6 W/m2, which is in better agreement with previously published estimates from global models than previous satellite-based estimates. Detailed comparisons against a fully coupled simulation of the Community Earth System Model, however, reveal that this agreement on the global annual mean masks large regional discrepancies between modeled and observed estimates of aerosol direct effects related to model biases in cloud cover. A low bias in stratocumulus cloud cover over the southeastern Pacific Ocean, for example, leads to an overestimate of the radiative effects of marine aerosols. Stratocumulus clouds over the southeastern Atlantic Ocean can enhance aerosol absorption by 50%, allowing aerosol layers to remain self-lofted in an area of subsidence. Aerosol heating is found to peak at 0.6 +/- 0.3 K/day at an altitude of 4 km in September, when biomass burning reaches a maximum. Finally, the contributions of observed aerosol components are evaluated to estimate the direct radiative forcing of anthropogenic aerosols.
Aerosol forcing is computed using satellite-based radiative kernels that describe the sensitivity of shortwave fluxes in response to aerosol optical depth. The direct radiative forcing is estimated to be -0.21 W/m2 with the largest contributions from pollution that is partially offset by a positive forcing from smoke aerosols. The results from these analyses provide new benchmarks on the global radiative effects of aerosols and offer new insights for improving future assessments.
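
The kernel approach reduces, per grid cell, to multiplying a precomputed flux sensitivity (dF/dAOD) by the anthropogenic AOD perturbation and then area-averaging; a sketch with invented numbers:

```python
def kernel_forcing(kernels, delta_aod, area_weights):
    """Area-weighted direct radiative forcing: each cell contributes its
    radiative kernel (W m^-2 per unit AOD) times the anthropogenic AOD
    change, averaged with area weights."""
    num = sum(k * da * w for k, da, w in zip(kernels, delta_aod, area_weights))
    return num / sum(area_weights)

# Two hypothetical grid cells: stronger and weaker anthropogenic loading.
forcing = kernel_forcing([-20.0, -10.0], [0.02, 0.01], [1.0, 1.0])
```

Negative kernels describe cooling (scattering) aerosols; absorbing species such as smoke would carry positive kernels over bright surfaces or clouds, which is how the partial offset in the abstract arises.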

  18. Dose rate estimation around a 60Co gamma-ray irradiation source by means of 115mIn photoactivation.

    PubMed

    Murataka, Ayanori; Endo, Satoru; Kojima, Yasuaki; Shizuma, Kiyoshi

    2010-01-01

Photoactivation of the nuclear isomer (115m)In, with a half-life of 4.48 h, occurs under (60)Co gamma-ray irradiation. This is because resonance gamma-ray absorption occurs at the 1078 keV level of stable (115)In, and gamma-rays of that energy are produced by Compton scattering of the primary (60)Co gamma-rays. In this work, photoactivation of (115m)In was applied to estimate the dose rate distribution around a (60)Co irradiation source, using a standard dose rate obtained with an alanine dosimeter. The (115m)In photoactivation was measured at 10 to 160 cm from the (60)Co source. The derived dose rate distribution shows good agreement with both the alanine dosimeter data and Monte Carlo simulation. It is found that the angular distribution of the dose rate along a circumference at a radius of 2.8 cm from the central axis shows +/- 10% periodic variation reflecting the radioactive strength of the source rods, but a less periodic distribution at radii of 10 and 20 cm. The (115m)In photoactivation along the vertical direction in the central irradiation port strongly depends on the height and radius, as indicated by Monte Carlo simulation. It is demonstrated that (115m)In photoactivation is a convenient method for estimating the dose rate distribution around a (60)Co source.
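
As a first-order illustration of anchoring a dose-rate distribution to a single calibrated reading, here is a point-source inverse-square sketch. This is a deliberate simplification: it ignores the extended rod geometry, buildup, and attenuation that the paper's Monte Carlo simulation accounts for:

```python
def dose_rate(d_ref, r_ref, r):
    """Point-source inverse-square scaling of a dose rate anchored to one
    reference measurement (e.g., an alanine dosimeter reading at r_ref)."""
    return d_ref * (r_ref / r) ** 2
```

Doubling the distance quarters the dose rate; deviations of the measured activation profile from this simple law are exactly what reveal the source-rod geometry effects described in the abstract.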

  19. Deblending of simultaneous-source data using iterative seislet frame thresholding based on a robust slope estimation

    NASA Astrophysics Data System (ADS)

    Zhou, Yatong; Han, Chunying; Chi, Yue

    2018-06-01

In a simultaneous-source survey, no restriction is placed on the shot scheduling of nearby sources, so acquisition efficiency increases greatly, but the recorded seismic data are contaminated by strong blending interference. In this paper, we propose a multi-dip seislet frame based sparse inversion algorithm to iteratively separate simultaneous sources. We overcome two inherent drawbacks of the traditional seislet transform. For the multi-dip problem, we propose to apply a multi-dip seislet frame thresholding strategy instead of the traditional seislet transform for deblending simultaneous-source data that contains multiple dips, e.g., multiple reflections. The multi-dip seislet frame strategy solves the conflicting-dip problem that degrades the performance of the traditional seislet transform. For the noise issue, we propose a robust dip estimation algorithm based on velocity-slope transformation. Instead of calculating the local slope directly using the plane-wave destruction (PWD) based method, we first apply NMO-based velocity analysis and obtain NMO velocities for multi-dip components that correspond to multiples of different orders; a fairly accurate slope estimate can then be obtained using the velocity-slope conversion equation. An iterative deblending framework is given and validated through a comprehensive analysis of both numerical synthetic and field data examples.
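
The velocity-slope conversion the authors describe follows from the NMO hyperbola t(x) = sqrt(t0^2 + x^2/v^2), whose local slope is p = dt/dx = x / (v^2 t); a minimal numerical check with illustrative values:

```python
import math

def nmo_time(t0, x, v):
    """Hyperbolic moveout time for zero-offset time t0 (s), offset x (m),
    and NMO velocity v (m/s)."""
    return math.sqrt(t0**2 + (x / v) ** 2)

def slope_from_velocity(t0, x, v):
    """Velocity-slope conversion: local slope p = dt/dx = x / (v^2 * t)."""
    return x / (v**2 * nmo_time(t0, x, v))
```

Because velocity picking on semblance panels is far more noise-tolerant than direct slope estimation, converting picked velocities to slopes this way is what makes the dip field robust for the thresholding step.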

  20. Producing data-based sensitivity kernels from convolution and correlation in exploration geophysics.

    NASA Astrophysics Data System (ADS)

    Chmiel, M. J.; Roux, P.; Herrmann, P.; Rondeleux, B.

    2016-12-01

Many studies have shown that seismic interferometry can be used to estimate surface wave arrivals by correlation of seismic signals recorded at a pair of locations. In the case of ambient noise sources, convergence toward the surface wave Green's functions is obtained under the criterion of equipartitioned energy. However, seismic acquisition with active, controlled sources offers more possibilities for interferometry. The use of controlled sources makes it possible to recover the surface wave Green's function between two points using either correlation or convolution. We investigate the convolutional and correlational approaches using land active-seismic data from exploration geophysics. The data were recorded on 10,710 vertical receivers using 51,808 sources (seismic vibrator trucks). The source spacing is the same in both the X and Y directions (30 m), a configuration known as "carpet shooting". The receivers are placed in parallel lines with a spacing of 150 m in the X direction and 30 m in the Y direction. Invoking spatial reciprocity between sources and receivers, correlation and convolution functions can thus be constructed between either pairs of receivers or pairs of sources. Benefiting from the dense acquisition, we extract sensitivity kernels from correlation and convolution measurements of the seismic data. These sensitivity kernels are subsequently used to produce phase-velocity dispersion curves between two points and to separate the higher mode from the fundamental mode for surface waves. Potential application to surface wave cancellation is also envisaged.
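
The correlation step of such interferometry can be sketched as picking the lag of the cross-correlation peak between two receiver records; a single-arrival toy case (real data average over many sources before the traveltime emerges):

```python
import numpy as np

def correlation_delay(sig_a, sig_b, dt):
    """Time lag (s) of sig_b relative to sig_a, from the peak of their
    cross-correlation -- the core operation of correlation interferometry."""
    xcorr = np.correlate(sig_b, sig_a, mode="full")
    lag = int(np.argmax(xcorr)) - (len(sig_a) - 1)
    return lag * dt

# Impulse arriving 15 samples later at the second receiver.
a, b = np.zeros(50), np.zeros(50)
a[10], b[25] = 1.0, 1.0
delay = correlation_delay(a, b, dt=0.01)
```

The recovered lag approximates the inter-receiver traveltime of the surface wave; dividing the receiver separation by this delay gives the phase velocity used to build dispersion curves.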

  1. Automatic streak endpoint localization from the cornerness metric

    NASA Astrophysics Data System (ADS)

    Sease, Brad; Flewelling, Brien; Black, Jonathan

    2017-05-01

Streaked point sources are a common occurrence when imaging unresolved space objects from both ground- and space-based platforms. Effective localization of streak endpoints is a key component of traditional techniques in space situational awareness related to orbit estimation and attitude determination. To further that goal, this paper derives a general detection and localization method for streak endpoints based on the cornerness metric. Corner detection involves searching an image for strong bi-directional gradients. These locations typically correspond to robust structural features in an image. In the case of unresolved imagery, regions with a high cornerness score correspond directly to the endpoints of streaks. This paper explores three approaches for global extraction of streak endpoints and applies them to an attitude and rate estimation routine.
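
The cornerness idea can be sketched with a Harris-style response built from the local structure tensor: along the body of a streak the gradient is one-directional (low response), while at the endpoints it is bi-directional (high response). The 3x3 window and k = 0.04 below are conventional choices, not the paper's specific implementation:

```python
import numpy as np

def cornerness(image, k=0.04):
    """Harris corner response R = det(M) - k * trace(M)^2, with M the
    structure tensor accumulated over a 3x3 window."""
    iy, ix = np.gradient(image.astype(float))   # row, column gradients
    ixx, iyy, ixy = ix * ix, iy * iy, ix * iy

    def box(a):
        # Sum over a 3x3 neighborhood (circular at borders; interior only
        # matters here).
        out = np.zeros_like(a)
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                out += np.roll(np.roll(a, dy, 0), dx, 1)
        return out

    sxx, syy, sxy = box(ixx), box(iyy), box(ixy)
    det = sxx * syy - sxy * sxy
    return det - k * (sxx + syy) ** 2

# A synthetic horizontal streak; the response should peak at its endpoints.
img = np.zeros((15, 15))
img[7, 3:12] = 1.0
r = cornerness(img)
```

Mid-streak pixels have a rank-one structure tensor (det near zero, response negative), so thresholding the response isolates the two endpoints that feed the localization step.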

  2. Assessing the Prospects for Employment in an Expansion of US Aquaculture

    NASA Astrophysics Data System (ADS)

    Ngo, N.

    2006-12-01

The United States imports 60 percent of its seafood, leading to a $7 billion seafood trade deficit. To mitigate this deficit, the National Oceanic and Atmospheric Administration (NOAA), a branch of the U.S. Department of Commerce, has promoted the expansion of U.S. production of seafood by aquaculture. NOAA projects that the future expansion of a U.S. aquaculture industry could produce as much as $5 billion in annual sales. NOAA claims that one of the benefits of this expansion would be an increase in employment from 180,000 to 600,000 persons (100,000 indirect jobs and 500,000 direct jobs). The sources of these estimates and the assumptions upon which they are based are unclear, however. The Marine Aquaculture Task Force (MATF), an independent scientific panel, has been skeptical of NOAA's employment estimates, claiming that its sources of information are weak and based upon dubious assumptions. If NOAA has exaggerated its employment projections, then the benefits from an expansion of U.S. aquaculture production would not be as large as projected. My study examined published estimates of labor productivity from the domestic and foreign aquaculture of a variety of species, and I projected the potential increase in employment associated with a $5 billion aquaculture industry, as proposed by NOAA. Results showed that employment would range from only 40,000 to 128,000 direct jobs by 2025 as a consequence of the proposed expansion. Consequently, NOAA may have overestimated its employment projections, possibly by as much as 170 percent, implying that NOAA's employment estimate requires further research or adjustment.
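
The employment projection is, at heart, industry sales divided by labor productivity; a sketch using productivity figures chosen to reproduce the stated 40,000 to 128,000 job range (the per-job sales figures themselves are illustrative, not the study's data):

```python
def projected_jobs(total_sales, sales_per_job):
    """Direct employment implied by an industry size (dollars) and a labor
    productivity figure (sales per worker)."""
    return total_sales / sales_per_job

# NOAA's $5 billion target under two bounding productivity assumptions.
low = projected_jobs(5e9, 125_000)    # high productivity -> fewer jobs
high = projected_jobs(5e9, 39_000)    # low productivity -> more jobs
```

The sensitivity is clear: the job estimate scales inversely with the assumed productivity, which is why undocumented productivity assumptions can inflate a projection several-fold.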

  3. Hyperedge bundling: Data, source code, and precautions to modeling-accuracy bias to synchrony estimates.

    PubMed

    Wang, Sheng H; Lobier, Muriel; Siebenhühner, Felix; Puoliväli, Tuomas; Palva, Satu; Palva, J Matias

    2018-06-01

It has not been well documented that MEG/EEG functional connectivity graphs estimated with zero-lag-free interaction metrics are severely confounded by a multitude of spurious interactions (SI), i.e., the false-positive "ghosts" of true interactions [1], [2]. These SI are caused by multivariate linear mixing between sources, and thus they pose a severe challenge to the validity of connectivity analysis. Due to the complex nature of signal mixing and the SI problem, there is a need to intuitively demonstrate how the SI are discovered and how they can be attenuated using a novel approach that we termed hyperedge bundling. Here we provide a dataset and software with which readers can perform simulations in order to better understand the theory and the solution to SI. We include the supplementary material of [1] that is not directly relevant to hyperedge bundling per se but reflects important properties of the MEG source model and the functional connectivity graphs. For example, the gyri of the dorsolateral cortices are the most accurately modeled areas, whereas the sulci of the inferior temporal and frontal cortices and the insula have the lowest modeling accuracy. Importantly, we found that the interaction estimates are heavily biased by the modeling accuracy between regions, which means the estimates cannot be straightforwardly interpreted as coupling between brain regions. This raises a red flag: the conventional method of thresholding graphs by estimate values is rather suboptimal, because the measured topology of the graph reflects the geometric properties of the source model instead of the cortical interactions under investigation.

  4. CO2 fluxes from a tropical neighborhood: sources and sinks

    NASA Astrophysics Data System (ADS)

    Velasco, E.; Roth, M.; Tan, S.; Quak, M.; Britter, R.; Norford, L.

    2011-12-01

    Cities are the main contributors to the CO2 rise in the atmosphere. The CO2 released from the various emission sources is typically quantified by a bottom-up aggregation process that accounts for emission factors and fossil fuel consumption data. This approach does not consider the heterogeneity and variability of the urban emission sources, and error propagation can result in large uncertainties. In this context, direct measurements of CO2 fluxes that include all major and minor anthropogenic and natural sources and sinks from a specific district can be used to evaluate emission inventories. This study reports and compares CO2 fluxes measured directly using the eddy covariance method with emissions estimated by emission factors and activity data for a residential neighborhood of Singapore, a highly populated and urbanized tropical city. The flux measurements were conducted during one year. No seasonal variability was found as a consequence of the constant climate conditions of tropical places, but a clear diurnal pattern with morning and late afternoon peaks in phase with the rush-hour traffic was observed. The magnitude of the fluxes throughout daylight hours is modulated by the urban vegetation, which is abundant in terms of biomass but not of land cover (15%). Even though the carbon uptake by vegetation is significant, it does not exceed the anthropogenic emissions, and the monitored district is a net CO2 source of 20.3 tons km⁻² day⁻¹ on average. The carbon uptake by vegetation is investigated as the difference between the estimated emissions and the measured fluxes during daytime.
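    The eddy covariance method mentioned above reduces, at its core, to the covariance of the fluctuations of vertical wind speed and CO2 density about their averaging-period means. A minimal sketch on synthetic data (the numbers below are illustrative, not measurements from the Singapore site):

```python
import numpy as np

# Synthetic 30 min of 10 Hz turbulence data (assumed sampling setup).
rng = np.random.default_rng(0)
n = 18000
w = rng.normal(0.0, 0.3, n)                    # vertical wind speed, m s^-1
c = 700.0 + 0.5 * w + rng.normal(0.0, 5.0, n)  # CO2 density, mg m^-3

# Reynolds decomposition: the flux is the mean product of the fluctuations.
w_prime = w - w.mean()
c_prime = c - c.mean()
flux = np.mean(w_prime * c_prime)              # mg CO2 m^-2 s^-1
print(f"CO2 flux: {flux:.3f} mg m^-2 s^-1")
```

    A positive flux indicates net upward CO2 transport, i.e., the surface acting as a source over the averaging period.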

  5. Laboratory study of PCB transport from primary sources to settled dust.

    PubMed

    Liu, Xiaoyu; Guo, Zhishi; Krebs, Kenneth A; Greenwell, Dale J; Roache, Nancy F; Stinson, Rayford A; Nardin, Joshua A; Pope, Robert H

    2016-04-01

    Dust is an important sink for indoor air pollutants, such as polychlorinated biphenyls (PCBs) that were used in building materials and products. In this study, two types of dust, house dust and Arizona Test Dust, were tested in a 30-m³ stainless steel chamber with two types of panels. The PCB-containing panels were aluminum sheets coated with a PCB-spiked primer or caulk. The PCB-free panels were coated with the same materials but without PCBs. The dust evenly spread on each panel was collected at different times to determine its PCB content. The data from the PCB panels were used to evaluate the PCB migration from the source to the dust through direct contact, and the data from the PCB-free panels were used to evaluate the sorption of PCBs through the dust/air partition. Settled dust can adsorb PCBs from air. The sorption concentration was dependent on the congener concentration in the air and favored less volatile congeners. When the house dust was in direct contact with the PCB-containing panel, PCBs migrated into the dust at a much faster rate than the PCB transfer rate due to the dust/air partition. The dust/source partition was not significantly affected by the congener's volatility. For a given congener, the ratio between its concentration in the dust and in the source was used to estimate the dust/source partition coefficient. The estimated values ranged from 0.04 to 0.16. These values are indicative of the sink strength of the tested house dust being in the middle or lower-middle range. Published by Elsevier Ltd.

  6. DSOD Procedures for Seismic Hazard Analysis

    NASA Astrophysics Data System (ADS)

    Howard, J. K.; Fraser, W. A.

    2005-12-01

    DSOD, which has jurisdiction over more than 1200 dams in California, routinely evaluates their dynamic stability using seismic shaking input ranging from simple pseudostatic coefficients to spectrally matched earthquake time histories. Our seismic hazard assessments assume maximum earthquake scenarios of nearest active and conditionally active seismic sources. Multiple earthquake scenarios may be evaluated depending on sensitivity of the design analysis (e.g., to certain spectral amplitudes, duration of shaking). Active sources are defined as those with evidence of movement within the last 35,000 years. Conditionally active sources are those with reasonable expectation of activity, which are treated as active until demonstrated otherwise. The Division's Geology Branch develops seismic hazard estimates using spectral attenuation formulas applicable to California. The formulas were selected, in part, to achieve a site response model similar to the 2000 IBC's for rock, soft rock, and stiff soil sites. The level of dynamic loading used in the stability analysis (50th, 67th, or 84th percentile ground shaking estimates) is determined using a matrix that considers consequence of dam failure and fault slip rate. We account for near-source directivity amplification along such faults by adjusting target response spectra and developing appropriate design earthquakes for analysis of structures sensitive to long-period motion. Based on in-house studies, the orientation of the dam analysis section relative to the fault-normal direction is considered for strike-slip earthquakes, but directivity amplification is assumed in any orientation for dip-slip earthquakes. We do not have probabilistic standards, but we evaluate the probability of our ground shaking estimates using hazard curves constructed from the USGS Interactive De-Aggregation website. Typically, return periods for our design loads exceed 1000 years. Excessive return periods may warrant a lower design load. 
Minimum shaking levels are provided for sites far from active faulting. Our procedures and standards are presented at the DSOD website http://damsafety.water.ca.gov/. We review our methods and tools periodically under the guidance of our Consulting Board for Earthquake Analysis (and expect to make changes pending NGA completion), mindful that frequent procedural changes can interrupt design evaluations.

  7. The HIV care cascade: a systematic review of data sources, methodology and comparability.

    PubMed

    Medland, Nicholas A; McMahon, James H; Chow, Eric P F; Elliott, Julian H; Hoy, Jennifer F; Fairley, Christopher K

    2015-01-01

    The cascade of HIV diagnosis, care and treatment (HIV care cascade) is increasingly used to direct and evaluate interventions to increase population antiretroviral therapy (ART) coverage, a key component of treatment as prevention. The ability to compare cascades over time, sub-population, jurisdiction or country is important. However, differences in data sources and methodology used to construct the HIV care cascade might limit its comparability and ultimately its utility. Our aim was to review systematically the different methods used to estimate and report the HIV care cascade and their comparability. A search of published and unpublished literature through March 2015 was conducted. Cascades that reported the continuum of care from diagnosis to virological suppression in a demographically definable population were included. Data sources and methods of measurement or estimation were extracted. We defined the most comparable cascade elements as those that directly measured diagnosis or care from a population-based data set. Thirteen reports were included after screening 1631 records. The undiagnosed HIV-infected population was reported in seven cascades, each of which used different data sets and methods and could not be considered to be comparable. All 13 used mandatory HIV diagnosis notification systems to measure the diagnosed population. Population-based data sets, derived from clinical data or mandatory reporting of CD4 cell counts and viral load tests from all individuals, were used in 6 of 12 cascades reporting linkage, 6 of 13 reporting retention, 3 of 11 reporting ART and 6 of 13 cascades reporting virological suppression. Cascades with access to population-based data sets were able to directly measure cascade elements and are therefore comparable over time, place and sub-population. Other data sources and methods are less comparable. 
To ensure comparability, countries wishing to accurately measure the cascade should utilize complete population-based data sets derived from clinical data within a centralized healthcare setting, where available, or from mandatory CD4 cell count and viral load test result reporting. Additionally, virological suppression should be presented both as a percentage of the diagnosed population and as a percentage of the estimated total HIV-infected population, until methods to calculate the latter have been standardized.

  8. Tracking Poverty Reduction in Bhutan: Income Deprivation Alongside Deprivation in Other Sources of Happiness

    ERIC Educational Resources Information Center

    Santos, Maria Emma

    2013-01-01

    This paper analyses poverty reduction in Bhutan between two points in time--2003 and 2007--from a multidimensional perspective. The measures estimated include consumption expenditure as well as other indicators which are directly (when possible) or indirectly associated to valuable functionings, namely, health, education, access to electricity,…

  9. Combined detection and strain typing of Yersinia enterocolitica directly from pork and poultry enrichments

    USDA-ARS?s Scientific Manuscript database

    Introduction: Yersinia enterocolitica is responsible for an estimated 98,000 cases of foodborne illness per year in the U.S. causing both intestinal and extraintestinal diseases. Its prevalence in retail pork and poultry, believed to be the primary sources of these infections, ranges widely from 0 to 6...

  10. Continuous millennial decrease of the Earth's magnetic axial dipole

    NASA Astrophysics Data System (ADS)

    Poletti, Wilbor; Biggin, Andrew J.; Trindade, Ricardo I. F.; Hartmann, Gelvam A.; Terra-Nova, Filipe

    2018-01-01

    Since the establishment of direct estimations of the Earth's magnetic field intensity in the first half of the nineteenth century, a continuous decay of the axial dipole component has been observed and variously speculated to be linked to an imminent reversal of the geomagnetic field. Furthermore, indirect estimations from anthropologically made materials and volcanic derivatives suggest that this decrease began significantly earlier than direct measurements have been available. Here, we carefully reassess the available archaeointensity dataset for the last two millennia, and show a good correspondence between direct (observatory/satellite) and indirect (archaeomagnetic) estimates of the axial dipole moment creating, in effect, a proxy to expand our analysis back in time. Our results suggest a continuous linear decay as the most parsimonious long-term description of the axial dipole variation for the last millennium. We thus suggest that a break in the symmetry of axial dipole moment advective sources occurred approximately 1100 years earlier than previously described. In addition, based on the observed dipole secular variation timescale, we speculate that the weakening of the axial dipole may end soon.

  11. Estimating State-Specific Contributions to PM2.5- and O3-Related Health Burden from Residential Combustion and Electricity Generating Unit Emissions in the United States.

    PubMed

    Penn, Stefani L; Arunachalam, Saravanan; Woody, Matthew; Heiger-Bernays, Wendy; Tripodis, Yorghos; Levy, Jonathan I

    2017-03-01

    Residential combustion (RC) and electricity generating unit (EGU) emissions adversely impact air quality and human health by increasing ambient concentrations of fine particulate matter (PM2.5) and ozone (O3). Studies to date have not isolated contributing emissions by state of origin (source-state), which is necessary for policy makers to determine efficient strategies to decrease health impacts. In this study, we aimed to estimate health impacts (premature mortalities) attributable to PM2.5 and O3 from RC and EGU emissions by precursor species, source sector, and source-state in the continental United States for 2005. We used the Community Multiscale Air Quality model employing the decoupled direct method to quantify changes in air quality and epidemiological evidence to determine concentration-response functions to calculate associated health impacts. We estimated 21,000 premature mortalities per year from EGU emissions, driven by sulfur dioxide emissions forming PM2.5. More than half of EGU health impacts are attributable to emissions from eight states with significant coal combustion and large downwind populations. We estimate 10,000 premature mortalities per year from RC emissions, driven by primary PM2.5 emissions. States with large populations and significant residential wood combustion dominate RC health impacts. Annual mortality risk per thousand tons of precursor emissions (health damage functions) varied significantly across source-states for both source sectors and all precursor pollutants. Our findings reinforce the importance of pollutant-specific, location-specific, and source-specific models of health impacts in design of health-risk minimizing emissions control policies. Citation: Penn SL, Arunachalam S, Woody M, Heiger-Bernays W, Tripodis Y, Levy JI. 2017. Estimating state-specific contributions to PM2.5- and O3-related health burden from residential combustion and electricity generating unit emissions in the United States. 
Environ Health Perspect 125:324-332; http://dx.doi.org/10.1289/EHP550.

  12. Investigations of potential bias in the estimation of lambda using Pradel's (1996) model for capture-recapture data

    USGS Publications Warehouse

    Hines, J.E.; Nichols, J.D.

    2002-01-01

    Pradel's (1996) temporal symmetry model permitting direct estimation and modelling of population growth rate, λ_i, provides a potentially useful tool for the study of population dynamics using marked animals. Because of its recent publication date, the approach has not seen much use, and there have been virtually no investigations directed at robustness of the resulting estimators. Here we consider several potential sources of bias, all motivated by specific uses of this estimation approach. We consider sampling situations in which the study area expands with time and present an analytic expression for the bias in λ̂_i. We next consider trap response in capture probabilities and heterogeneous capture probabilities and compute large-sample and simulation-based approximations of the resulting bias in λ̂_i. These approximations indicate that trap response is an especially important assumption violation that can produce substantial bias. Finally, we consider losses on capture and emphasize the importance of selecting the estimator for λ_i that is appropriate to the question being addressed. For studies based on only sighting and resighting data, Pradel's (1996) λ̂′_i is the appropriate estimator.

  13. Directions of arrival estimation with planar antenna arrays in the presence of mutual coupling

    NASA Astrophysics Data System (ADS)

    Akkar, Salem; Harabi, Ferid; Gharsallah, Ali

    2013-06-01

    Directions of arrival (DoAs) estimation of multiple sources using an antenna array is a challenging topic in wireless communication. The DoA estimation accuracy depends not only on the selected technique and algorithm, but also on the geometrical configuration of the antenna array used during the estimation. In this article the robustness of common planar antenna arrays against unaccounted-for mutual coupling is examined, and their DoA estimation capabilities are compared and analysed through computer simulations using the well-known MUltiple SIgnal Classification (MUSIC) algorithm. Our analysis is based on an electromagnetic concept to calculate an approximation of the impedance matrices that define the mutual coupling matrix (MCM). Furthermore, a Cramér-Rao bound (CRB) analysis is presented and used as an asymptotic performance benchmark for the studied antenna arrays. The impact of the studied antenna array geometries on the MCM structure is also investigated. Simulation results show that the UCCA is more robust against unaccounted-for mutual coupling and outperforms both the UCA and URA geometries. The simulations also confirm that, although the UCCA achieves better performance under complicated scenarios, the URA shows better asymptotic (CRB) behaviour, which promises more accurate DoA estimation.
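    The core of the MUSIC algorithm used in the study is a noise-subspace scan of the array covariance matrix. A minimal sketch for an ideal uniform linear array (a simplification: the article's planar arrays and mutual coupling model are not reproduced here; array size, spacing, and source angles are invented for illustration):

```python
import numpy as np

def music_doa(X, n_sources, d=0.5, grid=None):
    """MUSIC pseudospectrum peak search for a uniform linear array (ULA).
    X: (n_antennas, n_snapshots) complex snapshots; d: spacing in wavelengths.
    Returns the n_sources estimated DoAs in degrees."""
    if grid is None:
        grid = np.linspace(-90.0, 90.0, 1801)
    m = X.shape[0]
    R = X @ X.conj().T / X.shape[1]        # sample covariance matrix
    _, eigvecs = np.linalg.eigh(R)         # eigenvalues in ascending order
    En = eigvecs[:, : m - n_sources]       # noise subspace
    k = np.arange(m)
    spec = np.empty(grid.size)
    for i, theta in enumerate(grid):
        a = np.exp(-2j * np.pi * d * k * np.sin(np.deg2rad(theta)))
        spec[i] = 1.0 / np.real(a.conj() @ En @ En.conj().T @ a)
    # keep the n_sources largest local maxima of the pseudospectrum
    is_peak = (spec[1:-1] > spec[:-2]) & (spec[1:-1] > spec[2:])
    peaks = np.where(is_peak)[0] + 1
    top = peaks[np.argsort(spec[peaks])[-n_sources:]]
    return np.sort(grid[top])

# Two uncorrelated unit-power sources at -20° and +30°, 8-element ULA.
rng = np.random.default_rng(1)
m, n, true_doas = 8, 500, np.array([-20.0, 30.0])
k = np.arange(m)[:, None]
A = np.exp(-2j * np.pi * 0.5 * k * np.sin(np.deg2rad(true_doas)))
S = (rng.normal(size=(2, n)) + 1j * rng.normal(size=(2, n))) / np.sqrt(2)
N = 0.1 * (rng.normal(size=(m, n)) + 1j * rng.normal(size=(m, n))) / np.sqrt(2)
est = music_doa(A @ S + N, n_sources=2)
print(est)  # close to [-20, 30]
```

    Unmodeled mutual coupling would perturb the steering vectors `a`, which is exactly the error source whose impact on different array geometries the article quantifies.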

  14. An evaluation of three-dimensional photogrammetric and morphometric techniques for estimating volume and mass in Weddell seals Leptonychotes weddellii

    PubMed Central

    Ruscher-Hill, Brandi; Kirkham, Amy L.; Burns, Jennifer M.

    2018-01-01

    Body mass dynamics of animals can indicate critical associations between extrinsic factors and population vital rates. Photogrammetry can be used to estimate mass of individuals in species whose life histories make it logistically difficult to obtain direct body mass measurements. Such studies typically use equations to relate volume estimates from photogrammetry to mass; however, most fail to identify the sources of error between the estimated and actual mass. Our objective was to identify the sources of error that prevent photogrammetric mass estimation from directly predicting actual mass, and develop a methodology to correct this issue. To do this, we obtained mass, body measurements, and scaled photos for 56 sedated Weddell seals (Leptonychotes weddellii). After creating a three-dimensional silhouette in the image processing program PhotoModeler Pro, we used horizontal scale bars to define the ground plane, then removed the below-ground portion of the animal’s estimated silhouette. We then re-calculated body volume and applied an expected density to estimate animal mass. We compared the body mass estimates derived from this silhouette slice method with estimates derived from two other published methodologies: body mass calculated using photogrammetry coupled with a species-specific correction factor, and estimates using elliptical cones and measured tissue densities. The estimated mass values (mean ± standard deviation 345±71 kg for correction equation, 346±75 kg for silhouette slice, 343±76 kg for cones) were not statistically distinguishable from each other or from actual mass (346±73 kg) (ANOVA with Tukey HSD post-hoc, p>0.05 for all pairwise comparisons). We conclude that volume overestimates from photogrammetry are likely due to the inability of photo modeling software to properly render the ventral surface of the animal where it contacts the ground. 
Due to logistical differences between the “correction equation”, “silhouette slicing”, and “cones” approaches, researchers may find one technique more useful for certain study programs. In combination or exclusively, these three-dimensional mass estimation techniques have great utility in field studies with repeated measures sampling designs or where logistic constraints preclude weighing animals. PMID:29320573

  15. A phase coherence approach to estimating the spatial extent of earthquakes

    NASA Astrophysics Data System (ADS)

    Hawthorne, Jessica C.; Ampuero, Jean-Paul

    2016-04-01

    We present a new method for estimating the spatial extent of seismic sources. The approach takes advantage of an inter-station phase coherence computation that can identify co-located sources (Hawthorne and Ampuero, 2014). Here, however, we note that the phase coherence calculation can eliminate the Green's function and give high values only if both earthquakes are point sources---if their dimensions are much smaller than the wavelengths of the propagating seismic waves. By examining the decrease in coherence at higher frequencies (shorter wavelengths), we can estimate the spatial extents of the earthquake ruptures. The approach can to some extent be seen as a simple way of identifying directivity or variations in the apparent source time functions recorded at various stations. We apply this method to a set of well-recorded earthquakes near Parkfield, CA. We show that when the signal-to-noise ratio is high, the phase coherence remains high well above 50 Hz for closely spaced M<1.5 earthquakes. The high-frequency phase coherence is smaller for larger earthquakes, suggesting larger spatial extents. The implied radii scale roughly as expected from typical magnitude-corner frequency scalings. We also examine a second source of high-frequency decoherence: spatial variation in the shape of the Green's functions. This spatial decoherence appears to occur at wavelengths similar to the decoherence associated with the apparent source time functions. However, the variation in Green's functions can be normalized away to some extent by comparing observations at multiple components on a single station, which see the same apparent source time functions.
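    The key property exploited above is that for a co-located event pair the Green's function cancels in the cross-spectrum, so the relative phase is identical at every station. A toy frequency-domain sketch (unit-amplitude spectra with random phases stand in for real waveforms; station count and frequency count are invented):

```python
import numpy as np

rng = np.random.default_rng(2)
n_sta, n_freq = 20, 64

# Assumed setup: random Green's-function phases per station and frequency.
G = np.exp(1j * rng.uniform(0, 2 * np.pi, (n_sta, n_freq)))
src1 = np.exp(1j * rng.uniform(0, 2 * np.pi, n_freq))
src2 = np.exp(1j * rng.uniform(0, 2 * np.pi, n_freq))

u1 = G * src1                  # event 1
u2_near = G * src2             # co-located event 2: shares the same G
u2_far = np.exp(1j * rng.uniform(0, 2 * np.pi, (n_sta, n_freq))) * src2

def phase_coherence(a, b):
    """Inter-station phase coherence per frequency: magnitude of the
    station-averaged unit cross-spectral phasor (1 = phases align)."""
    cross = a * b.conj()
    return np.abs((cross / np.abs(cross)).mean(axis=0))

pc_near = phase_coherence(u1, u2_near).mean()
pc_far = phase_coherence(u1, u2_far).mean()
print(pc_near, pc_far)   # near ~1; far at the ~1/sqrt(n_sta) noise level
```

    For the co-located pair, G cancels exactly and the coherence is 1 at all frequencies; for the separated pair the phasors random-walk and the coherence drops toward the 1/sqrt(n_sta) level.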

  16. High-resolution sampling and analysis of air particulate matter in the Pearl River Delta region of Southern China: source apportionment and health risk assessment

    NASA Astrophysics Data System (ADS)

    Zhou, S.; Day, P. K.; Wang, X.

    2017-12-01

    Hazardous air pollutants, such as trace elements in particulate matter (PM), are known or highly suspected to cause detrimental effects on human health. To understand the sources and associated risks of PM to human health, hourly time-integrated major trace elements in size-segregated coarse (PM10-2.5) and fine (PM2.5) particulate matter were collected and examined in the industrial city of Foshan in the Pearl River Delta region, China. Receptor modeling of the dataset by positive matrix factorization (PMF) was used to identify six sources contributing to PM2.5 and PM10 concentrations at the site. Dominant sources included industrial coal combustion, secondary inorganic aerosol, motor vehicles and construction dust, along with two intermittent sources, biomass combustion and marine aerosol. The biomass combustion source was found to be a significant contributor to peak PM2.5 episodes along with motor vehicles and industrial coal combustion. The conditional probability function (CPF) was applied to estimate local source effects by wind direction, using the PMF-resolved source contributions coupled with the surface wind direction data. Health exposure risks for hazardous trace elements (Pb, As, Cr, Ni, Zn, V, Cu, Mn, Fe) and source-specific values were estimated. The total hazard quotient (total HQ = HI) of PM2.5 was 2.09, roughly twice the acceptable limit (HQ = 1). The total carcinogenic risk was 3.37×10⁻³ for PM2.5, three orders of magnitude higher than the acceptable limit (1.0×10⁻⁶). Among the selected trace elements, As and Pb posed the highest non-carcinogenic and carcinogenic risks for human health, respectively. In addition, our results showed that the industrial coal combustion source was the dominant contributor to both non-carcinogenic and carcinogenic risks, highlighting the need for stringent control of this source. This study can provide new insight for policy makers to prioritize sources in air quality management and health risk reduction.
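    The hazard quotient and hazard index quoted above follow simple arithmetic: HQ is the ratio of exposure to a reference level, and HI is the sum of per-element HQs. A minimal sketch (the element values below are made up for illustration, not the Foshan measurements):

```python
# Hypothetical exposure doses and reference doses, same units each.
exposure = {"As": 0.9, "Pb": 0.5, "Mn": 0.3}
reference = {"As": 0.5, "Pb": 1.0, "Mn": 0.6}

# HQ per element, and HI as the sum across elements.
hq = {el: exposure[el] / reference[el] for el in exposure}
hi = sum(hq.values())
print(hq)                # per-element hazard quotients
print(f"HI = {hi:.2f}")  # HI > 1 flags potential non-carcinogenic risk
```

    With these invented numbers, As alone exceeds HQ = 1, and the combined HI of 2.80 would exceed the acceptable limit, mirroring the kind of exceedance the study reports.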

  17. Energy Partition and Variability of Earthquakes

    NASA Astrophysics Data System (ADS)

    Kanamori, H.

    2003-12-01

    During an earthquake the potential energy (strain energy + gravitational energy + rotational energy) is released, and the released potential energy (ΔW) is partitioned into radiated energy (ER), fracture energy (EG), and thermal energy (EH). How ΔW is partitioned into these energies controls the behavior of an earthquake. The merit of the slip-weakening concept is that only ER and EG control the dynamics, and EH can be treated separately to discuss the thermal characteristics of an earthquake. In general, if EG/ER is small, the event is "brittle"; if EG/ER is large, the event is "quasi-static" or, in more common terms, a "slow earthquake" or "creep". If EH is very large, the event may well be called a thermal runaway rather than an earthquake. The difference in energy partition has important implications for the rupture initiation, evolution and excitation of long-period ground motions from very large earthquakes. We review the current state of knowledge on this problem in light of seismological observations and the basic physics of fracture. With seismological methods, we can measure only ER and the lower bound of ΔW, ΔW0, and estimation of the other energies involves many assumptions. ER: Although ER can be directly measured from the radiated waves, its determination is difficult because a large fraction of the energy radiated at the source is attenuated during propagation. With the commonly used teleseismic and regional methods, only for events with MW>7 and MW>4, respectively, can we directly measure more than 10% of the total radiated energy. The rest must be estimated after correction for attenuation. Thus, large uncertainties are involved, especially for small earthquakes. ΔW0: To estimate ΔW0, estimation of the source dimension is required. Again, only for large earthquakes can the source dimension be estimated reliably. With the source dimension, the static stress drop, ΔσS, and ΔW0 can be estimated. 
EG: Seismologically, EG is the energy mechanically dissipated during faulting. In the context of the slip-weakening model, EG can be estimated from ΔW0 and ER. Alternatively, EG can be estimated from laboratory data on the surface energy, the grain size and the total volume of newly formed fault gouge. This method suggests that, for crustal earthquakes with MW>7, EG/ER is very small, less than 0.2 even in extreme cases. This is consistent with the EG estimated with seismological methods, and with the fast rupture speeds during most large earthquakes. For shallow subduction-zone earthquakes, EG/ER varies substantially depending on the tectonic environment. EH: Direct estimation of EH is difficult. However, even with modest friction, EH can be very large, enough to melt or even dissociate a significant amount of material near the slip zone for large events with large slip, and the associated thermal effects may have significant effects on fault dynamics. The energy partition varies significantly for different types of earthquakes, e.g. large earthquakes on mature faults, large earthquakes on faults with low slip rates, subduction-zone earthquakes, deep-focus earthquakes, etc.; this variability manifests itself in the difference in the evolution of seismic slip patterns. The different behaviors will be illustrated using examples from large earthquakes, including the 2001 Kunlun, the 1998 Balleny Is., the 1994 Bolivia, the 2001 India, the 1999 Chi-Chi, and the 2002 Denali earthquakes.

  18. Parameter estimation method that directly compares gravitational wave observations to numerical relativity

    NASA Astrophysics Data System (ADS)

    Lange, J.; O'Shaughnessy, R.; Boyle, M.; Calderón Bustillo, J.; Campanelli, M.; Chu, T.; Clark, J. A.; Demos, N.; Fong, H.; Healy, J.; Hemberger, D. A.; Hinder, I.; Jani, K.; Khamesra, B.; Kidder, L. E.; Kumar, P.; Laguna, P.; Lousto, C. O.; Lovelace, G.; Ossokine, S.; Pfeiffer, H.; Scheel, M. A.; Shoemaker, D. M.; Szilagyi, B.; Teukolsky, S.; Zlochower, Y.

    2017-11-01

    We present and assess a Bayesian method to interpret gravitational wave signals from binary black holes. Our method directly compares gravitational wave data to numerical relativity (NR) simulations. In this study, we present a detailed investigation of the systematic and statistical parameter estimation errors of this method. This procedure bypasses approximations used in semianalytical models for compact binary coalescence. In this work, we use the full posterior parameter distribution for only generic nonprecessing binaries, drawing inferences away from the set of NR simulations used, via interpolation of a single scalar quantity (the marginalized log likelihood, ln L ) evaluated by comparing data to nonprecessing binary black hole simulations. We also compare the data to generic simulations, and discuss the effectiveness of this procedure for generic sources. We specifically assess the impact of higher order modes, repeating our interpretation with both l ≤2 as well as l ≤3 harmonic modes. Using the l ≤3 higher modes, we gain more information from the signal and can better constrain the parameters of the gravitational wave signal. We assess and quantify several sources of systematic error that our procedure could introduce, including simulation resolution and duration; most are negligible. We show through examples that our method can recover the parameters for equal mass, zero spin, GW150914-like, and unequal mass, precessing spin sources. Our study of this new parameter estimation method demonstrates that we can quantify and understand the systematic and statistical error. This method allows us to use higher order modes from numerical relativity simulations to better constrain the black hole binary parameters.

  19. The economic impacts of the September 11 terrorist attacks: a computable general equilibrium analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oladosu, Gbadebo A; Rose, Adam; Bumsoo, Lee

    This paper develops a bottom-up approach that focuses on behavioral responses in estimating the total economic impacts of the September 11, 2001, World Trade Center (WTC) attacks. The estimation includes several new features. First is the collection of data on the relocation of firms displaced by the attack, the major source of resilience in muting the direct impacts of the event. Second is a new estimate of the major source of impacts off-site: the ensuing decline of air travel and related tourism in the U.S. due to the social amplification of the fear of terrorism. Third, the estimation is performed for the first time using Computable General Equilibrium (CGE) analysis, including a new approach to reflecting the direct effects of external shocks. This modeling framework has many advantages in this application, such as the ability to include behavioral responses of individual businesses and households, to incorporate features of inherent and adaptive resilience at the level of the individual decision maker and the market, and to gauge quantity and price interaction effects across sectors of the regional and national economies. We find that the total business interruption losses from the WTC attacks on the U.S. economy were only slightly over $100 billion, or less than 1.0% of Gross Domestic Product. For the New York Metropolitan Area, the impact was a loss of $14 billion of Gross Regional Product.

  20. Assessment of spatial discordance of primary and effective seed dispersal of European beech (Fagus sylvatica L.) by ecological and genetic methods.

    PubMed

    Millerón, M; López de Heredia, U; Lorenzo, Z; Alonso, J; Dounavi, A; Gil, L; Nanos, N

    2013-03-01

    Spatial discordance between primary and effective dispersal in plant populations indicates that postdispersal processes erase the seed rain signal in recruitment patterns. Five different models were used to test the spatial concordance of the primary and effective dispersal patterns in a European beech (Fagus sylvatica) population from central Spain. An ecological method was based on classical inverse modelling (SSS), using the number of seed/seedlings as input data. Genetic models were based on direct kernel fitting of mother-to-offspring distances estimated by a parentage analysis or were spatially explicit models based on the genotype frequencies of offspring (competing sources model and Moran-Clark's Model). A fully integrated mixed model was based on inverse modelling, but used the number of genotypes as input data (gene shadow model). The potential sources of error and limitations of each seed dispersal estimation method are discussed. The mean dispersal distances for seeds and saplings estimated with these five methods were higher than those obtained by previous estimations for European beech forests. All the methods show strong discordance between primary and effective dispersal kernel parameters, and for dispersal directionality. While seed rain was released mostly under the canopy, saplings were established far from mother trees. This discordant pattern may be the result of the action of secondary dispersal by animals or density-dependent effects; that is, the Janzen-Connell effect. © 2013 Blackwell Publishing Ltd.

  1. Direct and indirect atmospheric deposition of PCBs to the Delaware River watershed.

    PubMed

    Totten, Lisa A; Panangadan, Maya; Eisenreich, Steven J; Cavallo, Gregory J; Fikslin, Thomas J

    2006-04-01

    Atmospheric deposition can be an important source of PCBs to aquatic ecosystems. To develop the total maximum daily load (TMDL) for polychlorinated biphenyls (PCBs) for the tidal Delaware River (water-quality Zones 2-5), estimates of the loading of PCBs to the river from atmospheric deposition were generated from seven air-monitoring sites along the river. This paper presents the atmospheric PCB data from these sites, estimates direct atmospheric deposition fluxes, and assesses the importance of atmospheric deposition relative to other sources of PCBs to the river. Also, the relationship between indirect atmospheric deposition and PCB loads from minor tributaries to the Delaware River is discussed. Data from these sites revealed high atmospheric PCB concentrations in the Philadelphia/Camden urban area and lower regional background concentrations in the more remote areas. Wet, dry particle, and gaseous absorption deposition are estimated to contribute about 0.6, 1.8, and 6.5 kg year(-1) ΣPCBs to the river, respectively, exceeding the TMDL of 0.139 kg year(-1) by more than an order of magnitude. Penta-PCB watershed fluxes were obtained by dividing the tributary loads by the watershed area. The lowest of these watershed fluxes are less than approximately 1 ng m(-2) day(-1) for penta-PCB and probably indicate pristine watersheds in which PCB loads are dominated by atmospheric deposition. In these watersheds, the pass-through efficiency of PCBs is estimated to be on the order of 1%.
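    The watershed-flux calculation described above (tributary load divided by drainage area) is essentially a unit conversion; a minimal sketch, with illustrative load and area values that are not taken from the paper:

```python
def watershed_flux_ng_m2_day(load_kg_per_yr, area_km2):
    """Watershed flux (ng m^-2 day^-1) from an annual tributary load.

    Mirrors the simple division described in the abstract; the only
    substance here is unit handling.  Inputs are illustrative.
    """
    ng_per_kg = 1e12
    m2_per_km2 = 1e6
    days_per_yr = 365.25
    return load_kg_per_yr * ng_per_kg / (area_km2 * m2_per_km2 * days_per_yr)

# hypothetical tributary: 0.05 kg/yr penta-PCB over a 150 km^2 watershed
print(round(watershed_flux_ng_m2_day(0.05, 150.0), 2))
```

A flux near or below 1 ng m(-2) day(-1), as in this toy case, would fall in the range the authors associate with atmospherically dominated watersheds.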

  2. Localization of short-range acoustic and seismic wideband sources: Algorithms and experiments

    NASA Astrophysics Data System (ADS)

    Stafsudd, J. Z.; Asgari, S.; Hudson, R.; Yao, K.; Taciroglu, E.

    2008-04-01

    We consider the determination of the location (source localization) of a disturbance source which emits acoustic and/or seismic signals. We devise an enhanced approximate maximum-likelihood (AML) algorithm to process data collected at acoustic sensors (microphones) belonging to an array of non-collocated but otherwise identical sensors. The approximate maximum-likelihood algorithm exploits the time-delay-of-arrival of acoustic signals at different sensors, and yields the source location. For processing the seismic signals, we investigate two distinct algorithms, both of which process data collected at a single measurement station comprising a triaxial accelerometer, to determine direction-of-arrival. The directions-of-arrival determined at each sensor station are then combined using a weighted least-squares approach for source localization. The first of the direction-of-arrival estimation algorithms is based on the spectral decomposition of the covariance matrix, while the second is based on surface wave analysis. Both of the seismic source localization algorithms have their roots in seismology; and covariance matrix analysis had been successfully employed in applications where the source and the sensors (array) are typically separated by planetary distances (i.e., hundreds to thousands of kilometers). Here, we focus on very short distances (e.g., less than one hundred meters) instead, with an outlook to applications in multi-modal surveillance, including target detection, tracking, and zone intrusion. We demonstrate the utility of the aforementioned algorithms through a series of open-field tests wherein we successfully localize wideband acoustic and/or seismic sources. We also investigate a basic strategy for fusion of results yielded by acoustic and seismic arrays.
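    The fusion step, combining per-station directions-of-arrival by weighted least squares, can be sketched in 2-D as follows (a minimal illustration; the function name and the perpendicular-distance formulation are assumptions, not the paper's exact algorithm):

```python
import numpy as np

def localize_from_bearings(stations, bearings_rad, weights):
    """Weighted least-squares intersection of bearing lines.

    Each station at position p_i reports a direction-of-arrival angle
    theta_i (radians from the x-axis).  The source estimate minimizes
    the weighted squared perpendicular distance to all bearing lines.
    """
    A, b = [], []
    for (px, py), th, w in zip(np.asarray(stations, float), bearings_rad, weights):
        n = np.array([-np.sin(th), np.cos(th)])  # unit normal to the bearing line
        A.append(np.sqrt(w) * n)
        b.append(np.sqrt(w) * (n[0] * px + n[1] * py))
    sol, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return sol  # (x, y) source estimate

# three stations observing a source at (30, 40) with exact bearings
src = np.array([30.0, 40.0])
pts = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0]])
angles = np.arctan2(src[1] - pts[:, 1], src[0] - pts[:, 0])
print(localize_from_bearings(pts, angles, np.ones(3)))
```

With noisy bearings, the weights would typically reflect the per-station direction-of-arrival uncertainty.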

  3. Measurements of particles in the 5-1000 nm range close to road level in an urban street canyon.

    PubMed

    Kumar, Prashant; Fennell, Paul; Britter, Rex

    2008-02-15

    A newly developed instrument, the 'fast response differential mobility spectrometer (DMS500)', was deployed to measure the particles in the 5-1000 nm range in a Cambridge (UK) street canyon. Measurements were taken for 7 weekdays (from 09:00 to 19:00 h) between 8 and 21 June 2006 at three heights close to the road level (i.e. 0.20 m, 1.0 m and 2.60 m). The main aims of the measurements were to investigate the dependence of particle number distributions (PNDs) and concentrations (PNCs) and their vertical variations on wind speed, wind direction, traffic volume, and to estimate the particle number flux (PNF) and the particle number emission factors (PNEF) for typical urban streets and driving conditions. Traffic was the main source of particles at the measurement site. Measured PNCs were inversely proportional to the reference wind speed and directly proportional to the traffic volume. During the periods of cross-canyon flow the PNCs were larger on the leeward side than the windward side of the street canyon showing a possible effect of the vortex circulation. The largest PNCs were unsurprisingly near to road level and the pollution sources. The PNCs measured at 0.20 m and 1.0 m were the same to within 0.5-12.5% indicating a well-mixed region and this was presumably due to the enhanced mixing from traffic produced turbulence. The PNCs at 2.60 m were lower by 10-40% than those at 0.20 m and 1.0 m, suggesting a possible concentration gradient in the upper part of the canyon. The PNFs were estimated using an idealised and an operational approach; they were directly proportional to the traffic volume confirming the traffic to be the main source of particles. The PNEF were estimated using an inverse modelling technique; the reported values were within a factor of 3 of those published in similar studies.

  4. Geometric Characterization of Multi-Axis Multi-Pinhole SPECT

    PubMed Central

    DiFilippo, Frank P.

    2008-01-01

    A geometric model and calibration process are developed for SPECT imaging with multiple pinholes and multiple mechanical axes. Unlike the typical situation where pinhole collimators are mounted directly to rotating gamma ray detectors, this geometric model allows for independent rotation of the detectors and pinholes, for the case where the pinhole collimator is physically detached from the detectors. This geometric model is applied to a prototype small animal SPECT device with a total of 22 pinholes and which uses dual clinical SPECT detectors. All free parameters in the model are estimated from a calibration scan of point sources and without the need for a precision point source phantom. For a full calibration of this device, a scan of four point sources with 360° rotation is suitable for estimating all 95 free parameters of the geometric model. After a full calibration, a rapid calibration scan of two point sources with 180° rotation is suitable for estimating the subset of 22 parameters associated with repositioning the collimation device relative to the detectors. The high accuracy of the calibration process is validated experimentally. Residual differences between predicted and measured coordinates are normally distributed with 0.8 mm full width at half maximum and are estimated to contribute 0.12 mm root mean square to the reconstructed spatial resolution. Since this error is small compared to other contributions arising from the pinhole diameter and the detector, the accuracy of the calibration is sufficient for high resolution small animal SPECT imaging. PMID:18293574

  5. SP Response to a Line Source Infiltration for Characterizing the Vadose Zone: Forward Modeling and Inversion

    NASA Astrophysics Data System (ADS)

    Sailhac, P.

    2004-05-01

    Field estimation of soil water flux has direct application for water resource management. Standard hydrologic methods like tensiometry or TDR are often difficult to apply because of the heterogeneity of the subsurface, and non-invasive tools like ERT, NMR or GPR are limited to the estimation of the water content. Electrical Streaming Potential (SP) monitoring can provide a cost-effective tool to help estimate the nature of the hydraulic transfers (infiltration or evaporation) in the vadose zone. Indeed this technique has improved during the last decade and has been shown to be a useful tool for quantitative groundwater flow characterization (see the poster of Marquis et al. for a review). We now report our latest developments on the possibility of using SP for estimating hydraulic parameters of unsaturated soils from in situ SP measurements during infiltration experiments. The proposed method consists of SP profiling perpendicular to a line source of steady-state infiltration. Analytic expressions for the forward modeling show a sensitivity to six parameters: the electrokinetic coupling parameter at saturation CS, the soil sorptive number α, the ratio of the constant source strength to the hydraulic conductivity at saturation q/KS, the soil effective water saturation prior to the infiltration experiment Se0, the Mualem parameter m, and the Archie law exponent n. In applications, all these parameters could be constrained by inverting electrokinetic data obtained during a series of infiltration experiments with varying source strength q.

  6. Size distribution, directional source contributions and pollution status of PM from Chengdu, China during a long-term sampling campaign.

    PubMed

    Shi, Guo-Liang; Tian, Ying-Ze; Ma, Tong; Song, Dan-Lin; Zhou, Lai-Dong; Han, Bo; Feng, Yin-Chang; Russell, Armistead G

    2017-06-01

    Long-term and synchronous monitoring of PM10 and PM2.5 was conducted in Chengdu, China from 2007 to 2013. The levels, variations, compositions and size distributions were investigated. The sources were quantified by two-way and three-way receptor models (PMF2, ME2-2way and ME2-3way). Consistent results were found: the primary source categories contributed 63.4% (PMF2), 64.8% (ME2-2way) and 66.8% (ME2-3way) to PM10, and contributed 60.9% (PMF2), 65.5% (ME2-2way) and 61.0% (ME2-3way) to PM2.5. Secondary sources contributed 31.8% (PMF2), 32.9% (ME2-2way) and 31.7% (ME2-3way) to PM10, and 35.0% (PMF2), 33.8% (ME2-2way) and 36.0% (ME2-3way) to PM2.5. The size distribution of source categories was estimated better by the ME2-3way method. The three-way model can simultaneously consider chemical species, temporal variability and PM sizes, while a two-way model independently computes datasets of different sizes. A method called source directional apportionment (SDA) was employed to quantify the contributions from various directions for each source category. Crustal dust from east-north-east (ENE) contributed the highest to both PM10 (12.7%) and PM2.5 (9.7%) in Chengdu, followed by the crustal dust from south-east (SE) for PM10 (9.8%) and secondary nitrate & secondary organic carbon from ENE for PM2.5 (9.6%). Source contributions from different directions are associated with meteorological conditions, source locations and emission patterns during the sampling period. These findings and methods provide useful tools to better understand PM pollution status and to develop effective pollution control strategies. Copyright © 2016. Published by Elsevier B.V.
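    The core of a source directional apportionment, attributing a source's sample-wise contributions to wind-direction sectors, can be illustrated with a toy sketch (sector definitions, the averaging scheme, and all values are illustrative assumptions, not the SDA method's details):

```python
import numpy as np

def directional_apportionment(contribs, wind_dirs, n_sectors=8):
    """Mean source contribution per wind-direction sector.

    Sample-wise contributions (e.g. from a receptor model) are grouped
    by the prevailing wind direction into equal-width sectors, and the
    per-sector mean is returned as {sector_index: mean_contribution}.
    """
    edges = np.linspace(0.0, 360.0, n_sectors + 1)
    sector = np.digitize(np.mod(wind_dirs, 360.0), edges) - 1
    return {i: float(np.mean(contribs[sector == i]))
            for i in range(n_sectors) if np.any(sector == i)}

# toy data: four samples of a crustal-dust contribution (ug/m^3)
contribs = np.array([10.0, 12.0, 3.0, 4.0])
wind = np.array([70.0, 80.0, 200.0, 210.0])  # degrees from north
print(directional_apportionment(contribs, wind))
```

Here the ENE-ish sector (45-90°) carries the larger mean contribution, analogous to the dominant-direction findings reported above.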

  7. Uncertainty in Estimates of Net Seasonal Snow Accumulation on Glaciers from In Situ Measurements

    NASA Astrophysics Data System (ADS)

    Pulwicki, A.; Flowers, G. E.; Radic, V.

    2017-12-01

    Accurately estimating the net seasonal snow accumulation (or "winter balance") on glaciers is central to assessing glacier health and predicting glacier runoff. However, measuring and modeling snow distribution is inherently difficult in mountainous terrain, resulting in high uncertainties in estimates of winter balance. Our work focuses on uncertainty attribution within the process of converting direct measurements of snow depth and density to estimates of winter balance. We collected more than 9000 direct measurements of snow depth across three glaciers in the St. Elias Mountains, Yukon, Canada in May 2016. Linear regression (LR) and simple kriging (SK), combined with cross correlation and Bayesian model averaging, are used to interpolate estimates of snow water equivalent (SWE) from snow depth and density measurements. Snow distribution patterns are found to differ considerably between glaciers, highlighting strong inter- and intra-basin variability. Elevation is found to be the dominant control of the spatial distribution of SWE, but the relationship varies considerably between glaciers. A simple parameterization of wind redistribution is also a small but statistically significant predictor of SWE. The SWE estimated for one study glacier has a short range parameter (90 m), and both LR and SK estimate a winter balance of 0.6 m w.e. but are poor predictors of SWE at measurement locations. The other two glaciers have longer SWE range parameters (~450 m) and, due to differences in extrapolation, SK estimates are more than 0.1 m w.e. (up to 40%) lower than LR estimates. By using a Monte Carlo method to quantify the effects of various sources of uncertainty, we find that the interpolation of estimated values of SWE is a larger source of uncertainty than the assignment of snow density or than the representation of the SWE value within a terrain model grid cell. For our study glaciers, the total winter balance uncertainty ranges from 0.03 (8%) to 0.15 (54%) m w.e., depending primarily on the interpolation method. Despite the challenges associated with accurately and precisely estimating winter balance, our results are consistent with the previously reported regional accumulation gradient.
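    The Monte Carlo style of uncertainty attribution described above can be illustrated for a single error source, the assigned snow density (a minimal sketch with invented numbers and distributions, not the authors' implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

def winter_balance_mc(depths_m, rho_mean, rho_sd, n=10000):
    """Propagate snow-density uncertainty into glacier-mean SWE.

    Each Monte Carlo trial draws one bulk density and converts the mean
    measured depth to water equivalent; the spread of the trial results
    quantifies this single uncertainty source.
    """
    rho_w = 1000.0                                # kg m^-3, water density
    rho = rng.normal(rho_mean, rho_sd, size=n)    # kg m^-3, one draw per trial
    swe = np.mean(depths_m) * rho / rho_w         # m w.e., glacier mean per trial
    return swe.mean(), swe.std()

depths = np.array([1.2, 1.5, 0.9, 2.1])           # m, toy depth sample
bw_mean, bw_sd = winter_balance_mc(depths, rho_mean=350.0, rho_sd=30.0)
print(bw_mean, bw_sd)
```

The same loop structure extends to the other uncertainty sources (interpolation method, grid-cell representation) by perturbing those choices per trial instead.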

  8. Auditory performance in an open sound field

    NASA Astrophysics Data System (ADS)

    Fluitt, Kim F.; Letowski, Tomasz; Mermagen, Timothy

    2003-04-01

    Detection and recognition of acoustic sources in an open field are important elements of situational awareness on the battlefield. They are affected by many technical and environmental conditions such as type of sound, distance to a sound source, terrain configuration, meteorological conditions, hearing capabilities of the listener, level of background noise, and the listener's familiarity with the sound source. A limited body of knowledge about auditory perception of sources located over long distances makes it difficult to develop models predicting auditory behavior on the battlefield. The purpose of the present study was to determine the listener's abilities to detect, recognize, localize, and estimate distances to sound sources from 25 to 800 m from the listening position. Data were also collected for meteorological conditions (wind direction and strength, temperature, atmospheric pressure, humidity) and background noise level for each experimental trial. Forty subjects (men and women, ages 18 to 25) participated in the study. Nine types of sounds were presented from six loudspeakers in random order; each series was presented four times. Partial results indicate that both detection and recognition declined at distances greater than approximately 200 m and that distances were grossly underestimated by listeners. Specific results will be presented.

  9. Empirical State Error Covariance Matrix for Batch Estimation

    NASA Technical Reports Server (NTRS)

    Frisbee, Joe

    2015-01-01

    State estimation techniques effectively provide mean state estimates. However, the theoretical state error covariance matrices provided as part of these techniques often suffer from a lack of confidence in their ability to describe the uncertainty in the estimated states. By a reinterpretation of the equations involved in the weighted batch least squares algorithm, it is possible to directly arrive at an empirical state error covariance matrix. The proposed empirical state error covariance matrix will contain the effect of all error sources, known or not. This empirical error covariance matrix may be calculated as a side computation for each unique batch solution. Results based on the proposed technique will be presented for a simple two-observer, measurement-error-only problem.
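    The idea of reinterpreting the weighted batch least-squares equations to obtain an empirical covariance can be sketched as pushing the actual post-fit residuals back through the estimator gain (a simplified illustration, not the paper's exact formulation):

```python
import numpy as np

def batch_lsq_with_empirical_cov(A, W, y):
    """Weighted batch least squares plus an empirical covariance.

    x = (A^T W A)^-1 A^T W y is the standard batch solution; mapping the
    actual residuals through the same estimator gain yields a covariance
    that reflects whatever errors are really present in this batch.
    """
    N = A.T @ W @ A                       # normal matrix
    G = np.linalg.solve(N, A.T @ W)       # estimator gain
    x = G @ y                             # state estimate
    r = y - A @ x                         # post-fit residuals
    P_emp = G @ np.outer(r, r) @ G.T      # empirical state covariance
    return x, P_emp

# toy example: fit intercept and slope to noisy line data
rng = np.random.default_rng(1)
t = np.linspace(0.0, 10.0, 50)
A = np.column_stack([np.ones_like(t), t])
y = 2.0 + 0.5 * t + rng.normal(0.0, 0.1, t.size)
x, P = batch_lsq_with_empirical_cov(A, np.eye(t.size), y)
print(x)
```

Unlike the theoretical covariance (A^T W A)^-1, the empirical matrix here inflates or deflates with the residuals actually observed, which is the property the abstract emphasizes.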

  10. Seismic source models for very-long period seismic signals on White Island, New Zealand

    NASA Astrophysics Data System (ADS)

    Jiwani-Brown, Elliot; Neuberg, Jurgen; Jolly, Art

    2015-04-01

    Very-long-period seismic signals (VLP) from White Island have a duration of only a few tens of seconds and a waveform that indicates an elastic (or viscoelastic) interaction of a source region with the surrounding medium; unlike VLP signals on some other volcanoes that indicate a step function recorded in the near field of the seismic source, White Island VLPs exhibit a Ricker waveform. We explore a set of isotropic, seismic source models based on the interaction between magma and water/brine in direct contact. Seismic amplitude measurements are taken into account to estimate the volume changes at depth that can produce the observed displacement at the surface. Furthermore, the influence of different fluid types are explored.

  11. Experimental Analysis of Pseudospark Sourced Electron Beam

    NASA Astrophysics Data System (ADS)

    Kumar, Niraj; Pal, U. N.; Verma, D. K.; Prajapati, J.; Kumar, M.; Meena, B. L.; Tyagi, M. S.; Srivastava, V.

    2011-12-01

    The pseudospark (PS) discharge has been shown to be a promising source of high-brightness, high-intensity electron beam pulses. The PS discharge sourced electron beam has potential applications in plasma-filled microwave sources where a normal material cathode cannot be used. Analysis of the electron beam profile has been done experimentally for different applied voltages. The investigation has been carried out at different axial and radial locations inside the drift space in an argon atmosphere. This paper presents the experimentally observed axial and radial variation of the beam current inside the drift tube of a PS discharge based plasma cathode electron (PCE) gun. With the help of current-density estimation, the focusing and defocusing points of the electron beam along the axial direction can be analyzed.

  12. The effect of directivity in a PSHA framework

    NASA Astrophysics Data System (ADS)

    Spagnuolo, E.; Herrero, A.; Cultrera, G.

    2012-09-01

    We propose a method to introduce a refined representation of the ground motion in the framework of Probabilistic Seismic Hazard Analysis (PSHA). This study is especially oriented to the incorporation of a priori information about source parameters, focusing on the directivity effect and its influence on seismic hazard maps. Two strategies have been followed. The first considers the seismic source as an extended source, and is valid when the PSHA seismogenic sources are represented as fault segments. We show that the incorporation of variables related to the directivity effect can lead to variations of up to 20 per cent of the hazard level in the case of dip-slip faults with a uniform distribution of hypocentre locations, in terms of spectral acceleration response at 5 s with an exceedance probability of 10 per cent in 50 yr. The second concerns the more general problem of seismogenic areas, where each point is a seismogenic source with the same chance of nucleating a seismic event. In our approach the point source is associated with rupture-related parameters defined through a statistical description. As an example, we consider a source point in an area characterized by strike-slip faulting. With the introduction of the directivity correction, the modulation of the hazard map reaches values of up to 100 per cent (for strike-slip, unilateral faults). The introduction of directivity does not increase the hazard level uniformly; rather, it redistributes the estimate in a manner consistent with fault orientation. A general increase appears only when no a priori information is available. However, good a priori knowledge now exists on the style of faulting, dip and orientation of the faults associated with the majority of the seismogenic zones in present seismic hazard maps. The percentage of variation obtained depends strongly on the analytical model chosen to represent the directivity effect. We therefore emphasize the methodology, through which all the collected information can be converted into a more comprehensive and meaningful probabilistic seismic hazard formulation.

  13. Direct Emissivity Measurements of Painted Metals for Improved Temperature Estimation During Laser Damage Testing

    DTIC Science & Technology

    2014-03-27

    The laser probe in use for this test is a Daylight Solutions Unicorn II quantum cascade laser operating at 3.77 µm. According to the laser...San Diego, CA, Spec Sheet: Unicorn II Fixed-Wavelength Mid-IR External Cavity Lasers.

  14. Use of Direct and Indirect Estimates of Crown Dimensions to Predict One Seed Juniper Woody Biomass Yield for Alternative Energy Uses

    USDA-ARS?s Scientific Manuscript database

    Throughout the western United States there is increased interest in utilizing woodland biomass as an alternative energy source. We conducted a pilot study to predict one seed juniper (Juniperus monosperma) chip yield from tree-crown dimensions measured on the ground or derived from Very Large Scale ...

  15. Estimating daily Landsat-scale evapotranspiration over a managed pine plantation in North Carolina, USA using a data fusion method

    USDA-ARS?s Scientific Manuscript database

    As a primary flux in the global water cycle, evapotranspiration (ET) connects hydrologic and biological processes and is directly affected by water management, land use change and climate change. The two source energy balance (TSEB) model has been widely applied to quantify field scale ET using sate...

  16. Daily Landsat-scale evapotranspiration estimation over a managed pine plantation in North Carolina, USA using multi-satellite data fusion

    USDA-ARS?s Scientific Manuscript database

    As a primary flux in the global water cycle, evapotranspiration (ET) connects hydrologic and biological processes and is directly affected by water and land management, land use change and climate variability. The Two Source Energy Balance (TSEB) model has been widely applied to quantify field- to g...

  17. Educating Foreign Students in the U.S.A.: A Cost Benefit Analysis.

    ERIC Educational Resources Information Center

    Mehrabi, Shah M.

    The economic costs and benefits of educating foreign students in U.S. public and private colleges are estimated. U.S. costs of educating foreign students consist primarily of: (1) direct educational costs, (2) cost of the foreign students who receive their maintenance allowance from U.S. sources, (3) travel costs of those foreign students whose…

  18. Atmospheric Response And Feedback To Smoke Radiative Forcing From Wildland Fires

    Treesearch

    Yongqiang Liu

    2003-01-01

    Smoke from wildland fires is one of the sources of atmospheric anthropogenic aerosols. It can dramatically affect regional and global radiative balance. Ross et al. (1998) estimated a direct radiative forcing of nearly -20 W m(-2) for the 1995 Amazonian smoke season (August and September). Penner et al. (1992) indicated that the magnitude of the...

  19. Satellite-derived methane hotspot emission estimates using a fast data-driven method

    NASA Astrophysics Data System (ADS)

    Buchwitz, Michael; Schneising, Oliver; Reuter, Maximilian; Heymann, Jens; Krautwurst, Sven; Bovensmann, Heinrich; Burrows, John P.; Boesch, Hartmut; Parker, Robert J.; Somkuti, Peter; Detmers, Rob G.; Hasekamp, Otto P.; Aben, Ilse; Butz, André; Frankenberg, Christian; Turner, Alexander J.

    2017-05-01

    Methane is an important atmospheric greenhouse gas and an adequate understanding of its emission sources is needed for climate change assessments, predictions, and the development and verification of emission mitigation strategies. Satellite retrievals of near-surface-sensitive column-averaged dry-air mole fractions of atmospheric methane, i.e. XCH4, can be used to quantify methane emissions. Maps of time-averaged satellite-derived XCH4 show regionally elevated methane over several methane source regions. In order to obtain methane emissions of these source regions we use a simple and fast data-driven method to estimate annual methane emissions and corresponding 1σ uncertainties directly from maps of annually averaged satellite XCH4. From theoretical considerations we expect that our method tends to underestimate emissions. When applying our method to high-resolution atmospheric methane simulations, we typically find agreement within the uncertainty range of our method (often 100 %) but also find that our method tends to underestimate emissions by typically about 40 %. To what extent these findings are model dependent needs to be assessed. We apply our method to an ensemble of satellite XCH4 data products consisting of two products from SCIAMACHY/ENVISAT and two products from TANSO-FTS/GOSAT covering the time period 2003-2014. We obtain annual emissions of four source areas: Four Corners in the south-western USA, the southern part of Central Valley, California, Azerbaijan, and Turkmenistan. We find that our estimated emissions are in good agreement with independently derived estimates for Four Corners and Azerbaijan. For the Central Valley and Turkmenistan our estimated annual emissions are higher compared to the EDGAR v4.2 anthropogenic emission inventory. For Turkmenistan we find on average about 50 % higher emissions with our annual emission uncertainty estimates overlapping with the EDGAR emissions. 
For the region around Bakersfield in the Central Valley we find a factor of 5-8 higher emissions compared to EDGAR, albeit with large uncertainty. Major methane emission sources in this region are oil/gas and livestock. Our findings corroborate recently published studies based on aircraft and satellite measurements and new bottom-up estimates reporting significantly underestimated methane emissions of oil/gas and/or livestock in this area in EDGAR.

  20. Delineation and hydrologic effects of a gasoline leak at Stovepipe Wells Hotel, Death Valley National Monument, California

    USGS Publications Warehouse

    Buono, A.; Packard, Elaine M.

    1982-01-01

    Ground water is the only local source of water available to the Stovepipe Wells Hotel facilities of the Death Valley National Monument, California. A leak in a service station storage tank caused the formation of a gasoline layer overlying the water table, creating the potential for contamination of the water supply. The maximum horizontal extent of the gasoline layer was mathematically estimated to be 1,300 feet downgradient from the leaky gasoline tank. Exploratory drilling detected the gasoline layer between 900 and 1,400 feet downgradient and between 50 and 150 feet upgradient from the source. Traces of the soluble components of gasoline were also found in the aquifer 150 feet upgradient, and 250 feet distant from the source perpendicular to the direction of ground-water movement. The gasoline spill is not likely to have an effect on the supply wells located 0.4 mile south of the leak source, which is nearly perpendicular to the direction of ground-water movement and the primary direction of gasoline movement in the area. No effect on phreatophytes 2 miles downgradient from the layer is likely, but the potential effects of gasoline vapors within the unsaturated zone on local xerophytes are not known. (USGS)

  1. Low-frequency Target Strength and Abundance of Shoaling Atlantic Herring (Clupea harengus) in the Gulf of Maine during the Ocean Acoustic Waveguide Remote Sensing 2006 Experiment

    DTIC Science & Technology

    2010-01-01

    the northern flank of Georges Bank from east to west. As a result, annual stock estimates may be highly aliased in both time and space. One of the...transmitted signals from the source array for transmission loss and source level calibrations. Two calibrated acoustic targets made of air-filled rubber...region to the north is comprised of over 70106 individuals. Concurrent localized imaging of fish aggregations at OAWRS-directed locations was

  2. Contribution of Changing Sources and Sinks to the Growth Rate of Atmospheric Methane Concentrations for the Last Two Decades

    NASA Technical Reports Server (NTRS)

    Matthews, Elaine; Walter, B.; Bogner, J.; Sarma, D.; Portmey, G.; Travis, Larry (Technical Monitor)

    2001-01-01

    In situ measurements of atmospheric methane concentrations begun in the early 1980s show decadal trends, as well as large interannual variations, in growth rate. Recent research indicates that while wetlands can explain several of the large growth anomalies for individual years, the decadal trend may be the combined effect of increasing sinks, due to increases in tropospheric OH, and stabilizing sources. We discuss new 20-year histories of annual, global source strengths for all major methane sources, i.e., natural wetlands, rice cultivation, ruminant animals, landfills, fossil fuels, and biomass burning. We also present estimates of the temporal pattern of the sink required to reconcile these sources and atmospheric concentrations over this time period. Analysis of the individual emission sources, together with model-derived estimates of the OH sink strength, indicates that the growth rate of atmospheric methane observed over the last 20 years can only be explained by a combination of changes in source emissions and an increasing tropospheric sink. Direct validation of the global sources and the terrestrial sink is not straightforward, in part because some sources/sinks are relatively small and diffuse (e.g., landfills and soil consumption), as well as because the atmospheric record integrates multiple and substantial sources and tropospheric sinks in regions such as the tropics. We discuss ways to develop and test criteria for rejecting and/or accepting a suite of scenarios for the methane budget.

  3. Uncertainties associated with parameter estimation in atmospheric infrasound arrays.

    PubMed

    Szuberla, Curt A L; Olson, John V

    2004-01-01

    This study describes a method for determining the statistical confidence in estimates of direction-of-arrival and trace velocity stemming from signals present in atmospheric infrasound data. It is assumed that the signal source is far enough removed from the infrasound sensor array that a plane-wave approximation holds, and that multipath and multiple source effects are not present. Propagation path and medium inhomogeneities are assumed not to be known at the time of signal detection, but the ensemble of time delays of signal arrivals between array sensor pairs is estimable and corrupted by uncorrelated Gaussian noise. The method results in a set of practical uncertainties that lend themselves to a geometric interpretation. Although quite general, this method is intended for use by analysts interpreting data from atmospheric acoustic arrays, or those interested in designing and deploying them. The method is applied to infrasound arrays typical of those deployed as a part of the International Monitoring System of the Comprehensive Nuclear-Test-Ban Treaty Organization.
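    Under the stated plane-wave assumption, the ensemble of inter-sensor time delays determines the horizontal slowness vector, and hence direction-of-arrival and trace velocity, by linear least squares; a minimal sketch (function names and the array geometry are illustrative, and the paper's uncertainty analysis is not reproduced):

```python
import numpy as np

def slowness_from_delays(positions, pair_idx, delays):
    """Least-squares plane-wave fit to inter-sensor time delays.

    For a plane wave with horizontal slowness s, the delay between
    sensors i and j is tau_ij = s . (r_j - r_i).  Stacking all pairs
    gives a linear system for s; its direction gives the propagation
    azimuth and its magnitude the inverse trace velocity.
    """
    D = np.array([positions[j] - positions[i] for i, j in pair_idx])
    s, *_ = np.linalg.lstsq(D, delays, rcond=None)
    trace_velocity = 1.0 / np.linalg.norm(s)
    azimuth = np.degrees(np.arctan2(s[0], s[1])) % 360.0  # propagation azimuth, deg from north
    return azimuth, trace_velocity

# square array (east, north), 1 km aperture; wave toward azimuth 45 deg at 340 m/s
pos = np.array([[0.0, 0.0], [1000.0, 0.0], [1000.0, 1000.0], [0.0, 1000.0]])
s_true = np.array([np.sin(np.radians(45.0)), np.cos(np.radians(45.0))]) / 340.0
pairs = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
taus = np.array([s_true @ (pos[j] - pos[i]) for i, j in pairs])
print(slowness_from_delays(pos, pairs, taus))
```

Adding Gaussian noise to the delays and repeating the fit would yield the spread of azimuth and trace-velocity estimates whose confidence regions the study characterizes.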

  4. Chiral perturbation theory and nucleon-pion-state contaminations in lattice QCD

    NASA Astrophysics Data System (ADS)

    Bär, Oliver

    2017-05-01

    Multiparticle states with additional pions are expected to be a non-negligible source of excited-state contamination in lattice simulations at the physical point. It is shown that baryon chiral perturbation theory can be employed to calculate the contamination due to two-particle nucleon-pion-states in various nucleon observables. Leading order results are presented for the nucleon axial, tensor and scalar charge and three Mellin moments of parton distribution functions (quark momentum fraction, helicity and transversity moment). Taking into account phenomenological results for the charges and moments the impact of the nucleon-pion-states on lattice estimates for these observables can be estimated. The nucleon-pion-state contribution results in an overestimation of all charges and moments obtained with the plateau method. The overestimation is at the 5-10% level for source-sink separations of about 2 fm. The source-sink separations accessible in contemporary lattice simulations are found to be too small for chiral perturbation theory to be directly applicable.

  5. Microbial Source Module (MSM): Documenting the Science ...

    EPA Pesticide Factsheets

    The Microbial Source Module (MSM) estimates microbial loading rates to land surfaces from non-point sources, and to streams from point sources for each subwatershed within a watershed. A subwatershed, the smallest modeling unit, represents the common basis for information consumed and produced by the MSM which is based on the HSPF (Bicknell et al., 1997) Bacterial Indicator Tool (EPA, 2013b, 2013c). Non-point sources include numbers, locations, and shedding rates of domestic agricultural animals (dairy and beef cows, swine, poultry, etc.) and wildlife (deer, duck, raccoon, etc.). Monthly maximum microbial storage and accumulation rates on the land surface, adjusted for die-off, are computed over an entire season for four land-use types (cropland, pasture, forest, and urbanized/mixed-use) for each subwatershed. Monthly point source microbial loadings to instream locations (i.e., stream segments that drain individual sub-watersheds) are combined and determined for septic systems, direct instream shedding by cattle, and POTWs/WWTPs (Publicly Owned Treatment Works/Wastewater Treatment Plants). The MSM functions within a larger modeling system that characterizes human-health risk resulting from ingestion of water contaminated with pathogens. The loading estimates produced by the MSM are input to the HSPF model that simulates flow and microbial fate/transport within a watershed. Microbial counts within recreational waters are then input to the MRA-IT model (Soller et
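The die-off-adjusted monthly accumulation the MSM computes for each land-use type can be sketched as a simple first-order decay recursion. The function below is illustrative only: the first-order form, parameter names, and values are assumptions, not the MSM's actual equations.

```python
import math

def monthly_surface_accumulation(daily_load, dieoff_rate, storage_limit, days=30):
    # Each day, yesterday's stored microbes decay by exp(-k) (first-order
    # die-off), new shedding is added, and the total is capped at the
    # maximum storage the land surface supports.
    stored = 0.0
    for _ in range(days):
        stored = min(stored * math.exp(-dieoff_rate) + daily_load, storage_limit)
    return stored

# With die-off, accumulation approaches daily_load / (1 - exp(-k)).
steady = monthly_surface_accumulation(1.0, 0.5, 1e9, days=1000)
```

The same recursion run per subwatershed and land-use type would yield the monthly loading inputs passed on to the fate/transport model.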

  6. The role of local populations within a landscape context: Defining and classifying sources and sinks

    USGS Publications Warehouse

    Runge, J.P.; Runge, M.C.; Nichols, J.D.

    2006-01-01

    The interaction of local populations has been the focus of an increasing number of studies in the past 30 years. The study of source-sink dynamics has especially generated much interest. Many of the criteria used to distinguish sources and sinks incorporate the process of apparent survival (i.e., the combined probability of true survival and site fidelity) but not emigration. These criteria implicitly treat emigration as mortality, thus biasing the classification of sources and sinks in a manner that could lead to flawed habitat management. Some of the same criteria require rather restrictive assumptions about population equilibrium that, when violated, can also generate misleading inference. Here, we expand on a criterion (denoted "contribution" or Cr) that incorporates successful emigration in differentiating sources and sinks and that makes no restrictive assumptions about dispersal or equilibrium processes in populations of interest. The metric Cr is rooted in the theory of matrix population models, yet it also contains clearly specified parameters that have been estimated in previous empirical research. We suggest that estimates of emigration are important for delineating sources and sinks and, more generally, for evaluating how local populations interact to generate overall system dynamics. This suggestion has direct implications for issues such as species conservation and habitat management.

  7. Modeling the direction-continuous time-of-arrival in head-related transfer functions

    PubMed Central

    Ziegelwanger, Harald; Majdak, Piotr

    2015-01-01

    Head-related transfer functions (HRTFs) describe the filtering of the incoming sound by the torso, head, and pinna. As a consequence of the propagation path from the source to the ear, each HRTF contains a direction-dependent, broadband time-of-arrival (TOA). TOAs are usually estimated independently for each direction from HRTFs, a method prone to artifacts and limited by the spatial sampling. In this study, a continuous-direction TOA model combined with an outlier-removal algorithm is proposed. The model is based on a simplified geometric representation of the listener, and his/her arbitrary position within the HRTF measurement. The outlier-removal procedure uses the extreme studentized deviation test to remove implausible TOAs. The model was evaluated for numerically calculated HRTFs of sphere, torso, and pinna under various conditions. The accuracy of estimated parameters was within the resolution given by the sampling rate. Applied to acoustically measured HRTFs of 172 listeners, the estimated parameters were consistent with realistic listener geometry. The outlier removal further improved the goodness-of-fit, particularly for some problematic fits. The comparison with a simpler model that fixed the listener position to the center of the measurement geometry showed a clear advantage of listener position as an additional free model parameter. PMID:24606268
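The outlier-removal step mentioned above can be sketched with a generic generalized extreme studentized deviate (ESD) test (after Rosner); this is not the paper's implementation, and it relies on SciPy's t distribution for the critical values.

```python
import numpy as np
from scipy import stats

def generalized_esd(x, max_outliers, alpha=0.05):
    # Generalized ESD test: returns indices of detected outliers, e.g.
    # implausible per-direction TOA estimates. At each step the most
    # extreme point is removed and its studentized deviate R_i is
    # compared against the critical value lambda_i.
    x = np.asarray(x, dtype=float)
    n = len(x)
    remaining = list(range(n))
    removed = []
    n_detected = 0
    for i in range(1, max_outliers + 1):
        vals = x[remaining]
        dev = np.abs(vals - vals.mean())
        k = int(np.argmax(dev))
        R = dev[k] / vals.std(ddof=1)
        p = 1.0 - alpha / (2.0 * (n - i + 1))
        t = stats.t.ppf(p, n - i - 1)
        lam = (n - i) * t / np.sqrt((n - i - 1 + t * t) * (n - i + 1))
        removed.append(remaining.pop(k))
        if R > lam:
            n_detected = i
    return removed[:n_detected]

# Hypothetical TOA samples (ms) with one implausible value.
toas = [10.1, 9.9, 10.0, 10.2, 9.8, 10.05, 9.95, 10.1, 9.9, 50.0]
outliers = generalized_esd(toas, max_outliers=3)
```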

  8. Efficient Bayesian experimental design for contaminant source identification

    NASA Astrophysics Data System (ADS)

    Zhang, J.; Zeng, L.

    2013-12-01

    In this study, an efficient full Bayesian approach is developed for optimal sampling well location design and source parameter identification of groundwater contaminants. An information measure, the relative entropy, is employed to quantify the information gain from indirect concentration measurements in identifying unknown source parameters such as the release time, strength and location. In this approach, the sampling location that gives the maximum relative entropy is selected as the optimal one. Once the sampling location is determined, a Bayesian approach based on Markov chain Monte Carlo (MCMC) is used to estimate the unknown source parameters. In both the design and the estimation, the contaminant transport equation must be solved many times to evaluate the likelihood. To reduce the computational burden, an interpolation method based on an adaptive sparse grid is used to construct a surrogate for the contaminant transport model. The approximated likelihood can be evaluated directly from the surrogate, which greatly accelerates the design and estimation process. The accuracy and efficiency of our approach are demonstrated through numerical case studies. Compared with traditional optimal design, which is based on the Gaussian linear assumption, the method developed in this study can cope with arbitrary nonlinearity. It can be used to assist in groundwater monitoring network design and identification of unknown contaminant sources. [Figure captions: contours of the expected information gain, with the optimal observation location at the maximum; posterior marginal probability densities of the unknown parameters, comparing the designed location (thick solid black lines) with seven randomly chosen locations, true values marked by vertical lines; the unknown parameters are estimated better with the designed location.]
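The design criterion, choosing the sampling location with maximum expected relative entropy (KL divergence) between posterior and prior, can be sketched on a toy discrete problem. The Gaussian forward kernel, grids, and noise level below are illustrative assumptions, not the paper's transport model or surrogate.

```python
import numpy as np

rng = np.random.default_rng(0)
sources = np.linspace(0.0, 10.0, 41)          # candidate source locations
prior = np.full(len(sources), 1.0 / len(sources))
wells = np.linspace(1.0, 9.0, 9)              # candidate sampling wells
sigma = 0.05                                  # measurement noise std

def forward(well, src):
    # Toy stand-in for the contaminant transport solve.
    return np.exp(-0.5 * (well - src) ** 2)

def expected_info_gain(well, n_mc=200):
    # Monte Carlo average over prior draws of the KL divergence between
    # the resulting posterior and the prior.
    gain = 0.0
    for _ in range(n_mc):
        src = rng.choice(sources, p=prior)
        y = forward(well, src) + sigma * rng.standard_normal()
        like = np.exp(-0.5 * ((y - forward(well, sources)) / sigma) ** 2)
        post = like * prior
        post /= post.sum()
        gain += np.sum(post * np.log(post / prior))
    return gain / n_mc

gains = np.array([expected_info_gain(w) for w in wells])
best_well = wells[np.argmax(gains)]
```

In the paper this role is played by the sparse-grid surrogate of the transport equation; the argmax over candidate wells is the designed location, after which MCMC is run with data from that well.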

  9. Magnetic, in situ, mineral characterization of Chelyabinsk meteorite thin section

    NASA Astrophysics Data System (ADS)

    Nabelek, Ladislav; Mazanec, Martin; Kdyr, Simon; Kletetschka, Gunther

    2015-06-01

    Magnetic images of a thin section of the Chelyabinsk meteorite (fragment F1, recovered from Chebarkul lake) were acquired with a magnetic scanning system from Youngwood Science and Engineering (YSE) capable of resolving magnetic anomalies down to the 10^-3 mT range from a probe-to-surface distance of about 0.3 mm (resolution about 0.15 mm). Anomalies were produced repeatedly, each time after application of a magnetic field pulse of varying amplitude and constant, normal or reversed, direction. This process resulted in both magnetizing and demagnetizing the meteorite thin section, while keeping the magnetization vector in the plane of the thin section. Analysis of the magnetic data allows determination of the coercivity of remanence (Bcr) for the magnetic sources in situ. The value of Bcr is critical for calculating the magnetic forces applicable during missions to asteroids, where gravity is compromised. Bcr was estimated by two methods. The first method measured the varying dipole magnetic field strength produced by each anomaly in the direction of the magnetic pulses. The second method measured deflections of the dipole direction from the direction of the magnetic pulses. Bcr of the magnetic sources in the Chelyabinsk meteorite ranges between 4 and 7 mT. These magnetic sources reach their saturation states when a 40 mT external magnetic field pulse is applied.

  10. Olfaction and Hearing Based Mobile Robot Navigation for Odor/Sound Source Search

    PubMed Central

    Song, Kai; Liu, Qi; Wang, Qi

    2011-01-01

    Bionic technology provides a new elicitation for mobile robot navigation since it explores the way to imitate biological senses. In the present study, the challenging problem was how to fuse different biological senses and guide distributed robots to cooperate with each other for target searching. This paper integrates smell, hearing and touch to design an odor/sound tracking multi-robot system. The olfactory robot tracks the chemical odor plume step by step through information fusion from gas sensors and airflow sensors, while two hearing robots localize the sound source by time delay estimation (TDE) and the geometrical position of microphone array. Furthermore, this paper presents a heading direction based mobile robot navigation algorithm, by which the robot can automatically and stably adjust its velocity and direction according to the deviation between the current heading direction measured by magnetoresistive sensor and the expected heading direction acquired through the odor/sound localization strategies. Simultaneously, one robot can communicate with the other robots via a wireless sensor network (WSN). Experimental results show that the olfactory robot can pinpoint the odor source within the distance of 2 m, while two hearing robots can quickly localize and track the olfactory robot in 2 min. The devised multi-robot system can achieve target search with a considerable success ratio and high stability. PMID:22319401
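The time delay estimation (TDE) step used by the hearing robots can be sketched as a cross-correlation peak search between two microphone signals; the sampling rate and delay below are illustrative.

```python
import numpy as np

def estimate_delay(sig_a, sig_b, fs):
    # Delay of sig_b relative to sig_a, in seconds, from the peak of
    # their full cross-correlation.
    corr = np.correlate(sig_b, sig_a, mode="full")
    lag = int(np.argmax(corr)) - (len(sig_a) - 1)
    return lag / fs

fs = 8000
rng = np.random.default_rng(0)
src = rng.standard_normal(1024)          # broadband source signal
delay_samples = 25
mic1 = src
mic2 = np.concatenate([np.zeros(delay_samples), src[:-delay_samples]])
tde = estimate_delay(mic1, mic2, fs)
```

Combined with the microphone-array geometry, the delay maps to a bearing (for a far source, sin θ ≈ c·τ/d with mic spacing d and sound speed c), which is the expected heading direction fed to the navigation algorithm.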

  11. Real-time 3-D space numerical shake prediction for earthquake early warning

    NASA Astrophysics Data System (ADS)

    Wang, Tianyun; Jin, Xing; Huang, Yandan; Wei, Yongxiang

    2017-12-01

    In earthquake early warning systems, real-time shake prediction through wave propagation simulation is a promising approach because, unlike traditional methods, it does not suffer from inaccurate estimation of source parameters. For computational efficiency, these methods assume that the wavefield propagates on the 2-D surface of the earth. In fact, since seismic waves propagate in the 3-D sphere of the earth, 2-D modeling of wave propagation results in inaccurate wave estimation. In this paper, we propose a 3-D space numerical shake prediction method, which simulates wave propagation in 3-D space using radiative transfer theory and incorporates a data assimilation technique to estimate the distribution of wave energy. The 2011 Tohoku earthquake is studied as an example to show the validity of the proposed model. The 2-D and 3-D space models are compared in this article, and the prediction results show that numerical shake prediction based on the 3-D space model can estimate real-time ground motion precisely, and overprediction is alleviated when the 3-D space model is used.

  12. Estimating sedimentation rates and sources in a partially urbanized catchment using caesium-137

    NASA Astrophysics Data System (ADS)

    Ormerod, L. M.

    1998-06-01

    While there has been increased interest in determining sedimentation rates and sources in agricultural and forested catchments in recent years, there have been few studies dealing with urbanized catchments. A study of sedimentation rates and sources within channel and floodplain deposits of a partially urbanized catchment has been undertaken using the 137Cs technique. Results for sedimentation rates showed no particular downstream pattern. This may be partially explained by underestimation of sedimentation rates at some sites by failure to sample the full 137Cs profile, floodplain erosion and deliberate removal of sediment. Evidence of lateral increases in net sedimentation rates with distance from the channel may be explained by increased floodplain erosion at sites closer to the channel and floodplain formation by lateral deposition. Potential sediment sources for the catchment were considered to be forest topsoil, subsurface material and sediments derived from urban areas, which were found to be predominantly subsurface material. Tracing techniques showed an increase in subsurface material for downstream sites, confirming expectations that subsurface material would increase in the downstream direction in response to the direct and indirect effects of urbanization.

  13. Accuracy of a simplified method for shielded gamma-ray skyshine sources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bassett, M.S.; Shultis, J.K.

    1989-11-01

    Rigorous transport or Monte Carlo methods for estimating far-field gamma-ray skyshine doses generally are computationally intensive. Consequently, several simplified techniques such as point-kernel methods and methods based on beam response functions have been proposed. For unshielded skyshine sources, these simplified methods have been shown to be quite accurate from comparisons to benchmark problems and to benchmark experimental results. For shielded sources, the simplified methods typically use exponential attenuation and photon buildup factors to describe the effect of the shield. However, the energy and directional redistribution of photons scattered in the shield is usually ignored, i.e., scattered photons are assumed to emerge from the shield with the same energy and direction as the uncollided photons. The accuracy of this shield treatment is largely unknown due to the paucity of benchmark results for shielded sources. In this paper, the validity of such a shield treatment is assessed by comparison to a composite method, which accurately calculates the energy and angular distribution of photons penetrating the shield.
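The simplified shield treatment being assessed, exponential attenuation with a buildup factor and no redistribution of energy or direction, can be sketched as a point-kernel expression. The function and the linear buildup form are illustrative, not the paper's benchmark formulation, and the coefficient values are placeholders.

```python
import math

def shielded_skyshine_flux(S, mu, t, r, B):
    # Point-kernel sketch: source strength S, shield attenuation
    # coefficient mu and thickness t, buildup factor B for scattered
    # photons (assumed here to keep the uncollided energy and
    # direction), and distance r to the detector.
    return S * B * math.exp(-mu * t) / (4.0 * math.pi * r ** 2)

def buildup_linear(mu_t, a=1.0):
    # Illustrative linear buildup approximation B = 1 + a * (mu * t).
    return 1.0 + a * mu_t

mu, t = 0.06, 10.0
flux = shielded_skyshine_flux(S=1e9, mu=mu, t=t, r=100.0, B=buildup_linear(mu * t))
```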

  14. A Subspace Pursuit–based Iterative Greedy Hierarchical Solution to the Neuromagnetic Inverse Problem

    PubMed Central

    Babadi, Behtash; Obregon-Henao, Gabriel; Lamus, Camilo; Hämäläinen, Matti S.; Brown, Emery N.; Purdon, Patrick L.

    2013-01-01

    Magnetoencephalography (MEG) is an important non-invasive method for studying activity within the human brain. Source localization methods can be used to estimate spatiotemporal activity from MEG measurements with high temporal resolution, but the spatial resolution of these estimates is poor due to the ill-posed nature of the MEG inverse problem. Recent developments in source localization methodology have emphasized temporal as well as spatial constraints to improve source localization accuracy, but these methods can be computationally intense. Solutions emphasizing spatial sparsity hold tremendous promise, since the underlying neurophysiological processes generating MEG signals are often sparse in nature, whether in the form of focal sources, or distributed sources representing large-scale functional networks. Recent developments in the theory of compressed sensing (CS) provide a rigorous framework to estimate signals with sparse structure. In particular, a class of CS algorithms referred to as greedy pursuit algorithms can provide both high recovery accuracy and low computational complexity. Greedy pursuit algorithms are difficult to apply directly to the MEG inverse problem because of the high-dimensional structure of the MEG source space and the high spatial correlation in MEG measurements. In this paper, we develop a novel greedy pursuit algorithm for sparse MEG source localization that overcomes these fundamental problems. This algorithm, which we refer to as the Subspace Pursuit-based Iterative Greedy Hierarchical (SPIGH) inverse solution, exhibits very low computational complexity while achieving very high localization accuracy. We evaluate the performance of the proposed algorithm using comprehensive simulations, as well as the analysis of human MEG data during spontaneous brain activity and somatosensory stimuli. 
These studies reveal substantial performance gains provided by the SPIGH algorithm in terms of computational complexity, localization accuracy, and robustness. PMID:24055554
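The subspace pursuit idea at the core of SPIGH can be sketched on a toy sparse recovery problem. This is the generic Subspace Pursuit iteration (after Dai and Milenkovic), not the hierarchical, MEG-specific algorithm the paper develops; the measurement matrix and sparse vector are synthetic.

```python
import numpy as np

def subspace_pursuit(A, y, K, n_iter=10):
    # Maintain a K-column support; each iteration expands it with the K
    # columns most correlated with the residual, then prunes back to K
    # by least-squares coefficient magnitude.
    n = A.shape[1]
    support = np.argsort(np.abs(A.T @ y))[-K:]
    for _ in range(n_iter):
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        resid = y - A[:, support] @ coef
        candidates = np.union1d(support, np.argsort(np.abs(A.T @ resid))[-K:])
        coef_c, *_ = np.linalg.lstsq(A[:, candidates], y, rcond=None)
        support = candidates[np.argsort(np.abs(coef_c))[-K:]]
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        if np.linalg.norm(y - A[:, support] @ coef) < 1e-10:
            break
    x = np.zeros(n)
    x[support] = coef
    return x

# Toy problem: 3 active "sources" out of 100, 40 measurements.
rng = np.random.default_rng(3)
A = rng.standard_normal((40, 100))
A /= np.linalg.norm(A, axis=0)
x_true = np.zeros(100)
x_true[[5, 37, 80]] = [1.5, -2.0, 1.0]
y = A @ x_true
x_hat = subspace_pursuit(A, y, K=3)
```

The MEG setting adds the complications the paper addresses: a very high-dimensional source space and strongly correlated sensor measurements, which plain subspace pursuit handles poorly.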

  15. REVERBERATION AND PHOTOIONIZATION ESTIMATES OF THE BROAD-LINE REGION RADIUS IN LOW-z QUASARS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Negrete, C. Alenka; Dultzin, Deborah; Marziani, Paola

    2013-07-01

    Black hole mass estimation in quasars, especially at high redshift, involves the use of single-epoch spectra with signal-to-noise ratio and resolution that permit accurate measurement of the width of a broad line assumed to be a reliable virial estimator. Coupled with an estimate of the radius of the broad-line region (BLR), this yields the black hole mass M_BH. The radius of the BLR may be inferred from an extrapolation of the correlation between source luminosity and reverberation-derived r_BLR measures (the so-called Kaspi relation involving about 60 low-z sources). We are exploring a different method for estimating r_BLR directly from inferred physical conditions in the BLR of each source. We report here on a comparison of r_BLR estimates that come from our method and from reverberation mapping. Our "photoionization" method employs diagnostic line intensity ratios in the rest-frame range 1400-2000 Å (Al III λ1860/Si III] λ1892, C IV λ1549/Al III λ1860) that enable derivation of the product of density and ionization parameter, with the BLR distance derived from the definition of the ionization parameter. We find good agreement between our estimates of the density, ionization parameter, and r_BLR and those from reverberation mapping. We suggest empirical corrections to improve the agreement between individual photoionization-derived r_BLR values and those obtained from reverberation mapping. The results in this paper can be exploited to estimate M_BH for large samples of high-z quasars using an appropriate virial broadening estimator. We show that the widths of the UV intermediate emission lines are consistent with the width of Hβ, thereby providing a reliable virial broadening estimator that can be measured in large samples of high-z quasars.

  16. [Perception of approaching and withdrawing sound sources following exposure to broadband noise. The effect of spatial domain].

    PubMed

    Malinina, E S

    2014-01-01

    The spatial specificity of the auditory aftereffect was studied after a short-time (5 s) adaptation to broadband noise (20-20000 Hz). Adapting stimuli were sequences of noise impulses with constant amplitude; test stimuli had either constant or changing amplitude: an increase of impulse amplitude within a sequence was perceived by listeners as approach of the sound source, while a decrease was perceived as its withdrawal. The experiments were performed in an anechoic chamber. The auditory aftereffect was estimated under the following conditions: the adapting and test stimuli were presented from a loudspeaker located at a distance of 1.1 m from the listeners (the subjectively near spatial domain) or 4.5 m from the listeners (the subjectively far spatial domain), or the adapting and test stimuli were presented from different distances. The obtained data showed that perception of the simulated movement of the sound source in both spatial domains had common characteristic peculiarities that manifested themselves both under control conditions without adaptation and after adaptation to noise. In the absence of adaptation, for both distances, an asymmetry of psychometric curves was observed: the listeners estimated the test stimuli more often as approaching. This overestimation of test stimuli as approaching was more pronounced at presentation from the distance of 1.1 m, i.e., from the subjectively near spatial domain. After adaptation to noise, the aftereffects showed spatial specificity in both spatial domains: they were observed only at the spatial coincidence of adapting and test stimuli and were absent at their separation. The aftereffects observed in the two spatial domains were similar in direction and value: the listeners estimated the test stimuli more often as withdrawing as compared to control. Such an aftereffect restored the symmetry of the psychometric curves and the equiprobable estimation of the direction of movement of the test signals.

  17. Modeling the Influence of Hemispheric Transport on Trends in ...

    EPA Pesticide Factsheets

    We describe the development and application of the hemispheric version of CMAQ to examine the influence of long-range pollutant transport on trends in surface-level O3 distributions. The WRF-CMAQ model is expanded to hemispheric scales, and multi-decadal model simulations were recently performed for the period spanning 1990-2010 to examine changes in hemispheric air pollution resulting from changes in emissions over this period. Simulated trends in ozone and precursor species concentrations across the U.S. and the northern hemisphere over the past two decades are compared with those inferred from available measurements during this period. Additionally, the decoupled direct method (DDM) in CMAQ is used to estimate the sensitivity of O3 to emissions from different source regions across the northern hemisphere. The seasonal variations in source region contributions to background O3 are then estimated from these sensitivity calculations. A reduced-form model, combining these source region sensitivities estimated from DDM with the multi-decadal simulations of O3 distributions and emissions trends, is then developed to characterize the changing contributions of different source regions to background O3 levels across North America.

  18. Analysis and prediction of ocean swell using instrumented buoys

    NASA Technical Reports Server (NTRS)

    Mettlach, Theodore; Wang, David; Wittmann, Paul

    1994-01-01

    During the period 20-23 September 1990, the remnants of Supertyphoon Flo moved into the central North Pacific Ocean with sustained wind speeds of 28 m/s. The strong wind and large fetch area associated with this storm generated long-period swell that propagated to the west coast of North America. National Data Buoy Center moored-buoy stations, located in a network that ranged from the Gulf of Alaska to the California Bight, provided wave spectral estimates of the swell from this storm. The greatest dominant wave periods measured were approximately 20-25 s, and significant wave heights measured ranged from 3 to 8 m. Wave spectra from an array of three nondirectional buoys are used to find the source of the long-period swell. Directional wave spectra from a heave-pitch-roll buoy are also used to make an independent estimate of the source of the swell. The ridge-line method, using time-frequency contour plots of wave spectral energy density, is used to determine the time of swell generation, which is used with the appropriate surface pressure analysis to infer the swell generation area. The diagnosed sources of the swell are also compared with nowcasts from the Global Spectral Ocean Wave Model of the Fleet Numerical Oceanography Center. A simple method of predicting the propagation of ocean swell, by applying a simple kinematic model of wave propagation to the estimated point and time source, is demonstrated.

  19. Compressed Symmetric Nested Arrays and Their Application for Direction-of-Arrival Estimation of Near-Field Sources.

    PubMed

    Li, Shuang; Xie, Dongfeng

    2016-11-17

    In this paper, a new sensor array geometry, called a compressed symmetric nested array (CSNA), is designed to increase the degrees of freedom in the near field. As its name suggests, a CSNA is constructed by removing selected elements from two identical nested arrays. Closed-form expressions are also presented for the sensor locations and the largest number of degrees of freedom obtainable as a function of the total number of sensors. Furthermore, a novel DOA estimation method is proposed by utilizing the CSNA in the near field. By employing this new array geometry, our method can identify more sources than sensors. Compared with other existing methods, the proposed method achieves higher resolution because of the increased array aperture. Simulation results are demonstrated to verify the effectiveness of the proposed method.
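For background, the ability to identify more sources than sensors comes from the difference coarray of the physical array. The sketch below builds a standard two-level nested array (after Pal and Vaidyanathan), not the CSNA variant itself, and enumerates its coarray lags; the sensor counts are illustrative.

```python
import numpy as np

def nested_array(n1, n2):
    # Two-level nested array: a dense ULA of n1 sensors at unit spacing
    # plus a sparse ULA of n2 sensors at spacing n1 + 1.
    inner = np.arange(1, n1 + 1)
    outer = (n1 + 1) * np.arange(1, n2 + 1)
    return np.concatenate([inner, outer])

def difference_coarray(pos):
    # All pairwise sensor-position differences (the virtual array).
    return sorted({int(a - b) for a in pos for b in pos})

pos = nested_array(3, 3)          # 6 physical sensors
lags = difference_coarray(pos)    # consecutive lags -11..11, i.e. 23 virtual sensors
```

The O(n1*n2) consecutive coarray lags from only n1+n2 physical sensors are what allow subspace methods on the coarray to resolve more sources than sensors; the CSNA paper pursues the same effect with fewer physical elements in the near-field setting.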

  20. How much do direct livestock emissions actually contribute to global warming?

    PubMed

    Reisinger, Andy; Clark, Harry

    2018-04-01

    Agriculture directly contributes about 10%-12% of current global anthropogenic greenhouse gas emissions, mostly from livestock. However, such percentage estimates are based on global warming potentials (GWPs), which do not measure the actual warming caused by emissions and ignore the fact that methane does not accumulate in the atmosphere in the same way as CO2. Here, we employ a simple carbon cycle-climate model, historical estimates and future projections of livestock emissions to infer the fraction of actual warming that is attributable to direct livestock non-CO2 emissions now and in future, and to CO2 from pasture conversions, without relying on GWPs. We find that direct livestock non-CO2 emissions caused about 19% of the total modelled warming of 0.81°C from all anthropogenic sources in 2010. CO2 from pasture conversions contributed at least another 0.03°C, bringing the warming directly attributable to livestock to 23% of the total warming in 2010. The significance of direct livestock emissions to future warming depends strongly on global actions to reduce emissions from other sectors. Direct non-CO2 livestock emissions would contribute only about 5% of the warming in 2100 if emissions from other sectors increase unabated, but could constitute as much as 18% (0.27°C) of the warming in 2100 if global CO2 emissions from other sectors are reduced to near or below zero by 2100, consistent with the goal of limiting warming to well below 2°C. These estimates constitute a lower bound since indirect emissions linked to livestock feed production and supply chains were not included. Our estimates demonstrate that expanding the mitigation potential and realizing substantial reductions of direct livestock non-CO2 emissions through demand and supply side measures can make an important contribution to achieve the stringent mitigation goals set out in the Paris Agreement, including by increasing the carbon budget consistent with the 1.5°C goal. 
© 2017 John Wiley & Sons Ltd.

  1. Migration of scattered teleseismic body waves

    NASA Astrophysics Data System (ADS)

    Bostock, M. G.; Rondenay, S.

    1999-06-01

    The retrieval of near-receiver mantle structure from scattered waves associated with teleseismic P and S and recorded on three-component, linear seismic arrays is considered in the context of inverse scattering theory. A Ray + Born formulation is proposed which admits linearization of the forward problem and economy in the computation of the elastic wave Green's function. The high-frequency approximation further simplifies the problem by enabling (1) the use of an earth-flattened, 1-D reference model, (2) a reduction in computations to 2-D through the assumption of 2.5-D experimental geometry, and (3) band-diagonalization of the Hessian matrix in the inverse formulation. The final expressions are in a form reminiscent of the classical diffraction stack of seismic migration. Implementation of this procedure demands an accurate estimate of the scattered wave contribution to the impulse response, and thus requires the removal of both the reference wavefield and the source time signature from the raw record sections. An approximate separation of direct and scattered waves is achieved through application of the inverse free-surface transfer operator to individual station records and a Karhunen-Loeve transform to the resulting record sections. This procedure takes the full displacement field to a wave vector space wherein the first principal component of the incident wave-type section is identified with the direct wave and is used as an estimate of the source time function. The scattered displacement field is reconstituted from the remaining principal components using the forward free-surface transfer operator, and may be reduced to a scattering impulse response upon deconvolution of the source estimate. An example employing pseudo-spectral synthetic seismograms demonstrates an application of the methodology.

  2. Effective Connectivity of Cortical Sensorimotor Networks During Finger Movement Tasks: A Simultaneous fNIRS, fMRI, EEG Study.

    PubMed

    Anwar, A R; Muthalib, M; Perrey, S; Galka, A; Granert, O; Wolff, S; Heute, U; Deuschl, G; Raethjen, J; Muthuraman, Muthuraman

    2016-09-01

    Recently, interest has been growing to understand the underlying dynamic directional relationship between simultaneously activated regions of the brain during motor task performance. Such directionality analysis (or effective connectivity analysis), based on non-invasive electrophysiological (electroencephalography-EEG) and hemodynamic (functional near infrared spectroscopy-fNIRS; and functional magnetic resonance imaging-fMRI) neuroimaging modalities can provide an estimate of the motor task-related information flow from one brain region to another. Since EEG, fNIRS and fMRI modalities achieve different spatial and temporal resolutions of motor-task related activation in the brain, the aim of this study was to determine the effective connectivity of cortico-cortical sensorimotor networks during finger movement tasks measured by each neuroimaging modality. Nine healthy subjects performed right hand finger movement tasks of different complexity (simple finger tapping-FT, simple finger sequence-SFS, and complex finger sequence-CFS). We focused our observations on three cortical regions of interest (ROIs), namely the contralateral sensorimotor cortex (SMC), the contralateral premotor cortex (PMC) and the contralateral dorsolateral prefrontal cortex (DLPFC). We estimated the effective connectivity between these ROIs using conditional Granger causality (GC) analysis determined from the time series signals measured by fMRI (blood oxygenation level-dependent-BOLD), fNIRS (oxygenated-O2Hb and deoxygenated-HHb hemoglobin), and EEG (scalp and source level analysis) neuroimaging modalities. The effective connectivity analysis showed significant bi-directional information flow between the SMC, PMC, and DLPFC as determined by the EEG (scalp and source), fMRI (BOLD) and fNIRS (O2Hb and HHb) modalities for all three motor tasks. However the source level EEG GC values were significantly greater than the other modalities. 
In addition, only the source-level EEG showed a significantly greater forward than backward information flow between the ROIs. This simultaneous fMRI, fNIRS and EEG study has shown, through independent GC analysis of the respective time series, that bi-directional effective connectivity occurs within a cortico-cortical sensorimotor network (SMC, PMC and DLPFC) during finger movement tasks.
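Granger causality of the kind applied here can be sketched as a comparison of restricted and full autoregressive models. The bivariate least-squares version below, run on a synthetic coupled pair, is a simplification of the conditional GC analysis in the study; the model order, coupling, and noise level are illustrative.

```python
import numpy as np

def granger_causality(x, y, p=2):
    # Log-ratio GC sketch: compare the residual sum of squares of a
    # restricted model (y's own past) against a full model that also
    # includes x's past. Positive values suggest x helps predict y.
    n = len(y)
    Y = y[p:]
    lags_y = [y[p - k:n - k] for k in range(1, p + 1)]
    lags_x = [x[p - k:n - k] for k in range(1, p + 1)]

    def rss(cols):
        X = np.column_stack([np.ones(n - p)] + cols)
        coef, *_ = np.linalg.lstsq(X, Y, rcond=None)
        r = Y - X @ coef
        return r @ r

    return float(np.log(rss(lags_y) / rss(lags_y + lags_x)))

# Synthetic pair in which x drives y with a one-sample lag.
rng = np.random.default_rng(1)
n = 2000
x = rng.standard_normal(n)
y = np.zeros(n)
for t in range(1, n):
    y[t] = 0.5 * y[t - 1] + 0.8 * x[t - 1] + 0.1 * rng.standard_normal()
gc_xy = granger_causality(x, y)   # clearly positive: x Granger-causes y
gc_yx = granger_causality(y, x)   # near zero: y does not help predict x
```

Conditional GC, as used in the study, additionally partials out the past of the third region before comparing the two models.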

  3. The social cost of rheumatoid arthritis in Italy: the results of an estimation exercise.

    PubMed

    Turchetti, G; Bellelli, S; Mosca, M

    2014-03-14

    The objective of this study is to estimate the mean annual social cost per adult person and the total social cost of rheumatoid arthritis (RA) in Italy. A literature review was performed by searching primary economic studies on adults in order to collect cost data on RA in Italy over the last decade. The review results were merged with data from institutional sources to estimate - following the methodological steps of cost-of-illness analysis - the social cost of RA in Italy. The mean annual social cost of RA was €13,595 per adult patient in Italy. Affecting 259,795 persons, RA generates a social cost of €3.5 billion in Italy. Non-medical direct costs and indirect costs represent the main cost items (48% and 31%, respectively) of the total social cost of RA in Italy. Based on these results, it is evident that assessing the economic burden of RA solely on the basis of direct medical costs gives a limited view of the phenomenon.
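As a quick check of the abstract's figures, the total follows directly from the per-patient cost and the number of persons affected:

```python
mean_annual_cost_eur = 13_595   # mean social cost per adult patient (abstract)
patients = 259_795              # adults with RA in Italy (abstract)
total_eur = mean_annual_cost_eur * patients
print(f"€{total_eur / 1e9:.2f} billion")  # ≈ €3.5 billion, as reported
```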

  4. Finite Element A Posteriori Error Estimation for Heat Conduction. Degree awarded by George Washington Univ.

    NASA Technical Reports Server (NTRS)

    Lang, Christapher G.; Bey, Kim S. (Technical Monitor)

    2002-01-01

    This research investigates residual-based a posteriori error estimates for finite element approximations of heat conduction in single-layer and multi-layered materials. The finite element approximation, based upon hierarchical modeling combined with p-version finite elements, is described with specific application to a two-dimensional, steady-state heat-conduction problem. Element error indicators are determined by solving an element equation for the error with the element residual as a source, and a global error estimate in the energy norm is computed by collecting the element contributions. Numerical results on the performance of the error estimate are presented through comparisons with the actual error. Two methods are discussed and compared for approximating the element boundary flux. The equilibrated flux method provides more accurate results for estimating the error than the average flux method. The error estimation is applied to multi-layered materials with a modification to the equilibrated flux method to approximate the discontinuous flux along a boundary at the material interfaces. A directional error indicator is developed which distinguishes between the hierarchical modeling error and the finite element error. Numerical results are presented for single-layered materials which show that the directional indicators accurately determine which contribution to the total error dominates.
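The element-residual idea can be sketched in a 1D analogue (linear elements for -u'' = 1, not the paper's p-version, multi-layer setting; the estimator constant is left at 1). The exact energy-norm error and the summed element residual indicators shrink together, so the effectivity ratio stays constant under refinement:

```python
import numpy as np

def energy_error_and_estimator(n):
    """Linear FE solution of -u'' = 1 on (0,1), u(0) = u(1) = 0, on a
    uniform mesh of n elements. Returns the exact energy-norm error and
    a residual-based estimator (1D analogue of element indicators)."""
    h = 1.0 / n
    # tridiagonal stiffness matrix and load vector
    A = (2 * np.eye(n - 1) - np.eye(n - 1, k=1) - np.eye(n - 1, k=-1)) / h
    b = h * np.ones(n - 1)
    u = np.concatenate([[0.0], np.linalg.solve(A, b), [0.0]])
    # exact solution u(x) = x(1-x)/2; integrate (u' - u_h')^2 elementwise
    err2 = 0.0
    for k in range(n):
        slope = (u[k + 1] - u[k]) / h                # u_h' on element k
        for g in (-1 / np.sqrt(3), 1 / np.sqrt(3)):  # 2-point Gauss rule
            x = (k + 0.5) * h + 0.5 * h * g
            err2 += 0.5 * h * ((1 - 2 * x) / 2 - slope) ** 2
    # element residual indicator eta_K^2 = h_K^2 ||f + u_h''||_K^2
    # (u_h'' = 0 for linear elements, f = 1), summed over the mesh
    eta = np.sqrt(n * h ** 2 * h)
    return np.sqrt(err2), eta

err, eta = energy_error_and_estimator(8)
print(err, eta, eta / err)   # effectivity eta/err is mesh-independent
```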

  5. Partial differential equation-based localization of a monopole source from a circular array.

    PubMed

    Ando, Shigeru; Nara, Takaaki; Levy, Tsukassa

    2013-10-01

    Wave source localization from a sensor array has long been one of the most active research topics in both theory and application. In this paper, an explicit, time-domain inversion method for the direction and distance of a monopole source from a circular array is proposed. The approach is based on a mathematical technique, the weighted integral method, for signal/source parameter estimation. It begins with an exact form of the source-constraint partial differential equation that describes the unilateral propagation of wide-band waves from a single source, and leads to exact algebraic equations that include circular Fourier coefficients (phase mode measurements) as their coefficients. From them, nearly closed-form, single-shot and multishot algorithms are obtained that are suitable for use with band-pass/differential filter banks. Numerical evaluation and several experimental results obtained using a 16-element circular microphone array are presented to verify the validity of the proposed method.
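The phase-mode measurements mentioned above can be sketched as follows. This toy recovers the source direction from the -1st circular Fourier coefficient of a narrowband far-field snapshot via the Jacobi-Anger expansion; kr and N are illustrative, and this is not the paper's time-domain weighted-integral algorithm:

```python
import numpy as np

N = 16                                     # microphones on the circle
phi = 2 * np.pi * np.arange(N) / N         # their angular positions
kr = 1.0                                   # wavenumber times array radius
theta = np.radians(40.0)                   # true source direction
# narrowband far-field plane-wave snapshot at the array
p = np.exp(1j * kr * np.cos(phi - theta))
# -1st circular Fourier coefficient (phase mode); by the Jacobi-Anger
# expansion it equals j * J_1(kr) * exp(j*theta), and J_1(1.0) > 0
C = p @ np.exp(1j * phi) / N
print(np.degrees(np.angle(C / 1j)))        # ≈ 40.0
```

Distance (curvature) information requires higher phase modes, which is where the paper's exact algebraic equations come in.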

  6. Modeling measured glottal volume velocity waveforms.

    PubMed

    Verneuil, Andrew; Berry, David A; Kreiman, Jody; Gerratt, Bruce R; Ye, Ming; Berke, Gerald S

    2003-02-01

    The source-filter theory of speech production describes a glottal energy source (volume velocity waveform) that is filtered by the vocal tract and radiates from the mouth as phonation. The characteristics of the volume velocity waveform, the source that drives phonation, have been estimated, but never directly measured at the glottis. To accomplish this measurement, constant temperature anemometer probes were used in an in vivo canine constant pressure model of phonation. A 3-probe array was positioned supraglottically, and an endoscopic camera was positioned subglottically. Simultaneous recordings of airflow velocity (using anemometry) and glottal area (using stroboscopy) were made in 3 animals. Glottal airflow velocities and areas were combined to produce direct measurements of glottal volume velocity waveforms. The anterior and middle parts of the glottis contributed significantly to the volume velocity waveform, with less contribution from the posterior part of the glottis. The measured volume velocity waveforms were successfully fitted to a well-known laryngeal airflow model. A noninvasive measured volume velocity waveform holds promise for future clinical use.
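The volume velocity itself is the product of the two measured quantities; a minimal numeric sketch with invented samples (units chosen so that mm² × m/s = cm³/s):

```python
import numpy as np

# hypothetical synchronized samples over one glottal cycle
velocity_m_s = np.array([0.0, 2.0, 6.0, 9.0, 5.0, 1.0])     # anemometry
area_mm2     = np.array([0.0, 4.0, 12.0, 18.0, 10.0, 2.0])  # stroboscopy
# volume velocity U(t) = v(t) * A(t); 1 mm^2 * 1 m/s = 1 cm^3/s
U_cm3_s = velocity_m_s * area_mm2
print(U_cm3_s)   # peaks mid-cycle when both flow velocity and area peak
```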

  7. Interferometric Laser Scanner for Direction Determination

    PubMed Central

    Kaloshin, Gennady; Lukin, Igor

    2016-01-01

    In this paper, we explore the potential capabilities of a new laser scanning-based method for direction determination. The method for fully coherent beams is extended to the case when the interference pattern is produced in the turbulent atmosphere by two partially coherent sources. The performed theoretical analysis identified the conditions under which a stable pattern may form on extended paths of 0.5–10 km in length. We describe a method for selecting laser scanner parameters, ensuring the necessary operability range in the atmosphere for any possible turbulence characteristics. The method is based on analysis of the mean intensity of the interference pattern formed by two partially coherent sources of optical radiation. Visibility of the interference pattern is estimated as a function of propagation path length, the structure parameter of atmospheric turbulence, and the spacing of the radiation sources producing the interference pattern. It is shown that, when atmospheric turbulence is moderately strong, the contrast of the interference pattern of the laser scanner may ensure its applicability at ranges up to 10 km. PMID:26805841

  8. Interferometric Laser Scanner for Direction Determination.

    PubMed

    Kaloshin, Gennady; Lukin, Igor

    2016-01-21

    In this paper, we explore the potential capabilities of a new laser scanning-based method for direction determination. The method for fully coherent beams is extended to the case when the interference pattern is produced in the turbulent atmosphere by two partially coherent sources. The performed theoretical analysis identified the conditions under which a stable pattern may form on extended paths of 0.5-10 km in length. We describe a method for selecting laser scanner parameters, ensuring the necessary operability range in the atmosphere for any possible turbulence characteristics. The method is based on analysis of the mean intensity of the interference pattern formed by two partially coherent sources of optical radiation. Visibility of the interference pattern is estimated as a function of propagation path length, the structure parameter of atmospheric turbulence, and the spacing of the radiation sources producing the interference pattern. It is shown that, when atmospheric turbulence is moderately strong, the contrast of the interference pattern of the laser scanner may ensure its applicability at ranges up to 10 km.
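The visibility (contrast) figure of merit used in both records above can be sketched numerically. For two equal-intensity sources, fringe visibility V = (Imax - Imin)/(Imax + Imin) equals the degree of mutual coherence |γ|, which here stands in for the turbulence-dependent loss of contrast (values illustrative):

```python
import numpy as np

x = np.linspace(0, 4 * np.pi, 100001)       # phase across the pattern
for gamma in (1.0, 0.6, 0.2):               # degree of mutual coherence |γ|
    I = 1 + gamma * np.cos(x)               # two equal partially coherent sources
    V = (I.max() - I.min()) / (I.max() + I.min())
    print(round(V, 3))                      # visibility equals |γ|
```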

  9. Mark 3 VLBI system: Tropospheric calibration subsystems

    NASA Technical Reports Server (NTRS)

    Resch, G. M.

    1980-01-01

    Tropospheric delay calibrations are implemented in the Mark 3 system with two subsystems. Estimates of the dry component of tropospheric delay are provided by accurate barometric data from a subsystem of surface meteorological sensors (SMS). An estimate of the wet component of tropospheric delay is provided by a water vapor radiometer (WVR). Both subsystems interface directly to the ASCII Transceiver bus of the Mark 3 system and are operated by the control computer. Seven WVRs under construction are designed to operate in proximity to a radio telescope and can be commanded to point along the line of sight to a radio source. They should provide a delay estimate that is accurate to the ±2 cm level.

  10. Conservative classical and quantum resolution limits for incoherent imaging

    NASA Astrophysics Data System (ADS)

    Tsang, Mankei

    2018-06-01

    I propose classical and quantum limits to the statistical resolution of two incoherent optical point sources from the perspective of minimax parameter estimation. Unlike earlier results based on the Cramér-Rao bound (CRB), the limits proposed here, based on the worst-case error criterion and a Bayesian version of the CRB, are valid for any biased or unbiased estimator and obey photon-number scalings that are consistent with the behaviours of actual estimators. These results prove that, from the minimax perspective, the spatial-mode demultiplexing measurement scheme recently proposed by Tsang, Nair, and Lu [Phys. Rev. X 6, 031033 (2016)] remains superior to direct imaging for sufficiently high photon numbers.

  11. Applications of Bayesian spectrum representation in acoustics

    NASA Astrophysics Data System (ADS)

    Botts, Jonathan M.

    This dissertation utilizes a Bayesian inference framework to enhance the solution of inverse problems where the forward model maps to acoustic spectra. A Bayesian solution to filter design inverts acoustic spectra to the pole-zero locations of a discrete-time filter model. Spatial sound field analysis with a spherical microphone array is a data analysis problem that requires inversion of spatio-temporal spectra to directions of arrival. As with many inverse problems, a probabilistic analysis results in richer solutions than can be achieved with ad-hoc methods. In the filter design problem, the Bayesian inversion results in globally optimal coefficient estimates as well as an estimate of the most concise filter capable of representing the given spectrum, within a single framework. This approach is demonstrated on synthetic spectra, head-related transfer function spectra, and measured acoustic reflection spectra. The Bayesian model-based analysis of spatial room impulse responses is presented as an analogous problem with an equally rich solution. The model selection mechanism provides an estimate of the number of arrivals, which is necessary to properly infer the directions of simultaneous arrivals. Although spectrum inversion problems are fairly ubiquitous, the scope of this dissertation has been limited to these two and derivative problems. The Bayesian approach to filter design is demonstrated on an artificial spectrum to illustrate the model comparison mechanism and then on measured head-related transfer functions to show the potential range of application. Coupled with sampling methods, the Bayesian approach is shown to outperform least-squares filter design methods commonly used in commercial software, confirming the need for a global search of the parameter space.
The resulting designs are shown to be comparable to those that result from global optimization methods, but the Bayesian approach has the added advantage of a filter length estimate within the same unified framework. The application to reflection data is useful for representing frequency-dependent impedance boundaries in finite difference acoustic simulations. Furthermore, since the filter transfer function is a parametric model, it can be modified to incorporate arbitrary frequency weighting and account for the band-limited nature of measured reflection spectra. Finally, the model is modified to compensate for dispersive error in the finite difference simulation, from the filter design process. Stemming from the filter boundary problem, the implementation of pressure sources in finite difference simulation is addressed in order to assure that schemes properly converge. A class of parameterized source functions is proposed and shown to offer straightforward control of residual error in the simulation. Guided by the notion that the solution to be approximated affects the approximation error, sources are designed which reduce residual dispersive error to the size of round-off errors. The early part of a room impulse response can be characterized by a series of isolated plane waves. Measured with an array of microphones, plane waves map to a directional response of the array or spatial intensity map. Probabilistic inversion of this response results in estimates of the number and directions of image source arrivals. The model-based inversion is shown to avoid ambiguities associated with peak-finding or inspection of the spatial intensity map. For this problem, determining the number of arrivals in a given frame is critical for properly inferring the state of the sound field. This analysis is effectively compression of the spatial room response, which is useful for analysis or encoding of the spatial sound field. 
Parametric, model-based formulations of these problems enhance the solution in all cases, and a Bayesian interpretation provides a principled approach to model comparison and parameter estimation.

  12. Optimized spectroscopic scheme for enhanced precision CO measurements with applications to urban source attribution

    NASA Astrophysics Data System (ADS)

    Nottrott, A.; Hoffnagle, J.; Farinas, A.; Rella, C.

    2014-12-01

    Carbon monoxide (CO) is an urban pollutant, generated by internal combustion engines, that contributes to the formation of ground-level ozone (smog). CO is also an excellent tracer for emissions from mobile combustion sources. In this work we present an optimized spectroscopic sampling scheme that enables enhanced-precision CO measurements. The scheme was implemented on the Picarro G2401 Cavity Ring-Down Spectroscopy (CRDS) analyzer, which measures CO2, CO, CH4 and H2O at 0.2 Hz. The optimized scheme improved the raw precision of CO measurements by 40%, from 5 ppb to 3 ppb. Correlations of measured CO2, CO, CH4 and H2O from an urban tower were partitioned by wind direction and combined with a concentration footprint model for source attribution. The application of a concentration footprint for source attribution has several advantages. The upwind extent of the concentration footprint for a given sensor is much larger than that of the flux footprint. Measurements of mean concentration at the sensor location can be used to estimate source strength from a concentration footprint, while measurements of the vertical concentration flux are necessary to determine source strength from the flux footprint. Direct measurement of vertical concentration flux requires high-frequency temporal sampling and increases the cost and complexity of the measurement system.
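The quoted precision gain is straightforward to verify from the two raw-precision figures:

```python
raw_ppb, optimized_ppb = 5.0, 3.0
improvement = (raw_ppb - optimized_ppb) / raw_ppb
print(f"{improvement:.0%}")   # 40%
```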

  13. Direct Position Determination of Multiple Non-Circular Sources with a Moving Coprime Array.

    PubMed

    Zhang, Yankui; Ba, Bin; Wang, Daming; Geng, Wei; Xu, Haiyun

    2018-05-08

    Direct position determination (DPD) is currently a hot topic in wireless localization research as it is more accurate than traditional two-step positioning. However, current DPD algorithms are all based on uniform arrays, which have insufficient degrees of freedom and limited estimation accuracy. To improve the DPD accuracy, this paper introduces a coprime array to the position model of multiple non-circular sources with a moving array. To maximize the advantages of this coprime array, we reconstruct the covariance matrix by vectorization, apply a spatial smoothing technique, and converge the subspace data from each measuring position to establish the cost function. Finally, we obtain the position coordinates of the multiple non-circular sources. The complexity of the proposed method is computed and compared with that of other methods, and the Cramér-Rao lower bound of DPD for multiple sources with a moving coprime array is derived. Theoretical analysis and simulation results show that the proposed algorithm is not only applicable to circular sources, but can also improve the positioning accuracy of non-circular sources. Compared with existing two-step positioning algorithms and DPD algorithms based on uniform linear arrays, the proposed technique offers a significant improvement in positioning accuracy with a slight increase in complexity.
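Why vectorizing the covariance of a coprime array raises the degrees of freedom can be seen from the difference coarray: the vectorized covariance supplies one observation per distinct sensor-pair lag, and a coprime geometry yields many more distinct lags than physical sensors. A minimal coprime-style construction (simplified relative to the paper's geometry):

```python
import numpy as np

M, N = 3, 5   # coprime pair
# one subarray at multiples of M, the other at multiples of N
# (unit spacing taken as half a wavelength)
pos = np.unique(np.concatenate([M * np.arange(N), N * np.arange(M)]))
# difference coarray: lags available after vectorizing the covariance
lags = np.unique((pos[:, None] - pos[None, :]).ravel())
print(len(pos), len(lags))   # 7 physical sensors, 21 distinct lags
```

A 7-element uniform linear array would offer only 13 distinct lags, which is the accuracy advantage the abstract alludes to.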

  14. A new root-based direction-finding algorithm

    NASA Astrophysics Data System (ADS)

    Wasylkiwskyj, Wasyl; Kopriva, Ivica; Doroslovački, Miloš; Zaghloul, Amir I.

    2007-04-01

    Polynomial rooting direction-finding (DF) algorithms are a computationally efficient alternative to search-based DF algorithms and are particularly suitable for uniform linear arrays of physically identical elements, provided that mutual interaction among the array elements can be either neglected or compensated for. A popular algorithm in such situations is Root Multiple Signal Classification (Root MUSIC (RM)), wherein the estimation of the directions of arrival (DOA) requires the computation of the roots of a (2N - 2)-order polynomial, where N is the number of array elements. The DOA are estimated from the L pairs of roots closest to the unit circle, where L is the number of sources. In this paper we derive a modified root polynomial (MRP) algorithm requiring the calculation of only L roots in order to estimate the L DOA. We evaluate the performance of the MRP algorithm numerically and show that it is as accurate as the RM algorithm but with a significantly simpler algebraic structure. In order to demonstrate that the theoretically predicted performance can be achieved in an experimental setting, a decoupled array is emulated in hardware using phase shifters. The results are in excellent agreement with theory.
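For context, the classical Root-MUSIC procedure the MRP algorithm improves upon can be sketched as follows (a standard textbook implementation, not the authors' code; the simulated scenario is invented). It roots the full (2N - 2)-order polynomial built from the noise subspace, whereas MRP needs only the L signal roots:

```python
import numpy as np

def root_music(X, L, d=0.5):
    """Classical Root-MUSIC for a ULA with spacing d (in wavelengths).

    X: (N, snapshots) complex data; L: number of sources.
    Returns DOA estimates in degrees.
    """
    N = X.shape[0]
    R = X @ X.conj().T / X.shape[1]              # sample covariance
    _, V = np.linalg.eigh(R)                     # eigenvalues ascending
    En = V[:, :N - L]                            # noise subspace
    C = En @ En.conj().T
    # polynomial coefficients are the diagonal sums of C (lags N-1 .. -(N-1))
    coeffs = np.array([np.trace(C, offset=k) for k in range(N - 1, -N, -1)])
    roots = np.roots(coeffs)
    inside = roots[np.abs(roots) < 1]            # one of each mirrored pair
    sig = inside[np.argsort(1 - np.abs(inside))[:L]]   # closest to the circle
    return np.degrees(np.arcsin(np.angle(sig) / (2 * np.pi * d)))

# one narrowband source at 20 degrees, 8-element half-wavelength ULA
rng = np.random.default_rng(1)
N, snaps, theta = 8, 500, np.radians(20.0)
a = np.exp(1j * np.pi * np.arange(N) * np.sin(theta))   # steering vector
s = rng.standard_normal(snaps) + 1j * rng.standard_normal(snaps)
noise = 0.01 * (rng.standard_normal((N, snaps))
                + 1j * rng.standard_normal((N, snaps)))
X = np.outer(a, s) + noise
print(root_music(X, L=1))   # close to [20.]
```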

  15. Dose estimation to eye lens of industrial gamma radiography workers using the Monte Carlo method.

    PubMed

    de Lima, Alexandre Roza; Hunt, John Graham; Da Silva, Francisco Cesar Augusto

    2017-12-01

    The ICRP Statement on Tissue Reactions (2011), based on epidemiological evidence, recommended a reduction of the eye lens equivalent dose limit from 150 to 20 mSv per year. This paper presents estimates of the doses received by industrial gamma radiography workers during planned or accidental exposures: the equivalent dose to the eye lens, Hp(10), and the effective dose. A Brazilian Visual Monte Carlo Dose Calculation program was used and two relevant scenarios were considered. For the planned exposure situation, twelve radiographic exposures per day for 250 days per year, which leads to a direct exposure of 10 h per year, were considered. The simulation was carried out using a ¹⁹²Ir source with 1.0 TBq of activity; a source/operator distance between 5 and 10 m and placed at heights of 0.02 m, 1 m and 2 m, and an exposure time of 12 s. Using a standard height of 1 m, the eye lens doses were estimated as being between 16.3 and 60.3 mGy per year. For the accidental exposure situation, the same radionuclide and activity were used, but in this case the doses were calculated with and without a collimator. The heights above ground considered were 1.0 m, 1.5 m and 2.0 m; the source/operator distance was 40 cm, and the exposure time 74 s. The eye lens doses at 1.5 m were 12.3 and 0.28 mGy without and with a collimator, respectively. The conclusions were that: (1) the estimated doses show that the 20 mSv annual limit for eye lens equivalent dose can directly impact industrial gamma radiography activities, mainly in industries with a high number of radiographic exposures per year; (2) the risk of lens opacity has a low probability for a single accident, but depending on the number of accidental exposures and the dose levels found in planned exposures, the threshold dose can easily be exceeded during the professional career of an industrial radiography operator; and (3) in a first approximation, Hp(10) can be used to estimate the equivalent dose to the eye lens.
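The stated 10 h of direct exposure per year follows from the exposure schedule given in the abstract:

```python
exposures_per_day = 12
work_days_per_year = 250
seconds_per_exposure = 12
total_seconds = exposures_per_day * work_days_per_year * seconds_per_exposure
print(total_seconds / 3600)   # 10.0 hours of direct exposure per year
```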

  16. U.S. Navy Global and Regional Wave Modeling

    DTIC Science & Technology

    2014-09-01

    for parallel processing (Wittmann, 2002), the open source policy of WW3, and WW3's accurate ... which determine the fetch and duration available for generation of wave energy, and (2) the direction and distance of

  17. The Impact of Importation of Grant and Research Money on a State Economy

    ERIC Educational Resources Information Center

    Lillis, Charles M.; Tonkovich, David

    1976-01-01

    The central issue is that to the extent these funds are from sources external to the state, they represent a pure economic stimulus to both the university and the state and a direct savings for the state's higher education commitment. The estimated magnitude of this stimulus in Washington is examined by using multipliers derived from the state…

  18. Forest fuel characterization using direct sampling in forest plantations

    Treesearch

    Eva Reyna Esmeralda Díaz García; Marco Aurelio González Tagle; Javier Jiménez Pérez; Eduardo Javier Treviño Garza; Diana Yemilet Ávila Flores

    2013-01-01

    One of the essential elements for a fire to occur is the flammable material. This is defined as the total biomass that has the ability to ignite and burn when exposed to a heat source. Fuel characterization in Mexican forest ecosystems is very scarce. However, this information is very important for estimating flammability and forest fire risk, fire behavior,...

  19. Analysis of Lithospheric Stresses Using Satellite Gravimetry: Hypotheses and Applications to North Atlantic

    NASA Astrophysics Data System (ADS)

    Minakov, A.; Medvedev, S.

    2017-12-01

    Analysis of lithospheric stresses is necessary to gain understanding of the forces that drive plate tectonics and intraplate deformations and of the structure and strength of the lithosphere. A major source of lithospheric stresses is believed to lie in variations of surface topography and lithospheric density. The traditional approach to stress estimation is based on direct calculations of the Gravitational Potential Energy (GPE), the depth-integrated density moment of the lithospheric column. GPE is highly sensitive to density structure, which, however, is often poorly constrained. The density structure of the lithosphere may be refined using methods of gravity modeling. However, the resulting density models suffer from the non-uniqueness of the inverse problem. An alternative approach is to estimate (depth-integrated) lithospheric stresses directly from satellite gravimetry data. Satellite gravity gradient measurements by the ESA GOCE mission provide a wealth of data for mapping lithospheric stresses if a link between the data and stresses or GPE can be established theoretically. The non-uniqueness of interpretation of the sources of the gravity signal holds in this case as well. Therefore, the data analysis was tested for the North Atlantic region, where reliable additional constraints are supplied by both controlled-source and earthquake seismology. The study involves comparison of three methods of stress modeling: (1) the traditional modeling approach using a thin sheet approximation; (2) the filtered geoid approach; and (3) the direct utilization of the gravity gradient tensor. Whereas approaches (1)-(2) calculate GPE and employ computationally expensive finite element mechanical modeling to calculate stresses, approach (3) uses a much simpler numerical treatment but requires simplifying assumptions that are yet to be tested. The orientations of principal stresses and the stress magnitudes modeled by each of the three methods are compared with the World Stress Map.

  20. Resolving multiple propagation paths in time of flight range cameras using direct and global separation methods

    NASA Astrophysics Data System (ADS)

    Whyte, Refael; Streeter, Lee; Cree, Michael J.; Dorrington, Adrian A.

    2015-11-01

    Time of flight (ToF) range cameras illuminate the scene with an amplitude-modulated continuous wave light source and measure the returning modulation envelopes: phase and amplitude. The phase change of the modulation envelope encodes the distance travelled. This technology suffers from measurement errors caused by multiple propagation paths from the light source to the receiving pixel. The multiple paths can be represented as the summation of a direct return, which is the return from the shortest path length, and a global return, which includes all other returns. We develop the use of a sinusoidal pattern from which a closed-form solution for the direct and global returns can be computed in nine frames, under the constraint that the global return varies at a spatially lower frequency than the illuminated pattern. In a demonstration on a scene constructed to have strong multipath interference, we find the direct return is not significantly different from the ground truth in 33/136 pixels tested, whereas for the full-field measurement it is significantly different for every pixel tested. The variance in the estimated direct phase and amplitude increases by a factor of eight compared with the standard time of flight range camera technique.
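The direct-plus-global decomposition the authors build on can be illustrated in plain intensity terms with two complementary high-frequency patterns (the Nayar-style separation; the paper extends the idea to complex ToF measurements with a sinusoidal pattern and a nine-frame closed form, and all values here are toys):

```python
import numpy as np

# per-pixel ground-truth direct and global components (arbitrary units)
direct = np.array([5.0, 2.0, 0.0, 3.0])
glob = np.array([1.0, 4.0, 3.0, 2.0])

# two complementary binary high-frequency patterns (50% duty): a lit pixel
# receives its direct return plus half the global light; an unlit pixel
# receives only half the global light
frames = []
for shift in (0, 1):
    lit = (np.arange(direct.size) + shift) % 2 == 0
    frames.append(np.where(lit, direct + glob / 2, glob / 2))
I = np.stack(frames)

est_direct = I.max(axis=0) - I.min(axis=0)   # max - min recovers direct
est_global = 2 * I.min(axis=0)               # 2 * min recovers global
print(est_direct, est_global)
```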

  1. Direct solar pumping of semiconductor lasers: A feasibility study

    NASA Technical Reports Server (NTRS)

    Anderson, Neal G.

    1991-01-01

    The primary goals of the feasibility study are the following: (1) to provide a preliminary assessment of the feasibility of pumping semiconductor lasers in space with directly focused sunlight; and (2) to identify semiconductor laser structures expected to operate at the lowest possible focusing intensities. It should be emphasized that the structures under consideration would provide direct optical-to-optical conversion of sunlight into laser light in a single crystal, in contrast to a configuration consisting of a solar cell or battery electrically pumping a current injection laser. With external modulation, such lasers may prove to be efficient sources for intersatellite communications. We proposed to develop a theoretical model of semiconductor quantum-well lasers photopumped by a broadband source, test it against existing experimental data where possible, and apply it to estimating solar pumping requirements and identifying optimum structures for operation at low pump intensities. This report outlines our progress toward these goals. Discussion of several technical details is left to the attached summary abstract.

  2. The Acceleration of the Barycenter of Solar System Obtained from VLBI Observations and Its Impact on the ICRS

    NASA Astrophysics Data System (ADS)

    Xu, M. H.

    2016-03-01

    Since 1998 January 1, instead of the traditional stellar reference system, the International Celestial Reference System (ICRS) has been realized by an ensemble of extragalactic radio sources that are located hundreds of millions of light-years away (if we accept their cosmological distances), so that the reference frame realized by extragalactic radio sources is assumed to be space-fixed. The acceleration of the barycenter of the solar system (SSB), which is the origin of the ICRS, gives rise to a systematic variation in the directions of the observed radio sources. This phenomenon is called the secular aberration drift. As a result, the extragalactic reference frame fixed to space provides a reference standard for detecting the secular aberration drift, and the acceleration of the barycenter with respect to space can be determined from the observations of extragalactic radio sources. In this thesis, we aim to determine the acceleration of the SSB from astrometric and geodetic observations obtained by Very Long Baseline Interferometry (VLBI), a technique that uses telescopes distributed globally on the Earth to observe a radio source simultaneously, with the capacity of angular positioning for compact radio sources at the 10-milliarcsecond level. The method of the global solution, which allows the acceleration vector to be estimated as a global parameter in the data analysis, is developed. Through the formal error given by the solution, this method shows directly the capability of VLBI observations to constrain the acceleration of the SSB, and demonstrates the significance level of the result. In the next step, the impact of the acceleration on the ICRS is studied in order to obtain the correction of the celestial reference frame (CRF) orientation. This thesis begins with the basic background and the general frame of this work.
A brief review of the realization of the CRF based on the kinematical and the dynamical methods is presented in Chapter 2, along with the definition of the CRF and its relationship with the inertial reference frame. Chapter 3 is divided into two parts. The first part describes various effects that modify the geometric direction of an object, especially the parallax, the aberration, and the proper motion. Then the derivative model and the principle of determination of the acceleration are introduced in the second part. The VLBI data analysis method, including VLBI data reduction (solving the ambiguity, identifying the clock break, and determining the ionospheric effect), the theoretical delay model, parameterization, and datum definition, is discussed in detail in Chapter 4. The estimation of the acceleration from more than 30 years of VLBI observations and the results are then described in Chapter 5. The evaluation and robustness checks of our results by different solutions, and the comparison to the result from another research group, are performed. The error sources for the estimation of the acceleration, such as the secular parallax caused by the velocity of the barycenter in space, are quantitatively studied by simulation and data analysis in Chapter 6. The two main impacts of the acceleration on the CRF, the apparent proper motion with a magnitude at the μas/yr level and the global rotation in the CRF due to the non-uniform distribution of radio sources on the sky, are discussed in Chapter 7. The definition and the realization of the epoch CRF are presented as well. The future work concerning the explanation of the estimated acceleration and potential research on several main problems in modern astrometry are discussed in the last chapter.
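The order of magnitude of the secular aberration drift can be sanity-checked from round numbers for the Sun's galactocentric orbit; the inputs below are illustrative assumptions, not the thesis's estimate:

```python
import math

# assumed round values for the solar galactocentric orbit
v = 236e3                  # m/s, speed of the SSB about the Galactic centre
R = 8.2 * 3.086e19         # m, galactocentric distance (8.2 kpc)
c = 2.998e8                # m/s, speed of light
year = 3.156e7             # s

a = v ** 2 / R                         # centripetal acceleration of the SSB
drift_rad_per_yr = a / c * year        # aberration drift, radians per year
uas_per_yr = drift_rad_per_yr * math.degrees(1) * 3600 * 1e6
print(round(uas_per_yr, 1))            # a few microarcseconds per year
```

This is the μas/yr-level apparent proper motion pattern discussed in Chapter 7.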

  3. Dark Energy Survey Year 1 Results: redshift distributions of the weak-lensing source galaxies

    NASA Astrophysics Data System (ADS)

    Hoyle, B.; Gruen, D.; Bernstein, G. M.; Rau, M. M.; De Vicente, J.; Hartley, W. G.; Gaztanaga, E.; DeRose, J.; Troxel, M. A.; Davis, C.; Alarcon, A.; MacCrann, N.; Prat, J.; Sánchez, C.; Sheldon, E.; Wechsler, R. H.; Asorey, J.; Becker, M. R.; Bonnett, C.; Carnero Rosell, A.; Carollo, D.; Carrasco Kind, M.; Castander, F. J.; Cawthon, R.; Chang, C.; Childress, M.; Davis, T. M.; Drlica-Wagner, A.; Gatti, M.; Glazebrook, K.; Gschwend, J.; Hinton, S. R.; Hoormann, J. K.; Kim, A. G.; King, A.; Kuehn, K.; Lewis, G.; Lidman, C.; Lin, H.; Macaulay, E.; Maia, M. A. G.; Martini, P.; Mudd, D.; Möller, A.; Nichol, R. C.; Ogando, R. L. C.; Rollins, R. P.; Roodman, A.; Ross, A. J.; Rozo, E.; Rykoff, E. S.; Samuroff, S.; Sevilla-Noarbe, I.; Sharp, R.; Sommer, N. E.; Tucker, B. E.; Uddin, S. A.; Varga, T. N.; Vielzeuf, P.; Yuan, F.; Zhang, B.; Abbott, T. M. C.; Abdalla, F. B.; Allam, S.; Annis, J.; Bechtol, K.; Benoit-Lévy, A.; Bertin, E.; Brooks, D.; Buckley-Geer, E.; Burke, D. L.; Busha, M. T.; Capozzi, D.; Carretero, J.; Crocce, M.; D'Andrea, C. B.; da Costa, L. N.; DePoy, D. L.; Desai, S.; Diehl, H. T.; Doel, P.; Eifler, T. F.; Estrada, J.; Evrard, A. E.; Fernandez, E.; Flaugher, B.; Fosalba, P.; Frieman, J.; García-Bellido, J.; Gerdes, D. W.; Giannantonio, T.; Goldstein, D. A.; Gruendl, R. A.; Gutierrez, G.; Honscheid, K.; James, D. J.; Jarvis, M.; Jeltema, T.; Johnson, M. W. G.; Johnson, M. D.; Kirk, D.; Krause, E.; Kuhlmann, S.; Kuropatkin, N.; Lahav, O.; Li, T. S.; Lima, M.; March, M.; Marshall, J. L.; Melchior, P.; Menanteau, F.; Miquel, R.; Nord, B.; O'Neill, C. R.; Plazas, A. A.; Romer, A. K.; Sako, M.; Sanchez, E.; Santiago, B.; Scarpine, V.; Schindler, R.; Schubnell, M.; Smith, M.; Smith, R. C.; Soares-Santos, M.; Sobreira, F.; Suchyta, E.; Swanson, M. E. C.; Tarle, G.; Thomas, D.; Tucker, D. L.; Vikram, V.; Walker, A. R.; Weller, J.; Wester, W.; Wolf, R. C.; Yanny, B.; Zuntz, J.

    2018-07-01

    We describe the derivation and validation of redshift distribution estimates and their uncertainties for the populations of galaxies used as weak-lensing sources in the Dark Energy Survey (DES) Year 1 cosmological analyses. The Bayesian Photometric Redshift (BPZ) code is used to assign galaxies to four redshift bins between z ≈ 0.2 and ≈1.3, and to produce initial estimates of the lensing-weighted redshift distributions n^i_PZ(z)∝ dn^i/dz for members of bin i. Accurate determination of cosmological parameters depends critically on knowledge of ni, but is insensitive to bin assignments or redshift errors for individual galaxies. The cosmological analyses allow for shifts n^i(z)=n^i_PZ(z-Δ z^i) to correct the mean redshift of ni(z) for biases in n^i_PZ. The Δzi are constrained by comparison of independently estimated 30-band photometric redshifts of galaxies in the Cosmic Evolution Survey (COSMOS) field to BPZ estimates made from the DES griz fluxes, for a sample matched in fluxes, pre-seeing size, and lensing weight to the DES weak-lensing sources. In companion papers, the Δzi of the three lowest redshift bins are further constrained by the angular clustering of the source galaxies around red galaxies with secure photometric redshifts at 0.15 < z < 0.9. This paper details the BPZ and COSMOS procedures, and demonstrates that the cosmological inference is insensitive to details of the ni(z) beyond the choice of Δzi. The clustering and COSMOS validation methods produce consistent estimates of Δzi in the bins where both can be applied, with combined uncertainties of σ_{Δ z^i}=0.015, 0.013, 0.011, and 0.022 in the four bins. Repeating the photo-z procedure instead using the Directional Neighbourhood Fitting algorithm, or using the ni(z) estimated from the matched sample in COSMOS, yields no discernible difference in cosmological inferences.
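The bin-shift correction n^i(z) = n^i_PZ(z - Δz^i) described above is a pure translation of the estimated distribution along the redshift axis. A minimal numerical sketch of that operation (the Gaussian stand-in for n_PZ, the grid, and the shift value are illustrative assumptions, not DES data):

```python
import numpy as np

# Toy lensing-weighted redshift distribution n_PZ(z) for one source bin;
# a Gaussian stands in for the BPZ-stacked distribution.
z = np.linspace(0.0, 2.0, 2001)
dx = z[1] - z[0]
n_pz = np.exp(-0.5 * ((z - 0.63) / 0.12) ** 2)
n_pz /= n_pz.sum() * dx                     # normalise to unit integral

def shift_nz(z, n_pz, dz):
    """Apply n(z) = n_PZ(z - dz): translate the distribution by dz."""
    return np.interp(z - dz, z, n_pz, left=0.0, right=0.0)

dz_shift = 0.015                            # order of the quoted sigma_{dz}
n_corr = shift_nz(z, n_pz, dz_shift)

mean_before = (z * n_pz).sum() / n_pz.sum()
mean_after = (z * n_corr).sum() / n_corr.sum()
# mean_after - mean_before ~ dz_shift: the quantity the Delta z^i calibrate
```

The cosmological analyses then marginalise over the Δz^i with the quoted priors, rather than trusting the detailed shape of n_PZ.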

  4. Dark Energy Survey Year 1 Results: Redshift distributions of the weak lensing source galaxies

    NASA Astrophysics Data System (ADS)

    Hoyle, B.; Gruen, D.; Bernstein, G. M.; Rau, M. M.; De Vicente, J.; Hartley, W. G.; Gaztanaga, E.; DeRose, J.; Troxel, M. A.; Davis, C.; Alarcon, A.; MacCrann, N.; Prat, J.; Sánchez, C.; Sheldon, E.; Wechsler, R. H.; Asorey, J.; Becker, M. R.; Bonnett, C.; Carnero Rosell, A.; Carollo, D.; Carrasco Kind, M.; Castander, F. J.; Cawthon, R.; Chang, C.; Childress, M.; Davis, T. M.; Drlica-Wagner, A.; Gatti, M.; Glazebrook, K.; Gschwend, J.; Hinton, S. R.; Hoormann, J. K.; Kim, A. G.; King, A.; Kuehn, K.; Lewis, G.; Lidman, C.; Lin, H.; Macaulay, E.; Maia, M. A. G.; Martini, P.; Mudd, D.; Möller, A.; Nichol, R. C.; Ogando, R. L. C.; Rollins, R. P.; Roodman, A.; Ross, A. J.; Rozo, E.; Rykoff, E. S.; Samuroff, S.; Sevilla-Noarbe, I.; Sharp, R.; Sommer, N. E.; Tucker, B. E.; Uddin, S. A.; Varga, T. N.; Vielzeuf, P.; Yuan, F.; Zhang, B.; Abbott, T. M. C.; Abdalla, F. B.; Allam, S.; Annis, J.; Bechtol, K.; Benoit-Lévy, A.; Bertin, E.; Brooks, D.; Buckley-Geer, E.; Burke, D. L.; Busha, M. T.; Capozzi, D.; Carretero, J.; Crocce, M.; D'Andrea, C. B.; da Costa, L. N.; DePoy, D. L.; Desai, S.; Diehl, H. T.; Doel, P.; Eifler, T. F.; Estrada, J.; Evrard, A. E.; Fernandez, E.; Flaugher, B.; Fosalba, P.; Frieman, J.; García-Bellido, J.; Gerdes, D. W.; Giannantonio, T.; Goldstein, D. A.; Gruendl, R. A.; Gutierrez, G.; Honscheid, K.; James, D. J.; Jarvis, M.; Jeltema, T.; Johnson, M. W. G.; Johnson, M. D.; Kirk, D.; Krause, E.; Kuhlmann, S.; Kuropatkin, N.; Lahav, O.; Li, T. S.; Lima, M.; March, M.; Marshall, J. L.; Melchior, P.; Menanteau, F.; Miquel, R.; Nord, B.; O'Neill, C. R.; Plazas, A. A.; Romer, A. K.; Sako, M.; Sanchez, E.; Santiago, B.; Scarpine, V.; Schindler, R.; Schubnell, M.; Smith, M.; Smith, R. C.; Soares-Santos, M.; Sobreira, F.; Suchyta, E.; Swanson, M. E. C.; Tarle, G.; Thomas, D.; Tucker, D. L.; Vikram, V.; Walker, A. R.; Weller, J.; Wester, W.; Wolf, R. C.; Yanny, B.; Zuntz, J.; DES Collaboration

    2018-04-01

We describe the derivation and validation of redshift distribution estimates and their uncertainties for the populations of galaxies used as weak lensing sources in the Dark Energy Survey (DES) Year 1 cosmological analyses. The Bayesian Photometric Redshift (BPZ) code is used to assign galaxies to four redshift bins between z ≈ 0.2 and ≈1.3, and to produce initial estimates of the lensing-weighted redshift distributions n^i_PZ(z)∝ dn^i/dz for members of bin i. Accurate determination of cosmological parameters depends critically on knowledge of ni but is insensitive to bin assignments or redshift errors for individual galaxies. The cosmological analyses allow for shifts n^i(z)=n^i_PZ(z-Δ z^i) to correct the mean redshift of ni(z) for biases in n^i_PZ. The Δzi are constrained by comparison of independently estimated 30-band photometric redshifts of galaxies in the COSMOS field to BPZ estimates made from the DES griz fluxes, for a sample matched in fluxes, pre-seeing size, and lensing weight to the DES weak-lensing sources. In companion papers, the Δzi of the three lowest redshift bins are further constrained by the angular clustering of the source galaxies around red galaxies with secure photometric redshifts at 0.15 < z < 0.9. This paper details the BPZ and COSMOS procedures, and demonstrates that the cosmological inference is insensitive to details of the ni(z) beyond the choice of Δzi. The clustering and COSMOS validation methods produce consistent estimates of Δzi in the bins where both can be applied, with combined uncertainties of σ_{Δ z^i}=0.015, 0.013, 0.011, and 0.022 in the four bins. Repeating the photo-z procedure instead using the Directional Neighborhood Fitting (DNF) algorithm, or using the ni(z) estimated from the matched sample in COSMOS, yields no discernible difference in cosmological inferences.

  5. Characterisation of plastic microbeads in facial scrubs and their estimated emissions in Mainland China.

    PubMed

    Cheung, Pui Kwan; Fok, Lincoln

    2017-10-01

    Plastic microbeads are often added to personal care and cosmetic products (PCCPs) as an abrasive agent in exfoliants. These beads have been reported to contaminate the aquatic environment and are sufficiently small to be readily ingested by aquatic organisms. Plastic microbeads can be directly released into the aquatic environment with domestic sewage if no sewage treatment is provided, and they can also escape from wastewater treatment plants (WWTPs) because of incomplete removal. However, the emissions of microbeads from these two sources have never been estimated for China, and no regulation has been imposed on the use of plastic microbeads in PCCPs. Therefore, in this study, we aimed to estimate the annual microbead emissions in Mainland China from both direct emissions and WWTP emissions. Nine facial scrubs were purchased, and the microbeads in the scrubs were extracted and enumerated. The microbead density in those products ranged from 5219 to 50,391 particles/g, with an average of 20,860 particles/g. Direct emissions arising from the use of facial scrubs were estimated using this average density number, population data, facial scrub usage rate, sewage treatment rate, and a few conservative assumptions. WWTP emissions were calculated by multiplying the annual treated sewage volume and estimated microbead density in treated sewage. We estimated that, on average, 209.7 trillion microbeads (306.9 tonnes) are emitted into the aquatic environment in Mainland China every year. More than 80% of the emissions originate from incomplete removal in WWTPs, and the remaining 20% are derived from direct emissions. Although the weight of the emitted microbeads only accounts for approximately 0.03% of the plastic waste input into the ocean from China, the number of microbeads emitted far exceeds the previous estimate of plastic debris (>330 μm) on the world's sea surface. Immediate actions are required to prevent plastic microbeads from entering the aquatic environment. 
Copyright © 2017 Elsevier Ltd. All rights reserved.
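The direct-emission estimate above reduces to multiplying a per-gram bead density by the mass of product released with untreated sewage. A hedged sketch of that arithmetic, using the study's average density but invented usage inputs (the population, per-capita usage, and untreated fraction below are placeholders, not the study's assumptions):

```python
# Average microbead density measured across the nine facial scrubs
beads_per_gram = 20_860            # particles/g (study average)

# Hypothetical usage inputs -- NOT the study's actual input values
users = 100_000_000                # people using facial scrubs
grams_per_person_per_year = 20.0   # product consumed per user per year
untreated_fraction = 0.3           # sewage share released with no treatment

direct_emissions = (beads_per_gram * users *
                    grams_per_person_per_year * untreated_fraction)
print(f"direct emissions: {direct_emissions:.3e} particles/year")
```

WWTP emissions are estimated analogously, as treated sewage volume times the residual bead density in effluent.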

  6. A reassessment of ground water flow conditions and specific yield at Borden and Cape Cod

    USGS Publications Warehouse

    Grimestad, Garry

    2002-01-01

    Recent widely accepted findings respecting the origin and nature of specific yield in unconfined aquifers rely heavily on water level changes observed during two pumping tests, one conducted at Borden, Ontario, Canada, and the other at Cape Cod, Massachusetts. The drawdown patterns observed during those tests have been taken as proof that unconfined specific yield estimates obtained from long-duration pumping tests should approach the laboratory-estimated effective porosity of representative aquifer formation samples. However, both of the original test reports included direct or referential descriptions of potential supplemental sources of pumped water that would have introduced intractable complications and errors into straightforward interpretations of the drawdown observations if actually present. Searches for evidence of previously neglected sources were performed by screening the original drawdown observations from both locations for signs of diagnostic skewing that should be present only if some of the extracted water was derived from sources other than main aquifer storage. The data screening was performed using error-guided computer assisted fitting techniques, capable of accurately sensing and simulating the effects of a wide range of non-traditional and external sources. The drawdown curves from both tests proved to be inconsistent with traditional single-source pumped aquifer models but consistent with site-specific alternatives that included significant contributions of water from external sources. The corrected pumping responses shared several important features. Unsaturated drainage appears to have ceased effectively at both locations within the first day of pumping, and estimates of specific yield stabilized at levels considerably smaller than the corresponding laboratory-measured or probable effective porosity. 
Separate sequential analyses of progressively later field observations gave stable and nearly constant specific yield estimates for each location, with no evidence from either test that more prolonged pumping would have induced substantially greater levels of unconfined specific yield.
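The computer-assisted fitting described above amounts to least-squares matching of observed drawdown against a pumped-aquifer model. As a minimal sketch (not the author's error-guided procedure), here is a fit of the classical Theis single-source solution to synthetic drawdown data; the pumping rate, well distance, and parameter values are invented for illustration:

```python
import numpy as np
from scipy.special import exp1
from scipy.optimize import curve_fit

Q = 0.01      # pumping rate, m^3/s (assumed)
r = 30.0      # observation-well distance, m (assumed)

def theis_drawdown(t, T, S):
    """Theis drawdown s = Q/(4 pi T) * W(u), u = r^2 S / (4 T t),
    with well function W(u) = exp1(u)."""
    u = r**2 * S / (4.0 * T * t)
    return Q / (4.0 * np.pi * T) * exp1(u)

# Synthetic observations from known parameters, then recover them by fitting
t_obs = np.logspace(2, 6, 40)                 # seconds
s_obs = theis_drawdown(t_obs, 5e-3, 0.02)
(T_fit, S_fit), _ = curve_fit(theis_drawdown, t_obs, s_obs, p0=(1e-2, 0.05))
# T_fit ~ transmissivity; S_fit ~ storage coefficient (specific yield)
```

In the reassessment, departures of real drawdown curves from this single-source shape are the diagnostic skew indicating supplemental water sources.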

  7. Estimating Source Duration for Moderate and Large Earthquakes in Taiwan

    NASA Astrophysics Data System (ADS)

    Chang, Wen-Yen; Hwang, Ruey-Der; Ho, Chien-Yin; Lin, Tzu-Wei

    2017-04-01

Constructing a relationship between seismic moment (M0) and source duration (t) is important for seismic hazard assessment in Taiwan, where earthquakes are quite active. In this study, we used a proposed inversion process using teleseismic P-waves to derive the M0-t relationship in the Taiwan region for the first time. Fifteen earthquakes with MW 5.5-7.1 and focal depths of less than 40 km were adopted. The inversion process could simultaneously determine source duration, focal depth, and pseudo radiation patterns of the direct P-wave and two depth phases, by which M0 and fault plane solutions were estimated. Results showed that the estimated t, ranging from 2.7 to 24.9 sec, varied with the one-third power of M0. That is, M0 is proportional to t**3, and the relationship between them is M0=0.76*10**23(t)**3, where M0 is in dyne-cm and t in seconds. The M0-t relationship derived from this study is very close to those determined from global moderate to large earthquakes. To further examine the validity of the derived M0-t relationship, we inferred the source duration of the 1999 Chi-Chi (Taiwan) earthquake with M0=2-5*10**27 dyne-cm (corresponding to Mw = 7.5-7.7) to be approximately 29-40 sec, in agreement with many previous studies of source duration (28-42 sec).
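Inverting the reported relation M0 = 0.76*10**23 t**3 (M0 in dyne-cm, t in seconds) gives t = (M0 / 0.76*10**23)**(1/3); applying it to the Chi-Chi moment range reproduces the quoted duration:

```python
def source_duration(m0_dyne_cm, a=0.76e23):
    """Invert M0 = a * t**3  ->  t = (M0 / a)**(1/3), t in seconds."""
    return (m0_dyne_cm / a) ** (1.0 / 3.0)

# 1999 Chi-Chi earthquake, M0 = 2-5 x 10**27 dyne-cm (Mw 7.5-7.7)
tau_lo = source_duration(2e27)   # lower end of the quoted ~29-40 s range
tau_hi = source_duration(5e27)   # upper end of the quoted ~29-40 s range
```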

  8. Noise and analyzer-crystal angular position analysis for analyzer-based phase-contrast imaging

    NASA Astrophysics Data System (ADS)

    Majidi, Keivan; Li, Jun; Muehleman, Carol; Brankov, Jovan G.

    2014-04-01

The analyzer-based phase-contrast x-ray imaging (ABI) method is emerging as a potential alternative to conventional radiography. Like many of the modern imaging techniques, ABI is a computed imaging method (meaning that images are calculated from raw data). ABI can simultaneously generate a number of planar parametric images containing information about absorption, refraction, and scattering properties of an object. These images are estimated from raw data acquired by measuring (sampling) the angular intensity profile of the x-ray beam passed through the object at different angular positions of the analyzer crystal. The noise in the estimated ABI parametric images depends upon imaging conditions like the source intensity (flux), measurement angular positions, object properties, and the estimation method. In this paper, we use the Cramér-Rao lower bound (CRLB) to quantify the noise properties in parametric images and to investigate the effect of source intensity, different analyzer-crystal angular positions and object properties on this bound, assuming a fixed radiation dose delivered to an object. The CRLB is the minimum bound for the variance of an unbiased estimator and defines the best noise performance that one can obtain regardless of which estimation method is used to estimate ABI parametric images. The main result of this paper is that the variance (hence the noise) in parametric images is directly proportional to the source intensity and only a limited number of analyzer-crystal angular measurements (eleven for uniform and three for optimal non-uniform) are required to get the best parametric images. Additional angular measurements beyond these only spread the total dose across the measurements without improving or worsening the CRLB, although the added measurements may improve parametric images by reducing estimation bias. 
Next, using CRLB we evaluate the multiple-image radiography, diffraction enhanced imaging and scatter diffraction enhanced imaging estimation techniques, though the proposed methodology can be used to evaluate any other ABI parametric image estimation technique.
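As a sketch of how such a bound is computed: for independent Poisson counts measured at each analyzer position, the Fisher information for a scalar parameter is additive over positions and the CRLB is its inverse. The Gaussian rocking-curve model and numbers below are toy assumptions, not the paper's imaging model:

```python
import numpy as np

def crlb_scalar(angles, intensity, mu, dmu_dp):
    """CRLB for one scalar parameter p from independent Poisson counts with
    means intensity * mu(theta): I_F = sum (I*mu')^2 / (I*mu), CRLB = 1/I_F."""
    m = intensity * mu(angles)
    dm = intensity * dmu_dp(angles)
    fisher = np.sum(dm**2 / m)
    return 1.0 / fisher

# Toy rocking curve: Gaussian of width w, centred at refraction angle p = 0
w = 1.0
mu = lambda th: np.exp(-0.5 * (th / w) ** 2)
dmu_dp = lambda th: (th / w**2) * np.exp(-0.5 * (th / w) ** 2)  # d mu/dp at p=0

angles = np.linspace(-2.0, 2.0, 11)      # eleven uniform angular positions
bound = crlb_scalar(angles, intensity=1e4, mu=mu, dmu_dp=dmu_dp)
# 'bound' is the minimum variance any unbiased estimator of p can achieve
```

Comparing `bound` across candidate sets of `angles` (uniform versus optimized non-uniform) is the kind of design comparison the paper carries out for the full multi-parameter ABI model.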

  9. Noise and Analyzer-Crystal Angular Position Analysis for Analyzer-Based Phase-Contrast Imaging

    PubMed Central

    Majidi, Keivan; Li, Jun; Muehleman, Carol; Brankov, Jovan G.

    2014-01-01

The analyzer-based phase-contrast X-ray imaging (ABI) method is emerging as a potential alternative to conventional radiography. Like many of the modern imaging techniques, ABI is a computed imaging method (meaning that images are calculated from raw data). ABI can simultaneously generate a number of planar parametric images containing information about absorption, refraction, and scattering properties of an object. These images are estimated from raw data acquired by measuring (sampling) the angular intensity profile (AIP) of the X-ray beam passed through the object at different angular positions of the analyzer crystal. The noise in the estimated ABI parametric images depends upon imaging conditions like the source intensity (flux), measurement angular positions, object properties, and the estimation method. In this paper, we use the Cramér-Rao lower bound (CRLB) to quantify the noise properties in parametric images and to investigate the effect of source intensity, different analyzer-crystal angular positions and object properties on this bound, assuming a fixed radiation dose delivered to an object. The CRLB is the minimum bound for the variance of an unbiased estimator and defines the best noise performance that one can obtain regardless of which estimation method is used to estimate ABI parametric images. The main result of this manuscript is that the variance (hence the noise) in parametric images is directly proportional to the source intensity and only a limited number of analyzer-crystal angular measurements (eleven for uniform and three for optimal non-uniform) are required to get the best parametric images. Additional angular measurements beyond these only spread the total dose across the measurements without improving or worsening the CRLB, although the added measurements may improve parametric images by reducing estimation bias. 
Next, using CRLB we evaluate the Multiple-Image Radiography (MIR), Diffraction Enhanced Imaging (DEI) and Scatter Diffraction Enhanced Imaging (S-DEI) estimation techniques, though the proposed methodology can be used to evaluate any other ABI parametric image estimation technique. PMID:24651402

  10. Global Economic Impact of Dental Diseases.

    PubMed

    Listl, S; Galloway, J; Mossey, P A; Marcenes, W

    2015-10-01

    Reporting the economic burden of oral diseases is important to evaluate the societal relevance of preventing and addressing oral diseases. In addition to treatment costs, there are indirect costs to consider, mainly in terms of productivity losses due to absenteeism from work. The purpose of the present study was to estimate the direct and indirect costs of dental diseases worldwide to approximate the global economic impact. Estimation of direct treatment costs was based on a systematic approach. For estimation of indirect costs, an approach suggested by the World Health Organization's Commission on Macroeconomics and Health was employed, which factored in 2010 values of gross domestic product per capita as provided by the International Monetary Fund and oral burden of disease estimates from the 2010 Global Burden of Disease Study. Direct treatment costs due to dental diseases worldwide were estimated at US$298 billion yearly, corresponding to an average of 4.6% of global health expenditure. Indirect costs due to dental diseases worldwide amounted to US$144 billion yearly, corresponding to economic losses within the range of the 10 most frequent global causes of death. Within the limitations of currently available data sources and methodologies, these findings suggest that the global economic impact of dental diseases amounted to US$442 billion in 2010. Improvements in population oral health may imply substantial economic benefits not only in terms of reduced treatment costs but also because of fewer productivity losses in the labor market. © International & American Associations for Dental Research 2015.
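The indirect-cost approach above values productivity losses by combining GDP per capita with the oral burden-of-disease estimates. A hedged one-country sketch of that arithmetic, together with the study's reported global totals (the per-country figures below are placeholders, not the study's inputs):

```python
# WHO CMH-style productivity-loss sketch -- illustrative numbers only
gdp_per_capita = 10_000.0     # US$ (2010), hypothetical country
ylds_oral = 500_000.0         # years lived with disability from oral disease

indirect_cost = gdp_per_capita * ylds_oral   # US$ lost productivity per year

# Global totals reported by the study:
direct_global = 298e9         # US$ treatment costs per year
indirect_global = 144e9       # US$ productivity losses per year
total_global = direct_global + indirect_global   # US$ 442 billion
```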

  11. Estimating State-Specific Contributions to PM2.5- and O3-Related Health Burden from Residential Combustion and Electricity Generating Unit Emissions in the United States

    PubMed Central

    Penn, Stefani L.; Arunachalam, Saravanan; Woody, Matthew; Heiger-Bernays, Wendy; Tripodis, Yorghos; Levy, Jonathan I.

    2016-01-01

    Background: Residential combustion (RC) and electricity generating unit (EGU) emissions adversely impact air quality and human health by increasing ambient concentrations of fine particulate matter (PM2.5) and ozone (O3). Studies to date have not isolated contributing emissions by state of origin (source-state), which is necessary for policy makers to determine efficient strategies to decrease health impacts. Objectives: In this study, we aimed to estimate health impacts (premature mortalities) attributable to PM2.5 and O3 from RC and EGU emissions by precursor species, source sector, and source-state in the continental United States for 2005. Methods: We used the Community Multiscale Air Quality model employing the decoupled direct method to quantify changes in air quality and epidemiological evidence to determine concentration–response functions to calculate associated health impacts. Results: We estimated 21,000 premature mortalities per year from EGU emissions, driven by sulfur dioxide emissions forming PM2.5. More than half of EGU health impacts are attributable to emissions from eight states with significant coal combustion and large downwind populations. We estimate 10,000 premature mortalities per year from RC emissions, driven by primary PM2.5 emissions. States with large populations and significant residential wood combustion dominate RC health impacts. Annual mortality risk per thousand tons of precursor emissions (health damage functions) varied significantly across source-states for both source sectors and all precursor pollutants. Conclusions: Our findings reinforce the importance of pollutant-specific, location-specific, and source-specific models of health impacts in design of health-risk minimizing emissions control policies. Citation: Penn SL, Arunachalam S, Woody M, Heiger-Bernays W, Tripodis Y, Levy JI. 2017. 
Estimating state-specific contributions to PM2.5- and O3-related health burden from residential combustion and electricity generating unit emissions in the United States. Environ Health Perspect 125:324–332; http://dx.doi.org/10.1289/EHP550 PMID:27586513
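A health damage function as used above is simply attributable mortality normalized by precursor emissions. A minimal sketch with invented numbers (not the study's state-level results):

```python
def damage_function(mortalities_per_year, emissions_tons_per_year):
    """Annual mortality risk per thousand tons of precursor emitted."""
    return mortalities_per_year / (emissions_tons_per_year / 1000.0)

# Hypothetical source-state: 1,200 attributable deaths, 600,000 tons of SO2
df = damage_function(1200, 600_000)   # deaths per thousand tons emitted
```

Because this ratio varies strongly by pollutant, sector, and source-state (downwind population matters), a single national damage factor would misrank control strategies.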

  12. Far-field DOA estimation and source localization for different scenarios in a distributed sensor network

    NASA Astrophysics Data System (ADS)

    Asgari, Shadnaz

Recent developments in integrated circuits and wireless communications not only open up many possibilities but also introduce challenging issues for the collaborative processing of signals for source localization and beamforming in an energy-constrained distributed sensor network. In signal processing, various sensor array processing algorithms and concepts have been adopted, but they must be further tailored to match the communication and computational constraints. Sometimes the constraints are such that none of the existing algorithms would be an efficient option for the defined problem, and as a result it becomes necessary to develop a new algorithm. In this dissertation, we present the theoretical and practical issues of Direction-Of-Arrival (DOA) estimation and source localization using the Approximate-Maximum-Likelihood (AML) algorithm for different scenarios. We first investigate a robust algorithm design for coherent-source DOA estimation in a limited reverberant environment. Then, we provide a least-squares (LS) solution for source localization based on our newly proposed virtual array model. In another scenario, we consider the determination of the location of a disturbance source which emits both wideband acoustic and seismic signals. We devise an enhanced AML algorithm to process the data collected at the acoustic sensors. For processing the seismic signals, two distinct algorithms are investigated to determine the DOAs. Then, we consider a basic algorithm for fusion of the results yielded by the acoustic and seismic arrays. We also investigate the theoretical and practical issues of DOA estimation in a three-dimensional (3D) scenario. We show that the performance of the proposed 3D AML algorithm converges to the Cramér-Rao bound. We use the concept of an isotropic array to reduce the complexity of the proposed algorithm by advocating a decoupled 3D version. 
We also explore a modified version of the decoupled 3D AML algorithm which can be used for DOA estimation with non-isotropic arrays. For each scenario, efficient numerical implementations of the corresponding AML algorithm are derived and applied to a real-time sensor-network testbed. Extensive simulations as well as experimental results are presented to verify the effectiveness of the proposed algorithms.
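Not the AML estimator itself, but the underlying far-field narrowband model can be illustrated with a conventional (delay-and-sum) beamformer scan over a uniform linear array; all array parameters, the source angle, and noise levels below are assumptions for the sketch:

```python
import numpy as np

def doa_scan(x, d_over_lambda, grid_deg):
    """Conventional beamformer power spectrum for a narrowband ULA snapshot
    matrix x (num_sensors x num_snapshots), evaluated on grid_deg (degrees)."""
    m = x.shape[0]
    n = np.arange(m)
    R = x @ x.conj().T / x.shape[1]             # sample covariance matrix
    powers = []
    for theta in np.deg2rad(grid_deg):
        a = np.exp(-2j * np.pi * d_over_lambda * n * np.sin(theta))
        powers.append(np.real(a.conj() @ R @ a) / m**2)
    return np.array(powers)

# Simulate one far-field source at 20 degrees, 8-element half-wavelength ULA
rng = np.random.default_rng(0)
m, snaps, theta0 = 8, 200, np.deg2rad(20.0)
a0 = np.exp(-2j * np.pi * 0.5 * np.arange(m) * np.sin(theta0))
s = rng.standard_normal(snaps) + 1j * rng.standard_normal(snaps)
x = np.outer(a0, s) + 0.1 * (rng.standard_normal((m, snaps))
                             + 1j * rng.standard_normal((m, snaps)))
grid = np.arange(-90.0, 90.5, 0.5)
est = grid[np.argmax(doa_scan(x, 0.5, grid))]   # peak near the true 20 deg
```

The AML approach replaces this quadratic beamformer objective with a maximum-likelihood criterion over wideband data, but the scan-over-angles structure is analogous.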

  13. Rapid processing of data based on high-performance algorithms for solving inverse problems and 3D-simulation of the tsunami and earthquakes

    NASA Astrophysics Data System (ADS)

    Marinin, I. V.; Kabanikhin, S. I.; Krivorotko, O. I.; Karas, A.; Khidasheli, D. G.

    2012-04-01

We consider new techniques and methods for earthquake and tsunami related problems, particularly inverse problems for the determination of tsunami source parameters, numerical simulation of long-wave propagation in soil and water, and tsunami risk estimation. In addition, we touch upon the issues of database management and destruction-scenario visualization. New approaches and strategies, as well as mathematical tools and software, are shown. The long joint investigations by researchers of the Institute of Mathematical Geophysics and Computational Mathematics SB RAS and specialists from WAPMERR and Informap have produced special theoretical approaches, numerical methods, and software for tsunami and earthquake modeling (propagation and run-up of tsunami waves on coastal areas), visualization, and risk estimation for tsunamis and earthquakes. Algorithms have been developed for the operational determination of the origin and form of the tsunami source. The system TSS numerically simulates the source of a tsunami and/or earthquake and includes the possibility to solve both the direct and the inverse problem. It becomes possible to involve advanced mathematical results to improve models and to increase the resolution of inverse problems. Via TSS one can construct risk maps, online disaster scenarios, and estimates of potential damage to buildings and roads. One of the main tools for the numerical modeling is the finite volume method (FVM), which allows us to achieve stability with respect to possible input errors as well as optimal computing speed. Our approach to the inverse problem of tsunami and earthquake determination is based on recent theoretical results concerning the Dirichlet problem for the wave equation. This problem is intrinsically ill-posed. We use the optimization approach to solve it and SVD-analysis to estimate the degree of ill-posedness and to find the quasi-solution. 
The software system we developed is intended to realize «no frost» technology: a steady stream of direct and inverse problems, i.e., solving the direct problem, visualizing and comparing results with observed data, and solving the inverse problem (correcting the model parameters). The main objective of further work is to create an operational emergency workstation tool that an emergency duty officer could use in real time.
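The SVD-based quasi-solution mentioned above can be sketched with a truncated SVD: keep only the singular values above the noise level and discard the rest, trading bias against noise amplification. The toy matrix, noise level, and truncation index below are assumptions, not the TSS system's actual operators:

```python
import numpy as np

def tsvd_solve(A, b, k):
    """Quasi-solution of A x = b keeping only the k largest singular values."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    s_inv = np.where(np.arange(len(s)) < k, 1.0 / s, 0.0)
    return Vt.T @ (s_inv * (U.T @ b))

# Toy ill-conditioned system: singular values decay rapidly (ill-posedness
# is read off this decay, as in the SVD-analysis described above)
n = 8
U, _ = np.linalg.qr(np.random.default_rng(1).standard_normal((n, n)))
V, _ = np.linalg.qr(np.random.default_rng(2).standard_normal((n, n)))
s = 10.0 ** -np.arange(n)                 # condition number ~1e7
A = U @ np.diag(s) @ V.T
x_true = np.ones(n)
b = A @ x_true + 1e-6 * np.random.default_rng(3).standard_normal(n)
x_k = tsvd_solve(A, b, k=4)               # discard noise-dominated modes
```

The truncation level k plays the role of a regularization parameter: too small and the source is over-smoothed, too large and measurement noise is amplified by the tiny singular values.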

  14. Report summary Prevalence and monetary costs of dementia in Canada (2016): a report by the Alzheimer Society of Canada.

    PubMed

    2016-10-01

    Dementia prevalence estimates vary among population-based studies, depending on the definitions of dementia, methodologies and data sources and types of costs they use. A common approach is needed to avoid confusion and increase public and stakeholder confidence in the estimates. Since 1994, five major studies have yielded widely differing estimates of dementia prevalence and monetary costs of dementia in Canada. These studies variously estimated the prevalence of dementia for the year 2011 as low as 340 170 and as high as 747 000. The main reason for this difference was that mild cognitive impairment (MCI) was not consistently included in the projections. The estimated monetary costs of dementia for the same year also varied, from $910 million to $33 billion. This discrepancy is largely due to three factors: (1) the lack of agreed-upon methods for estimating financial costs; (2) the unavailability of prevalence estimates for the various stages of dementia (mild, moderate and severe), which directly affect the amount of money spent; and (3) the absence of tools to measure direct, indirect and intangible costs more accurately. Given the increasing challenges of dementia in Canada and around the globe, reconciling these differences is critical for developing standards to generate reliable information for public consumption and to shape public policy and service development.

  15. Sources of methane and nitrous oxide in California's Central Valley estimated through direct airborne flux and positive matrix factorization source apportionment of groundbased and regional tall tower measurements

    NASA Astrophysics Data System (ADS)

    Guha, Abhinav

    Methane (CH4) and nitrous oxide (N2O) are two major greenhouse gases that contribute significantly to the increase in anthropogenic radiative-forcing causing perturbations to the earth's climate system. In a watershed moment in the state's history of environmental leadership and commitment, California, in 2006, opted for sharp reductions in their greenhouse gas (GHG) emissions and adopted a long-term approach to address climate change that includes regulation of emissions from individual emitters and source categories. There are large CH4 and N2O emissions sources in the state, predominantly in the agricultural and waste management sector. While these two gases account for < 10% of total annual greenhouse gas emissions of the state, large uncertainties exist in their `bottom-up' accounting in the state GHG inventory. Additionally, an increasing number of `top-down' studies based on ambient observations point towards underestimation of their emissions in the inventory. Three intensive field observation campaigns that were spatially and temporally diverse took place between 2010 and 2013 in the Central Valley of California where the largest known sources of CH4 and N2O (e.g. agricultural systems and dairies) and potentially significant CH4 sources (e.g. oil and gas extraction) are located. The CalNex (California Nexus - Research at the Nexus of Air Quality and Climate Change) field campaign during summer 2010 (May 15 - June 30) took place in the urban core of Bakersfield in the southern San Joaquin Valley, a city whose economy is built around agriculture and the oil and gas industry. During summer of 2011, airborne measurements were performed over a large spatial domain, all across and around the Central Valley as part of the CABERNET (California Airborne BVOC Emission Research in Natural Ecosystem Transects) study. 
Next, a one-year continuous field campaign (WGC 2012-13, June 2012 - August 2013) was conducted at the Walnut Grove tall tower near the Sacramento-San Joaquin River Delta in the Central Valley. Through analysis of these field measurements, this dissertation presents the apportionment of observed CH4 and N2O concentration enhancements into major source categories along with direct emissions estimates from airborne observations. We perform high-precision measurements of greenhouse gases using gas analyzers based on absorption spectroscopy, and other source marker volatile organic compounds (VOCs) using state of the art VOC measurement systems (e.g. proton transfer reaction mass spectrometry). We combine these measurements with a statistical source apportionment technique called positive matrix factorization (PMF) to evaluate and investigate the major local sources of CH4 and N2O during CalNex and Walnut Grove campaigns. In the CABERNET study, we combine measurements with an airborne approach to a well-established micrometeorological technique (eddy-covariance method) to derive CH4 fluxes over different source regions in the Central Valley. In the CalNex experiments, we demonstrate that dairy and livestock remains the largest source sector of non-CO2 greenhouse gases in the San Joaquin Valley contributing most of the CH4 and much of the measured N2O at Bakersfield. Agriculture is observed to provide another major source of N2O, while vehicle emissions are found to be an insignificant source of N2O, contrary to the current statewide greenhouse gas inventory which includes vehicles as a major source. Our PMF source apportionment also produces an evaporative/fugitive factor but its relative lack of CH4 contributions points to removal processes from vented emissions in the surrounding O&G industry and the overwhelming dominance of the dairy CH4 source. 
In the CABERNET experiments, we report enhancements of CH4 from a number of sources spread across the spatial domain of the Central Valley, improving our understanding of their distribution and relative strengths. We observe large enhancements of CH4 mixing ratios over the dairy- and feedlot-intensive regions of the Central Valley, corresponding with significant flux estimates that are larger than the CH4 emission rates reported in the greenhouse gas inventory. We find evidence of significant CH4 emissions from fugitive and/or vented sources and cogeneration plants in the oil and gas fields of Kern County, all of which are minor to insignificant CH4 sources in the current greenhouse gas inventory. The CABERNET campaign represents the first successful implementation of the airborne eddy-covariance technique for CH4 flux measurements. At Walnut Grove, we demonstrate the seasonal and temporal dependence of CH4 and N2O sources in the Central Valley. Applying PMF analysis to seasonal GHG-VOC data sets, we again identify dairies and livestock as the dominant source of CH4. A clear temporal dependence of emissions originating from a wetlands/Delta CH4 source is observed, while CH4 contributions are also observed from a source originating from upwind urban and natural gas extraction activities. The agricultural soil management source of N2O has a seasonal dependence coincident with the agricultural growing season (and hence, fertilizer use), accounting for a majority of the N2O enhancements during spring and summer but becoming negligible during late fall and winter, when manure management N2O emissions from dairy and livestock dominate the relative distribution. N2O is absent from the 'urban' source, in contrast to the significant contribution to the statewide N2O inventory from vehicle emissions. 
The application of greenhouse gas source apportionment using VOC tracers as identification tools at two independent sites in the Central Valley, over vastly different temporal resolutions, provides significant insights into the regional distribution of major CH4 sources. Direct airborne eddy-covariance measurements provide a unique opportunity to constrain CH4 emissions in the Central Valley over regional spatial scales that are not directly observable by ground-based methods. Airborne observations provide identification of 'hotspots' and under-inventoried CH4 sources, while airborne eddy covariance enables quantification of emissions from those area sources that are largely composed of arbitrarily located minor point sources (e.g. dairies and oil fields). The top-down analysis confirms the dominance of the dairy and livestock source for methane emissions in California. Minor but significant contributions to methane emissions are observed from oil and gas extraction, rice cultivation and wetlands; the estimates for these sectors are either negligible (e.g. wetlands) or highly uncertain (e.g. oil and gas extraction) in the statewide inventories and probably underestimated as a proportion of the total inventory. The top-down analysis also confirms agricultural soil management and dairy and livestock as the two principal sources of N2O, consistent with the inventory, but shows that N2O contributions attributed to the transportation sector are overestimated in the statewide inventory. These new top-down constraints should be used to correct these errors in the current bottom-up inventory, a critical step for future assessments of the efficacy of emission reduction regulations. 
In particular, measurement techniques such as vehicle dynamometer emission calculations (for transportation sources), source-specific short-range ground-based inverse dispersion (for dairy and livestock sources), airborne eddy covariance and airborne mass-balance emission estimation (over oil and gas fields), and ground-based eddy covariance (for the wetlands and agriculture sectors) can be used effectively to generate direct emission estimates for methane and nitrous oxide that help update and improve the accuracy of the state inventory.
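The airborne eddy-covariance technique referenced in this record computes a flux as the covariance between fluctuations of vertical wind speed and scalar concentration. The sketch below illustrates only that core calculation on synthetic data; the variable values and simple mean-detrending are illustrative assumptions, not the campaign's actual processing chain (which involves high-rate data, coordinate rotation, and spectral corrections).

```python
import numpy as np

def eddy_covariance_flux(w, c):
    """Eddy-covariance flux: the mean product of the fluctuations
    (departures from the mean) of vertical wind speed and scalar
    concentration, F = mean(w' * c')."""
    w = np.asarray(w, dtype=float)
    c = np.asarray(c, dtype=float)
    return np.mean((w - w.mean()) * (c - c.mean()))

# Synthetic series: updrafts carry air enriched in the scalar, so the
# covariance (an upward flux) is positive.
rng = np.random.default_rng(0)
w = rng.normal(0.0, 0.5, 10000)                    # vertical wind (m/s)
c = 1.9 + 0.2 * w + rng.normal(0.0, 0.05, 10000)   # scalar concentration
flux = eddy_covariance_flux(w, c)                  # ~ 0.2 * var(w)
```

Because the synthetic concentration tracks the vertical wind with slope 0.2, the estimated flux approaches 0.2 times the wind variance; a real airborne implementation faces the additional complications (platform motion, flux footprints) that made CABERNET's demonstration notable.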

  16. Inconsistency between direct and indirect comparisons of competing interventions: meta-epidemiological study.

    PubMed

    Song, Fujian; Xiong, Tengbin; Parekh-Bhurke, Sheetal; Loke, Yoon K; Sutton, Alex J; Eastwood, Alison J; Holland, Richard; Chen, Yen-Fu; Glenny, Anne-Marie; Deeks, Jonathan J; Altman, Doug G

    2011-08-16

    Objective To investigate the agreement between direct and indirect comparisons of competing healthcare interventions. Design Meta-epidemiological study based on a sample of meta-analyses of randomised controlled trials. Data sources Cochrane Database of Systematic Reviews and PubMed. Inclusion criteria Systematic reviews that provided sufficient data for both direct comparison and independent indirect comparisons of two interventions on the basis of a common comparator and in which the odds ratio could be used as the outcome statistic. Main outcome measure Inconsistency measured by the difference in the log odds ratio between the direct and indirect methods. Results The study included 112 independent trial networks (including 1552 trials with 478,775 patients in total) that allowed both direct and indirect comparison of two interventions. Indirect comparison had already been explicitly done in only 13 of the 85 Cochrane reviews included. The inconsistency between the direct and indirect comparison was statistically significant in 16 cases (14%, 95% confidence interval 9% to 22%). The statistically significant inconsistency was associated with fewer trials, subjectively assessed outcomes, and statistically significant effects of treatment in either direct or indirect comparisons. Owing to considerable inconsistency, many (14/39) of the statistically significant effects by direct comparison became non-significant when the direct and indirect estimates were combined. Conclusions Significant inconsistency between direct and indirect comparisons may be more prevalent than previously observed. Direct and indirect estimates should be combined in mixed treatment comparisons only after adequate assessment of the consistency of the evidence.
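The indirect comparison described here (the Bucher adjusted indirect comparison) subtracts the log odds ratios of two interventions against a common comparator, with variances adding; inconsistency is then the direct-minus-indirect difference scaled by its combined standard error. A minimal sketch with hypothetical numbers (the odds ratios and variances below are made up, not taken from the study):

```python
import math

def indirect_log_or(log_or_ac, var_ac, log_or_bc, var_bc):
    """Adjusted indirect comparison (Bucher method): the A-vs-B log odds
    ratio obtained via common comparator C; variances add."""
    return log_or_ac - log_or_bc, var_ac + var_bc

def inconsistency_z(log_or_dir, var_dir, log_or_ind, var_ind):
    """z statistic for the difference between the direct and indirect
    estimates; its magnitude gauges inconsistency."""
    return (log_or_dir - log_or_ind) / math.sqrt(var_dir + var_ind)

# Hypothetical network: A vs C and B vs C trials plus a direct A vs B trial
ind, var_ind = indirect_log_or(math.log(0.8), 0.04, math.log(1.1), 0.05)
z = inconsistency_z(math.log(0.5), 0.06, ind, var_ind)
```

With these toy inputs the direct estimate favors A more strongly than the indirect one, but |z| stays well below 1.96, so the discrepancy would not be flagged as statistically significant inconsistency.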

  17. Inconsistency between direct and indirect comparisons of competing interventions: meta-epidemiological study

    PubMed Central

    Xiong, Tengbin; Parekh-Bhurke, Sheetal; Loke, Yoon K; Sutton, Alex J; Eastwood, Alison J; Holland, Richard; Chen, Yen-Fu; Glenny, Anne-Marie; Deeks, Jonathan J; Altman, Doug G

    2011-01-01

    Objective To investigate the agreement between direct and indirect comparisons of competing healthcare interventions. Design Meta-epidemiological study based on sample of meta-analyses of randomised controlled trials. Data sources Cochrane Database of Systematic Reviews and PubMed. Inclusion criteria Systematic reviews that provided sufficient data for both direct comparison and independent indirect comparisons of two interventions on the basis of a common comparator and in which the odds ratio could be used as the outcome statistic. Main outcome measure Inconsistency measured by the difference in the log odds ratio between the direct and indirect methods. Results The study included 112 independent trial networks (including 1552 trials with 478 775 patients in total) that allowed both direct and indirect comparison of two interventions. Indirect comparison had already been explicitly done in only 13 of the 85 Cochrane reviews included. The inconsistency between the direct and indirect comparison was statistically significant in 16 cases (14%, 95% confidence interval 9% to 22%). The statistically significant inconsistency was associated with fewer trials, subjectively assessed outcomes, and statistically significant effects of treatment in either direct or indirect comparisons. Owing to considerable inconsistency, many (14/39) of the statistically significant effects by direct comparison became non-significant when the direct and indirect estimates were combined. Conclusions Significant inconsistency between direct and indirect comparisons may be more prevalent than previously observed. Direct and indirect estimates should be combined in mixed treatment comparisons only after adequate assessment of the consistency of the evidence. PMID:21846695

  18. Bayesian statistics applied to the location of the source of explosions at Stromboli Volcano, Italy

    USGS Publications Warehouse

    Saccorotti, G.; Chouet, B.; Martini, M.; Scarpa, R.

    1998-01-01

    We present a method for determining the location and spatial extent of the source of explosions at Stromboli Volcano, Italy, based on a Bayesian inversion of the slowness vector derived from frequency-slowness analyses of array data. The method searches for source locations that minimize the error between the expected and observed slowness vectors. For a given set of model parameters, the conditional probability density function of slowness vectors is approximated by a Gaussian distribution of expected errors. The method is tested with synthetics using a five-layer velocity model derived for the north flank of Stromboli and a smoothed velocity model derived from a power-law approximation of the layered structure. Application to data from Stromboli allows for a detailed examination of uncertainties in source location due to experimental errors and incomplete knowledge of the Earth model. Although the solutions are not constrained in the radial direction, excellent resolution is achieved in both transverse and depth directions. Under the assumption that the horizontal extent of the source does not exceed the crater dimension, the 90% confidence region in the estimate of the explosive source location corresponds to a small volume extending from a depth of about 100 m to a maximum depth of about 300 m beneath the active vents, with a maximum likelihood source region located in the 120- to 180-m-depth interval.
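The search over source locations described above can be sketched as a grid search scoring each candidate by a Gaussian error model on the misfit between predicted and observed slowness vectors. This is a simplified illustration under assumed geometry (homogeneous medium, single array, 2-D grid), not the Stromboli implementation:

```python
import numpy as np

def locate_source(obs_slowness, array_pos, candidates, speed, sigma):
    """Grid search over candidate source positions, scoring each by a
    Gaussian log probability of the difference between the predicted
    and observed slowness vectors. Ties keep the first candidate found."""
    best, best_logp = None, -np.inf
    for xy in candidates:
        d = array_pos - xy                      # source-to-array direction
        pred = d / (np.linalg.norm(d) * speed)  # predicted slowness vector
        misfit = float(np.sum((pred - obs_slowness) ** 2))
        logp = -misfit / (2.0 * sigma ** 2)
        if logp > best_logp:
            best, best_logp = xy, logp
    return best

# Hypothetical geometry: array at the origin, source 5 km east, 2 km/s medium
array_pos = np.array([0.0, 0.0])
true_src = np.array([5.0, 0.0])
d = array_pos - true_src
obs = d / (np.linalg.norm(d) * 2.0)             # noise-free observed slowness
grid = [np.array([float(x), float(y)])
        for x in range(-6, 7) for y in range(-6, 7) if (x, y) != (0, 0)]
est = locate_source(obs, array_pos, grid, 2.0, 0.05)
```

Every candidate due east of the array fits the observation equally well, so the azimuth is recovered exactly while the range is not, mirroring the abstract's observation that slowness data constrain the transverse but not the radial direction.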

  19. Estimate of the direct and indirect annual cost of bacterial conjunctivitis in the United States

    PubMed Central

    2009-01-01

    Background The aim of this study was to estimate both the direct and indirect annual costs of treating bacterial conjunctivitis (BC) in the United States. This was a cost of illness study performed from a U.S. healthcare payer perspective. Methods A comprehensive review of the medical literature was supplemented by data on the annual incidence of BC which was obtained from an analysis of the National Ambulatory Medical Care Survey (NAMCS) database for the year 2005. Cost estimates for medical visits and laboratory or diagnostic tests were derived from published Medicare CPT fee codes. The cost of prescription drugs was obtained from standard reference sources. Indirect costs were calculated as those due to lost productivity. Due to the acute nature of BC, no cost discounting was performed. All costs are expressed in 2007 U.S. dollars. Results The number of BC cases in the U.S. for 2005 was estimated at approximately 4 million yielding an estimated annual incidence rate of 135 per 10,000. Base-case analysis estimated the total direct and indirect cost of treating patients with BC in the United States at $589 million. One-way sensitivity analysis, assuming either a 20% variation in the annual incidence of BC or treatment costs, generated a cost range of $469 million to $705 million. Two-way sensitivity analysis, assuming a 20% variation in both the annual incidence of BC and treatment costs occurring simultaneously, resulted in an estimated cost range of $377 million to $857 million. Conclusion The economic burden posed by BC is significant. The findings may prove useful to decision makers regarding the allocation of healthcare resources necessary to address the economic burden of BC in the United States. PMID:19939250
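The one-way and two-way sensitivity analyses above follow simple arithmetic: scale one or both inputs by ±20% and recompute the total. The sketch below uses a hypothetical average per-case cost chosen only so the product matches the reported $589 million base case; the resulting ranges come out close to (but not identical with) the study's published figures:

```python
def total_cost(cases, cost_per_case):
    """Total annual cost as incidence times average cost per case."""
    return cases * cost_per_case

base_cases = 4_000_000           # estimated annual BC cases
cost_per_case = 147.25           # USD; illustrative assumption only

base = total_cost(base_cases, cost_per_case)   # $589 million

# One-way sensitivity: vary one input by +/-20%, hold the other fixed
one_way = (total_cost(base_cases * 0.8, cost_per_case),
           total_cost(base_cases * 1.2, cost_per_case))

# Two-way sensitivity: vary both inputs by +/-20% simultaneously
two_way = (total_cost(base_cases * 0.8, cost_per_case * 0.8),
           total_cost(base_cases * 1.2, cost_per_case * 1.2))
```

The two-way range is wider because the ±20% factors multiply (0.64x to 1.44x of the base case), which is why the study's two-way bounds ($377-857 million) bracket its one-way bounds ($469-705 million).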

  20. What is the impact of different VLBI analysis setups of the tropospheric delay on precipitable water vapor trends?

    NASA Astrophysics Data System (ADS)

    Balidakis, Kyriakos; Nilsson, Tobias; Heinkelmann, Robert; Glaser, Susanne; Zus, Florian; Deng, Zhiguo; Schuh, Harald

    2017-04-01

    The quality of the parameters estimated by global navigation satellite systems (GNSS) and very long baseline interferometry (VLBI) is degraded by erroneous meteorological observations applied to model the propagation delay in the electrically neutral atmosphere. For early VLBI sessions with poor geometry, unsuitable constraints imposed on the a priori tropospheric gradients are an additional source of error in VLBI analysis. Consequently, climate change indicators deduced from geodetic analysis, such as long-term precipitable water vapor (PWV) trends, are strongly affected. In this contribution we investigate the impact of different modeling and parameterization of the tropospheric propagation delay on the estimates of long-term PWV trends from geodetic VLBI analysis results. We address the influence of the meteorological data source, and of the a priori non-hydrostatic delays and gradients employed in the VLBI processing, on the estimated PWV trends. In particular, we assess the effect of employing temperature and pressure from (i) homogenized in situ observations, (ii) the model levels of the ERA Interim reanalysis numerical weather model and (iii) our own blind model in the style of GPT2w with enhanced parameterization, calculated using the latter data set. Furthermore, we utilize non-hydrostatic delays and gradients estimated from (i) a GNSS reprocessing at GeoForschungsZentrum Potsdam, rigorously considering tropospheric ties, and (ii) direct ray-tracing through ERA Interim, as additional observations. To evaluate the above, the least-squares module of the VieVS@GFZ VLBI software was appropriately modified. Additionally, we study the noise characteristics of the non-hydrostatic delays and gradients estimated from our VLBI and GNSS analyses as well as from ray-tracing. We have modified the Theil-Sen estimator appropriately to robustly deduce PWV trends from VLBI, GNSS, ray-tracing and direct numerical integration in ERA Interim. 
We disseminate all our solutions in the latest Tropo-SINEX format.
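The Theil-Sen estimator mentioned above takes the median of all pairwise slopes, making the trend estimate robust to outliers in the series. A minimal sketch on a hypothetical PWV time series (this is the textbook estimator, not the authors' modified version):

```python
import numpy as np

def theil_sen_slope(t, y):
    """Theil-Sen trend estimate: the median of the slopes between all
    pairs of points, robust to a minority of gross outliers."""
    t = np.asarray(t, dtype=float)
    y = np.asarray(y, dtype=float)
    slopes = [(y[j] - y[i]) / (t[j] - t[i])
              for i in range(len(t)) for j in range(i + 1, len(t))]
    return np.median(slopes)

# Hypothetical PWV series (mm) with a linear trend plus one gross outlier
t = np.arange(10.0)
y = 20.0 + 0.3 * t
y[4] += 5.0                      # a single corrupted epoch
trend = theil_sen_slope(t, y)    # still recovers the 0.3 mm/step trend
```

An ordinary least-squares fit on the same data would be pulled noticeably by the outlier, whereas the median of pairwise slopes is unaffected, which is why robust estimators of this family suit long geodetic series with sporadic bad epochs.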

  1. Modified ensemble Kalman filter for nuclear accident atmospheric dispersion: prediction improved and source estimated.

    PubMed

    Zhang, X L; Su, G F; Yuan, H Y; Chen, J G; Huang, Q Y

    2014-09-15

    Atmospheric dispersion models play an important role in nuclear power plant accident management. A reliable estimation of radioactive material distribution in short range (about 50 km) is in urgent need for population sheltering and evacuation planning. However, the meteorological data and the source term which greatly influence the accuracy of the atmospheric dispersion models are usually poorly known at the early phase of the emergency. In this study, a modified ensemble Kalman filter data assimilation method in conjunction with a Lagrangian puff-model is proposed to simultaneously improve the model prediction and reconstruct the source terms for short range atmospheric dispersion using the off-site environmental monitoring data. Four main uncertainty parameters are considered: source release rate, plume rise height, wind speed and wind direction. Twin experiments show that the method effectively improves the predicted concentration distribution, and the temporal profiles of source release rate and plume rise height are also successfully reconstructed. Moreover, the time lag in the response of ensemble Kalman filter is shortened. The method proposed here can be a useful tool not only in the nuclear power plant accident emergency management but also in other similar situation where hazardous material is released into the atmosphere. Copyright © 2014 Elsevier B.V. All rights reserved.
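Estimating source parameters alongside concentrations is commonly done via state augmentation: the uncertain parameters are appended to the state vector, and the correlations in the ensemble let concentration observations update them. The following is a generic stochastic-EnKF sketch under toy assumptions (a two-component state and a linear observation operator), not the paper's modified filter or its Lagrangian puff model:

```python
import numpy as np

def enkf_update(ensemble, obs, obs_err_std, H):
    """Stochastic ensemble Kalman filter analysis step.

    ensemble : (n_ens, n_state); the state may include appended source
               parameters (state augmentation), so assimilating
               concentration data also updates those parameters.
    obs      : (n_obs,) observed values
    H        : (n_obs, n_state) linear observation operator
    """
    rng = np.random.default_rng(1)
    n_ens = ensemble.shape[0]
    X = ensemble - ensemble.mean(axis=0)          # state anomalies
    Y = X @ H.T                                   # observation-space anomalies
    Pyy = Y.T @ Y / (n_ens - 1) + np.eye(len(obs)) * obs_err_std ** 2
    Pxy = X.T @ Y / (n_ens - 1)
    K = Pxy @ np.linalg.inv(Pyy)                  # Kalman gain
    perturbed = obs + rng.normal(0.0, obs_err_std, (n_ens, len(obs)))
    return ensemble + (perturbed - ensemble @ H.T) @ K.T

# Toy augmented state: [concentration, release rate]; only concentration
# is observed, but the prior correlation between the two components lets
# the filter reconstruct the unobserved release rate.
rng = np.random.default_rng(0)
rate_prior = rng.normal(1.0, 0.5, 100)            # uncertain release rate
conc_prior = 3.0 * rate_prior + rng.normal(0.0, 0.2, 100)
ens = np.column_stack([conc_prior, rate_prior])
H = np.array([[1.0, 0.0]])
analysis = enkf_update(ens, np.array([6.0]), 0.3, H)
```

After assimilating a concentration of 6.0 units, the ensemble-mean release rate moves from its prior of about 1.0 toward 2.0, the value consistent with the assumed linear dispersion relation, illustrating how off-site monitoring data can reconstruct a source term.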

  2. Improved Bayesian Infrasonic Source Localization for regional infrasound

    DOE PAGES

    Blom, Philip S.; Marcillo, Omar; Arrowsmith, Stephen J.

    2015-10-20

    The Bayesian Infrasonic Source Localization (BISL) methodology and the mathematical framework used therein are examined and simplified, providing a generalized method of estimating the source location and time for an infrasonic event. The likelihood function describing an infrasonic detection used in BISL has been redefined to include the von Mises distribution developed in directional statistics and propagation-based, physically derived celerity-range and azimuth-deviation models. Frameworks for constructing propagation-based celerity-range and azimuth-deviation statistics are presented to demonstrate how stochastic propagation modelling methods can be used to improve the precision and accuracy of the posterior probability density function describing the source localization. Infrasonic signals recorded at a number of arrays in the western United States produced by rocket motor detonations at the Utah Test and Training Range are used to demonstrate the application of the new mathematical framework and to quantify the improvement obtained by using the stochastic propagation modelling methods. Using propagation-based priors, the spatial and temporal confidence bounds of the source decreased by more than 40 per cent in all cases and by as much as 80 per cent in one case. Further, the accuracy of the estimates remained high, keeping the ground truth within the 99 per cent confidence bounds for all cases.
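A detection likelihood of the kind described above can be sketched as a von Mises term on the azimuth deviation multiplied by a Gaussian celerity term. This is a simplified stand-in for the propagation-based statistics in the paper; all parameter values below (concentration kappa, celerity mean and spread) are hypothetical:

```python
import numpy as np

def vonmises_logpdf(theta, mu, kappa):
    """Log density of the von Mises distribution on angles (radians);
    np.i0 is the modified Bessel function of the first kind, order 0."""
    return kappa * np.cos(theta - mu) - np.log(2.0 * np.pi * np.i0(kappa))

def detection_loglike(obs_azimuth, obs_celerity, pred_azimuth,
                      cel_mean, cel_std, kappa):
    """Joint log likelihood of one infrasonic detection: a von Mises term
    for the azimuth deviation plus a Gaussian celerity term."""
    az = vonmises_logpdf(obs_azimuth, pred_azimuth, kappa)
    cel = (-0.5 * ((obs_celerity - cel_mean) / cel_std) ** 2
           - np.log(cel_std * np.sqrt(2.0 * np.pi)))
    return az + cel

# A detection nearly aligned with the predicted back-azimuth scores far
# higher than one 35 degrees off (celerities in km/s are illustrative)
good = detection_loglike(np.deg2rad(41.0), 0.30, np.deg2rad(40.0), 0.29, 0.02, 50.0)
bad = detection_loglike(np.deg2rad(75.0), 0.30, np.deg2rad(40.0), 0.29, 0.02, 50.0)
```

The von Mises distribution is the natural Gaussian analogue on the circle, which is why it appears here instead of a plain normal density on azimuth residuals that would mishandle wrap-around at 0/360 degrees.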

  3. Monte Carlo Perturbation Theory Estimates of Sensitivities to System Dimensions

    DOE PAGES

    Burke, Timothy P.; Kiedrowski, Brian C.

    2017-12-11

    Here, Monte Carlo methods are developed using adjoint-based perturbation theory and the differential operator method to compute the sensitivities of the k-eigenvalue, linear functions of the flux (reaction rates), and bilinear functions of the forward and adjoint flux (kinetics parameters) to system dimensions for uniform expansions or contractions. The calculation of sensitivities to system dimensions requires computing scattering and fission sources at material interfaces using collisions occurring at the interface—which is a set of events with infinitesimal probability. Kernel density estimators are used to estimate the source at interfaces using collisions occurring near the interface. The methods for computing sensitivities of linear and bilinear ratios are derived using the differential operator method and adjoint-based perturbation theory and are shown to be equivalent to methods previously developed using a collision history–based approach. The methods for determining sensitivities to system dimensions are tested on a series of fast, intermediate, and thermal critical benchmarks as well as a pressurized water reactor benchmark problem with iterated fission probability used for adjoint-weighting. The estimators are shown to agree within 5% and 3σ of reference solutions obtained using direct perturbations with central differences for the majority of test problems.

  4. Quantifying the flow rate of the Deepwater Horizon Macondo Well oil spill

    NASA Astrophysics Data System (ADS)

    Camilli, R.; Bowen, A.; Yoerger, D. R.; Whitcomb, L. L.; Techet, A. H.; Reddy, C. M.; Sylva, S.; Seewald, J.; di Iorio, D.; Whoi Flow Rate Measurement Group

    2010-12-01

    The Deepwater Horizon blowout in the Mississippi Canyon block 252 of the Gulf of Mexico created the largest recorded offshore oil spill. The well outflow’s multiple leak sources, turbulent multiphase flow, tendency for hydrate formation, and extreme source depth of 1500 m below the sea surface complicated the quantitative estimation of oil and gas leakage rates. We present methods and results from a U.S. Coast Guard sponsored flow assessment study of the Deepwater Horizon’s damaged blowout preventer and riser. This study utilized a remotely operated vehicle equipped with in-situ acoustic sensors (a Doppler sonar and an imaging multibeam sonar) and isobaric gas-tight fluid samplers to directly measure outflow from the damaged well. Findings from this study indicate oil release rates and total release volume estimates that corroborate estimates made by the federal government’s Flow Rate Technical Group using non-acoustic techniques. The acoustic survey methods reported here provide a means for estimating fluid flow rates in subsurface environments and are potentially useful for a diverse range of oceanographic applications. Photograph: the Discoverer Enterprise burning natural gas collected from the Macondo well blowout preventer during flow measurement operations. Copyright Woods Hole Oceanographic Institution.

  5. Analyses of atmospheric pollutants in Hong Kong and the Pearl River Delta by observation-based methods

    NASA Astrophysics Data System (ADS)

    Yuan, Zibing

    Despite the continuous pollution-control efforts of the Hong Kong (HK) environmental authorities over the past decade, air pollution in HK has been deteriorating in recent years. In this thesis work, a variety of observation-based approaches were applied to analyze air pollutant monitoring data in HK and the Pearl River Delta (PRD) area. The two major pollutants of interest are ozone and respirable suspended particulate (RSP, or PM10), which exceed the Air Quality Objective most frequently. Receptor models serve as powerful tools for source identification, estimation of source contributions, and source localization when incorporated with wind profiles. This thesis work presents the first-ever application of two advanced receptor models, positive matrix factorization (PMF) and Unmix, to the PM10 and VOC speciation data in HK. Speciated PM10 data were collected from a monitoring network in HK between July 1998 and December 2005. Seven and nine sources were identified by Unmix and PMF, respectively. Overall, secondary sulfate and vehicle emissions gave the largest contributions to PM10 (27% each), followed by biomass burning/waste incineration (13%) and secondary nitrate (11%). Sources were classified as local or regional based on their seasonal and spatial variations as well as source directional analysis. Regional sources accounted for about 56% of the ambient PM10 mass on an annual basis, and even more (67%) during winter. Regional contributions also showed an increasing trend, with their annual averaged fraction rising from 53% in 1999 to 64% in 2005. The particulate pollution in HK is therefore sensitive to regional influence, and regional air quality management strategies are crucial in reducing PM levels in HK. On the other hand, many species with significant adverse health impacts were produced locally. Local control measures should be strengthened for better protection of public health. 
Secondary organic carbon (SOC) can be a significant portion of OC in particles. SOC was examined by using PMF-derived source apportionment results and estimated as the sum of OC present in the secondary sources. The annual average SOC in HK was estimated to be 4.1 µgC/m3, while the summertime average was 1.8 µgC/m3 and the wintertime average was 6.9 µgC/m3. In comparison with the SOC estimates by the PMF method, the method that uses elemental carbon (EC) as the tracer for primary OC to derive SOC overestimates by 78-210% for the summer samples and by 9-49% for the winter samples. The overestimation by the EC tracer method results from the inability to obtain a single OC/EC ratio that represents a mixture of primary sources varying in time and space. It was found that SOC and secondary sulfate had their seasonal variations in sync, suggesting common factors controlling their formation. The close tracking of SOC and sulfate appears to suggest that the in-cloud pathway is also important for SOC formation. Speciated VOCs were obtained at four air quality monitoring stations (AQMSs) in HK from August 2002 to August 2003. Both Unmix and PMF identified five stable sources. Mixed solvents gave the largest contributions, ranging from 34% at rural Tap Mun to 52% at urban Central/Western. The wind directional analysis indicates the main source location in the central PRD area. Regional transport accounts for about 19% of the total VOC, while the two local and vehicle-related sources are responsible for 27%. By weighing the abundance and reactivity of each VOC species, mixed solvent use is estimated to be the largest contributor to local ozone, with contributions ranging from 42% at Tung Chung to 57% at Tap Mun. The next largest is vehicle exhaust, accounting for about 28% in Yuen Long. Biogenic emission is responsible for nearly 20% of the ozone generation at Tap Mun, but this figure is likely underestimated. 
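The EC tracer method discussed above rests on a simple identity: SOC is total OC minus the primary OC inferred from EC times an assumed primary OC/EC ratio. A minimal sketch with hypothetical concentrations (the numbers are illustrative, not from the thesis):

```python
def soc_ec_tracer(oc_total, ec, primary_oc_ec_ratio):
    """EC tracer estimate of secondary organic carbon:
    SOC = OC_total - EC * (OC/EC)_primary."""
    return oc_total - ec * primary_oc_ec_ratio

# Hypothetical sample (µgC/m3)
oc_total = 10.0
ec = 2.0
soc = soc_ec_tracer(oc_total, ec, 2.5)
```

Note how sensitive the result is to the assumed primary ratio: raising it from 2.5 to 4.0 in this toy sample cuts the SOC estimate from 5.0 to 2.0 µgC/m3, which is the mechanism behind the biases described above when a single ratio must stand in for a mixture of primary sources.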
Distinct secondary inorganic aerosol (SIA) responses are expected to reductions of different precursors as a result of the non-linear chemical reactions involved in SIA formation. The last part of this thesis work concerns developing a chemical box model to determine the sensitivity of SIA to changes in the emissions of its precursors. The model is composed of three parts. The first part is a time-dependent module to estimate the temporal variation of all species, before and after the emission has been perturbed. The second part is a gas-particle conversion module that partitions the semi-volatile species into the two phases. The last module then calculates the aerosol-forming potential for the entire simulation period. It is estimated that SIA shows the largest response to the reduction of SO2 emissions in Yuen Long (YL), followed by NH3 and NOx. Significant regional transport of SIA is found in YL, limiting what can be inferred about the relative effectiveness of controlling different precursors. Finally, future research directions are proposed to better refine and validate the OBM performance for SIA simulation.

  6. Some conservation issues for the dynamical cores of NWP and climate models

    NASA Astrophysics Data System (ADS)

    Thuburn, J.

    2008-03-01

    The rationale for designing atmospheric numerical model dynamical cores with certain conservation properties is reviewed. The conceptual difficulties associated with the multiscale nature of realistic atmospheric flow, and its lack of time-reversibility, are highlighted. A distinction is made between robust invariants, which are conserved or nearly conserved in the adiabatic and frictionless limit, and non-robust invariants, which are not conserved in the limit even though they are conserved by exactly adiabatic frictionless flow. For non-robust invariants, a further distinction is made between processes that directly transfer some quantity from large to small scales, and processes involving a cascade through a continuous range of scales; such cascades may either be explicitly parameterized, or handled implicitly by the dynamical core numerics, accepting the implied non-conservation. An attempt is made to estimate the relative importance of different conservation laws. It is argued that satisfactory model performance requires spurious sources of a conservable quantity to be much smaller than any true physical sources; for several conservable quantities the magnitudes of the physical sources are estimated in order to provide benchmarks against which any spurious sources may be measured.

  7. The structure of the ISM in the Zone of Avoidance by high-resolution multi-wavelength observations

    NASA Astrophysics Data System (ADS)

    Tóth, L. V.; Doi, Y.; Pinter, S.; Kovács, T.; Zahorecz, S.; Bagoly, Z.; Balázs, L. G.; Horvath, I.; Racz, I. I.; Onishi, T.

    2018-05-01

    We estimate the column density of the Galactic foreground interstellar medium (GFISM) in the direction of extragalactic sources. All-sky AKARI FIS infrared survey data can be used to trace the GFISM with a resolution of 2 arcminutes. The AKARI-based GFISM hydrogen column density estimates are compared with similar quantities based on HI 21cm measurements of various resolutions and on Planck results. High spatial resolution observations of the GFISM may be important for recalculating the physical parameters of gamma-ray burst (GRB) host galaxies using the updated foreground parameters.

  8. Subdiffraction incoherent optical imaging via spatial-mode demultiplexing: Semiclassical treatment

    NASA Astrophysics Data System (ADS)

    Tsang, Mankei

    2018-02-01

    I present a semiclassical analysis of a spatial-mode demultiplexing (SPADE) measurement scheme for far-field incoherent optical imaging under the effects of diffraction and photon shot noise. Building on previous results that assume two point sources or the Gaussian point-spread function, I generalize SPADE for a larger class of point-spread functions and evaluate its errors in estimating the moments of an arbitrary subdiffraction object. Compared with the limits to direct imaging set by the Cramér-Rao bounds, the results show that SPADE can offer far superior accuracy in estimating second- and higher-order moments.

  9. TU-H-CAMPUS-IeP1-01: Bias and Computational Efficiency of Variance Reduction Methods for the Monte Carlo Simulation of Imaging Detectors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sharma, D; Badano, A; Sempau, J

    Purpose: Variance reduction techniques (VRTs) are employed in Monte Carlo simulations to obtain estimates with reduced statistical uncertainty for a given simulation time. In this work, we study the bias and efficiency of a VRT for estimating the response of imaging detectors. Methods: We implemented Directed Sampling (DS), preferentially directing a fraction of emitted optical photons directly towards the detector by altering the isotropic model. The weight of each optical photon is appropriately modified to maintain simulation estimates unbiased. We use a Monte Carlo tool called fastDETECT2 (part of the hybridMANTIS open-source package) for optical transport, modified for VRT. The weight of each photon is calculated as the ratio of original probability (no VRT) and the new probability for a particular direction. For our analysis of bias and efficiency, we use pulse height spectra, point response functions, and Swank factors. We obtain results for a variety of cases including analog (no VRT, isotropic distribution), and DS with 0.2 and 0.8 optical photons directed towards the sensor plane. We used 10,000 25-keV primaries. Results: The Swank factor for all cases in our simplified model converged fast (within the first 100 primaries) to a stable value of 0.9. The root mean square error per pixel for DS VRT for the point response function between analog and VRT cases was approximately 5e-4. Conclusion: Our preliminary results suggest that DS VRT does not affect the estimate of the mean for the Swank factor. Our findings indicate that it may be possible to design VRTs for imaging detector simulations to increase computational efficiency without introducing bias.
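The unbiasedness argument above (weight = original probability / biased probability) can be demonstrated with a one-dimensional toy: estimate the probability that an isotropically emitted photon lands on a detector covering a small fraction of the emission sphere, while preferentially directing photons at it. This sketch is a generic importance-sampling illustration, not the fastDETECT2 implementation:

```python
import random

def directed_sampling_estimate(n, f_detector, q, seed=1):
    """Estimate the analog probability that an isotropic emission hits a
    detector covering fraction f_detector of the emission sphere.
    Directed sampling sends a photon toward the detector with probability
    q; its statistical weight f_detector / q (analog density divided by
    biased density over the detector region) keeps the estimator unbiased."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        if rng.random() < q:            # photon directed at the detector
            total += f_detector / q     # weighted hit tally
        # otherwise the photon is emitted elsewhere and misses (tally 0)
    return total / n

analog_prob = 0.01                       # detector subtends 1% of the sphere
est = directed_sampling_estimate(200_000, analog_prob, 0.8)
```

With q = 0.8, hits occur 80 times more often than in the analog case, but each carries a weight of 0.0125, so the estimator's mean stays at the true 1% while its variance drops, which is the whole point of the technique.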

  10. Reducing the cost of Ca-based direct air capture of CO2.

    PubMed

    Zeman, Frank

    2014-10-07

    Direct air capture, the chemical removal of CO2 directly from the atmosphere, may play a role in mitigating future climate risk or form the basis of a sustainable transportation infrastructure. The current discussion is centered on the estimated cost of the technology and its link to "overshoot" trajectories, where atmospheric CO2 levels are actively reduced later in the century. The American Physical Society (APS) published a report, later updated, estimating the cost of a one million tonne CO2 per year air capture facility constructed today that highlights several fundamental concepts of chemical air capture. These fundamentals are viewed through the lens of a chemical process that cycles between removing CO2 from the air and releasing the absorbed CO2 in concentrated form. This work builds on the APS report to investigate the effect of modifications to the air capture system based on suggestions in the report and subsequent publications. The work shows that reduced carbon electricity and plastic packing materials (for the contactor) may have significant effects on the overall price, reducing the APS estimate from $610 to $309/tCO2 avoided. Such a reduction does not challenge postcombustion capture from point sources, estimated at $80/tCO2, but does make air capture a feasible alternative for the transportation sector and a potential negative emissions technology. Furthermore, air capture represents atmospheric reductions rather than simply avoided emissions.

  11. The importance of carbon footprint estimation boundaries.

    PubMed

    Matthews, H Scott; Hendrickson, Chris T; Weber, Christopher L

    2008-08-15

    Because of increasing concern about global climate change and carbon emissions as a causal factor, many companies and organizations are pursuing "carbon footprint" projects to estimate their own contributions to global climate change. Protocol definitions from carbon registries help organizations analyze their footprints. The scope of these protocols varies but generally suggests estimating only direct emissions and emissions from purchased energy, with less focus on supply chain emissions. In contrast, approaches based on comprehensive environmental life-cycle assessment methods are available to track total emissions across the entire supply chain, and experience suggests that following narrowly defined estimation protocols will generally lead to large underestimates of carbon emissions for providing products and services. Direct emissions from an industry are, on average, only 14% of the total supply chain carbon emissions (often called Tier 1 emissions), and direct emissions plus industry energy inputs are, on average, only 26% of the total supply chain emissions (often called Tier 1 and 2 emissions). Without full knowledge of their footprints, firms will be unable to pursue the most cost-effective carbon mitigation strategies. We suggest that firms use the screening-level analysis described here to set the bounds of their footprinting strategy to ensure that they do not ignore large sources of environmental effects across their supply chains. Such information can help firms pursue carbon and environmental emission mitigation projects not only within their own plants but also across their supply chain.
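    The boundary argument reduces to simple arithmetic: with the average shares reported above, a Tier 1+2 protocol captures only about a quarter of supply-chain emissions. A toy screening check (illustrative numbers only):

```python
total = 100.0              # total supply-chain carbon emissions (arbitrary units)
tier1 = 0.14 * total       # direct (Tier 1) emissions: ~14% on average
tier1_2 = 0.26 * total     # Tier 1 plus purchased energy (Tier 2): ~26% on average
missed = 1.0 - tier1_2 / total
print(f"a Tier 1+2 boundary misses {missed:.0%} of supply-chain emissions")
```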

  12. Preliminary results from direct-to-facility vaccine deliveries in Kano, Nigeria.

    PubMed

    Aina, Muyi; Igbokwe, Uchenna; Jegede, Leke; Fagge, Rabiu; Thompson, Adam; Mahmoud, Nasir

    2017-04-19

    As part of its vaccine supply chain redesign efforts, Kano state now pushes vaccines directly from 6 state stores to primary health centers equipped with solar refrigerators. Our objective is to describe preliminary results from the first 20 months of Kano's direct vaccine delivery operations. This is a retrospective review of Kano's direct vaccine delivery program. We analyzed trends in health facility vaccine stock levels, and examined the relationship between stock-out rates and each of cascade vaccine deliveries and timeliness of deliveries. Analysis of vaccination trends was based on administrative data from 27 sentinel health facilities. Costs for both the in-sourced and out-sourced approaches were estimated using a bottom-up, model-based approach. Overall stock adequacy increased from 54% in the first delivery cycle to 68% by cycle 33. Conversely, stock-out rates decreased from 41% to 10% over the same period. Similar trends were observed in the out-sourced and in-sourced programs. Stock-out rates rose incrementally with an increasing number of cascade facilities, and delays in vaccine deliveries correlated strongly with stock-out rates. Recognizing that stock availability is one of many factors contributing to vaccinations, we nonetheless compared pre- and post-direct-delivery vaccinations in sentinel facilities, and found statistically significant upward trends for 4 of 6 antigens. One antigen (measles) showed an upward trend that was not statistically significant. Hepatitis B vaccinations declined during the period. Overall, there appeared to be a one-year lag between the commencement of direct deliveries and the increase in the number of vaccinations. The weighted average cost per delivery is US$29.8, and the cost per child immunized is US$0.7 per year. Direct vaccine delivery to health facilities in Kano, through a streamlined architecture, has resulted in decreased stock-outs and improved stock adequacy. Concurrent operation of in-sourced and out-sourced programs has enabled Kano to build in-house logistics capabilities. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.

  13. Estimation of Phosphorus Emissions in the Upper Iguazu Basin (brazil) Using GIS and the More Model

    NASA Astrophysics Data System (ADS)

    Acosta Porras, E. A.; Kishi, R. T.; Fuchs, S.; Hilgert, S.

    2016-06-01

    Pollution emissions into a drainage basin have a direct impact on surface water quality. These emissions result from human activities and turn into pollution loads when they reach the water bodies, as point or diffuse sources. Their pollution potential depends on the characteristics and quantity of the transported materials. The estimation of pollution loads can assist decision-making in basin management. Knowledge about the potential pollution sources allows for a prioritization of pollution control policies to achieve the desired water quality. Consequently, it helps to avoid problems such as eutrophication of water bodies. The focus of the research described in this study is phosphorus emissions into river basins. The study area is the upper Iguazu basin, which lies in the northeast region of the State of Paraná, Brazil, covering about 2,965 km2; around 4 million inhabitants live concentrated on just 16% of its area. The MoRE (Modeling of Regionalized Emissions) model was used to estimate phosphorus emissions. MoRE uses empirical approaches to model processes in analytical units, is capable of using spatially distributed parameters, and covers emissions from both point and non-point sources. In order to model the processes, the basin was divided into 152 analytical units with an average size of 20 km2. Available data were organized in a GIS environment, using layers such as precipitation, the Digital Terrain Model from a 1:10,000 scale map, and soils and land cover derived from remote sensing imagery. Further data were used, such as point pollution discharges and statistical socio-economic data. 
The model shows that one of the main pollution sources in the upper Iguazu basin is domestic sewage, which enters the river as a point source (effluents of treatment stations) and/or as diffuse pollution caused by failures of sanitary sewer systems or clandestine sewer discharges, accounting for about 56% of the emissions. The second significant share of emissions comes from direct runoff or groundwater, responsible for 32% of the total. Finally, the agricultural erosion and industry pathways represent 12% of emissions. This study shows that MoRE is capable of producing valid emission calculations on a relatively reduced input data basis.

  14. Methane Flux Estimation from Point Sources using GOSAT Target Observation: Detection Limit and Improvements with Next Generation Instruments

    NASA Astrophysics Data System (ADS)

    Kuze, A.; Suto, H.; Kataoka, F.; Shiomi, K.; Kondo, Y.; Crisp, D.; Butz, A.

    2017-12-01

    Atmospheric methane (CH4) has an important role in global radiative forcing of climate, but its emission estimates have larger uncertainties than those of carbon dioxide (CO2). The area of anthropogenic emission sources is usually much smaller than 100 km2. The Thermal And Near infrared Sensor for carbon Observation Fourier-Transform Spectrometer (TANSO-FTS) onboard the Greenhouse gases Observing SATellite (GOSAT) has measured CO2 and CH4 column density using sunlight reflected from the earth's surface. It has an agile pointing system, and its footprint can cover 87 km2 with a single detector. By specifying pointing angles and observation times for every orbit, TANSO-FTS can target various CH4 point sources together with reference points every 3 days over years. We selected a reference point that represents CH4 background density before or after targeting a point source. By combining the satellite-measured enhancement of the CH4 column density with surface-measured wind data or estimates from the Weather Research and Forecasting (WRF) model, we estimated CH4 emission amounts. Here, we selected two sites on the US West Coast, where clear-sky frequency is high and a series of data are available. The natural gas leak at Aliso Canyon showed a large enhancement and its decrease with time since the initial blowout. We present a time series of flux estimates assuming the source is a single point without influx. The cattle feedlot in Chino, California has a weather station within the TANSO-FTS footprint. The wind speed is monitored continuously, and the wind direction is stable at the time of GOSAT overpass. The large TANSO-FTS footprint and strong wind decrease the enhancement below the noise level. Weak wind shows enhancements in CH4, but the velocity data have large uncertainties. We show the detection limit of single samples and how to reduce uncertainty using a time series of satellite data. 
We propose that the next generation of instruments for accurate anthropogenic CO2 and CH4 flux estimation have improved spatial resolution (~1 km2) to further enhance column density changes. We also propose adding imaging capability to monitor plume orientation. We will present laboratory model results and a sampling pattern optimization study that combines local emission source and global survey observations.
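    A point-source flux estimate of the kind described combines the column enhancement with advection by the wind. The mass-balance sketch below is a simplification of such an approach (the dry-air column value and the plume numbers are hypothetical illustrations, not GOSAT retrievals):

```python
M_CH4 = 16.04e-3       # kg per mol of CH4
AIR_COLUMN = 3.57e5    # mol of dry air per m^2 (~1013 hPa surface pressure)

def point_source_flux(dX_ppb, wind_ms, width_m):
    """Mass-balance estimate: flux = column enhancement x wind x plume width."""
    dcol = dX_ppb * 1e-9 * AIR_COLUMN            # CH4 enhancement, mol/m^2
    return dcol * M_CH4 * wind_ms * width_m      # kg/s

# Hypothetical plume: 50 ppb enhancement, 3 m/s wind, 5 km cross-wind extent
q = point_source_flux(50.0, 3.0, 5000.0)
print(f"{q:.1f} kg/s (~{q * 86.4:.0f} t/day)")
```

    The linear dependence on wind speed is why, as noted above, large wind-velocity uncertainties translate directly into flux uncertainties.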

  15. Earthquake Source Parameter Estimates for the Charlevoix and Western Quebec Seismic Zones in Eastern Canada

    NASA Astrophysics Data System (ADS)

    Onwuemeka, J.; Liu, Y.; Harrington, R. M.; Peña-Castro, A. F.; Rodriguez Padilla, A. M.; Darbyshire, F. A.

    2017-12-01

    The Charlevoix Seismic Zone (CSZ), located in eastern Canada, experiences a high rate of intraplate earthquakes, hosting more than six M >6 events since the 17th century. The seismicity rate is similarly high in the Western Quebec seismic zone (WQSZ) where an MN 5.2 event was reported on May 17, 2013. A good understanding of seismicity and its relation to the St-Lawrence paleorift system requires information about event source properties, such as static stress drop and fault orientation (via focal mechanism solutions). In this study, we conduct a systematic estimate of event source parameters using 1) hypoDD to relocate event hypocenters, 2) spectral analysis to derive corner frequency, magnitude, and hence static stress drops, and 3) first arrival polarities to derive focal mechanism solutions of selected events. We use a combined dataset for 817 earthquakes cataloged between June 2012 and May 2017 from the Canadian National Seismograph Network (CNSN), and temporary deployments from the QM-III Earthscope FlexArray and McGill seismic networks. We first relocate 450 events using P and S-wave differential travel-times refined with waveform cross-correlation, and compute focal mechanism solutions for all events with impulsive P-wave arrivals at a minimum of 8 stations using the hybridMT moment tensor inversion algorithm. We then determine corner frequency and seismic moment values by fitting S-wave spectra on transverse components at all stations for all events. We choose the final corner frequency and moment values for each event using the median estimate at all stations. We use the corner frequency and moment estimates to calculate moment magnitudes, static stress-drop values and rupture radii, assuming a circular rupture model. We also investigate scaling relationships between parameters, directivity, and compute apparent source dimensions and source time functions of 15 M 2.4+ events from second-degree moment estimates. 
To first order, the source dimension estimates from both methods generally agree. We observe higher corner frequencies and higher stress drops (ranging from 20 to 70 MPa), typical of intraplate seismicity in comparison with interplate seismicity. We follow a similar approach to study 25 MN 3+ events reported in the WQSZ using data recorded by the CNSN and USArray Transportable Array.
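    The circular-rupture relations behind such stress-drop estimates are standard. A sketch using the Brune (1970) corner-frequency model and the Hanks-Kanamori moment magnitude, for a hypothetical event (the constants k and beta are common assumptions, not necessarily the values used by the authors):

```python
import math

def brune_source(M0, fc, beta=3500.0, k=0.37):
    """Circular-crack source parameters from moment M0 (N*m) and corner
    frequency fc (Hz): radius r = k * beta / fc (Brune, 1970; k ~ 0.32 in
    Madariaga, 1976) and static stress drop = (7/16) * M0 / r^3."""
    r = k * beta / fc                        # rupture radius, m
    return r, (7.0 / 16.0) * M0 / r**3       # (m, Pa)

def moment_magnitude(M0):
    """Hanks & Kanamori (1979): Mw = (2/3)(log10 M0 - 9.1), M0 in N*m."""
    return (2.0 / 3.0) * (math.log10(M0) - 9.1)

# Hypothetical event: Mw ~2.9 with a 20 Hz corner frequency
M0 = 2.8e13
r, dsigma = brune_source(M0, fc=20.0)
print(f"Mw {moment_magnitude(M0):.1f}, r = {r:.0f} m, "
      f"stress drop = {dsigma / 1e6:.0f} MPa")
```

    The example lands in the tens of MPa, consistent with the 20-70 MPa range reported for these intraplate events.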

  16. Combining Radiography and Passive Measurements for Radiological Threat Localization in Cargo

    NASA Astrophysics Data System (ADS)

    Miller, Erin A.; White, Timothy A.; Jarman, Kenneth D.; Kouzes, Richard T.; Kulisek, Jonathan A.; Robinson, Sean M.; Wittman, Richard A.

    2015-10-01

    Detecting shielded special nuclear material (SNM) in a cargo container is a difficult problem, since shielding reduces the amount of radiation escaping the container. Radiography provides information that is complementary to that provided by passive gamma-ray detection systems: while not directly sensitive to radiological materials, radiography can reveal highly shielded regions that may mask a passive radiological signal. Combining these measurements has the potential to improve SNM detection, either through improved sensitivity or by providing a solution to the inverse problem to estimate source properties (strength and location). We present a data-fusion method that uses a radiograph to provide an estimate of the radiation-transport environment for gamma rays from potential sources. This approach makes quantitative use of radiographic images without relying on image interpretation, and results in a probabilistic description of likely source locations and strengths. We present results for this method for a modeled test case of a cargo container passing through a plastic-scintillator-based radiation portal monitor and a transmission-radiography system. We find that a radiograph-based inversion scheme allows for localization of a low-noise source placed randomly within the test container to within 40 cm, compared to 70 cm for triangulation alone, while strength estimation accuracy is improved by a factor of six. Improvements are seen in regions of both high and low shielding, but are most pronounced in highly shielded regions. The approach proposed here combines transmission and emission data in a manner that has not been explored in the cargo-screening literature, advancing the ability to accurately describe a hidden source based on currently-available instrumentation.
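    The inversion idea — use the radiograph's attenuation map as the transport environment when scoring candidate source locations against passive counts — can be sketched on a 1-D toy container. Everything here (geometry, attenuation values, a known source strength) is a simplifying assumption for illustration, not the modeled portal-monitor system:

```python
import numpy as np

rng = np.random.default_rng(0)

# 1-D toy container: 40 voxels of 0.25 m; the radiograph supplies mu(x)
nx, dx = 40, 0.25
mu = np.full(nx, 0.1)
mu[14:24] = 1.5                      # a heavily shielded region (1/m)

def expected_counts(src, strength):
    """Mean counts in panel detectors at either end of the container, each
    attenuated by the material the radiograph says lies along the path."""
    left = strength * np.exp(-np.sum(mu[:src]) * dx)
    right = strength * np.exp(-np.sum(mu[src + 1:]) * dx)
    return np.array([left, right])

true_src, strength = 28, 5000.0      # strength assumed known to keep the toy simple
obs = rng.poisson(expected_counts(true_src, strength))

# Poisson log-likelihood of the observed counts for every candidate voxel
ll = [np.sum(obs * np.log(expected_counts(s, strength))
             - expected_counts(s, strength)) for s in range(nx)]
est = int(np.argmax(ll))
print(est)   # should land on (or very near) voxel 28
```

    Without the radiograph, the asymmetry between the two detectors would be misattributed to source position alone; folding in the attenuation map is what sharpens the localization, as in the cited results.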

  17. Source signature estimation from multimode surface waves via mode-separated virtual real source method

    NASA Astrophysics Data System (ADS)

    Gao, Lingli; Pan, Yudi

    2018-05-01

    The correct estimation of the seismic source signature is crucial to exploration geophysics. Based on seismic interferometry, the virtual real source (VRS) method provides a model-independent way for source signature estimation. However, when encountering multimode surface waves, which are commonly seen in the shallow seismic survey, strong spurious events appear in seismic interferometric results. These spurious events introduce errors in the virtual-source recordings and reduce the accuracy of the source signature estimated by the VRS method. In order to estimate a correct source signature from multimode surface waves, we propose a mode-separated VRS method. In this method, multimode surface waves are mode separated before seismic interferometry. Virtual-source recordings are then obtained by applying seismic interferometry to each mode individually. Therefore, artefacts caused by cross-mode correlation are excluded in the virtual-source recordings and the estimated source signatures. A synthetic example showed that a correct source signature can be estimated with the proposed method, while strong spurious oscillation occurs in the estimated source signature if we do not apply mode separation first. We also applied the proposed method to a field example, which verified its validity and effectiveness in estimating seismic source signature from shallow seismic shot gathers containing multimode surface waves.
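    The interferometric step at the heart of the VRS method — cross-correlating two recordings so one receiver acts as a virtual source, with the peak lag recovering the inter-receiver traveltime — can be illustrated with a single-mode toy example (a Ricker wavelet and made-up offsets; in the multimode case, cross-mode terms would add the spurious peaks that the proposed mode separation removes):

```python
import numpy as np

fs, n = 500.0, 1024
t = np.arange(n) / fs

def ricker(t0, f0=30.0):
    """Ricker wavelet centred at time t0 (s)."""
    a = (np.pi * f0 * (t - t0)) ** 2
    return (1.0 - 2.0 * a) * np.exp(-a)

# One surface-wave mode, phase velocity c, recorded at two offsets x1 < x2
x1, x2, c = 100.0, 250.0, 300.0          # m, m, m/s
u1, u2 = ricker(x1 / c), ricker(x2 / c)

# Seismic interferometry: cross-correlating the two recordings yields a
# virtual-source trace whose peak sits at the inter-receiver traveltime
xcorr = np.correlate(u2, u1, mode="full")
lag = (np.argmax(xcorr) - (n - 1)) / fs
print(lag)   # (x2 - x1) / c = 0.5 s
```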

  18. Quantification of A Tropical Missing Source From Ocean For The Carbonyl Sulfide Global Budget

    NASA Astrophysics Data System (ADS)

    Kuai, Le; Worden, John; Campbell, Elliott; Kulawik, Susan; Lee, Meemong; Montzka, Stephen; Berry, Joe; Baker, Ian; Denning, Scott; Kawa, Randy; Bian, Huisheng; Yung, Yuk

    2015-04-01

    Quantifying the carbonyl sulfide (OCS) surface fluxes contributes to the understanding of both the sulfur cycle and the carbon cycle. Although the major sources and sinks of OCS are well recognized, the uncertainties of the individual fluxes remain large. A recognized large underestimate of ecosystem uptake suggests a large missing ocean source over the tropics to compensate for the increased sink. Before the Aura Tropospheric Emission Spectrometer (TES) OCS data were released, however, no direct measurements were available to test this hypothesis. In this study, we performed a flux inversion to update the fluxes from TES OCS. We then compared three experimental GEOS-Chem forward model runs, driven by different fluxes based on the TES inversion, to HIPPO aircraft estimates in the free troposphere and to NOAA near-surface observations. The TES data support the hypothesis that a large source from the tropical ocean is missing in the current OCS global budget and suggest that the source is even larger than that proposed in Berry et al. (2013). Consequently, this implies a larger land uptake and increases the estimates of GPP. The TES data also suggest the missing oceanic source is not symmetric about the equator: it is strong and distributed further north of the equator (to 40°N) but weak and narrow south of the equator (to 20°S).

  19. Zonal Aerosol Direct and Indirect Radiative Forcing using Combined CALIOP, CERES, CloudSat, and MODIS Data

    NASA Astrophysics Data System (ADS)

    Miller, W. F.; Kato, S.; Rose, F. G.; Sun-Mack, S.

    2009-12-01

    Under the NASA Energy and Water Cycle System (NEWS) program, cloud and aerosol properties derived from CALIPSO, CloudSat, and MODIS data and matched to the CERES footprint are used for irradiance profile computations. Irradiance profiles are included in the publicly available product, CCCM. In addition to the MODIS- and CALIPSO-generated aerosol, aerosol optical thickness is calculated over ocean by processing MODIS radiance through the Stowe-Ignatov algorithm. The CERES cloud mask and property algorithms are used with MODIS radiance to provide additional cloud information to accompany the actively sensed data. The passively sensed data are the only input to the standard CERES radiative flux products. The combined information is used as input to the NASA Langley Fu-Liou radiative transfer model to determine vertical profiles and top-of-atmosphere shortwave and longwave fluxes for pristine, all-sky, and aerosol conditions for the special data product. In this study, the three sources of aerosol optical thickness are compared directly, along with their influence on the calculated and measured TOA fluxes. Earlier studies indicate that the largest uncertainty in estimating direct aerosol forcing using aerosol optical thickness derived from passive sensors is caused by cloud contamination. With collocated CALIPSO data, we are able to estimate the frequency of occurrence of cloud contamination and its effect on the aerosol optical thickness and direct radiative effect estimates.

  20. The Chandra Source Catalog: User Interface

    NASA Astrophysics Data System (ADS)

    Bonaventura, Nina; Evans, I. N.; Harbo, P. N.; Rots, A. H.; Tibbetts, M. S.; Van Stone, D. W.; Zografou, P.; Anderson, C. S.; Chen, J. C.; Davis, J. E.; Doe, S. M.; Evans, J. D.; Fabbiano, G.; Galle, E.; Gibbs, D. G.; Glotfelty, K. J.; Grier, J. D.; Hain, R.; Hall, D. M.; He, X.; Houck, J. C.; Karovska, M.; Lauer, J.; McCollough, M. L.; McDowell, J. C.; Miller, J. B.; Mitschang, A. W.; Morgan, D. L.; Nichols, J. S.; Nowak, M. A.; Plummer, D. A.; Primini, F. A.; Refsdal, B. L.; Siemiginowska, A. L.; Sundheim, B. A.; Winkelman, S. L.

    2009-01-01

    The Chandra Source Catalog (CSC) is the definitive catalog of all X-ray sources detected by Chandra. The CSC is presented to the user in two tables: the Master Chandra Source Table and the Table of Individual Source Observations. Each distinct X-ray source identified in the CSC is represented by a single master source entry and one or more individual source entries. If a source is unaffected by confusion and pile-up in multiple observations, the individual source observations are merged to produce a master source. In each table, a row represents a source, and each column a quantity that is officially part of the catalog. The CSC contains positions and multi-band fluxes for the sources, as well as derived spatial, spectral, and temporal source properties. The CSC also includes associated source region and full-field data products for each source, including images, photon event lists, light curves, and spectra. The master source properties represent the best estimates of the properties of a source, and are presented in the following categories: Position and Position Errors, Source Flags, Source Extent and Errors, Source Fluxes, Source Significance, Spectral Properties, and Source Variability. The CSC Data Access GUI provides direct access to the source properties and data products contained in the catalog. The user may query the catalog database via a web-style search or an SQL command-line query. Each query returns a table of source properties, along with the option to browse and download associated data products. The GUI is designed to run in a web browser with Java version 1.5 or higher, and may be accessed via a link on the CSC website homepage (http://cxc.harvard.edu/csc/). As an alternative to the GUI, the contents of the CSC may be accessed directly through a URL, using the command-line tool, cURL. Support: NASA contract NAS8-03060 (CXC).

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stewart, Robert N.; Urban, Marie L.; Duchscherer, Samantha E.

    Understanding building occupancy is critical to a wide array of applications including natural hazards loss analysis, green building technologies, and population distribution modeling. Due to the expense of directly monitoring buildings, scientists rely in addition on a wide and disparate array of ancillary and open source information including subject matter expertise, survey data, and remote sensing information. These data are fused using data harmonization methods, which refer to a loose collection of formal and informal techniques for fusing data together to create viable content for building occupancy estimation. In this paper, we add to the current state of the art by introducing the Population Data Tables (PDT), a Bayesian model and informatics system for systematically arranging data and harmonization techniques into a consistent, transparent, knowledge learning framework that retains in the final estimation the uncertainty emerging from data, expert judgment, and model parameterization. PDT probabilistically estimates ambient occupancy in units of people/1000 ft2 for over 50 building types at the national and sub-national level, with the goal of providing global coverage. The challenge of global coverage led to the development of an interdisciplinary geospatial informatics system tool that provides the framework for capturing, storing, and managing open source data, handling subject matter expertise, and carrying out Bayesian analytics, as well as visualizing and exporting occupancy estimation results. We present the PDT project, situate the work within the larger community, and report on the progress of this multi-year project.
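    A minimal sketch of the kind of Bayesian fusion described — an expert-judgment prior updated by survey counts, with the posterior retaining uncertainty in the final estimate — using a conjugate Gamma-Poisson model (all numbers hypothetical; PDT's actual model is richer):

```python
# Expert prior: Gamma(alpha, beta) on occupancy (people per 1000 ft^2),
# prior mean alpha / beta = 2.0
alpha, beta = 4.0, 2.0
counts = [3, 1, 4, 2, 2]          # surveyed occupants in five 1000 ft^2 units

# Conjugate Gamma-Poisson update: posterior is Gamma(alpha + sum, beta + n)
alpha_post = alpha + sum(counts)
beta_post = beta + len(counts)
mean = alpha_post / beta_post
sd = alpha_post ** 0.5 / beta_post
print(f"posterior: {mean:.2f} +/- {sd:.2f} people per 1000 ft^2")
```

    The posterior standard deviation is the part that survives into downstream estimates, which is the "retains uncertainty" property the abstract emphasizes.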

  2. Novel directed search strategy to detect continuous gravitational waves from neutron stars in low- and high-eccentricity binary systems

    NASA Astrophysics Data System (ADS)

    Leaci, Paola; Astone, Pia; D'Antonio, Sabrina; Frasca, Sergio; Palomba, Cristiano; Piccinni, Ornella; Mastrogiovanni, Simone

    2017-06-01

    We describe a novel, very fast and robust, directed-search incoherent method (meaning that the phase information is lost) for periodic gravitational waves from neutron stars in binary systems. As a directed search, we assume the source sky position to be known with enough accuracy, but all other parameters (including orbital ones) are supposed to be unknown. We exploit the frequency modulation due to source orbital motion to unveil the signal signature, commencing from a collection of time and frequency peaks (the so-called "peakmap"). We validate our algorithm (pipeline) by adding 131 artificial continuous-wave signals from pulsars in binary systems to simulated detector Gaussian noise, characterized by a power spectral density Sh = 4×10^-24 Hz^-1/2 in the frequency interval [70, 200] Hz, which is overall commensurate with the advanced detector design sensitivities. The pipeline detected 128 signals, and the weakest signal injected (added) and detected has a gravitational-wave strain amplitude of ~10^-24, assuming one month of gapless data collected by a single advanced detector. We also provide sensitivity estimates, which show that, for single-detector data covering one month of observation time, depending on the source orbital Doppler modulation, we can detect signals with an amplitude of ~7×10^-25. By using three detectors and one year of data, we would easily gain a factor of 3 in sensitivity, translating into being able to detect weaker signals. We also discuss the parameter estimation proficiency of our method, as well as its computational budget: sifting one month of single-detector data over a 131-Hz-wide frequency range takes roughly 2.4 CPU hours. Hence, the current procedure can be readily applied in all-sky schemes, sieving in parallel as many sky positions as permitted by the available computational power. 
Finally, we introduce (ongoing and future) approaches to attain sensitivity improvements and better accuracy on parameter estimates in view of the use on real advanced detector data.

  3. Respiratory syncytial virus--the unrecognised cause of health and economic burden among young children in Australia.

    PubMed

    Ranmuthugala, Geetha; Brown, Laurie; Lidbury, Brett A

    2011-06-01

    Respiratory syncytial virus (RSV) presents very similarly to influenza and is the principal cause of bronchiolitis in infants and young children worldwide. Yet there is no systematic monitoring of RSV activity in Australia. This study uses existing published data sources to estimate the incidence, hospitalisation rates, and associated costs of RSV among young children in Australia. Published reports from the Laboratory Virology and Serology Reporting Scheme, a passive voluntary surveillance system, and the National Hospital Morbidity Dataset were used to estimate RSV-related age-specific hospitalisation rates in New South Wales and Australia. These estimates and national USA estimates of RSV-related hospitalisation rates were applied to Australian population data to estimate RSV incidence in Australia. The direct economic burden was estimated by applying the cost estimates used to derive the economic cost associated with the influenza virus. The estimated RSV-related hospitalisation rates ranged from 2.2-4.5 per 1,000 among children less than 5 years of age to 8.7-17.4 per 1,000 among infants. Incidence ranged from 110.0-226.5 per 1,000 among the under-five age group to 435.0-869.0 per 1,000 among infants. The total annual direct healthcare cost was estimated to be between $24 million and $50 million. Comparison with the health burdens attributed to the influenza virus and rotavirus suggests that the disease burden caused by RSV is potentially much higher. The limitations associated with using a passive surveillance system to estimate disease burden, and the need for further assessment and monitoring of RSV activity, are discussed.

  4. Acoustic sources of opportunity in the marine environment - Applied to source localization and ocean sensing

    NASA Astrophysics Data System (ADS)

    Verlinden, Christopher M.

    Controlled acoustic sources have typically been used for imaging the ocean. These sources can either be used to locate objects or characterize the ocean environment. The processing involves signal extraction in the presence of ambient noise, with shipping being a major component of the latter. With the advent of the Automatic Identification System (AIS) which provides accurate locations of all large commercial vessels, these major noise sources can be converted from nuisance to beacons or sources of opportunity for the purpose of studying the ocean. The source localization method presented here is similar to traditional matched field processing, but differs in that libraries of data-derived measured replicas are used in place of modeled replicas. In order to account for differing source spectra between library and target vessels, cross-correlation functions are compared instead of comparing acoustic signals directly. The library of measured cross-correlation function replicas is extrapolated using waveguide invariant theory to fill gaps between ship tracks, fully populating the search grid with estimated replicas allowing for continuous tracking. In addition to source localization, two ocean sensing techniques are discussed in this dissertation. The feasibility of estimating ocean sound speed and temperature structure, using ship noise across a drifting volumetric array of hydrophones suspended beneath buoys, in a shallow water marine environment is investigated. Using the attenuation of acoustic energy along eigenray paths to invert for ocean properties such as temperature, salinity, and pH is also explored. In each of these cases, the theory is developed, tested using numerical simulations, and validated with data from acoustic field experiments.
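    The localization idea — compare the target's cross-correlation function (CCF) against a library of measured CCF replicas from AIS-tracked ships, rather than comparing signals directly — reduces to picking the best-matching library entry. A toy 1-D sketch with a made-up delay model standing in for the real propagation:

```python
import numpy as np

rng = np.random.default_rng(3)
fs, n = 1000.0, 2000
t = np.arange(-n, n + 1) / fs                    # lag axis, s

def ccf(delay):
    """Toy cross-correlation function between two hydrophones: a single
    Gaussian-shaped peak at the source-dependent inter-sensor delay."""
    return np.exp(-((t - delay) * 50.0) ** 2)

c, r_ref = 1500.0, 3000.0                        # sound speed (m/s), toy reference
ranges = np.linspace(1000.0, 5000.0, 41)         # candidate ship ranges, m

# "Library" of measured replicas: CCFs from AIS-tracked ships at known ranges
library = {r: ccf((r - r_ref) / c) for r in ranges}

# Target ship at 2600 m: match its noisy CCF against every library entry
target = ccf((2600.0 - r_ref) / c) + 0.05 * rng.standard_normal(t.size)
best = max(ranges, key=lambda r: float(np.dot(library[r], target)))
print(best)
```

    Matching CCFs rather than raw signals is what makes the method insensitive to the (unknown, differing) source spectra of the library and target vessels.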

  5. Are estimates of wind characteristics based on measurements with Pitot tubes and GNSS receivers mounted on consumer-grade unmanned aerial vehicles applicable in meteorological studies?

    PubMed

    Niedzielski, Tomasz; Skjøth, Carsten; Werner, Małgorzata; Spallek, Waldemar; Witek, Matylda; Sawiński, Tymoteusz; Drzeniecka-Osiadacz, Anetta; Korzystka-Muskała, Magdalena; Muskała, Piotr; Modzel, Piotr; Guzikowski, Jakub; Kryza, Maciej

    2017-09-01

    The objective of this paper is to empirically show that estimates of wind speed and wind direction based on measurements carried out using Pitot tubes and GNSS receivers, mounted on consumer-grade unmanned aerial vehicles (UAVs), may accurately approximate true wind parameters. The motivation for the study is that a growing number of commercial and scientific UAV operations may soon become a new source of data on wind speed and wind direction, with unprecedented spatial and temporal resolution. The feasibility study was carried out within an isolated mountain meadow, Polana Izerska, located in the Izera Mountains (SW Poland), during an experiment which aimed to compare wind characteristics measured by several instruments: three UAVs (swinglet CAM, eBee, Maja) equipped with Pitot tubes and GNSS receivers, wind speed and direction meters mounted at 2.5 and 10 m (mast), a conventional weather station, and a vertical sodar. The three UAVs performed seven missions along spiral-like trajectories, most reaching 130 m above the take-off location. The estimates of wind speed and wind direction were found to agree between the UAVs. The time series of wind speed measured at 10 m were extrapolated to the flight altitudes recorded at a given time so that a comparison was feasible. It was found that the wind speed estimates provided by the UAVs on the basis of the Pitot tube/GNSS data are in agreement with measurements carried out using dedicated meteorological instruments. Discrepancies were recorded in the first and last phases of the UAV flights.
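    The underlying estimate is a vector triangle: the wind is the GNSS-measured ground velocity minus the Pitot-derived air-relative velocity. A sketch under the simplifying assumptions of zero sideslip, level flight, and a known heading (not the authors' processing chain):

```python
import math

def wind_from_uav(ground_v_east, ground_v_north, airspeed, heading_deg):
    """Wind vector = GNSS ground velocity minus air-relative velocity
    (Pitot airspeed along the heading, measured clockwise from north).
    Assumes zero sideslip and level flight."""
    hdg = math.radians(heading_deg)
    we = ground_v_east - airspeed * math.sin(hdg)
    wn = ground_v_north - airspeed * math.cos(hdg)
    speed = math.hypot(we, wn)
    # meteorological convention: the direction the wind blows FROM
    direction = math.degrees(math.atan2(-we, -wn)) % 360.0
    return speed, direction

# UAV heading due north at 12 m/s airspeed; GNSS shows a 4 m/s eastward drift
speed, direction = wind_from_uav(4.0, 12.0, 12.0, 0.0)
print(f"wind {speed:.1f} m/s from {direction:.0f} deg")   # 4 m/s from 270 deg
```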

  6. Modification of the TASMIP x-ray spectral model for the simulation of microfocus x-ray sources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sisniega, A.; Vaquero, J. J., E-mail: juanjose.vaquero@uc3m.es; Instituto de Investigación Sanitaria Gregorio Marañón, Madrid ES28007

    2014-01-15

Purpose: The availability of accurate and simple models for the estimation of x-ray spectra is of great importance for system simulation, optimization, or the inclusion of photon energy information into data processing. A variety of publicly available tools exist for the estimation of x-ray spectra in radiology and mammography. However, most of these models cannot be used directly for modeling microfocus x-ray sources due to differences in inherent filtration, energy range, and/or anode material. For this reason the authors propose in this work a new model for the simulation of microfocus spectra, based on existing models for mammography and radiology, modified to compensate for the effects of inherent filtration and energy range. Methods: The authors used the radiology and mammography versions of an existing empirical model [tungsten anode spectral model interpolating polynomials (TASMIP)] as the basis of the microfocus model. First, the authors estimated the inherent filtration included in the radiology model by comparing the shape of the spectra with spectra from the mammography model. Afterwards, the authors built a unified spectra dataset by combining both models and, finally, they estimated the parameters of the new version of TASMIP for microfocus sources by calibrating against experimental exposure data from a microfocus x-ray source. The model was validated by comparing estimated and experimental exposure and attenuation data for different attenuating materials and x-ray beam peak energy values, using two different x-ray tubes. Results: Inherent filtration for the radiology spectra from TASMIP was found to be equivalent to 1.68 mm Al, as compared to spectra obtained from the mammography model. Matching the experimentally measured exposure data required applying a negative filtration of about 0.21 mm Al and an anode roughness of 0.003 mm W to the combined dataset. The validation of the model against real acquired data showed errors in exposure and attenuation in line with those reported for other models for radiology or mammography. Conclusions: A new version of the TASMIP model for the estimation of x-ray spectra in microfocus x-ray sources has been developed and validated experimentally. As with other versions of TASMIP, the estimation of spectra is very simple, involving only the evaluation of polynomial expressions.
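TASMIP-style models represent the photon fluence in each energy bin as a polynomial in tube voltage (kVp), so spectrum estimation reduces to polynomial evaluation. A schematic sketch with made-up coefficients (the published TASMIP coefficient tables are not reproduced here):

```python
def tasmip_like_spectrum(kvp, coeff_table):
    """Evaluate a TASMIP-style spectral model: each energy bin's photon
    fluence is a polynomial in tube voltage (kVp), evaluated here by
    Horner's rule. Bins above the peak energy are forced to zero."""
    spectrum = []
    for energy_kev, coeffs in coeff_table:
        if energy_kev > kvp:
            spectrum.append((energy_kev, 0.0))
            continue
        fluence = 0.0
        for c in reversed(coeffs):  # coeffs[i] multiplies kvp**i
            fluence = fluence * kvp + c
        spectrum.append((energy_kev, max(fluence, 0.0)))
    return spectrum

# Hypothetical third-order coefficients for two energy bins (illustrative only).
table = [
    (20.0, [-1.0e3, 40.0, 1.5, 0.01]),
    (60.0, [-5.0e2, 10.0, 0.8, 0.02]),
]
print(tasmip_like_spectrum(50.0, table))  # → [(20.0, 6000.0), (60.0, 0.0)]
```

The real models fit such coefficients per bin against measured exposure data, which is what the calibration step in the abstract refers to.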

  7. Development of FWIGPR, an open-source package for full-waveform inversion of common-offset GPR data

    NASA Astrophysics Data System (ADS)

    Jazayeri, S.; Kruse, S.

    2017-12-01

We introduce a package for full-waveform inversion (FWI) of Ground Penetrating Radar (GPR) data based on a combination of open-source programs. The FWI requires a good starting model, based on direct knowledge of field conditions or on traditional ray-based inversion methods. With a good starting model, the FWI can improve the resolution of selected subsurface features. The package will be made available for general use in educational and research activities. The FWIGPR package consists of four main components: 3D-to-2D data conversion, source wavelet estimation, forward modeling, and inversion. (These four components additionally require the development, by the user, of a good starting model.) A major challenge with GPR data is the unknown form of the waveform emitted by the transmitter held close to the ground surface. We apply a blind deconvolution method to estimate the source wavelet, based on a sparsity assumption about the reflectivity series of the subsurface model (Gholami and Sacchi 2012). The estimated wavelet is the one that, when deconvolved from the data, yields the sparsest reflectivity series with the fewest reflectors. The gprMax code (www.gprmax.com) is used as the forward modeling tool and the PEST parameter estimation package (www.pesthomepage.com) for the inversion. To reduce computation time, the field data are converted to an effective 2D equivalent, and the gprMax code can be run in 2D mode. In the first step, the user must create a good starting model of the data, presumably using ray-based methods. This estimated model is introduced to the FWI process as an initial model. Next, the 3D data are converted to 2D, and the user estimates the source wavelet that best fits the observed data under the sparsity assumption on the earth's response. 
Last, PEST runs gprMax with the initial model, calculates the misfit between the synthetic and observed data, and, using an iterative algorithm that calls gprMax several times in each iteration, finds successive models that better fit the data. To gauge whether the iterative process has arrived at a local or global minimum, the process can be repeated with a range of starting models. Tests have shown that this package can successfully improve estimates of selected subsurface model parameters for simple synthetic and real data. Ongoing research will focus on FWI of more complex scenarios.

  8. The Relative Contribution of Interaural Time and Magnitude Cues to Dynamic Sound Localization

    NASA Technical Reports Server (NTRS)

    Wenzel, Elizabeth M.; Null, Cynthia H. (Technical Monitor)

    1995-01-01

    This paper presents preliminary data from a study examining the relative contribution of interaural time differences (ITDs) and interaural level differences (ILDs) to the localization of virtual sound sources both with and without head motion. The listeners' task was to estimate the apparent direction and distance of virtual sources (broadband noise) presented over headphones. Stimuli were synthesized from minimum phase representations of nonindividualized directional transfer functions; binaural magnitude spectra were derived from the minimum phase estimates and ITDs were represented as a pure delay. During dynamic conditions, listeners were encouraged to move their heads; the position of the listener's head was tracked and the stimuli were synthesized in real time using a Convolvotron to simulate a stationary external sound source. ILDs and ITDs were either correctly or incorrectly correlated with head motion: (1) both ILDs and ITDs correctly correlated, (2) ILDs correct, ITD fixed at 0 deg azimuth and 0 deg elevation, (3) ITDs correct, ILDs fixed at 0 deg, 0 deg. Similar conditions were run for static conditions except that none of the cues changed with head motion. The data indicated that, compared to static conditions, head movements helped listeners to resolve confusions primarily when ILDs were correctly correlated, although a smaller effect was also seen for correct ITDs. Together with the results for static conditions, the data suggest that localization tends to be dominated by the cue that is most reliable or consistent, when reliability is defined by consistency over time as well as across frequency bands.

  9. An improved adaptive weighting function method for State Estimation in Power Systems with VSC-MTDC

    NASA Astrophysics Data System (ADS)

    Zhao, Kun; Yang, Xiaonan; Lang, Yansheng; Song, Xuri; Wang, Minkun; Luo, Yadi; Wu, Lingyun; Liu, Peng

    2017-04-01

This paper presents an effective approach for state estimation in power systems that include multi-terminal voltage source converter based high voltage direct current (VSC-MTDC), called the improved adaptive weighting function method. The proposed approach is simplified: the VSC-MTDC system is solved first, followed by the AC system, because the new state estimation method only changes the weights and keeps the matrix dimension unchanged. Accurate and fast convergence of the AC/DC system can be achieved by the adaptive weighting function method. This method also provides technical support for simulation analysis and accurate regulation of AC/DC systems. Both theoretical analysis and numerical tests verify the practicability, validity, and convergence of the new method.

  10. Reliable Top-Left Light Convention Starts With Early Renaissance: An Extensive Approach Comprising 10k Artworks

    PubMed Central

    Carbon, Claus-Christian; Pastukhov, Alexander

    2018-01-01

Art history claims that Western art shows light from the top left, a preference that so far has been demonstrated only with narrow image sets and simplistic research methods. Here we employed a set of 10,000 pictures for which participants estimated the direction of light plus their confidence in the estimation. From 1420 A.D., the onset of the Early Renaissance, until 1900 A.D., we revealed a clear preference for painting light from the top left; within the same period, we observed the highest confidence in such estimations of the light source. One-sentence summary: This study demonstrates a robust preference for painting light from the top left in Western art history, from the Early Renaissance until 1900. PMID:29686636

  11. Monaural Sound Localization Based on Reflective Structure and Homomorphic Deconvolution

    PubMed Central

    Park, Yeonseok; Choi, Anthony

    2017-01-01

An asymmetric structure around the receiver imposes a distinct time delay on each incoming propagation direction. This paper designs a monaural sound localization system based on a reflective structure around the microphone. The reflective plates are placed to present a direction-wise time delay, which is naturally processed by convolution with the sound source. The received signal is analyzed to estimate the dominant time delay by using homomorphic deconvolution, which applies the real cepstrum and inverse cepstrum sequentially to derive the autocorrelation of the propagation response. Once the localization system accurately estimates this information, the time delay model computes the corresponding reflection for localization. Because of structural limitations, the localization process performs the estimation in two stages: range and angle. A software toolchain from propagation physics and algorithm simulation realizes the optimal 3D-printed structure. Acoustic experiments in an anechoic chamber show that 79.0% of the study-range data from the isotropic signal are properly detected by the response value, and 87.5% of the specific-direction data from the study-range signal are properly estimated by the response time. The product of both rates gives an overall hit rate of 69.1%. PMID:28946625
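The core idea of homomorphic deconvolution, detecting a dominant reflection delay as a peak in the real cepstrum, can be illustrated in a few lines. This is a toy sketch with a synthetic echo and a naive O(n²) DFT, not the authors' pipeline:

```python
import cmath
import math

def dft(x, inverse=False):
    """Naive O(n^2) discrete Fourier transform (fine for a short demo)."""
    n = len(x)
    s = 1j if inverse else -1j
    out = [sum(x[k] * cmath.exp(s * 2 * math.pi * i * k / n) for k in range(n))
           for i in range(n)]
    return [v / n for v in out] if inverse else out

def real_cepstrum(x):
    """Real cepstrum: inverse DFT of the log-magnitude spectrum."""
    logmag = [math.log(abs(v)) for v in dft(x)]
    return [v.real for v in dft(logmag, inverse=True)]

# Synthetic received signal: a decaying source pulse plus one reflection
# delayed by 16 samples -- the structure homomorphic deconvolution exposes.
n, delay, gain = 128, 16, 0.5
pulse = [0.5 ** k for k in range(n)]
signal = [pulse[k] + (gain * pulse[k - delay] if k >= delay else 0.0)
          for k in range(n)]

ceps = real_cepstrum(signal)
# The reflection shows up as a cepstral peak at quefrency == delay
# (low quefrencies, dominated by the source pulse, are skipped).
est_delay = max(range(4, n // 2), key=lambda i: ceps[i])
print(est_delay)  # → 16
```

Because convolution in time becomes addition in the cepstral domain, the echo's delay separates cleanly from the source pulse's low-quefrency content.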

  12. MATLAB-implemented estimation procedure for model-based assessment of hepatic insulin degradation from standard intravenous glucose tolerance test data.

    PubMed

    Di Nardo, Francesco; Mengoni, Michele; Morettini, Micaela

    2013-05-01

The present study provides a novel MATLAB-based parameter estimation procedure for individual assessment of the hepatic insulin degradation (HID) process from standard frequently sampled intravenous glucose tolerance test (FSIGTT) data. Direct access to the source code, offered by MATLAB, enabled us to design an optimization procedure based on the alternating use of the Gauss-Newton and Levenberg-Marquardt algorithms, which assures full convergence of the process and containment of computational time. Reliability was tested by direct comparison with the application, in eighteen non-diabetic subjects, of the well-known kinetic analysis software package SAAM II, and by application to different datasets. Agreement between MATLAB and SAAM II was supported by intraclass correlation coefficients ≥0.73, no significant differences between corresponding mean parameter estimates and predictions of HID rate, and consistent residual analysis. Moreover, the MATLAB optimization procedure resulted in a significant 51% reduction of the CV% for the parameter worst estimated by SAAM II while maintaining all model-parameter CV% <20%. In conclusion, our MATLAB-based procedure is suggested as a suitable tool for the individual assessment of the HID process. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
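The Gauss-Newton/Levenberg-Marquardt iteration the authors alternate between can be sketched on a toy one-parameter decay model (an illustrative sketch of a damped Gauss-Newton step, not the paper's HID model or its alternating scheme):

```python
import math

def lm_fit_decay(times, values, k0=1.0, lam=1e-3, iters=100):
    """Fit y = exp(-k * t) by a damped Gauss-Newton (Levenberg-Marquardt
    style) iteration on the single parameter k."""
    k = k0
    for _ in range(iters):
        resid = [y - math.exp(-k * t) for t, y in zip(times, values)]
        jac = [-t * math.exp(-k * t) for t in times]   # d(model)/dk
        num = sum(j * r for j, r in zip(jac, resid))
        den = sum(j * j for j in jac) + lam            # lam damps the step
        k += num / den
    return k

ts = [0.5 * i for i in range(11)]           # t = 0.0 .. 5.0
ys = [math.exp(-0.5 * t) for t in ts]       # noise-free data, true k = 0.5
print(round(lm_fit_decay(ts, ys), 6))  # → 0.5
```

A larger damping term `lam` pushes the update toward a cautious gradient step; a smaller one recovers the fast Gauss-Newton step, which is the trade-off the alternation exploits.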

  13. Direction of Arrival Estimation for MIMO Radar via Unitary Nuclear Norm Minimization

    PubMed Central

    Wang, Xianpeng; Huang, Mengxing; Wu, Xiaoqin; Bi, Guoan

    2017-01-01

    In this paper, we consider the direction of arrival (DOA) estimation issue of noncircular (NC) source in multiple-input multiple-output (MIMO) radar and propose a novel unitary nuclear norm minimization (UNNM) algorithm. In the proposed method, the noncircular properties of signals are used to double the virtual array aperture, and the real-valued data are obtained by utilizing unitary transformation. Then a real-valued block sparse model is established based on a novel over-complete dictionary, and a UNNM algorithm is formulated for recovering the block-sparse matrix. In addition, the real-valued NC-MUSIC spectrum is used to design a weight matrix for reweighting the nuclear norm minimization to achieve the enhanced sparsity of solutions. Finally, the DOA is estimated by searching the non-zero blocks of the recovered matrix. Because of using the noncircular properties of signals to extend the virtual array aperture and an additional real structure to suppress the noise, the proposed method provides better performance compared with the conventional sparse recovery based algorithms. Furthermore, the proposed method can handle the case of underdetermined DOA estimation. Simulation results show the effectiveness and advantages of the proposed method. PMID:28441770

  14. Terrestrial black holes as sources of super-high energy radiation

    NASA Astrophysics Data System (ADS)

    Trofimenko, A. P.; Gurin, V. S.

    1993-04-01

The study proposes that small black holes located in the Earth's interior may act as sources of superhigh-energy radiation; their origin is not constrained to the big bang. The intensity and spectrum of massless and massive particle radiation due to the Hawking effect are estimated for black holes with masses of 10^8 to 10^16. The possibility of their detection is explored according to a number of features: high particle energies, a thermal energy spectrum, transience or an explicit trend of increasing intensity and energy, and a pronounced emission direction associated with the source localization. The rates of radiation of massless particles with spin 1/2 and with spin 1 are illustrated in graphic form.

  15. Gridded anthropogenic emissions inventory and atmospheric transport of carbonyl sulfide in the U.S.: U.S. Anthropogenic COS Source and Transport

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zumkehr, Andrew; Hilton, Timothy W.; Whelan, Mary

Carbonyl sulfide (COS or OCS), the most abundant sulfur-containing gas in the troposphere, has recently emerged as a potentially important atmospheric tracer for the carbon cycle. Atmospheric inverse modeling studies may be able to use existing tower, airborne, and satellite observations of COS to infer information about photosynthesis. However, such analysis relies on gridded anthropogenic COS source estimates that are largely based on industry activity data from over three decades ago. Here we use updated emission factor data and industry activity data to develop a gridded inventory with a 0.1 degree resolution for the U.S. domain. The inventory includes the primary anthropogenic COS sources, including direct emissions from the coal and aluminum industries as well as indirect sources from industrial carbon disulfide emissions. Compared to the previously published inventory, we found that the total anthropogenic source (direct and indirect) is 47% smaller. Using this new gridded inventory to drive the STEM/WRF atmospheric transport model, we found that the anthropogenic contribution to COS variation in the troposphere is small relative to the biosphere influence, which is encouraging for carbon-cycle applications in this region. Additional anthropogenic sectors with highly uncertain emission factors require further field measurements.

  16. The Influence of Materials of Electrodes of Sensitized Solar Cells on Their Capacitive and Electrical Characteristics

    NASA Astrophysics Data System (ADS)

    Lazarenko, P. I.; Kozyukhin, S. A.; Mokshina, A. I.; Sherchenkov, A. A.; Patrusheva, T. N.; Irgashev, R. A.; Lebedev, E. A.; Kozik, V. V.

    2018-05-01

An estimation is made of the internal capacitance of sensitized solar cells (SSCs) manufactured by the method of extraction pyrolysis. The structures under study are characterized by a hysteresis in the current-voltage characteristic obtained in the direct and reverse modes of voltage variation. The investigations of the SSCs demonstrate a high inertia of the parameters upon connection and disconnection of the light source. The use of a transparent conductive ITO electrode, manufactured by extraction pyrolysis, increases the external capacitance of the cell and decelerates the processes of current decay after the light source is connected, compared to the commercial FTO electrode. The values of the charges, capacitances, and SSC charge conservation efficiencies are calculated, and the internal resistance of the SSCs under study is estimated. According to these estimations, the specimen with an ITO layer possesses a capacitance of C1 = 1.23·10⁻³ F, two orders of magnitude higher than that of the specimen with an FTO layer (C2 = 2.06·10⁻⁵ F).

  17. Estimated hepatitis C prevalence and key population sizes in San Francisco: A foundation for elimination.

    PubMed

    Facente, Shelley N; Grebe, Eduard; Burk, Katie; Morris, Meghan D; Murphy, Edward L; Mirzazadeh, Ali; Smith, Aaron A; Sanchez, Melissa A; Evans, Jennifer L; Nishimura, Amy; Raymond, Henry F

    2018-01-01

Initiated in 2016, End Hep C SF is a comprehensive initiative to eliminate hepatitis C (HCV) infection in San Francisco. The introduction of direct-acting antivirals to treat and cure HCV provides an opportunity for elimination. To properly measure progress, an estimate of baseline HCV prevalence, and of the number of people in various subpopulations with active HCV infection, is required to target and measure the impact of interventions. Our analysis was designed to incorporate multiple relevant data sources and estimate HCV burden for the San Francisco population as a whole, including specific key populations at higher risk of infection. Our estimates are based on triangulation of data found in case registries, medical records, observational studies, and published literature from 2010 through 2017. We examined subpopulations based on sex, age and/or HCV risk group. When multiple sources of data were available for subpopulation estimates, we calculated a weighted average using inverse variance weighting. Credible ranges (CRs) were derived from 95% confidence intervals of population size and prevalence estimates. We estimate that 21,758 residents of San Francisco are HCV seropositive (CR: 10,274-42,067), representing an overall seroprevalence of 2.5% (CR: 1.2%-4.9%). Of these, 16,408 are estimated to be viremic (CR: 6,505-37,407), though this estimate includes treated cases; up to 12,257 of these (CR: 2,354-33,256) are people who are untreated and infectious. People who injected drugs in the last year represent 67.9% of viremic HCV infections. We estimated approximately 7,400 (51%) more HCV seropositive cases than are included in San Francisco's HCV surveillance case registry. Our estimate provides a useful baseline against which the impact of End Hep C SF can be measured.
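The inverse variance weighting used to pool subpopulation estimates can be sketched directly; the numbers below are illustrative, not the study's data:

```python
def inverse_variance_pool(estimates):
    """Combine independent prevalence estimates by inverse-variance
    weighting. `estimates` is a list of (value, variance) pairs; returns
    the pooled value and the variance of the pooled estimate."""
    weights = [1.0 / var for _, var in estimates]
    pooled = sum(w * v for (v, _), w in zip(estimates, weights)) / sum(weights)
    pooled_var = 1.0 / sum(weights)
    return pooled, pooled_var

# Two hypothetical prevalence estimates for the same subpopulation:
# 2.0% (SE 0.5%) from a registry and 3.0% (SE 1.0%) from a survey.
pooled, var = inverse_variance_pool([(0.02, 0.005 ** 2), (0.03, 0.01 ** 2)])
print(round(pooled, 4), round(var ** 0.5, 4))  # → 0.022 0.0045
```

The more precise estimate dominates the pooled value, and the pooled standard error is smaller than either input's.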

  18. Source apportionment of polycyclic aromatic hydrocarbons in Louisiana

    NASA Astrophysics Data System (ADS)

    Han, F.; Zhang, H.

    2017-12-01

    Polycyclic aromatic hydrocarbons (PAHs) in the environment are of significant concern due to their high toxicity that may result in adverse health effects. PAHs measurements at the limited air quality monitoring stations alone are insufficient to gain a complete concept of ambient PAH levels. This study simulates the concentrations of PAHs in Louisiana and identifies the major emission sources. Speciation profiles for PAHs were prepared using data assembled from existing emission profile databases. The Sparse Matrix Operator Kernel Emission (SMOKE) model was used to generate the estimated gridded emissions of 16 priority PAH species directly associated with health risks. The estimated emissions were then applied to simulate ambient concentrations of PAHs in Louisiana for January, April, July and October 2011 using the Community Multiscale Air Quality (CMAQ) model (v5.0.1). Through the formation, transport and deposition of PAHs species, the concentrations of PAHs species in gas phase and particulate phase were obtained. The spatial and temporal variations were analyzed and contributions of both local and regional major sources were quantified. This study provides important information for the prevention and treatment of PAHs in Louisiana.

  20. Measurement of the local food environment: a comparison of existing data sources.

    PubMed

    Bader, Michael D M; Ailshire, Jennifer A; Morenoff, Jeffrey D; House, James S

    2010-03-01

Studying the relation between the residential environment and health requires valid, reliable, and cost-effective methods to collect data on residential environments. This 2002 study compared the level of agreement between measures of the presence of neighborhood businesses drawn from 2 common sources of data used for research on the built environment and health: listings of businesses from commercial databases and direct observations of city blocks by raters. Kappa statistics were calculated for 6 types of businesses (drugstores, liquor stores, bars, convenience stores, restaurants, and grocers) located on 1,663 city blocks in Chicago, Illinois. Logistic regressions estimated whether disagreement between measurement methods was systematically correlated with the socioeconomic and demographic characteristics of neighborhoods. Levels of agreement between the 2 sources were relatively high, with significant (P < 0.001) kappa statistics for each business type ranging from 0.32 to 0.70. Most business types were more likely to be reported by direct observations than in the commercial database listings. Disagreement between the 2 sources was not significantly correlated with the socioeconomic and demographic characteristics of neighborhoods. Results suggest that researchers should have reasonable confidence using whichever method (or combination of methods) is most cost-effective and theoretically appropriate for their research design.
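Cohen's kappa for agreement between two binary measurement sources follows directly from a 2×2 table of counts; a minimal sketch with hypothetical block counts:

```python
def cohens_kappa(both_yes, first_only, second_only, both_no):
    """Cohen's kappa from a 2x2 agreement table: observed agreement
    corrected for the agreement expected by chance."""
    n = both_yes + first_only + second_only + both_no
    p_observed = (both_yes + both_no) / n
    # Chance agreement from the marginal yes/no rates of each source.
    p_yes = ((both_yes + first_only) / n) * ((both_yes + second_only) / n)
    p_no = ((second_only + both_no) / n) * ((first_only + both_no) / n)
    p_chance = p_yes + p_no
    return (p_observed - p_chance) / (1.0 - p_chance)

# Hypothetical block-level presence/absence of grocers from two sources
# (database listing vs. direct observation) across 100 blocks.
print(cohens_kappa(40, 10, 5, 45))  # → 0.7
```

Kappa near 1 indicates agreement well beyond chance; values in the 0.32-0.70 range reported above indicate fair to substantial agreement.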

  1. MATLAB-based algorithm to estimate depths of isolated thin dike-like sources using higher-order horizontal derivatives of magnetic anomalies.

    PubMed

    Ekinci, Yunus Levent

    2016-01-01

    This paper presents an easy-to-use open source computer algorithm (code) for estimating the depths of isolated single thin dike-like source bodies by using numerical second-, third-, and fourth-order horizontal derivatives computed from observed magnetic anomalies. The approach does not require a priori information and uses some filters of successive graticule spacings. The computed higher-order horizontal derivative datasets are used to solve nonlinear equations for depth determination. The solutions are independent from the magnetization and ambient field directions. The practical usability of the developed code, designed in MATLAB R2012b (MathWorks Inc.), was successfully examined using some synthetic simulations with and without noise. The algorithm was then used to estimate the depths of some ore bodies buried in different regions (USA, Sweden, and Canada). Real data tests clearly indicated that the obtained depths are in good agreement with those of previous studies and drilling information. Additionally, a state-of-the-art inversion scheme based on particle swarm optimization produced comparable results to those of the higher-order horizontal derivative analyses in both synthetic and real anomaly cases. Accordingly, the proposed code is verified to be useful in interpreting isolated single thin dike-like magnetized bodies and may be an alternative processing technique. The open source code can be easily modified and adapted to suit the benefits of other researchers.
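The successive numerical horizontal derivatives on which such methods rely can be approximated by repeated central finite differences at the graticule spacing; a generic sketch (the paper's depth-solving equations are not reproduced here):

```python
def horizontal_derivative(profile, spacing, order=1):
    """Approximate the n-th order horizontal derivative of an anomaly
    profile by repeated central finite differences. `profile` is a list
    of equally spaced anomaly values; `spacing` is the graticule (sample)
    spacing. Each pass shortens the profile by two samples."""
    result = list(profile)
    for _ in range(order):
        result = [(result[i + 1] - result[i - 1]) / (2.0 * spacing)
                  for i in range(1, len(result) - 1)]
    return result

# Sanity check on a cubic: the second derivative of x**3 is 6x, which
# central differences reproduce (up to rounding) on a smooth profile.
xs = [0.1 * i for i in range(21)]
profile = [x ** 3 for x in xs]
second = horizontal_derivative(profile, 0.1, order=2)
# second[8] corresponds to x = xs[10] = 1.0, where 6x = 6.
print(round(second[8], 3))  # → 6.0
```

In the depth-estimation workflow, such derivative profiles (second, third, and fourth order) become the inputs to the nonlinear depth equations.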

  2. Soil concentrations, occurrence, sources and estimation of air-soil exchange of polychlorinated biphenyls in Indian cities.

    PubMed

    Chakraborty, Paromita; Zhang, Gan; Li, Jun; Selvaraj, Sakthivel; Breivik, Knut; Jones, Kevin C

    2016-08-15

Past studies have shown potentially increasing levels of polychlorinated biphenyls (PCBs) in the Indian environment. This is the first attempt to investigate the occurrence of PCBs in surface soil and estimate diffusive air-soil exchange, both on a regional scale and at the local level within the metropolitan environment of India. New Delhi and Agra in the north, Kolkata in the east, Mumbai and Goa in the west, and Chennai and Bangalore in the south were selected for this study. Thirty-three PCB congeners were quantified in surface soil, and possible sources were derived using a positive matrix factorization model. Net flux directions of PCBs were estimated in seven major metropolitan cities of India along urban-suburban-rural transects. The mean Σ33PCBs concentration in soil (12 ng/g dry weight) was nearly twice the concentration found in global background soil, but in line with findings from Pakistan and urban sites of China. Higher abundance of the heavier congeners (6CB-8CB) was prevalent mostly in the urban centers. Cities like Chennai, Mumbai and Kolkata with evidence of ongoing PCB sources did not show significant correlation with soil organic carbon (SOC). This study provides evidence that soil is acting as a sink for heavier PCB congeners and a source for lighter congeners. Atmospheric transport is presumably a controlling factor for the occurrence of PCBs at less polluted sites of India. Copyright © 2016 Elsevier B.V. All rights reserved.

  3. A methodological framework for assessing agreement between cost-effectiveness outcomes estimated using alternative sources of data on treatment costs and effects for trial-based economic evaluations.

    PubMed

    Achana, Felix; Petrou, Stavros; Khan, Kamran; Gaye, Amadou; Modi, Neena

    2018-01-01

A new methodological framework is proposed for assessing agreement between cost-effectiveness endpoints generated using alternative sources of data on treatment costs and effects for trial-based economic evaluations. The framework can be used to validate cost-effectiveness endpoints generated from routine data sources when comparable data are available directly from trial case report forms or from another source. We illustrate application of the framework using data from a recent trial-based economic evaluation of the probiotic Bifidobacterium breve strain BBG administered to babies born at less than 31 weeks of gestation. Cost-effectiveness endpoints are compared using two sources of information: trial case report forms and data extracted from the National Neonatal Research Database (NNRD), a clinical database created through the collaborative efforts of UK neonatal services. Focusing on mean incremental net benefits at £30,000 per episode of sepsis averted, the study revealed no evidence of discrepancy between the data sources (two-sided p values >0.4), low probability estimates of miscoverage (ranging from 0.039 to 0.060), and concordance correlation coefficients greater than 0.86. We conclude that the NNRD could potentially serve as a reliable source of data for future trial-based economic evaluations of neonatal interventions. We also discuss the potential implications of the increasing opportunity to utilize routinely available data for the conduct of trial-based economic evaluations.

  4. Estimating the prevalence of 26 health-related indicators at neighbourhood level in the Netherlands using structured additive regression.

    PubMed

    van de Kassteele, Jan; Zwakhals, Laurens; Breugelmans, Oscar; Ameling, Caroline; van den Brink, Carolien

    2017-07-01

    Local policy makers increasingly need information on health-related indicators at smaller geographic levels like districts or neighbourhoods. Although more large data sources have become available, direct estimates of the prevalence of a health-related indicator cannot be produced for neighbourhoods for which only small samples or no samples are available. Small area estimation provides a solution, but unit-level models for binary-valued outcomes that can handle both non-linear effects of the predictors and spatially correlated random effects in a unified framework are rarely encountered. We used data on 26 binary-valued health-related indicators collected on 387,195 persons in the Netherlands. We associated the health-related indicators at the individual level with a set of 12 predictors obtained from national registry data. We formulated a structured additive regression model for small area estimation. The model captured potential non-linear relations between the predictors and the outcome through additive terms in a functional form using penalized splines and included a term that accounted for spatially correlated heterogeneity between neighbourhoods. The registry data were used to predict individual outcomes which in turn are aggregated into higher geographical levels, i.e. neighbourhoods. We validated our method by comparing the estimated prevalences with observed prevalences at the individual level and by comparing the estimated prevalences with direct estimates obtained by weighting methods at municipality level. We estimated the prevalence of the 26 health-related indicators for 415 municipalities, 2599 districts and 11,432 neighbourhoods in the Netherlands. We illustrate our method on overweight data and show that there are distinct geographic patterns in the overweight prevalence. Calibration plots show that the estimated prevalences agree very well with observed prevalences at the individual level. 
The estimated prevalences agree reasonably well with the direct estimates at the municipal level. Structured additive regression is a useful tool to provide small area estimates in a unified framework. We are able to produce valid nationwide small area estimates of 26 health-related indicators at neighbourhood level in the Netherlands. The results can be used for local policy makers to make appropriate health policy decisions.
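    The prediction-and-aggregation step described above can be sketched in a few lines; the predictor, coefficients, and neighbourhood labels below are invented for illustration and are not the study's fitted structured additive model:

```python
import numpy as np

# Toy sketch of unit-level small area estimation: individual-level predicted
# probabilities (here from an assumed, already-fitted logistic model) are
# aggregated into neighbourhood-level prevalence estimates.
rng = np.random.default_rng(0)
n = 1000
neighbourhood = rng.integers(0, 5, n)       # 5 hypothetical neighbourhoods
x = rng.normal(size=n)                      # one registry-based predictor
beta0, beta1 = -0.5, 0.8                    # assumed fitted coefficients
p = 1 / (1 + np.exp(-(beta0 + beta1 * x)))  # individual outcome probabilities

# aggregate individual predictions to the neighbourhood level
prevalence = {a: float(p[neighbourhood == a].mean()) for a in range(5)}
```

    The full model replaces the linear predictor with penalized-spline terms plus a spatially correlated neighbourhood effect; the aggregation step is unchanged.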

  5. The AMIDAS Website: An Online Tool for Direct Dark Matter Detection Experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shan, Chung-Lin

    2010-02-10

    Following our long-erm work on development of model-independent data analysis methods for reconstructing the one-dimensional velocity distribution function of halo WIMPs as well as for determining their mass and couplings on nucleons by using data from direct Dark Matter detection experiments directly, we combined the simulation programs to a compact system: AMIDAS (A Model-Independent Data Analysis System). For users' convenience an online system has also been established at the same time. AMIDAS has the ability to do full Monte Carlo simulations, faster theoretical estimations, as well as to analyze (real) data sets recorded in direct detection experiments without modifying themore » source code. In this article, I give an overview of functions of the AMIDAS code based on the use of its website.« less

  6. Financial Impact of Direct-Acting Oral Anticoagulants in Medicaid: Budgetary Assessment Based on Number Needed to Treat.

    PubMed

    Fairman, Kathleen A; Davis, Lindsay E; Kruse, Courtney R; Sclar, David A

    2017-04-01

    Faced with rising healthcare costs, state Medicaid programs need short-term, easily calculated budgetary estimates for new drugs that account for medical cost offsets due to clinical advantages. The objective was to estimate the budgetary impact of direct-acting oral anticoagulants (DOACs), compared with warfarin, an older, lower-cost vitamin K antagonist, on 12-month Medicaid expenditures for nonvalvular atrial fibrillation (NVAF) using the number needed to treat (NNT). Medicaid utilization files, 2009 through second quarter 2015, were used to estimate oral anticoagulant (OAC) cost, accounting for generic/brand statutory minimum (13%/23%) and assumed maximum (13%/50%) manufacturer rebates. NNTs were calculated from clinical trial reports to estimate avoided medical events for a hypothetical population of 500,000 enrollees (approximate NVAF prevalence × Medicaid enrollment) under two DOAC market share scenarios: 2015 actual and a 50% increase. Medical service costs were based on published sources. Costs were inflation-adjusted (2015 US$). From 2009-2015, OAC reimbursement per claim increased by 173% and 279% under the maximum and minimum rebate scenarios, respectively, while DOAC market share increased from 0 to 21%. Compared with a warfarin-only counterfactual, counts of ischemic strokes, intracranial hemorrhages, and systemic embolisms declined by 36, 280, and 111, respectively; counts of gastrointestinal hemorrhages increased by 794. Avoided events and reduced monitoring offset 3-5% and 15-24%, respectively, of the increased drug cost. Net of offsets, DOAC-related cost increases were US$258-US$464 per patient per year (PPPY) in 2015 and US$309-US$579 PPPY after the market share increase. Avoided medical events offset a small portion of the DOAC-related drug cost increase. NNT-based calculations provide a transparent source of budgetary-impact information for new medications.
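    The NNT arithmetic behind such an assessment is simple to sketch; the population, NNT, and cost figures below are made up for illustration and are not the paper's estimates:

```python
# Budgetary sketch: events avoided = patients treated / NNT, and the net
# budget impact is incremental drug spend minus avoided-event costs.
def events_avoided(n_treated: float, nnt: float) -> float:
    """Expected number of medical events avoided among n_treated patients."""
    return n_treated / nnt

def net_budget_impact(n_treated, drug_cost_delta, avoided, cost_per_event):
    """Incremental drug cost minus the medical-cost offset."""
    return n_treated * drug_cost_delta - avoided * cost_per_event

avoided = events_avoided(10_000, 250)   # hypothetical NNT of 250
print(avoided)                          # 40.0
print(net_budget_impact(10_000, 1_500, avoided, 20_000))
```

    As in the paper's result, the avoided-event offset is small relative to the incremental drug spend unless the NNT is low or the per-event cost is very high.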

  7. The correlation between the total magnetic flux and the total jet power

    NASA Astrophysics Data System (ADS)

    Nokhrina, Elena E.

    2017-12-01

    Magnetic field threading a black hole ergosphere is believed to play the key role both in driving the powerful relativistic jets observed in active galactic nuclei and in extracting the rotational energy from a black hole via the Blandford-Znajek process. The magnitude of the magnetic field and the magnetic flux in the vicinity of a central black hole are predicted by theoretical models. On the other hand, the magnetic field in a jet can be estimated through measurements of either the core shift effect or the brightness temperature. In both cases the obtained magnetic field is in the radiating domain, so its direct application to the calculation of the magnetic flux requires some theoretical assumptions. In this paper we address the issue of estimating the magnetic flux contained in a jet using measurements of the core shift effect and of the brightness temperature for jets directed almost at the observer. Accurate accounting for the jet transversal structure allows us to express the magnetic flux through the observed values and an unknown rotation rate of the magnetic surfaces. If we assume the sources are in a magnetically arrested disk state, a lower limit for the rotation rate can be obtained. On the other hand, the flux estimate may be tested against the total jet power predicted by the electromagnetic energy extraction model. The resultant expression for power depends only logarithmically, and thus weakly, on the unknown rotation rate. We show that the total jet power estimated through the magnetic flux is in good agreement with the observed power. We also obtain extremely slow rotation rates, which may be an indication that the majority of the sources considered are not in the magnetically arrested disk state.

  8. Dynamics of the Wulong Landslide Revealed by Broadband Seismic Records

    NASA Astrophysics Data System (ADS)

    Huang, X.; Dan, Y.

    2016-12-01

    Long-period seismic signals are frequently used to trace the dynamic process of large-scale landslides. The catastrophic Wulong landslide occurred at 14:51 on 5 June 2009 (Beijing time, UTC+8) in Wulong Prefecture, Southwest China. The topography in the landslide area varies dramatically, adding complexity to its movement characteristics. The mass started sliding northward on the upper part of the cliff located on the west slope of the Tiejianggou gully, and shifted its movement direction to northeastward after being blocked by stable bedrock in front, leaving a scratch zone. The sliding mass then moved downward along the west slope of the gully until it collided with the east slope, breaking up into small pieces after the collision and forming a debris flow along the gully. We use long-period seismic signals extracted from eight broadband seismic stations within 250 km of the landslide to estimate its source time functions. Combined with topographic surveys done before and after the event, we can also resolve kinematic parameters of the sliding mass, i.e. velocities, displacements and trajectories, characterizing its movement features. The runout trajectory deduced from the source time functions is consistent with the sliding path, including two direction-changing processes, corresponding to scratching the western bedrock and colliding with the east slope, respectively. Topographic variations are reflected in the estimated velocities. The maximum velocity of the sliding mass reaches 35 m/s before the collision with the east slope of the Tiejianggou gully, resulting from the height difference between the source zone and the deposition zone. Importantly, the dynamics of scratching and collision can be characterized by the source time functions. Our results confirm that long-period seismic signals are sufficient to characterize the dynamics and kinematics of large-scale landslides occurring in regions with complex topography.

  9. Challenging the distributed temperature sensing technique for estimating groundwater discharge to streams through controlled artificial point source experiment

    NASA Astrophysics Data System (ADS)

    Lauer, F.; Frede, H.-G.; Breuer, L.

    2012-04-01

    Spatially confined groundwater discharge can contribute significantly to stream discharge. Distributed fibre optic temperature sensing (DTS) of stream water has been successfully used to localize and quantify groundwater discharge from this type of "point source" (PS) in small first-order streams. During periods when stream and groundwater temperatures differ, a PS appears as an abrupt step in the longitudinal stream water temperature distribution. Based on stream temperature observations up- and downstream of a point source and an estimated or measured groundwater temperature, the proportion of groundwater inflow to stream discharge can be quantified using simple mixing models. However, so far this method has not been quantitatively verified, nor has a detailed uncertainty analysis of the method been conducted. The relative accuracy of this method is expected to decrease nonlinearly with decreasing proportions of lateral inflow. Furthermore, it depends on the temperature difference (ΔT) between groundwater and surface water and on the accuracy of the temperature measurement itself. The latter can be affected by several sources of error. For example, it has been shown that direct solar radiation on the fibre optic cable can lead to errors in temperature measurements in small streams due to low water depth. Considerable uncertainty might also be related to the determination of groundwater temperature through direct measurements or derived from the DTS signal. In order to directly validate the method and assess its uncertainty, we performed a set of artificial point source experiments with controlled lateral inflow rates to a natural stream. The experiments were carried out at the Vollnkirchener Bach, a small headwater stream in Hessen, Germany, in November and December 2011 during a low flow period. A DTS system was installed along a 1.2 km sub-reach of the stream. Stream discharge was measured using a gauging flume installed directly upstream of the artificial PS. 
    Lateral inflow was simulated using a pumping system connected to a 2 m³ water tank. Pumping rates were controlled using a magnetic inductive flowmeter and kept constant for periods of 30 minutes to 1.5 hours, depending on the simulated inflow rate. Different temperatures of lateral inflow were adjusted by heating the water in the tank (for summer experiments, cooling with ice cubes could be realized). With this setup, different proportions of lateral inflow to stream flow, ranging from 2 to 20%, could be simulated for different ΔT values (2-7 °C) between stream water and inflowing water. Results indicate that the estimation of groundwater discharge through DTS works properly, but that the method is very sensitive to the determination of the PS groundwater temperature. The span of adjusted ΔT values and inflow rates of the artificial system is currently being used to perform a thorough uncertainty analysis of the DTS method and to derive thresholds for detection limits.
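    The simple mixing model referred to above follows from a heat balance across the point source; a minimal sketch, assuming complete mixing and negligible heat loss between the measurement points:

```python
def gw_fraction(t_up: float, t_down: float, t_gw: float) -> float:
    """Fraction of downstream discharge contributed by groundwater, from a
    two-component mixing model: f = (T_down - T_up) / (T_gw - T_up)."""
    if t_gw == t_up:
        raise ValueError("undefined when stream and groundwater temperatures are equal")
    return (t_down - t_up) / (t_gw - t_up)

# e.g. stream at 12 °C upstream, 11.5 °C below the PS, groundwater at 7 °C
print(gw_fraction(12.0, 11.5, 7.0))  # 0.1, i.e. 10 % groundwater inflow
```

    The nonlinear loss of accuracy at small inflow fractions is visible directly: for a fixed temperature-measurement error, the step T_down - T_up shrinks in proportion to f.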

  10. A 10⁹ neutrons/pulse transportable pulsed D-D neutron source based on flexible head plasma focus unit.

    PubMed

    Niranjan, Ram; Rout, R K; Srivastava, R; Kaushik, T C; Gupta, Satish C

    2016-03-01

    A 17 kJ transportable plasma focus (PF) device with flexible transmission lines has been developed and characterized. Six custom-made capacitors are used for the capacitor bank (CB). The common high-voltage plate of the CB is fixed to a centrally triggered spark gap switch. The output of the switch is coupled to the PF head through forty-eight 5 m long RG213 cables. The CB has a quarter time-period of 4 μs and delivers an estimated current of 506 kA to the PF device at 17 kJ (60 μF, 24 kV) energy. The average neutron yield measured using a silver activation detector in the radial direction is (7.1 ± 1.4) × 10⁸ neutrons/shot over 4π sr at the optimum D2 pressure of 5 mbar. The average neutron yield is higher in the axial direction, with an anisotropy factor of 1.33 ± 0.18. The average neutron energies estimated in the axial and radial directions are (2.90 ± 0.20) MeV and (2.58 ± 0.20) MeV, respectively. The flexibility of the PF head makes it useful for many applications where the source orientation and location are important factors. The influence of electromagnetic interference from the CB as well as from the spark gap on the application area can be avoided by placing a suitable barrier between the bank and the PF head.
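    The anisotropy factor is the ratio of axial to radial yields; a sketch with first-order error propagation (the axial yield value below is illustrative, since the abstract quotes only the radial yield and the ratio):

```python
import math

def anisotropy(y_axial, y_radial, s_axial, s_radial):
    """Axial/radial neutron yield ratio, with first-order propagation of
    the (assumed uncorrelated) relative uncertainties."""
    a = y_axial / y_radial
    s = a * math.sqrt((s_axial / y_axial) ** 2 + (s_radial / y_radial) ** 2)
    return a, s

a, s = anisotropy(9.4e8, 7.1e8, 1.6e8, 1.4e8)  # illustrative axial numbers
print(f"{a:.2f} +/- {s:.2f}")
```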

  11. Combined analysis of modeled and monitored SO2 concentrations at a complex smelting facility.

    PubMed

    Rehbein, Peter J G; Kennedy, Michael G; Cotsman, David J; Campeau, Madonna A; Greenfield, Monika M; Annett, Melissa A; Lepage, Mike F

    2014-03-01

    Vale Canada Limited owns and operates a large nickel smelting facility located in Sudbury, Ontario. This is a complex facility with many sources of SO2 emissions, including a mix of source types ranging from passive building roof vents to North America's tallest stack. In addition, as this facility performs batch operations, there is significant variability in the emission rates depending on the operations that are occurring. Although SO2 emission rates for many of the sources have been measured by source testing, the reliability of these emission rates has not been tested from a dispersion modeling perspective. This facility is a significant source of SO2 in the local region, making it critical that, when modeling the emissions from this facility for regulatory or other purposes, the resulting concentrations are representative of what would actually be measured or otherwise observed. To assess the accuracy of the modeling, a detailed analysis of modeled and monitored data for SO2 at the facility was performed. A mobile SO2 monitor sampled at five locations downwind of different source groups for different wind directions, resulting in a total of 168 hr of valid data that could be used for the modeled-to-monitored comparison. The facility was modeled in AERMOD (American Meteorological Society/U.S. Environmental Protection Agency Regulatory Model) using site-specific meteorological data such that the modeled periods coincided with the monitored events. In addition, great effort was invested in estimating the actual SO2 emission rates likely to be occurring during each of the monitoring events. SO2 concentrations were modeled for receptors around each monitoring location so that the modeled data could be directly compared with the monitored data. The comparison of modeled and monitored concentrations showed no systematic biases in the modeled concentrations. 
This paper is a case study of a Combined Analysis of Modelled and Monitored Data (CAMM), which is an approach promulgated within air quality regulations in the Province of Ontario, Canada. Although combining dispersion models and monitoring data to estimate or refine estimates of source emission rates is not a new technique, this study shows how, with a high degree of rigor in the design of the monitoring and filtering of the data, it can be applied to a large industrial facility, with a variety of emission sources. The comparison of modeled and monitored SO2 concentrations in this case study also provides an illustration of the AERMOD model performance for a large industrial complex with many sources, at short time scales in comparison with monitored data. Overall, this analysis demonstrated that the AERMOD model performed well.

  12. New methods for interpretation of magnetic vector and gradient tensor data II: application to the Mount Leyshon anomaly, Queensland, Australia

    NASA Astrophysics Data System (ADS)

    Clark, David A.

    2013-04-01

    Acquisition of magnetic gradient tensor data is anticipated to become routine in the near future. In the meantime, modern ultrahigh resolution conventional magnetic data can be used, with certain important caveats, to calculate magnetic vector components and gradient tensor elements from total magnetic intensity (TMI) or TMI gradient surveys. An accompanying paper presented new methods for inverting gradient tensor data to obtain source parameters for several elementary, but useful, models. These include point dipole (sphere), vertical line of dipoles (narrow vertical pipe), line of dipoles (horizontal cylinder), thin dipping sheet, and contact models. A key simplification is the use of eigenvalues and associated eigenvectors of the tensor. The normalised source strength (NSS), calculated from the eigenvalues, is a particularly useful rotational invariant that peaks directly over 3D compact sources, 2D compact sources, thin sheets, and contacts, independent of magnetisation direction. Source locations can be inverted directly from the NSS and its vector gradient. Some of these new methods have been applied to analysis of the magnetic signature of the Early Permian Mount Leyshon gold-mineralised system, Queensland. The Mount Leyshon magnetic anomaly is a prominent TMI low that is produced by rock units with strong reversed remanence acquired during the Late Palaeozoic Reverse Superchron. The inferred magnetic moment for the source zone of the Mount Leyshon magnetic anomaly is ~10¹⁰ A m². Its direction is consistent with petrophysical measurements. Given estimated magnetisation from samples and geological information, this suggests a volume of ~1.5 km × 1.5 km × 2 km (vertical). The inferred depth of the centre of magnetisation is ~900 m below surface, suggesting that the depth extent of the magnetic zone is ~1800 m. Some of the deeper, undrilled portion of the magnetic zone could be a mafic intrusion similar to the nearby coeval Fenian Diorite, representing part of the parent magma chamber beneath the Mount Leyshon Intrusive Complex.
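    The NSS invariant can be computed directly from the tensor eigenvalues; a minimal sketch, assuming the parametrisation NSS = sqrt(-λ2² - λ1λ3) for eigenvalues ordered λ1 ≥ λ2 ≥ λ3 of the symmetric, traceless gradient tensor (this formula is from the wider NSS literature, not quoted in the abstract):

```python
import numpy as np

def normalised_source_strength(g: np.ndarray) -> float:
    """NSS from a symmetric, traceless magnetic gradient tensor.
    With eigenvalues ordered l1 >= l2 >= l3, the invariant is
    sqrt(-l2**2 - l1*l3), clipped at zero for numerical safety."""
    l3, l2, l1 = np.linalg.eigvalsh(g)  # eigvalsh returns ascending order
    return float(np.sqrt(max(-l2 ** 2 - l1 * l3, 0.0)))

# traceless example tensor (units of nT/m, illustrative)
G = np.diag([2.0, -1.0, -1.0])
print(normalised_source_strength(G))  # 1.0
```

    Because the NSS is rotationally invariant and insensitive to magnetisation direction, mapping it over a survey grid peaks over compact sources such as the Mount Leyshon source zone.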

  13. Cosmic curvature tested directly from observations

    NASA Astrophysics Data System (ADS)

    Denissenya, Mikhail; Linder, Eric V.; Shafieloo, Arman

    2018-03-01

    Cosmic spatial curvature is a fundamental geometric quantity of the Universe. We investigate a model independent, geometric approach to measure spatial curvature directly from observations, without any derivatives of data. This employs strong lensing time delays and supernova distance measurements to measure the curvature itself, rather than just testing consistency with flatness. We define two curvature estimators, with differing error propagation characteristics, that can crosscheck each other, and also show how they can be used to map the curvature in redshift slices, to test constancy of curvature as required by the Robertson-Walker metric. Simulating realizations of redshift distributions and distance measurements of lenses and sources, we estimate uncertainties on the curvature enabled by next generation measurements. The results indicate that the model independent methods, using only geometry without assuming forms for the energy density constituents, can determine the curvature at the ~6×10⁻³ level.
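    One way to build such an estimator is through the FRW distance sum rule, d_ls = d_s√(1 + Ω_k d_l²) − d_l√(1 + Ω_k d_s²), solved for Ω_k given dimensionless comoving distances to the lens, to the source, and between them; a sketch under that assumed form (the paper's two estimators may be parametrised differently):

```python
import math

def _sum_rule(ok, d_l, d_s, d_ls):
    """Residual of the FRW distance sum rule for curvature parameter ok."""
    return (d_s * math.sqrt(1 + ok * d_l ** 2)
            - d_l * math.sqrt(1 + ok * d_s ** 2) - d_ls)

def omega_k(d_l, d_s, d_ls, lo=-3.0, hi=3.0, tol=1e-12):
    """Bisection solve of the sum rule for Omega_k; the residual is
    monotonic in ok wherever both square roots stay real."""
    f_lo = _sum_rule(lo, d_l, d_s, d_ls)
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        f_mid = _sum_rule(mid, d_l, d_s, d_ls)
        if f_lo * f_mid <= 0:
            hi = mid
        else:
            lo, f_lo = mid, f_mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

# flat-universe consistency check: d_ls = d_s - d_l should give Omega_k ≈ 0
print(omega_k(0.2, 0.5, 0.3))
```

    Applying the estimator in redshift slices, as the paper proposes, tests whether a single Ω_k fits all lens-source pairs, as the Robertson-Walker metric requires.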

  14. The CASTLES Imaging Survey of Gravitational Lenses

    NASA Astrophysics Data System (ADS)

    Peng, C. Y.; Falco, E. E.; Lehar, J.; Impey, C. D.; Kochanek, C. S.; McLeod, B. A.; Rix, H.-W.

    1997-12-01

    The CASTLES survey (CfA-Arizona-(H)ST-Lens-Survey) is imaging most known small-separation gravitational lenses (or lens candidates), using the NICMOS camera (mostly H-band) and the WFPC2 (V and I bands) on HST. To date nearly half of the IR imaging survey has been completed. The main goals are: (1) to search for lens galaxies where none have been directly detected so far; (2) to obtain photometric redshift estimates (VIH) for the lenses where no spectroscopic redshifts exist; (3) to study and model the lens galaxies in detail, in part to study the mass distribution within them, in part to identify "simple" systems that may permit accurate time delay estimates for H_0; (4) to measure the M/L evolution of the sample of lens galaxies with look-back time (to z ~ 1); and (5) to determine directly which fraction of sources are lensed by ellipticals vs. spirals. We will present the survey specifications and the images obtained so far.

  15. Neural activity in the posterior superior temporal region during eye contact perception correlates with autistic traits.

    PubMed

    Hasegawa, Naoya; Kitamura, Hideaki; Murakami, Hiroatsu; Kameyama, Shigeki; Sasagawa, Mutsuo; Egawa, Jun; Endo, Taro; Someya, Toshiyuki

    2013-08-09

    The present study investigated the relationship between neural activity associated with gaze processing and autistic traits in typically developed subjects using magnetoencephalography. Autistic traits in 24 typically developed college students with normal intelligence were assessed using the Autism Spectrum Quotient (AQ). The Minimum Current Estimates method was applied to estimate the cortical sources of magnetic responses to gaze stimuli. These stimuli consisted of apparent motion of the eyes, displaying direct or averted gaze motion. Results revealed gaze-related brain activations in the 150-250 ms time window in the right posterior superior temporal sulcus (pSTS), and in the 150-450 ms time window in medial prefrontal regions. In addition, the mean amplitude in the 150-250 ms time window in the right pSTS region was modulated by gaze direction, and its activity in response to direct gaze stimuli correlated with AQ score. pSTS activation in response to direct gaze is thought to be related to higher-order social processes. Thus, these results suggest that brain activity linking eye contact and social signals is associated with autistic traits in a typical population. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  16. Broadband implementation of coprime linear microphone arrays for direction of arrival estimation.

    PubMed

    Bush, Dane; Xiang, Ning

    2015-07-01

    Coprime arrays represent a form of sparse sensing which can achieve narrow beams using relatively few elements, exceeding the spatial Nyquist sampling limit. The purpose of this paper is to expand on and experimentally validate coprime array theory in an acoustic implementation. Two nested sparse uniform linear subarrays with coprime numbers of elements (M and N) each produce grating lobes that overlap with one another completely in just one direction. When the subarray outputs are combined, it is possible to retain the shared beam while mostly canceling the other superfluous grating lobes. In this way a small number of microphones (N+M-1) creates a narrow beam at higher frequencies, comparable to that of a densely populated uniform linear array of MN microphones. In this work beampatterns are simulated for a range of single frequencies, as well as bands of frequencies. Narrowband experimental beampatterns are shown to correspond with simulated results even at frequencies other than the array's design frequency. Narrowband side lobe locations are shown to correspond to the theoretical values. Side lobes in the directional pattern are mitigated by increasing the bandwidth of the analyzed signals. Direction of arrival estimation is also implemented for two simultaneous noise sources in a free field condition.
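    The grating-lobe cancellation can be demonstrated with a short simulation; the element counts, spacings, and processing below are illustrative of coprime theory in general, not the exact array used in the paper:

```python
import numpy as np

# Coprime pair: subarray A has N elements at spacing M*d, subarray B has
# M elements at spacing N*d, with d = lambda/2 and M, N coprime. Multiplying
# the two (normalised) subarray responses keeps only the one shared lobe.
M, N = 4, 5
lam = 1.0
d = lam / 2
theta = np.linspace(-np.pi / 2, np.pi / 2, 2001)

def ula_response(n_elem, spacing, angles):
    """Magnitude of a broadside-steered ULA response, normalised to 1."""
    pos = np.arange(n_elem) * spacing
    phase = 2j * np.pi / lam * pos[:, None] * np.sin(angles)
    return np.abs(np.exp(phase).sum(axis=0)) / n_elem

combined = ula_response(N, M * d, theta) * ula_response(M, N * d, theta)
peak_angle = theta[np.argmax(combined)]   # shared main lobe at broadside
```

    Each subarray alone shows grating lobes; only at broadside do both responses peak simultaneously, which is the coprime property the paper exploits.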

  17. Estimative of conversion fractions of AGN magnetic luminosity to produce ultra high energy cosmic rays from the observation of Fermi-LAT gamma rays

    NASA Astrophysics Data System (ADS)

    Coimbra-Araújo, Carlos H.; Anjos, Rita C.

    2017-01-01

    A fraction of the magnetic luminosity (LB) produced by Kerr black holes in some active galactic nuclei (AGNs) can provide the energy needed to accelerate ultra high energy cosmic rays (UHECRs) beyond the GZK limit, as observed, e.g., by the Pierre Auger experiment. Nevertheless, direct detection of these UHECRs yields little information about the direction of the source from which they came, since charged particles are deflected by the intergalactic magnetic field. This raises the need for alternative methods to evaluate the luminosity in UHECRs (LCR) from a given source. Methods proposed in the literature range from the observation of upper limits in gamma rays to the observation of upper limits in neutrinos produced by cascade effects during the propagation of UHECRs. In this context, the present work proposes a method to calculate limits on the possible conversion fractions ηCR = LCR/LB for nine UHECR AGN Seyfert sources based on the respective gamma ray upper limits from Fermi-LAT data.

  18. High-Resolution Source Parameter and Site Characteristics Using Near-Field Recordings - Decoding the Trade-off Problems Between Site and Source

    NASA Astrophysics Data System (ADS)

    Chen, X.; Abercrombie, R. E.; Pennington, C.

    2017-12-01

    Recorded seismic waveforms include contributions from earthquake source properties and propagation effects, leading to long-standing trade-off problems between site/path effects and source effects. With near-field recordings the path effect is relatively small, so the trade-off problem can be simplified to one between source and site effects (the latter commonly referred to as the "kappa value"). This problem is especially significant for small earthquakes, whose corner frequencies lie within ranges similar to typical kappa values, so direct spectrum fitting often leads to systematic biases that depend on corner frequency and magnitude. In response to the significantly increased seismicity rate in Oklahoma, several local networks have been deployed following major earthquakes: the Prague, Pawnee and Fairview earthquakes. Each network provides dense observations within 20 km of the fault zone, recording tens of thousands of aftershocks between M1 and M3. Using near-field recordings in the Prague area, we apply a stacking approach to separate path/site and source effects. The resulting source parameters are consistent with parameters derived from ground motion and spectral ratio methods in other studies; they exhibit spatial coherence within the fault zone for different fault patches. We apply these source parameter constraints in an analysis of kappa values for stations within 20 km of the fault zone. The resulting kappa values show significantly reduced variability compared to those from direct spectral fitting without constraints on the source spectrum, and they are not biased by earthquake magnitude. With these improvements, we plan to apply the stacking analysis to other local arrays to analyze source properties and site characteristics. For selected individual earthquakes, we will also use individual-pair empirical Green's function (EGF) analysis to validate the source parameter estimates.
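    The trade-off can be seen in the standard spectral parametrisation, an ω⁻² (Brune-type) source spectrum multiplied by a site attenuation term e^(−πκf); the functional form and numbers below are the conventional ones, assumed here rather than quoted from the abstract:

```python
import numpy as np

def model_spectrum(f, omega0, fc, kappa):
    """Far-field displacement amplitude spectrum: Brune omega-squared
    source (corner frequency fc) times site attenuation exp(-pi*kappa*f)."""
    return omega0 / (1.0 + (f / fc) ** 2) * np.exp(-np.pi * kappa * f)

f = np.linspace(0.5, 50.0, 500)
# A high corner frequency with strong site attenuation...
spec_a = model_spectrum(f, 1.0, fc=20.0, kappa=0.04)
# ...can resemble a lower corner frequency with weak attenuation over a
# limited band -- the source/site trade-off the stacking approach breaks.
spec_b = model_spectrum(f, 1.0, fc=8.0, kappa=0.005)
```

    Stacking spectra across many co-located events averages out source terms, isolating a stable station kappa that can then be fixed when fitting individual events.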

  19. Climate Forcing by Particles from Specific Sources, With Implications for No-regrets Scenarios

    NASA Astrophysics Data System (ADS)

    Bond, T. C.; Roden, C. A.; Subramanian, R.; Rasch, P. J.

    2006-12-01

    Mitigation -- the act of reducing human effects on climate and atmosphere by changing practices -- occurs one source at a time, one country at a time. Examining climate forcing produced by individual sources could be instructive. Two sectors contribute the largest fraction of black carbon aerosols from energy-related combustion: diesel engines and residential biofuel. We examine direct climate forcing by aerosols from these sources in four locations. Because source characterization is lacking, global emission inventories that include the chemical composition of particles have often relied on expert judgment. We are gaining information on emission rates and climate-relevant properties through partnerships with projects related to air quality and health in Thailand and Honduras. Despite the presence of organic carbon, black carbon's constant companion, particles from both diesel and biofuel exert net climate warming. In particular, solid-fuel combustion produces material with weak light absorption and strong absorption spectral dependence. We discuss the expected emissions and properties of this material. Revised emission rates and properties are implemented in the Community Atmosphere Model, housed at the National Center for Atmospheric Research, and we tag particles emitted from individual sources. Which sources feed high-forcing regions, such as the area above the low-cloud deck in the North Pacific? Which particles might have been scavenged, and how does uncertainty in removal rates affect single-source forcing? Using model experiments, we estimate central values and uncertainties of direct radiative forcing from each source. Finally, we discuss the potential for reducing climate forcing by mitigating these individual sources. What is the range of benefits expected by addressing these sources, and what are the costs and obstacles? Only by representing uncertainty can we determine the likelihood that reducing these emissions represents a "no-regret" scenario for climate.

  20. 2012 Anthropometric Survey of U.S. Army Personnel: Methods and Summary Statistics

    DTIC Science & Technology

    2014-12-05

    s to complete. The software for participant scanning, CyScan for the whole-body and head scanners and INFOOT for the foot scanner... design and engineering needs, as well as those anticipated well into the future. Ninety-four directly measured dimensions, 39 derived

  1. Active heat pulse sensing of 3-D-flow fields in streambeds

    NASA Astrophysics Data System (ADS)

    Banks, Eddie W.; Shanafield, Margaret A.; Noorduijn, Saskia; McCallum, James; Lewandowski, Jörg; Batelaan, Okke

    2018-03-01

    Profiles of temperature time series are commonly used to determine hyporheic flow patterns and hydraulic dynamics in streambed sediments. Although hyporheic flows are 3-D, past research has focused on determining the magnitude of the vertical flow component and how it varies spatially. This study used a portable 56-sensor, 3-D temperature array with three heat pulse sources to measure the flow direction and magnitude up to 200 mm below the water-sediment interface. Short, 1 min heat pulses were injected at one of the three heat sources and the temperature response was monitored over a period of 30 min. Breakthrough curves from each of the sensors were analysed using a heat transport equation. Parameter estimation and uncertainty analysis were undertaken using the differential evolution adaptive metropolis (DREAM) algorithm, an adaptation of the Markov chain Monte Carlo method, to estimate the flux and its orientation. Measurements were conducted in the field and in a sand tank under an extensive range of controlled hydraulic conditions to validate the method. The use of short-duration heat pulses provided a rapid, accurate assessment technique for determining dynamic and multi-directional flow patterns in the hyporheic zone and is a basis for improved understanding of biogeochemical processes at the water-streambed interface.
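    The breakthrough-curve analysis rests on an advection-dispersion heat transport solution; a simplified isotropic kernel for an instantaneous point pulse is sketched below (the study's equation likely includes heat-capacity ratios and anisotropic thermal dispersion, so treat this as an assumed form):

```python
import numpy as np

def pulse_response(t, x, y, z, v, D, a0=1.0):
    """Temperature rise at (x, y, z) after an instantaneous heat pulse at
    the origin at t = 0, advected along x at velocity v with isotropic
    thermal dispersion coefficient D."""
    t = np.asarray(t, dtype=float)
    r2 = (x - v * t) ** 2 + y ** 2 + z ** 2
    return a0 / (4.0 * np.pi * D * t) ** 1.5 * np.exp(-r2 / (4.0 * D * t))

t = np.linspace(1.0, 1800.0, 600)   # a 30 min record, in seconds
curve = pulse_response(t, x=0.05, y=0.0, z=0.0, v=1e-4, D=1e-6)
t_peak = t[np.argmax(curve)]        # arrival time constrains v toward the sensor
```

    Fitting such curves at many sensor positions simultaneously (e.g. with DREAM, as in the study) constrains both the magnitude and the 3-D orientation of the flux.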

  2. Large Crater Clustering tool

    NASA Astrophysics Data System (ADS)

    Laura, Jason; Skinner, James A.; Hunter, Marc A.

    2017-08-01

    In this paper we present the Large Crater Clustering (LCC) tool set, an ArcGIS plugin that supports the quantitative approximation of a primary impact location from user-identified locations of possible secondary impact craters or the long axes of clustered secondary craters. The identification of primary impact craters directly supports planetary geologic mapping and topical science studies where the chronostratigraphic age of some geologic units may be known, but more distant features have questionable geologic ages. Previous works (e.g., McEwen et al., 2005; Dundas and McEwen, 2007) have shown that the location of a primary impact can be estimated from its secondary impact craters. This work adapts those methods into a statistically robust tool set. We describe the four individual tools within the LCC tool set, which support: (1) processing individually digitized point observations (craters); (2) estimating the directional distribution of a clustered set of craters; (3) back-projecting the potential flight paths (crater clusters or linearly approximated catenae or lineaments); and (4) intersecting the back-projected trajectories to approximate the location of potential source primary craters. We present two case studies using secondary impact features mapped in two regions of Mars. We demonstrate that the tool is able to quantitatively identify primary impacts and supports the improved qualitative interpretation of potential secondary crater flight trajectories.
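    The intersection step amounts to finding the point most consistent with a set of back-projected rays; a common least-squares formulation is sketched below (the LCC tool's exact statistical treatment may differ):

```python
import numpy as np

def intersect_trajectories(points, directions):
    """Least-squares point minimising summed squared perpendicular
    distances to a set of 2-D lines, each given by a point on the line
    and a direction vector (a back-projected secondary trajectory)."""
    A = np.zeros((2, 2))
    b = np.zeros(2)
    for p, d in zip(points, directions):
        d = np.asarray(d, dtype=float)
        d /= np.linalg.norm(d)
        P = np.eye(2) - np.outer(d, d)   # projector onto the line's normal
        A += P
        b += P @ np.asarray(p, dtype=float)
    return np.linalg.solve(A, b)

# two trajectories constructed to meet at (1, 2)
pts = [(0.0, 0.0), (3.0, 0.0)]
dirs = [(1.0, 2.0), (-1.0, 1.0)]
estimate = intersect_trajectories(pts, dirs)
```

    With many noisy trajectories the same normal equations yield a spread of residuals that can feed the statistical confidence assessment the tool set aims at.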

  3. Bayesian and “Anti-Bayesian” Biases in Sensory Integration for Action and Perception in the Size–Weight Illusion

    PubMed Central

    Brayanov, Jordan B.

    2010-01-01

    Which is heavier: a pound of lead or a pound of feathers? This classic trick question belies a simple but surprising truth: when lifted, the pound of lead feels heavier—a phenomenon known as the size–weight illusion. To estimate the weight of an object, our CNS combines two imperfect sources of information: a prior expectation, based on the object's appearance, and direct sensory information from lifting it. Bayes' theorem (or Bayes' law) defines the statistically optimal way to combine multiple information sources for maximally accurate estimation. Here we asked whether the mechanisms for combining these information sources produce statistically optimal weight estimates for both perceptions and actions. We first studied the ability of subjects to hold one hand steady when the other removed an object from it, under conditions in which sensory information about the object's weight sometimes conflicted with prior expectations based on its size. Since the ability to steady the supporting hand depends on the generation of a motor command that accounts for lift timing and object weight, hand motion can be used to gauge biases in weight estimation by the motor system. We found that these motor system weight estimates reflected the integration of prior expectations with real-time proprioceptive information in a Bayesian, statistically optimal fashion that discounted unexpected sensory information. This produces a motor size–weight illusion that consistently biases weight estimates toward prior expectations. In contrast, when subjects compared the weights of two objects, their perceptions defied Bayes' law, exaggerating the value of unexpected sensory information. This produces a perceptual size–weight illusion that biases weight perceptions away from prior expectations. We term this effect “anti-Bayesian” because the bias is opposite that seen in Bayesian integration. 
Our findings suggest that two fundamentally different strategies for the integration of prior expectations with sensory information coexist in the nervous system for weight estimation. PMID:20089821
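
    The Bayes-optimal combination of a Gaussian prior expectation with Gaussian sensory evidence reduces to a precision-weighted average, which is the computation the motor system's behaviour matched. The masses and variances below are invented for illustration.

```python
# Minimal sketch of Bayes-optimal cue combination for two Gaussian
# information sources: the posterior mean is the precision-weighted
# average of the prior and the sensory estimate.
def fuse(mu_prior, var_prior, mu_sense, var_sense):
    w = var_sense / (var_prior + var_sense)     # weight given to the prior
    mu = w * mu_prior + (1.0 - w) * mu_sense
    var = var_prior * var_sense / (var_prior + var_sense)
    return mu, var

# Prior from size ("big box: about 2.0 kg"); lifting suggests 1.0 kg.
mu, var = fuse(2.0, 0.25, 1.0, 0.25)
print(mu, var)   # → 1.5 0.125  (estimate pulled toward the prior)
```

    The "anti-Bayesian" perceptual bias reported here is the opposite pattern: the perceived weight is pushed away from the prior, beyond the sensory estimate.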

  4. Genetic parameters for cattle price and body weight from routinely collected data at livestock auctions and commercial farms.

    PubMed

    Mc Hugh, N; Evans, R D; Amer, P R; Fahey, A G; Berry, D P

    2011-01-01

    Beef outputs from dairy farms make an important contribution to overall profitability in Irish dairy herds and are the sole source of revenue in many beef herds. The aim of this study was to estimate genetic parameters for animal BW and price across different stages of maturity. Data originated from 2 main sources: price and BW from livestock auctions and BW from on-farm weighings between 2000 and 2008. The data were divided into 4 distinct maturity categories: calves (n = 24,513), weanlings (n = 27,877), postweanlings (n = 23,279), and cows (n = 4,894). A univariate animal model used to estimate variance components was progressively built up to include a maternal genetic effect and a permanent environmental maternal effect. Bivariate analyses were used to estimate genetic covariances between BW and price per animal within and across maturity category. Direct heritability estimates for price per animal were 0.34 ± 0.03, 0.31 ± 0.05, 0.19 ± 0.04, and 0.10 ± 0.04 for calves, weanlings, postweanlings, and cows, respectively. Direct heritability estimates for BW were 0.26 ± 0.03 for weanlings, 0.25 ± 0.04 for postweanlings, and 0.24 ± 0.06 for cows; no BW data were available on calves. Significant maternal genetic and maternal permanent environmental effects were observed for weanling BW only. The genetic correlation between price per animal and BW within each maturity group varied from 0.55 ± 0.06 (postweanling price and BW) to 0.91 ± 0.04 (cow price and BW). The availability of routinely collected data, along with the existence of ample genetic variation for animal BW and price per animal, facilitates their inclusion in Irish dairy and beef breeding objectives to better reflect the profitability of both enterprises.

  5. Angular correlation of cosmic neutrinos with ultrahigh-energy cosmic rays and implications for their sources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moharana, Reetanjali; Razzaque, Soebur, E-mail: reetanjalim@uj.ac.za, E-mail: srazzaque@uj.ac.za

    2015-08-01

    Cosmic neutrino events detected by the IceCube Neutrino Observatory with energy ≳ 30 TeV have poor angular resolutions to reveal their origin. Ultrahigh-energy cosmic rays (UHECRs), with better angular resolutions at > 60 EeV energies, can be used to check if the same astrophysical sources are responsible for producing both neutrinos and UHECRs. We test this hypothesis, with statistical methods which emphasize invariant quantities, by using data from the Pierre Auger Observatory, Telescope Array and past cosmic-ray experiments. We find that the arrival directions of the cosmic neutrinos are correlated with ≥ 100 EeV UHECR arrival directions at confidence level ≈ 90%. The strength of the correlation decreases with decreasing UHECR energy and no correlation exists at energy ∼ 60 EeV. A search in astrophysical databases within 3° of the arrival directions of UHECRs with energy ≥ 100 EeV that are correlated with the IceCube cosmic neutrinos resulted in 18 sources from the Swift-BAT X-ray catalog with redshift z ≤ 0.06. We also found 3 objects in the Kühr catalog of radio sources using the same criteria. The sources are dominantly Seyfert galaxies, with Cygnus A the most prominent member. We calculate the required neutrino and UHECR fluxes to produce the observed correlated events, and estimate the corresponding neutrino luminosity (in the 25 TeV–2.2 PeV range) and cosmic-ray luminosity (in the 500 TeV–180 EeV range), assuming the sources are the ones we found in the Swift-BAT and Kühr catalogs. We compare these luminosities with the X-ray luminosity of the corresponding sources and discuss possibilities of accelerating protons to ≳ 100 EeV and producing neutrinos in these sources.
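
    A correlation search of this kind rests on the great-circle separation between two arrival directions. A minimal sketch, assuming equatorial coordinates in degrees and the spherical law of cosines, follows; this is the quantity behind a 3-degree search window, not code from the paper.

```python
import math

# Great-circle separation between two sky directions (RA, Dec in
# degrees), clamped to guard against floating-point overshoot of acos.
def ang_sep_deg(ra1, dec1, ra2, dec2):
    ra1, dec1, ra2, dec2 = map(math.radians, (ra1, dec1, ra2, dec2))
    c = (math.sin(dec1) * math.sin(dec2)
         + math.cos(dec1) * math.cos(dec2) * math.cos(ra1 - ra2))
    return math.degrees(math.acos(max(-1.0, min(1.0, c))))

print(round(ang_sep_deg(0.0, 0.0, 3.0, 0.0), 6))      # → 3.0
print(round(ang_sep_deg(10.0, 45.0, 10.0, 48.0), 6))  # → 3.0
```

    A source counts as "within 3°" of a UHECR when this separation is at most 3.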

  6. Joint seismic-infrasonic processing of recordings from a repeating source of atmospheric explosions.

    PubMed

    Gibbons, Steven J; Ringdal, Frode; Kvaerna, Tormod

    2007-11-01

    A database has been established of seismic and infrasonic recordings from more than 100 well-constrained surface explosions, conducted by the Finnish military to destroy old ammunition. The recorded seismic signals are essentially identical and indicate that the variation in source location and magnitude is negligible. In contrast, the infrasonic arrivals on both seismic and infrasound sensors exhibit significant variation in the number of detected phases, phase travel times, and phase amplitudes, which is attributable to atmospheric factors. This data set provides an excellent basis for studies in sound propagation, infrasound array detection, and direction estimation.

  7. Electricity tomorrow

    NASA Astrophysics Data System (ADS)

    1981-01-01

    The critical issues for the electricity sector in California were presented. Adopted level of electricity demand and adopted policies and supply criteria are included. These form the basis for planning and certification of electric generation and transmission facilities by the energy commission. Estimates of the potential contributions of conservation and various conventional and alternative supply sources, critiques of utility supply plans, and determinations of how much new capacity is required are also included. Policy recommendations for directing public and private investments into preferred energy options, for spreading the benefits and costs of these options broadly and fairly among California's citizens, and for removing remaining obstacles to the development of all acceptable energy sources are presented.

  8. Uniform and nonuniform V-shaped planar arrays for 2-D direction-of-arrival estimation

    NASA Astrophysics Data System (ADS)

    Filik, T.; Tuncer, T. E.

    2009-10-01

    In this paper, isotropic and directional uniform and nonuniform V-shaped arrays are considered for simultaneous azimuth and elevation direction-of-arrival (DOA) angle estimation. It is shown that the uniform isotropic V-shaped arrays (UI V arrays) have no angle coupling between the azimuth and elevation DOA. The design of the UI V arrays is investigated, and closed-form expressions are presented for the parameters of the UI V arrays and nonuniform V arrays. These expressions allow one to find the isotropic V angle for different array types. The DOA performance of the UI V array is compared with the uniform circular array (UCA) for correlated signals and in the case of mutual coupling between array elements. The modeling error for the sensor positions is also investigated. It is shown that the V array and the circular array have similar robustness to position errors, while the performance of the UI V array is better than that of the UCA for correlated source signals and when there is mutual coupling. Nonuniform isotropic V-shaped arrays are investigated, which allow good DOA performance with a limited number of sensors. Furthermore, a new design method for directional V-shaped arrays is proposed. This method is based on the Cramer-Rao bound for joint estimation, where the angle coupling effect between the azimuth and elevation DOA angles is taken into account. The design method finds an optimum angle between the linear subarrays of the V array. The proposed method can be used to obtain directional arrays with significantly better DOA performance.
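
    As a toy illustration of azimuth estimation with a V-shaped array (a conventional beamformer scanning a noise-free snapshot, not the estimators or design method analysed in the paper), one can search a grid of steering vectors for the best match. The arm length, spacing, opening angle and wavelength below are arbitrary assumptions.

```python
import numpy as np

# 2-D V-shaped array of isotropic sensors, azimuth-only DOA search.
lam = 1.0                                   # wavelength (arbitrary units)
d = 0.5 * lam                               # inter-element spacing
n = 5                                       # elements per arm
phi = np.radians(50.0)                      # half the V opening angle
arm1 = np.array([[k * d * np.cos(phi), k * d * np.sin(phi)]
                 for k in range(n)])
arm2 = np.array([[k * d * np.cos(phi), -k * d * np.sin(phi)]
                 for k in range(1, n)])     # vertex element shared
pos = np.vstack([arm1, arm2])

def steer(az):
    u = np.array([np.cos(az), np.sin(az)])  # propagation direction
    return np.exp(2j * np.pi / lam * (pos @ u))

az_true = np.radians(20.0)
x = steer(az_true)                          # one noise-free snapshot

grid = np.radians(np.arange(0.0, 180.0, 0.1))
power = np.array([np.abs(np.vdot(steer(a), x)) for a in grid])
az_hat = grid[np.argmax(power)]
print(np.degrees(az_hat))                   # close to 20.0
```

    Joint azimuth-elevation estimation extends the same idea to a 2-D steering grid, which is where the coupling (or, for the UI V array, the absence of coupling) between the two angles matters.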

  9. Precision in the perception of direction of a moving pattern

    NASA Technical Reports Server (NTRS)

    Stone, Leland S.

    1988-01-01

    The precision of the model of pattern motion analysis put forth by Adelson and Movshon (1982), who proposed that humans determine the direction of a moving plaid (the sum of two sinusoidal gratings of different orientations) in two steps, is quantitatively examined. The velocities of the grating components are first estimated, then combined using the intersection of constraints to determine the velocity of the plaid as a whole. Under the additional assumption that the noise sources for the component velocities are independent, an approximate expression can be derived for the precision in plaid direction as a function of the precision in the speed and direction of the components. Monte Carlo simulations verify that the expression is valid to within 5 percent over the natural range of the parameters. The expression is then used to predict human performance based on available estimates of human precision in the judgment of single component speed. Human performance is predicted to deteriorate by a factor of 3 as half the angle between the wavefronts (theta) decreases from 60 to 30 deg, but actual performance does not. The mean direction discrimination for three human observers was 4.3 ± 0.9 deg (SD) for theta = 60 deg and 5.9 ± 1.2 deg for theta = 30 deg. This discrepancy can be resolved in two ways: if the noises in the internal representations of the component speeds are smaller than the available estimates, or if these noises are not independent, then the psychophysical results are consistent with the Adelson-Movshon hypothesis.
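
    The intersection-of-constraints step itself is a small linear solve: each grating's normal velocity constrains the plaid velocity v through v · n = s, and two gratings give a 2x2 system. The sketch below uses invented numbers to show the computation.

```python
import numpy as np

# Intersection of constraints: each grating moves with speed s along its
# unit normal n, constraining the plaid velocity v by v . n = s.
def ioc(n1, s1, n2, s2):
    N = np.array([n1, n2], dtype=float)       # stack constraint normals
    return np.linalg.solve(N, np.array([s1, s2], dtype=float))

v_true = np.array([3.0, 1.0])                 # illustrative plaid velocity
n1 = np.array([np.cos(np.radians(60)), np.sin(np.radians(60))])
n2 = np.array([np.cos(np.radians(-60)), np.sin(np.radians(-60))])
v = ioc(n1, float(n1 @ v_true), n2, float(n2 @ v_true))
print(v)   # recovers [3., 1.]
```

    Noise in the two component speeds propagates through this solve, which is what makes the predicted direction precision depend on the angle between the grating normals.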

  10. Real time estimation of generation, extinction and flow of muscle fibre action potentials in high density surface EMG.

    PubMed

    Mesin, Luca

    2015-02-01

    The aim was to develop a real-time method to estimate the generation, extinction and propagation of muscle fibre action potentials from bi-dimensional, high-density surface electromyograms (EMG). A multi-frame generalization of an optical flow technique including a source term is considered. A model describing generation, extinction and propagation of action potentials is fit to epochs of surface EMG. The algorithm is tested on simulations of high-density surface EMG (inter-electrode distance equal to 5 mm) from finite-length fibres generated using a multi-layer volume conductor model. The flow and source term estimated from interference EMG reflect the anatomy of the muscle, i.e. the direction of the fibres (2° of average estimation error) and the positions of the innervation zone and tendons under the electrode grid (mean errors of about 1 and 2 mm, respectively). The global conduction velocity of the action potentials from motor units under the detection system is also obtained from the estimated flow. The processing time is about 1 ms per channel for an EMG epoch of duration 150 ms. A new real-time image processing algorithm is proposed to investigate muscle anatomy and activity. Potential applications include prosthesis control, automatic detection of optimal channels for EMG index extraction, and biofeedback.

  11. COSMIC MICROWAVE BACKGROUND POLARIZATION AND TEMPERATURE POWER SPECTRA ESTIMATION USING LINEAR COMBINATION OF WMAP 5 YEAR MAPS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Samal, Pramoda Kumar; Jain, Pankaj; Saha, Rajib

    We estimate cosmic microwave background (CMB) polarization and temperature power spectra using Wilkinson Microwave Anisotropy Probe (WMAP) 5 year foreground-contaminated maps. The power spectrum is estimated by using a model-independent method, which does not directly utilize the diffuse foreground templates or the detector noise model. The method essentially consists of two steps: (1) removal of diffuse foreground contamination by making linear combinations of individual maps in harmonic space and (2) cross-correlation of foreground-cleaned maps to minimize detector noise bias. For the temperature power spectrum we also estimate and subtract residual unresolved point source contamination in the cross-power spectrum using the point source model provided by the WMAP science team. Our TT, TE, and EE power spectra are in good agreement with the published results of the WMAP science team. We perform detailed numerical simulations to test for bias in our procedure. We find that the bias is small in almost all cases. A negative bias at low l in the TT power spectrum has been pointed out in an earlier publication. We find that the bias-corrected quadrupole power (l(l + 1)C_l/2π) is 532 μK², approximately 2.5 times the estimate (213.4 μK²) made by the WMAP team.

  12. Information spreading by a combination of MEG source estimation and multivariate pattern classification.

    PubMed

    Sato, Masashi; Yamashita, Okito; Sato, Masa-Aki; Miyawaki, Yoichi

    2018-01-01

    To understand information representation in human brain activity, it is important to investigate its fine spatial patterns at high temporal resolution. One possible approach is to use source estimation of magnetoencephalography (MEG) signals. Previous studies have mainly quantified accuracy of this technique according to positional deviations and dispersion of estimated sources, but it remains unclear how accurately MEG source estimation restores information content represented by spatial patterns of brain activity. In this study, using simulated MEG signals representing artificial experimental conditions, we performed MEG source estimation and multivariate pattern analysis to examine whether MEG source estimation can restore information content represented by patterns of cortical current in source brain areas. Classification analysis revealed that the corresponding artificial experimental conditions were predicted accurately from patterns of cortical current estimated in the source brain areas. However, accurate predictions were also possible from brain areas whose original sources were not defined. Searchlight decoding further revealed that this unexpected prediction was possible across wide brain areas beyond the original source locations, indicating that information contained in the original sources can spread through MEG source estimation. This phenomenon of "information spreading" may easily lead to false-positive interpretations when MEG source estimation and classification analysis are combined to identify brain areas that represent target information. Real MEG data analyses also showed that presented stimuli were able to be predicted in the higher visual cortex at the same latency as in the primary visual cortex, also suggesting that information spreading took place. These results indicate that careful inspection is necessary to avoid false-positive interpretations when MEG source estimation and multivariate pattern analysis are combined.

  13. Information spreading by a combination of MEG source estimation and multivariate pattern classification

    PubMed Central

    Sato, Masashi; Yamashita, Okito; Sato, Masa-aki

    2018-01-01

    To understand information representation in human brain activity, it is important to investigate its fine spatial patterns at high temporal resolution. One possible approach is to use source estimation of magnetoencephalography (MEG) signals. Previous studies have mainly quantified accuracy of this technique according to positional deviations and dispersion of estimated sources, but it remains unclear how accurately MEG source estimation restores information content represented by spatial patterns of brain activity. In this study, using simulated MEG signals representing artificial experimental conditions, we performed MEG source estimation and multivariate pattern analysis to examine whether MEG source estimation can restore information content represented by patterns of cortical current in source brain areas. Classification analysis revealed that the corresponding artificial experimental conditions were predicted accurately from patterns of cortical current estimated in the source brain areas. However, accurate predictions were also possible from brain areas whose original sources were not defined. Searchlight decoding further revealed that this unexpected prediction was possible across wide brain areas beyond the original source locations, indicating that information contained in the original sources can spread through MEG source estimation. This phenomenon of “information spreading” may easily lead to false-positive interpretations when MEG source estimation and classification analysis are combined to identify brain areas that represent target information. Real MEG data analyses also showed that presented stimuli were able to be predicted in the higher visual cortex at the same latency as in the primary visual cortex, also suggesting that information spreading took place. These results indicate that careful inspection is necessary to avoid false-positive interpretations when MEG source estimation and multivariate pattern analysis are combined. PMID:29912968

  14. Combining Radiography and Passive Measurements for Radiological Threat Localization in Cargo

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miller, Erin A.; White, Timothy A.; Jarman, Kenneth D.

    Detecting shielded special nuclear material (SNM) in a cargo container is a difficult problem, since shielding reduces the amount of radiation escaping the container. Radiography provides information that is complementary to that provided by passive gamma-ray detection systems: while not directly sensitive to radiological materials, radiography can reveal highly shielded regions that may mask a passive radiological signal. Combining these measurements has the potential to improve SNM detection, either through improved sensitivity or by providing a solution to the inverse problem to estimate source properties (strength and location). We present a data-fusion method that uses a radiograph to provide an estimate of the radiation-transport environment for gamma rays from potential sources. This approach makes quantitative use of radiographic images without relying on image interpretation, and results in a probabilistic description of likely source locations and strengths. We present results for this method for a modeled test case of a cargo container passing through a plastic-scintillator-based radiation portal monitor and a transmission-radiography system. We find that a radiograph-based inversion scheme allows for localization of a low-noise source placed randomly within the test container to within 40 cm, compared to 70 cm for triangulation alone, while strength estimation accuracy is improved by a factor of six. Improvements are seen in regions of both high and low shielding, but are most pronounced in highly shielded regions. The approach proposed here combines transmission and emission data in a manner that has not been explored in the cargo-screening literature, advancing the ability to accurately describe a hidden source based on currently available instrumentation.

  15. Are changing emission patterns across the Northern Hemisphere influencing long-range transport contributions to background air pollution?

    NASA Astrophysics Data System (ADS)

    Mathur, R.; Kang, D.; Napelenok, S. L.; Xing, J.; Hogrefe, C.

    2017-12-01

    Air pollution reduction strategies for a region are complicated not only by the interplay of local emission sources and several complex physical, chemical, and dynamical processes in the atmosphere, but also by hemispheric background levels of pollutants. Contrasting changes in emission patterns across the globe (e.g. declining emissions in North America and Western Europe in response to implementation of control measures and increasing emissions across Asia due to economic and population growth) are resulting in heterogeneous changes in the tropospheric chemical composition and are likely altering long-range transport impacts and consequently background pollution levels at receptor regions. To quantify these impacts, the WRF-CMAQ model is expanded to hemispheric scales and multi-decadal model simulations are performed for the period spanning 1990-2010 to examine changes in hemispheric air pollution resulting from changes in emissions over this period. Simulated trends in ozone and precursor species concentrations across the U.S. and the Northern Hemisphere over the past two decades are compared with those inferred from available measurements during this period. Additionally, the decoupled direct method (DDM) in CMAQ, a first- and higher-order sensitivity calculation technique, is used to estimate the sensitivity of O3 to emissions from different source regions across the Northern Hemisphere. The seasonal variations in source region contributions to background O3 are then estimated from these sensitivity calculations and will be discussed. These source region sensitivities estimated from DDM are then combined with the multi-decadal simulations of O3 distributions and emissions trends to characterize the changing contributions of different source regions to background O3 levels across North America. This characterization of changing long-range transport contributions is critical for the design and implementation of tighter national air quality standards.

  16. Dispersion of a Passive Scalar Fluctuating Plume in a Turbulent Boundary Layer. Part I: Velocity and Concentration Measurements

    NASA Astrophysics Data System (ADS)

    Nironi, Chiara; Salizzoni, Pietro; Marro, Massimo; Mejean, Patrick; Grosjean, Nathalie; Soulhac, Lionel

    2015-09-01

    The prediction of the probability density function (PDF) of a pollutant concentration within atmospheric flows is of primary importance in estimating the hazard related to accidental releases of toxic or flammable substances and their effects on human health. This need motivates studies devoted to the characterization of concentration statistics of pollutant dispersion in the lower atmosphere and their dependence on the parameters controlling the emissions. As is known from previous experimental results, concentration fluctuations are significantly influenced by the diameter of the source and its elevation. In this study, we aim to further investigate the dependence of the dispersion process on the source configuration, including source size, elevation and emission velocity. To that end we study experimentally the influence of these parameters on the statistics of the concentration of a passive scalar, measured at several distances downwind of the source. We analyze the spatial distribution of the first four moments of the concentration PDFs, with a focus on the variance, its dissipation and production and its spectral density. The information provided by the dataset, completed by estimates of the intermittency factors, allows us to discuss the role of the main mechanisms controlling the scalar dispersion and their link to the form of the PDF. The latter is shown to be very well approximated by a Gamma distribution, irrespective of the emission conditions and the distance from the source. Concentration measurements are complemented by a detailed description of the velocity statistics, including direct estimates of the Eulerian integral length scales from two-point correlations, a measurement that has rarely been presented to date.
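
    A quick way to check the Gamma form of a concentration PDF is a method-of-moments fit: the shape is mean²/variance and the scale is variance/mean. The sketch below fits synthetic samples with invented parameters; it is not the paper's analysis.

```python
import numpy as np

# Method-of-moments Gamma fit: k = mean^2 / var, theta = var / mean.
rng = np.random.default_rng(2)
k_true, theta_true = 2.0, 1.5
c = rng.gamma(k_true, theta_true, 200000)   # synthetic "concentrations"

mean, var = c.mean(), c.var()
k_hat = mean ** 2 / var
theta_hat = var / mean
print(k_hat, theta_hat)   # close to 2.0 and 1.5
```

    Comparing the fitted Gamma's higher moments against the measured third and fourth moments then tests whether the two-parameter form really captures the tails.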

  17. The effects of training on errors of perceived direction in perspective displays

    NASA Technical Reports Server (NTRS)

    Tharp, Gregory K.; Ellis, Stephen R.

    1990-01-01

    An experiment was conducted to determine the effects of training on the characteristic direction errors that are observed when subjects estimate exocentric directions on perspective displays. Changes in five subjects' perceptual errors were measured during a training procedure designed to eliminate the error. The training was provided by displaying to each subject both the sign and the direction of his judgment error. The feedback provided by the error display was found to decrease but not eliminate the error. A lookup table model of the source of the error was developed in which the judgment errors were attributed to overestimates of both the pitch and the yaw of the viewing direction used to produce the perspective projection. The model predicts the quantitative characteristics of the data somewhat better than previous models did. A mechanism is proposed for the observed learning, and further tests of the model are suggested.

  18. Transboundary Air-Pollution Transport in the Czech-Polish Border Region between the Cities of Ostrava and Katowice.

    PubMed

    Černikovský, Libor; Krejčí, Blanka; Blažek, Zdeněk; Volná, Vladimíra

    2016-12-01

    The Czech Hydrometeorological Institute (CHMI) estimated the transboundary transport of air pollution between the Czech Republic and Poland by assessing relationships between weather conditions and air pollution in the area as part of the "Air Quality Information System in the Polish-Czech border of the Silesian and Moravian-Silesian region" project (http://www.air-silesia.eu). Estimation of cross-border transport of pollutants is important for Czech-Polish negotiations and targeted measures for improving air quality. PM10 and sulphur dioxide (SO2) concentrations, together with wind direction and speed, were measured directly at stations in the vicinity of the Czech-Polish state border in 2006-2012. Taking into account all the inaccuracies, simplifications and uncertainties that affect such measurements, the PM10 transboundary transport was greater from Poland to the Czech Republic than the other way around. Nevertheless, the highest share of the overall PM10 concentration load was recorded on days with an ambiguously estimated airflow direction. These were usually days with a changing wind direction, or days with a distinct wind change during the given day; a changeable wind is most common at low wind speeds. It can be assumed that on such days the polluted air, fed by sources on both sides of the border, moves from one country to the other, so these concentrations can be ascribed roughly equally to the Czech and Polish sides. PM10 transboundary transport was higher from Poland to the Czech Republic than in the opposite direction, despite the predominant air flow from the Czech Republic to Poland.

  19. Algorithms and uncertainties for the determination of multispectral irradiance components and aerosol optical depth from a shipborne rotating shadowband radiometer

    NASA Astrophysics Data System (ADS)

    Witthuhn, Jonas; Deneke, Hartwig; Macke, Andreas; Bernhard, Germar

    2017-03-01

    The 19-channel rotating shadowband radiometer GUVis-3511 built by Biospherical Instruments provides automated shipborne measurements of the direct, diffuse and global spectral irradiance components without a requirement for platform stabilization. Several direct sun products, including spectral direct beam transmittance, aerosol optical depth, Ångström exponent and precipitable water, can be derived from these observations. The individual steps of the data analysis are described, and the different sources of uncertainty are discussed. The total uncertainty of the observed direct beam transmittances is estimated to be about 4 % for most channels within a 95 % confidence interval for shipborne operation. The calibration is identified as the dominating contribution to the total uncertainty. A comparison of direct beam transmittance with those obtained from a Cimel sunphotometer at a land site and a manually operated Microtops II sunphotometer on a ship is presented. Measurements deviate by less than 3 % on land and 4 % on ship for most channels, in agreement with our previous uncertainty estimate. These numbers demonstrate that the instrument is well suited for shipborne operation, and the applied methods for motion correction work accurately. Based on spectral direct beam transmittance, aerosol optical depth can be retrieved with an uncertainty of 0.02 for all channels within a 95 % confidence interval. The different methods to account for Rayleigh scattering and gas absorption in our scheme and in the Aerosol Robotic Network processing for Cimel sunphotometers lead to minor deviations. Relying on the cross-calibration of the 940 nm water vapor channel with the Cimel sunphotometer, the column amount of precipitable water can be estimated with an uncertainty of ±0.034 cm.
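
    The core of such a retrieval is the Beer-Lambert law: the direct-beam transmittance at one channel gives the total optical depth as -ln(T)/m, and subtracting the Rayleigh and trace-gas contributions leaves the aerosol optical depth. The sketch below uses illustrative numbers, not instrument values.

```python
import math

# Beer-Lambert AOD retrieval for a single channel:
#   T = exp(-m * (tau_aerosol + tau_rayleigh + tau_gas))
def aod(T_direct, airmass, tau_rayleigh, tau_gas=0.0):
    return -math.log(T_direct) / airmass - tau_rayleigh - tau_gas

m = 1.2                               # relative optical air mass
tau_r = 0.097                         # illustrative Rayleigh optical depth
T = math.exp(-m * (tau_r + 0.15))     # synthetic measurement, AOD = 0.15
print(round(aod(T, m, tau_r), 6))     # → 0.15
```

    Because the retrieval subtracts tau_rayleigh and tau_gas directly, small differences in how those terms are computed (as noted for the AERONET comparison) translate one-to-one into AOD differences.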

  20. Application of empirical and dynamical closure methods to simple climate models

    NASA Astrophysics Data System (ADS)

    Padilla, Lauren Elizabeth

    This dissertation applies empirically- and physically-based methods for closure of uncertain parameters and processes to three model systems that lie on the simple end of climate model complexity. Each model isolates one of three sources of closure uncertainty: uncertain observational data, large dimension, and wide ranging length scales. They serve as efficient test systems toward extension of the methods to more realistic climate models. The empirical approach uses the Unscented Kalman Filter (UKF) to estimate the transient climate sensitivity (TCS) parameter in a globally-averaged energy balance model. Uncertainty in climate forcing and historical temperature make TCS difficult to determine. A range of probabilistic estimates of TCS computed for various assumptions about past forcing and natural variability corroborate ranges reported in the IPCC AR4 found by different means. Also computed are estimates of how quickly uncertainty in TCS may be expected to diminish in the future as additional observations become available. For higher system dimensions the UKF approach may become prohibitively expensive. A modified UKF algorithm is developed in which the error covariance is represented by a reduced-rank approximation, substantially reducing the number of model evaluations required to provide probability densities for unknown parameters. The method estimates the state and parameters of an abstract atmospheric model, known as Lorenz 96, with accuracy close to that of a full-order UKF for 30-60% rank reduction. The physical approach to closure uses the Multiscale Modeling Framework (MMF) to demonstrate closure of small-scale, nonlinear processes that would not be resolved directly in climate models. A one-dimensional, abstract test model with a broad spatial spectrum is developed. The test model couples the Kuramoto-Sivashinsky equation to a transport equation that includes cloud formation and precipitation-like processes. 
In the test model, three main sources of MMF error are evaluated independently. Loss of nonlinear multi-scale interactions and periodic boundary conditions in the closure models were the dominant sources of error. Using a reduced-order modeling approach to maximize energy content allowed reduction of the closure model dimension by up to 75% without loss in accuracy. MMF and a comparable alternative model performed equally well compared to direct numerical simulation.
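
    The scalar measurement update at the heart of such a filter can be sketched as follows. This is an illustrative toy, not the dissertation's code: the observation model T = F / lambda and all numbers (forcing F = 3.7 W m-2, prior on the feedback parameter) are invented for the example.

```python
import math

def ukf_scalar_update(mean, var, h, y, obs_var, kappa=2.0):
    """One unscented measurement update for a scalar state.

    h is the (nonlinear) observation function, y the measurement."""
    n = 1
    s = math.sqrt((n + kappa) * var)          # sigma-point spread
    pts = [mean, mean + s, mean - s]
    w = [kappa / (n + kappa), 0.5 / (n + kappa), 0.5 / (n + kappa)]
    z = [h(p) for p in pts]                   # propagate sigma points
    z_mean = sum(wi * zi for wi, zi in zip(w, z))
    p_zz = sum(wi * (zi - z_mean) ** 2 for wi, zi in zip(w, z)) + obs_var
    p_xz = sum(wi * (p - mean) * (zi - z_mean) for wi, p, zi in zip(w, pts, z))
    k = p_xz / p_zz                           # Kalman gain
    return mean + k * (y - z_mean), var - k * k * p_zz

# toy problem: infer a climate feedback parameter lam from repeated noise-free
# observations of the equilibrium response T = F / lam (numbers invented)
F = 3.7
lam_mean, lam_var = 1.5, 0.25                 # prior mean and variance
for _ in range(20):
    lam_mean, lam_var = ukf_scalar_update(
        lam_mean, lam_var, lambda lam: F / lam, F / 1.2, 0.1)
```

With each update the posterior mean drifts toward the generating value of 1.2 and the variance shrinks, mirroring the dissertation's point that additional observations steadily narrow the TCS estimate.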

  1. On the adequacy of identified Cole-Cole models

    NASA Astrophysics Data System (ADS)

    Xiang, Jianping; Cheng, Daizhan; Schlindwein, F. S.; Jones, N. B.

    2003-06-01

    The Cole-Cole model has been widely used to interpret electrical geophysical data. Normally an iterative computer program is used to invert the frequency-domain complex impedance data, and a simple error estimate is obtained from the squared difference of the measured (field) and calculated values over the full frequency range. Recently a new direct inversion algorithm was proposed for the 'optimal' estimation of the Cole-Cole parameters, which differs from existing inversion algorithms in that the estimated parameters are direct solutions of a set of equations, with no need for an initial guess. This paper first briefly investigates the advantages and disadvantages of the new algorithm compared to the standard Levenberg-Marquardt "ridge regression" algorithm. Then, and more importantly, we address the adequacy of the models resulting from both the "ridge regression" and the new algorithm, using two different statistical tests, and give objective statistical criteria for acceptance or rejection of the estimated models. The first is the standard χ2 technique. The second is a parameter-accuracy-based test that uses a joint multi-normal distribution. Numerical results that illustrate the performance of both testing methods are given. The main goals of this paper are (i) to provide the source code for the new "direct inversion" algorithm in Matlab and (ii) to introduce and demonstrate two methods to determine the reliability of a set of data before data processing, i.e., to consider the adequacy of the resulting Cole-Cole model.
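
    The χ2 adequacy idea can be illustrated on synthetic data. The sketch below assumes the Pelton form of the Cole-Cole impedance and invented parameters and noise level; an adequate model yields a χ2 statistic near the number of residuals, while a wrong model inflates it by orders of magnitude.

```python
import math
import random

def cole_cole(omega, r0, m, tau, c):
    """Cole-Cole complex impedance (Pelton form, assumed here)."""
    return r0 * (1 - m * (1 - 1 / (1 + (1j * omega * tau) ** c)))

# synthetic "field" data with known Gaussian noise sigma (parameters invented)
random.seed(1)
true = dict(r0=100.0, m=0.5, tau=0.01, c=0.4)
sigma = 0.05
freqs = [10 ** (k / 4) for k in range(-8, 17)]      # 0.01 Hz .. 10 kHz
data = [cole_cole(2 * math.pi * f, **true)
        + complex(random.gauss(0, sigma), random.gauss(0, sigma))
        for f in freqs]

def chi2(params):
    """Sum of squared normalized residuals over real and imaginary parts."""
    s = 0.0
    for f, d in zip(freqs, data):
        r = (d - cole_cole(2 * math.pi * f, **params)) / sigma
        s += r.real ** 2 + r.imag ** 2
    return s

# an adequate model gives chi2 near the number of residuals (2 per frequency);
# a wrong chargeability m blows it up
chi2_true = chi2(true)
chi2_wrong = chi2(dict(true, m=0.9))
```

Comparing chi2_true against the appropriate χ2 quantile for 2 × 25 residuals is the acceptance/rejection criterion described in the abstract.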

  2. The collective emission of electromagnetic waves from astrophysical jets - Luminosity gaps, BL Lacertae objects, and efficient energy transport

    NASA Technical Reports Server (NTRS)

    Baker, D. N.; Borovsky, Joseph E.; Benford, Gregory; Eilek, Jean A.

    1988-01-01

    A model of the inner portions of astrophysical jets is constructed in which a relativistic electron beam is injected from the central engine into the jet plasma. This beam drives electrostatic plasma wave turbulence, which leads to the collective emission of electromagnetic waves. The emitted waves are beamed in the direction of the jet axis, so that end-on viewing of the jet yields an extremely bright source (BL Lacertae object). The relativistic electron beam may also drive long-wavelength electromagnetic plasma instabilities (firehose and Kelvin-Helmholtz) that jumble the jet magnetic field lines. After a sufficient distance from the core source, these instabilities will cause the beamed emission to point in random directions and the jet emission can then be observed from any direction relative to the jet axis. This combination of effects may lead to the gap turn-on of astrophysical jets. The collective emission model leads to different estimates for energy transport and the interpretation of radio spectra than the conventional incoherent synchrotron theory.

  3. Spiral computed tomography phase-space source model in the BEAMnrc/EGSnrc Monte Carlo system: implementation and validation.

    PubMed

    Kim, Sangroh; Yoshizumi, Terry T; Yin, Fang-Fang; Chetty, Indrin J

    2013-04-21

    Currently, the BEAMnrc/EGSnrc Monte Carlo (MC) system does not provide a spiral CT source model for the simulation of spiral CT scanning. We developed and validated a spiral CT phase-space source model in the BEAMnrc/EGSnrc system. The spiral phase-space source model was implemented in the DOSXYZnrc user code of the BEAMnrc/EGSnrc system by analyzing the geometry of the spiral CT scan (scan range, initial angle, rotational direction, pitch, slice thickness, etc.). Table movement was simulated by changing the coordinates of the isocenter as a function of beam angle. Parameters such as pitch, slice thickness and translation per rotation were also incorporated to make the new phase-space source model, designed specifically for spiral CT scan simulations. The source model was hard-coded by modifying the 'ISource = 8: Phase-Space Source Incident from Multiple Directions' in the srcxyznrc.mortran and dosxyznrc.mortran files in the DOSXYZnrc user code. To verify the implementation, spiral CT scans were simulated in a CT dose index phantom using the validated x-ray tube model of a commercial CT simulator for both the original multi-direction source (ISOURCE = 8) and the new phase-space source model in the DOSXYZnrc system. The acquired 2D and 3D dose distributions were then analyzed with respect to the input parameters for various pitch values. In addition, surface-dose profiles were measured for a patient CT scan protocol using radiochromic film and compared with the MC simulations. The new phase-space source model was found to simulate spiral CT scanning accurately in a single simulation run. It also produced a dose distribution equivalent to that of the ISOURCE = 8 model for the same CT scan parameters. The MC-simulated surface profiles matched the film measurements overall within 10%. The new spiral CT phase-space source model was implemented in the BEAMnrc/EGSnrc system.
This work will be beneficial in estimating the spiral CT scan dose in the BEAMnrc/EGSnrc system.
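
    The table-movement idea, shifting the isocenter linearly with gantry angle, can be sketched in a few lines. The definition of translation per rotation as pitch times collimation is illustrative only; scanner conventions vary, and the function name and units are hypothetical.

```python
def spiral_isocenter(angle_deg, start_z, pitch, collimation,
                     start_angle_deg=0.0, direction=+1):
    """Isocenter z-position (mm) at a given gantry angle in a spiral scan.

    Table translation per full rotation is taken as pitch * collimation,
    an illustrative definition; scanner conventions vary."""
    rotations = direction * (angle_deg - start_angle_deg) / 360.0
    return start_z + rotations * pitch * collimation

# pitch 1.0 with 10 mm collimation: one rotation moves the table 10 mm
z0 = spiral_isocenter(0.0, 0.0, 1.0, 10.0)
z1 = spiral_isocenter(360.0, 0.0, 1.0, 10.0)
z2 = spiral_isocenter(720.0, 0.0, 1.5, 10.0)   # pitch 1.5, two rotations
```

Sampling this function at each simulated beam angle reproduces the helical source trajectory in a single run, which is the essence of the phase-space modification described above.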

  4. Comparison of clast and matrix dispersal in till: Charlo-Atholville area, north-central New Brunswick

    USGS Publications Warehouse

    Dickson, M.L.; Broster, B.E.; Parkhill, M.A.

    2004-01-01

    Striations and dispersal patterns for till clasts and matrix geochemistry are used to define flow directions of glacial transport across an area of about 800 km2 in the Charlo-Atholville area of north-central New Brunswick. A total of 170 clast samples and 328 till matrix samples, collected across the region for geochemical analysis, were analyzed for 39 elements. Major lithologic contacts used here to delineate till clast provenance were based on recent bedrock mapping. Eleven known mineral occurrences and a gossan are used to define point-source targets for matrix geochemical dispersal trains and to estimate the probable distance and direction of transport from unknown sources. Clast trains are traceable for distances of approximately 10 km, whereas till geochemical dispersal patterns are commonly lost within 5 km of transport. Most dispersal patterns reflect more than a single direction of glacial transport. These data indicate that a single till sheet, 1-4 m thick, was deposited as the dominant ice-flow direction fluctuated between southeastward, eastward, and northward over the study area. Directions of early flow represent changes in ice-sheet dominance, first from the northwest and then from the west. Locally, eastward and northward flows represent the maximum erosive phases. The last directions of flow are likely due to late-glacial ice-sheet drawdown towards the valley outlet at Baie des Chaleurs.

  5. Studying emissions of CO2 in the Baltimore/Washington area using airborne measurements: source attribution, flux quantification, and model comparison

    NASA Astrophysics Data System (ADS)

    Ahn, D.; Hansford, J. R.; Salawitch, R. J.; Ren, X.; Cohen, M.; Karion, A.; Whetstone, J. R.; Salmon, O. E.; Shepson, P. B.; Gurney, K. R.; Osterman, G. B.; Dickerson, R. R.

    2017-12-01

    We study emissions of CO2 in the Baltimore-Washington area using airborne in-situ measurements obtained during the February 2015 Fluxes of Greenhouse Gases in Maryland (FLAGG-MD) campaign. In this study, we attributed enhanced CO2 signals to several power plants and two urban areas (Baltimore City and Washington, DC), using the NOAA HYSPLIT air-parcel trajectory model as well as the analysis of chemical ratios to quantify the source/receptor relationship. The fluxes of attributed CO2 are then estimated using a mass-balance approach. The uncertainty in the aircraft-based mass-balance approach is estimated through a detailed sensitivity analysis of the CO2 fluxes, considering factors such as the background mixing ratio of CO2, wind direction and speed, PBL heights, the horizontal boundary, and vertical interpolation methods. The estimated CO2 fluxes and their uncertainty ranges are then compared to output from various emissions data sets and models, such as CEMS, CarbonTracker, FFDAS, and ODIAC. Finally, column CO2 data over the Baltimore-Washington region observed by the OCO-2 satellite instrument are statistically compared to the aircraft in-situ observations, to assess how well OCO-2 can quantify geographic and synoptic-scale variability.
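
    The core of the mass-balance approach is integrating the concentration enhancement over the downwind flight "curtain" and multiplying by the wind component perpendicular to the transect. A toy version, with all geometry, wind, and concentration values invented:

```python
import math

def mass_balance_flux(conc, bg, wind_speed, wind_dir_deg, transect_dir_deg,
                      dx, dz):
    """Emission estimate from a downwind 'curtain' of concentration data.

    conc[iz][ix] holds concentration (mass per volume) on the flight curtain;
    the wind component perpendicular to the transect advects the enhancement
    (conc - bg) through each dx-by-dz cell."""
    u_perp = wind_speed * abs(math.sin(math.radians(wind_dir_deg
                                                    - transect_dir_deg)))
    return sum((c - bg) * u_perp * dx * dz for row in conc for c in row)

# toy curtain: uniform enhancement of 2 units on a 3 (vertical) x 4 grid,
# wind blowing perpendicular to the transect (all values invented)
conc = [[3.0] * 4 for _ in range(3)]
flux = mass_balance_flux(conc, bg=1.0, wind_speed=5.0, wind_dir_deg=90.0,
                         transect_dir_deg=0.0, dx=100.0, dz=50.0)
```

The sensitivity factors listed in the abstract (background value, wind, PBL height, boundaries, interpolation) correspond directly to the inputs of this integral, which is why the uncertainty analysis perturbs each of them in turn.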

  6. Assessing the Gap Between Top-down and Bottom-up Measured Methane Emissions in Indianapolis, IN.

    NASA Astrophysics Data System (ADS)

    Prasad, K.; Lamb, B. K.; Cambaliza, M. O. L.; Shepson, P. B.; Stirm, B. H.; Salmon, O. E.; Lavoie, T. N.; Lauvaux, T.; Ferrara, T.; Howard, T.; Edburg, S. L.; Whetstone, J. R.

    2014-12-01

    Releases of methane (CH4) from the natural gas supply chain in the United States account for approximately 30% of total US CH4 emissions. However, large questions remain regarding the accuracy of current emission inventories for methane from natural gas usage. In this paper, we describe results from top-down and bottom-up measurements of methane emissions from the large, isolated city of Indianapolis. The top-down results are based on aircraft mass balance and tower-based inverse modeling methods, while the bottom-up results are based on direct component sampling at metering and regulating stations, surface enclosure measurements of surveyed pipeline leaks, and tracer/modeling methods for other urban sources. Mobile mapping of urban methane concentrations was also used to identify significant sources and showed an urban-wide, low-level enhancement of methane. The residual difference between top-down and bottom-up measured emissions is large and cannot be fully explained by the uncertainties in the top-down and bottom-up emission measurements and estimates. Thus, the residual appears to be attributable, at least in part, to a significant widespread diffusive source. Analyses are included to estimate the size and nature of this diffusive source.

  7. Localization and separation of acoustic sources by using a 2.5-dimensional circular microphone array.

    PubMed

    Bai, Mingsian R; Lai, Chang-Sheng; Wu, Po-Chen

    2017-07-01

    Circular microphone arrays (CMAs) are sufficient in many immersive audio applications because the azimuthal angles of sources are considered more important than elevation angles in such applications. However, the fact that CMAs do not resolve the elevation angle well can be a limitation for applications that involve three-dimensional sound images. This paper proposes a 2.5-dimensional (2.5-D) CMA comprised of a CMA and a vertical logarithmic-spacing linear array (LLA) on top. In the localization stage, two delay-and-sum beamformers are applied to the CMA and the LLA, respectively. The direction of arrival (DOA) is estimated from the product of the two array output signals. In the separation stage, Tikhonov regularization and convex optimization are employed to extract the source amplitudes on the basis of the estimated DOA. The extracted signals from the two arrays are further processed by the normalized least-mean-square algorithm with internal iteration to yield the source signal with improved quality. To validate the 2.5-D CMA experimentally, a three-dimensionally printed circular array comprising a 24-element CMA and an eight-element LLA is constructed. An objective perceptual evaluation of speech quality test and a subjective listening test are also undertaken.
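
    A minimal narrowband sketch of the localization stage follows. The 24-element circle and 8-element log-spaced line mirror the paper's element counts, but the radius, element heights, frequency, and noise-free plane-wave snapshots are all assumptions made for illustration.

```python
import cmath
import math

SPEED, FREQ = 343.0, 1000.0                 # m/s, Hz (narrowband sketch)
K = 2 * math.pi * FREQ / SPEED              # wavenumber

def unit(az_deg, el_deg):
    az, el = math.radians(az_deg), math.radians(el_deg)
    return (math.cos(el) * math.cos(az), math.cos(el) * math.sin(az),
            math.sin(el))

# 24-element circle (radius 0.1 m, assumed) plus an 8-element vertical
# log-spaced line on top
cma = [(0.1 * math.cos(2 * math.pi * i / 24),
        0.1 * math.sin(2 * math.pi * i / 24), 0.0) for i in range(24)]
lla = [(0.0, 0.0, 0.02 * 1.5 ** j) for j in range(8)]

def steer(mics, az, el):
    d = unit(az, el)
    return [cmath.exp(1j * K * (p[0] * d[0] + p[1] * d[1] + p[2] * d[2]))
            for p in mics]

true_az, true_el = 40.0, 20.0
x_cma = steer(cma, true_az, true_el)        # far-field plane-wave snapshots
x_lla = steer(lla, true_az, true_el)

def power(mics, x, az, el):
    w = steer(mics, az, el)                 # delay-and-sum weights
    return abs(sum(xi * wi.conjugate() for xi, wi in zip(x, w))) ** 2

# DOA from the product of the two beamformer outputs: the CMA pins down the
# azimuth, while the LLA resolves the elevation (including its sign)
best = max((power(cma, x_cma, a, e) * power(lla, x_lla, a, e), a, e)
           for a in range(0, 360, 5) for e in range(-60, 65, 5))
```

The CMA alone is ambiguous between elevations of +20 and -20 degrees (its mirror symmetry about the array plane), which is exactly the ambiguity the vertical LLA factor removes in the product.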

  8. Exploiting simultaneous observational constraints on mass and absorption to estimate the global direct radiative forcing of black carbon and brown carbon

    NASA Astrophysics Data System (ADS)

    Wang, X.; Heald, C. L.; Ridley, D. A.; Schwarz, J. P.; Spackman, J. R.; Perring, A. E.; Coe, H.; Liu, D.; Clarke, A. D.

    2014-06-01

    Atmospheric black carbon (BC) is a leading climate warming agent, yet uncertainties on the global direct radiative forcing (DRF) remain large. Here we expand a global model simulation (GEOS-Chem) of BC to include the absorption enhancement associated with BC coating and separately treat both the aging and physical properties of fossil fuel and biomass burning BC. In addition we develop a global simulation of brown carbon (BrC) from both secondary (aromatic) and primary (biomass burning and biofuel) sources. The global mean lifetime of BC in this simulation (4.4 days) is substantially lower than the AeroCom I model means (7.3 days), and as a result, this model captures both the mass concentrations measured in near-source airborne field campaigns (ARCTAS, EUCAARI) and surface sites within 30%, and in remote regions (HIPPO) within a factor of two. We show that the new BC optical properties together with the inclusion of BrC reduce the model bias in absorption aerosol optical depth (AAOD) at multiple wavelengths by more than 50% at AERONET sites worldwide. However, our improved model still underestimates AAOD by a factor of 1.4 to 2.8 regionally, with the largest underestimates in regions influenced by fire. Using the RRTMG model integrated with GEOS-Chem we estimate that the all-sky top-of-atmosphere DRF of BC is +0.13 W m-2 (0.08 W m-2 from anthropogenic sources and 0.05 W m-2 from biomass burning). If we scale our model to match AERONET AAOD observations we estimate the DRF of BC is +0.21 W m-2, with an additional +0.11 W m-2 of warming from BrC. Uncertainties in size, optical properties, observations, and emissions suggest an overall uncertainty in BC DRF of -80% / +140%. Our estimates are at the lower end of the 0.2-1.0 W m-2 range from previous studies, and substantially less than the +0.6 W m-2 DRF estimated in the IPCC 5th Assessment Report.
We suggest that the DRF of BC has previously been overestimated due to the overestimation of the BC lifetime and the incorrect attribution of BrC absorption to BC.

  9. Exploiting simultaneous observational constraints on mass and absorption to estimate the global direct radiative forcing of black carbon and brown carbon

    NASA Astrophysics Data System (ADS)

    Wang, X.; Heald, C. L.; Ridley, D. A.; Schwarz, J. P.; Spackman, J. R.; Perring, A. E.; Coe, H.; Liu, D.; Clarke, A. D.

    2014-10-01

    Atmospheric black carbon (BC) is a leading climate warming agent, yet uncertainties on the global direct radiative forcing (DRF) remain large. Here we expand a global model simulation (GEOS-Chem) of BC to include the absorption enhancement associated with BC coating and separately treat both the aging and physical properties of fossil-fuel and biomass-burning BC. In addition we develop a global simulation of brown carbon (BrC) from both secondary (aromatic) and primary (biomass burning and biofuel) sources. The global mean lifetime of BC in this simulation (4.4 days) is substantially lower than the AeroCom I model means (7.3 days), and as a result, this model captures both the mass concentrations measured in near-source airborne field campaigns (ARCTAS, EUCAARI) and surface sites within 30%, and in remote regions (HIPPO) within a factor of 2. We show that the new BC optical properties together with the inclusion of BrC reduce the model bias in absorption aerosol optical depth (AAOD) at multiple wavelengths by more than 50% at AERONET sites worldwide. However, our improved model still underestimates AAOD by a factor of 1.4 to 2.8 regionally, with the largest underestimates in regions influenced by fire. Using the RRTMG model integrated with GEOS-Chem we estimate that the all-sky top-of-atmosphere DRF of BC is +0.13 Wm-2 (0.08 Wm-2 from anthropogenic sources and 0.05 Wm-2 from biomass burning). If we scale our model to match AERONET AAOD observations we estimate the DRF of BC is +0.21 Wm-2, with an additional +0.11 Wm-2 of warming from BrC. Uncertainties in size, optical properties, observations, and emissions suggest an overall uncertainty in BC DRF of -80%/+140%. Our estimates are at the lower end of the 0.2-1.0 Wm-2 range from previous studies, and substantially less than the +0.6 Wm-2 DRF estimated in the IPCC 5th Assessment Report.
We suggest that the DRF of BC has previously been overestimated due to the overestimation of the BC lifetime (including the effect on the vertical profile) and the incorrect attribution of BrC absorption to BC.

  10. DCMDN: Deep Convolutional Mixture Density Network

    NASA Astrophysics Data System (ADS)

    D'Isanto, Antonio; Polsterer, Kai Lars

    2017-09-01

    Deep Convolutional Mixture Density Network (DCMDN) estimates probabilistic photometric redshifts directly from multi-band imaging data by combining a deep convolutional network with a mixture density network. The estimates are expressed as Gaussian mixture models representing the probability density functions (PDFs) in redshift space. In addition to the traditional scores, the continuous ranked probability score (CRPS) and the probability integral transform (PIT) are applied as performance criteria. DCMDN is able to predict redshift PDFs independently of the source type (e.g., galaxies, quasars or stars), rendering pre-classification of objects and feature extraction unnecessary. The method is extremely general and allows solving any kind of probabilistic regression problem based on imaging data, such as estimating metallicity or star formation rate in galaxies.
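
    The CRPS criterion mentioned above has a closed form for a single Gaussian predictive PDF (Gneiting & Raftery); a Gaussian-mixture CRPS can be assembled from similar terms or evaluated numerically. A self-contained sketch with invented values:

```python
import math

def crps_gaussian(y, mu, sigma):
    """Closed-form CRPS of a Gaussian predictive PDF N(mu, sigma^2)
    against a scalar observation y."""
    z = (y - mu) / sigma
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2 * math.pi)
    cdf = 0.5 * (1 + math.erf(z / math.sqrt(2)))
    return sigma * (z * (2 * cdf - 1) + 2 * pdf - 1 / math.sqrt(math.pi))

# a sharp forecast centered on the truth scores low; the same sharp forecast
# far from the truth is penalized heavily (values invented)
good = crps_gaussian(0.0, 0.0, 0.1)
bad = crps_gaussian(1.0, 0.0, 0.1)
```

Unlike a point-estimate metric, CRPS rewards both calibration and sharpness of the full PDF, which is why it suits mixture-density outputs like DCMDN's.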

  11. Global inverse modeling of CH4 sources and sinks: an overview of methods

    NASA Astrophysics Data System (ADS)

    Houweling, Sander; Bergamaschi, Peter; Chevallier, Frederic; Heimann, Martin; Kaminski, Thomas; Krol, Maarten; Michalak, Anna M.; Patra, Prabir

    2017-01-01

    The aim of this paper is to present an overview of the inverse modeling methods that have been developed over the years for estimating the global sources and sinks of CH4. It provides insight into how techniques and estimates have evolved over time and what the remaining shortcomings are. As such, it serves the didactic purpose of introducing newcomers to the field, but it also takes stock of developments so far and reflects on promising new directions. The main focus is on methodological aspects that are particularly relevant for CH4, such as its atmospheric oxidation, the use of methane isotopologues, and specific challenges in atmospheric transport modeling of CH4. The use of satellite retrievals receives special attention, as it is an active field of methodological development with special requirements on the sampling of the model and the treatment of data uncertainty. Regional-scale flux estimation and attribution is still a grand challenge, which calls for new methods capable of combining information from multiple data streams of different measured parameters. A process-model representation of sources and sinks in atmospheric transport inversion schemes allows the integrated use of such data. These new developments are needed not only to improve our understanding of the main processes driving the observed global trend but also to support international efforts to reduce greenhouse gas emissions.
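
    A building block common to most of these inversion schemes is the linear Gaussian update of prior fluxes given an observation. A minimal sketch with invented numbers, using two source regions and a single mixing-ratio observation:

```python
def linear_inversion(x_prior, B, H, y, r):
    """Single-observation linear Gaussian update:
    x_hat = x_a + B H^T (H B H^T + R)^(-1) (y - H x_a)."""
    n = len(x_prior)
    bh = [sum(B[i][j] * H[j] for j in range(n)) for i in range(n)]   # B H^T
    hbh = sum(H[i] * bh[i] for i in range(n))                        # H B H^T
    innov = y - sum(H[i] * x_prior[i] for i in range(n))
    return [x_prior[i] + bh[i] / (hbh + r) * innov for i in range(n)]

# two CH4 source regions and one observation (numbers invented): the prior
# underpredicts the observation, so both fluxes are scaled up in proportion
# to their transport sensitivity and prior uncertainty
x_hat = linear_inversion(x_prior=[10.0, 20.0],
                         B=[[4.0, 0.0], [0.0, 1.0]],   # prior covariance
                         H=[0.5, 0.5],                 # observation operator
                         y=18.0, r=0.25)               # obs and its variance
```

The more uncertain first region (prior variance 4 versus 1) absorbs most of the correction, illustrating how the prior covariance controls attribution among regions.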

  12. Monte Carlo Approach for Estimating Density and Atomic Number From Dual-Energy Computed Tomography Images of Carbonate Rocks

    NASA Astrophysics Data System (ADS)

    Victor, Rodolfo A.; Prodanović, Maša.; Torres-Verdín, Carlos

    2017-12-01

    We develop a new Monte Carlo-based inversion method for estimating electron density and effective atomic number from 3-D dual-energy computed tomography (CT) core scans. The method accounts for uncertainties in X-ray attenuation coefficients resulting from the polychromatic nature of the X-ray beam sources of medical and industrial scanners, in addition to delivering uncertainty estimates of the inversion products. Estimation of electron density and effective atomic number from CT core scans enables direct deterministic or statistical correlations with salient rock properties for improved petrophysical evaluation; this is especially important in media such as vuggy carbonates, where CT resolution better captures the core heterogeneity that dominates fluid-flow properties. Verification tests of the inversion method performed on a set of highly heterogeneous carbonate cores yield very good agreement with in situ borehole measurements of density and photoelectric factor.

  13. Estimation of Dynamical Parameters in Atmospheric Data Sets

    NASA Technical Reports Server (NTRS)

    Wenig, Mark O.

    2004-01-01

    In this study a new technique is used to derive dynamical parameters from atmospheric data sets. This technique, called the structure tensor technique, can be used to estimate dynamical parameters such as motion, source strengths, diffusion constants or exponential decay rates. A general mathematical framework was developed for the direct estimation, from image sequences, of the physical parameters that govern the underlying processes. This estimation technique can be adapted to the specific physical problem under investigation, so it can be used in a variety of applications in trace gas, aerosol, and cloud remote sensing. The fundamental algorithm will be extended to the analysis of multi-channel (e.g., multiple trace gases) image sequences and to provide solutions to the extended aperture problem. In this study, sensitivity studies were performed to determine the usability of this technique for data sets with different resolutions in time and space and different dimensions.
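
    The motion-estimation case of the structure tensor can be sketched in one spatial dimension plus time; the synthetic sequence and all numbers below are invented for illustration. For a pattern translating at constant velocity, the tensor built from averaged derivative products yields the velocity directly.

```python
import math

# synthetic sequence: a Gaussian bump translating at v = 2 grid cells/frame
v_true, nx, nt = 2.0, 64, 8
seq = [[math.exp(-((x - 20 - v_true * t) ** 2) / 30.0) for x in range(nx)]
       for t in range(nt)]

# structure-tensor components from central differences, accumulated over the
# whole sequence (in practice the averaging window is local)
jxx = jxt = 0.0
for t in range(1, nt - 1):
    for x in range(1, nx - 1):
        gx = 0.5 * (seq[t][x + 1] - seq[t][x - 1])   # spatial derivative
        gt = 0.5 * (seq[t + 1][x] - seq[t - 1][x])   # temporal derivative
        jxx += gx * gx
        jxt += gx * gt

# for pure 1-D translation, the smallest-eigenvalue eigenvector of the 2x2
# tensor [[jxx, jxt], [jxt, jtt]] reduces to the estimate below
v_est = -jxt / jxx
```

The same least-squares structure generalizes to source strengths or decay rates by adding the corresponding terms to the brightness-change equation, which is the extension the abstract alludes to.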

  14. Multi-scale modeling of irradiation effects in spallation neutron source materials

    NASA Astrophysics Data System (ADS)

    Yoshiie, T.; Ito, T.; Iwase, H.; Kaneko, Y.; Kawai, M.; Kishida, I.; Kunieda, S.; Sato, K.; Shimakawa, S.; Shimizu, F.; Hashimoto, S.; Hashimoto, N.; Fukahori, T.; Watanabe, Y.; Xu, Q.; Ishino, S.

    2011-07-01

    Changes in the mechanical properties of Ni under irradiation by 3 GeV protons were estimated by multi-scale modeling. The code consisted of four parts. The first part was based on the Particle and Heavy-Ion Transport code System (PHITS) for nuclear reactions, and modeled the interactions between high-energy protons and nuclei in the target. The second part covered atomic collisions by particles without nuclear reactions. Because the energy of the particles was high, subcascade analysis was employed. The direct formation of clusters and the number of mobile defects were estimated using molecular dynamics (MD) and kinetic Monte Carlo (kMC) methods in each subcascade. The third part considered damage-structure evolution, estimated by reaction kinetics analysis. The fourth part estimated the change in mechanical properties using three-dimensional discrete dislocation dynamics (DDD). Using this four-part code, stress-strain curves for Ni irradiated with high-energy protons were obtained.

  15. Human health risks related to the consumption of foodstuffs of animal origin contaminated by bisphenol A.

    PubMed

    Gorecki, Sébastien; Bemrah, Nawel; Roudot, Alain-Claude; Marchioni, Eric; Le Bizec, Bruno; Faivre, Franck; Kadawathagedara, Manik; Botton, Jérémie; Rivière, Gilles

    2017-12-01

    Bisphenol A (BPA) is used in a wide variety of consumer products and objects (digital media such as CDs and DVDs, sports equipment, food and beverage containers, medical equipment). For humans, the main route of exposure to BPA is food. Based on previous estimates, almost 20% of the dietary exposure to BPA in the French population would come from food of animal origin. However, because composite samples were used, the source of the contamination had not been identified. Therefore, 322 individual samples of non-canned foods of animal origin were collected with the objectives of, first, updating the estimate of the exposure of the French population and, second, identifying the source of contamination of these foodstuffs using a specific analytical method. Compared to previous estimates in France, a decline in the contamination of the samples was observed, in particular with regard to meat. The estimated mean dietary exposures ranged from 0.048 to 0.050 μg (kg bw)-1 d-1 for children and adolescents aged 3-17 years, from 0.034 to 0.035 μg (kg bw)-1 d-1 for adults and from 0.047 to 0.049 μg (kg bw)-1 d-1 for pregnant women. The contribution of meat to the total dietary exposure of pregnant women, adults and children was up to three times lower than previous estimates. Despite this downward trend in contamination, the toxicological values were exceeded for the population of pregnant women. With the aim of acquiring more knowledge about the potential source(s) of contamination of non-canned foods of animal origin, a specific analytical method was developed to directly identify and quantify conjugated BPA (BPA-monoglucuronide, BPA-diglucuronide and sulphate forms) in 50 samples. No conjugated forms of BPA were detected in the analysed samples, indicating clearly that the BPA content in animal food was not due to metabolism but arose post mortem in the food. This contamination may occur during food production.
However, despite extensive sampling performed in several different shops (butcheries, supermarkets…) and under different conditions (fresh, prepared, frozen…), the source(s) of the contamination could not be specifically identified. Copyright © 2017 Elsevier Ltd. All rights reserved.

  16. Characterizing the SWOT discharge error budget on the Sacramento River, CA

    NASA Astrophysics Data System (ADS)

    Yoon, Y.; Durand, M. T.; Minear, J. T.; Smith, L.; Merry, C. J.

    2013-12-01

    The Surface Water and Ocean Topography (SWOT) mission is an upcoming satellite mission (planned launch in 2020) that will provide surface-water elevation and surface-water extent globally. One goal of SWOT is the estimation of river discharge directly from SWOT measurements. SWOT discharge uncertainty has two sources. First, SWOT cannot directly measure the channel bathymetry or the roughness coefficient needed for discharge calculations; these parameters must be estimated from the measurements or from a priori information. Second, SWOT measurement errors directly affect the accuracy of the discharge estimate. This study focuses on characterizing parameter and measurement uncertainties for SWOT river discharge estimation. A Bayesian Markov chain Monte Carlo scheme is used to calculate parameter estimates, given the measurements of river height, slope and width, and mass and momentum constraints. The algorithm is evaluated using both simulated SWOT and AirSWOT (the airborne version of SWOT) observations over seven reaches (about 40 km) of the Sacramento River. The SWOT and AirSWOT observations are simulated by corrupting the 'true' HEC-RAS hydraulic modeling results with the instrument error. This experiment addresses how unknown bathymetry and roughness coefficients affect the accuracy of the river discharge algorithm. The discharge error budget is almost completely dominated by unknown bathymetry and roughness; 81% of the error variance is explained by uncertainties in bathymetry and roughness. Second, we show how errors in the water-surface, slope, and width observations influence the accuracy of discharge estimates. There is significant sensitivity to water-surface, slope, and width errors because the bathymetry and roughness estimates are themselves sensitive to measurement errors. Increasing the water-surface error above 10 cm leads to a correspondingly sharp increase in bathymetry and roughness errors.
Increasing the slope error above 1.5 cm/km leads to significant degradation through direct error in the discharge estimates. As the width error increases past 20%, the discharge error budget is dominated by the width error. The above two experiments are based on AirSWOT scenarios. In addition, we explore the sensitivity of the algorithm to the SWOT scenarios.
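
    The paper's MCMC scheme operates on height, slope, and width with mass and momentum constraints; as a much-simplified stand-in, the sketch below runs random-walk Metropolis over a Manning roughness coefficient given noisy synthetic discharges. The channel geometry, noise level, and proposal scale are all invented.

```python
import math
import random

random.seed(0)

def manning_q(n, depth, width, slope):
    """Discharge from Manning's equation for a rectangular channel."""
    area = depth * width
    rh = area / (width + 2 * depth)            # hydraulic radius
    return area * rh ** (2.0 / 3.0) * math.sqrt(slope) / n

# synthetic observations generated with true n = 0.03, then corrupted
true_n, width, slope, sigma_q = 0.03, 80.0, 1e-4, 20.0
depths = [2.0, 2.5, 3.0, 3.5]
q_obs = [manning_q(true_n, d, width, slope) + random.gauss(0, sigma_q)
         for d in depths]

def log_post(n):
    if not 0.01 < n < 0.1:                     # flat prior on a physical range
        return -math.inf
    return -0.5 * sum((q - manning_q(n, d, width, slope)) ** 2
                      for q, d in zip(q_obs, depths)) / sigma_q ** 2

# random-walk Metropolis over the roughness coefficient
n_cur, lp_cur, chain = 0.05, log_post(0.05), []
for _ in range(5000):
    n_prop = n_cur + random.gauss(0, 0.002)
    lp_prop = log_post(n_prop)
    if math.log(random.random()) < lp_prop - lp_cur:
        n_cur, lp_cur = n_prop, lp_prop
    chain.append(n_cur)
post_mean = sum(chain[1000:]) / len(chain[1000:])
```

The spread of the post-burn-in chain is the parameter uncertainty that, propagated through Manning's equation, produces the bathymetry/roughness-dominated discharge error budget described above.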

  17. Compressive spherical beamforming for localization of incipient tip vortex cavitation.

    PubMed

    Choo, Youngmin; Seong, Woojae

    2016-12-01

    Noise from incipient propeller tip vortex cavitation (TVC) is generally generated in regions near the propeller tip. Localization of these sparse noise sources is performed using compressive sensing (CS) with measurement data from cavitation tunnel experiments. Since initial TVC sound radiates in all directions as a monopole source, a sensing matrix for CS is formulated by adopting spherical beamforming. CS localization is first examined with measurements of a known source, where the CS-estimated source position coincides with the known source position. Afterwards, CS is applied to initial cavitation noise cases. The cavitation was localized near the upper downstream area of the propeller, with less ambiguity than Bartlett spherical beamforming. The standard constraint in CS was modified by exploiting the physical features of cavitation to suppress the remaining ambiguity. CS localization of TVC using the modified constraint is shown as a function of cavitation number and compared to high-speed camera images.

  18. A probabilistic approach for the estimation of earthquake source parameters from spectral inversion

    NASA Astrophysics Data System (ADS)

    Supino, M.; Festa, G.; Zollo, A.

    2017-12-01

    The amplitude spectrum of a seismic signal related to an earthquake source carries information about the size of the rupture, moment, stress and energy release. Furthermore, it can be used to characterize the Green's function of the medium crossed by the seismic waves. We describe the earthquake amplitude spectrum assuming a generalized Brune (1970) source model, and direct P- and S-waves propagating in a layered velocity model characterized by a frequency-independent Q attenuation factor. The observed displacement spectrum then depends on three source parameters: the seismic moment (through the low-frequency spectral level), the corner frequency (a proxy for the fault length) and the high-frequency decay parameter. These parameters are strongly correlated with each other and with the quality factor Q; a rigorous estimation of the associated uncertainties and parameter resolution is thus needed to obtain reliable estimates. In this work, the uncertainties are characterized by adopting a probabilistic approach to parameter estimation. Assuming an L2-norm-based misfit function, we perform a global exploration of the parameter space to find the absolute minimum of the cost function, and then explore the joint a posteriori probability density function associated with the cost function around that minimum to extract the correlation matrix of the parameters. The global exploration relies on building a Markov chain in the parameter space and on combining deterministic minimization with a random exploration of the space (the basin-hopping technique). The joint pdf is built from the misfit function using the maximum-likelihood principle, assuming a Gaussian-like distribution of the parameters. It is then computed on a grid centered at the global minimum of the cost function. Numerical integration of the pdf finally provides the mean, variance and correlation matrix associated with the set of best-fit parameters describing the model.
Synthetic tests are performed to investigate the robustness of the method and uncertainty propagation from the data-space to the parameter space. Finally, the method is applied to characterize the source parameters of the earthquakes occurring during the 2016-2017 Central Italy sequence, with the goal of investigating the source parameter scaling with magnitude.
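The global-then-local exploration described in this record can be sketched with SciPy's basin-hopping optimiser, which combines random hops with deterministic local minimisation. Everything below (the simplified omega-square spectral model, the noise level, the starting point, and the "true" parameters) is an illustrative assumption, not a value from the paper:

```python
import numpy as np
from scipy.optimize import basinhopping

rng = np.random.default_rng(0)
f = np.logspace(-1, 1.5, 80)              # frequency samples, Hz

def brune_log_spectrum(p, f):
    """log10 displacement spectrum: Brune-type source + frequency-independent Q."""
    log_omega0, fc, tstar = p             # spectral level, corner frequency, t*
    return (log_omega0
            - np.log10(1.0 + (f / fc) ** 2)      # omega-square spectral falloff
            - np.pi * f * tstar / np.log(10.0))  # attenuation exp(-pi f t*)

true_p = (2.0, 3.0, 0.02)                 # assumed "true" parameters
data = brune_log_spectrum(true_p, f) + 0.05 * rng.standard_normal(f.size)

def misfit(p):                            # L2-norm cost function
    return np.sum((brune_log_spectrum(p, f) - data) ** 2)

# global exploration: random hops combined with deterministic local minimisation
result = basinhopping(misfit, x0=[1.0, 1.0, 0.1], niter=50, seed=1,
                      minimizer_kwargs={"method": "Nelder-Mead"})
log_omega0_est = result.x[0]
fc_est = abs(result.x[1])                 # the model is symmetric in the sign of fc
```

In the paper's full scheme this minimum would then anchor a grid on which the joint a-posteriori pdf is evaluated and integrated; the sketch stops at the point estimate.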

  19. Impact of emissions from natural gas production facilities on ambient air quality in the Barnett Shale area: a pilot study.

    PubMed

    Zielinska, Barbara; Campbell, Dave; Samburova, Vera

    2014-12-01

    Rapid and extensive development of shale gas resources in the Barnett Shale region of Texas in recent years has created concerns about potential environmental impacts on water and air quality. The purpose of this study was to provide a better understanding of the potential contributions of emissions from gas production operations to population exposure to air toxics in the Barnett Shale region. This goal was approached using a combination of chemical characterization of the volatile organic compound (VOC) emissions from active wells, saturation monitoring for gaseous and particulate pollutants in a residential community located near active gas/oil extraction and processing facilities, source apportionment of VOCs measured in the community using the Chemical Mass Balance (CMB) receptor model, and direct measurements of the pollutant gradient downwind of a gas well with high VOC emissions. Overall, the study results indicate that air quality impacts due to individual gas wells and compressor stations are not likely to be discernible beyond a distance of approximately 100 m in the downwind direction. However, source apportionment results indicate a significant contribution to regional VOCs from gas production sources, particularly for lower-molecular-weight alkanes (< C6). Although measured ambient VOC concentrations were well below health-based safe exposure levels, the existence of urban-level mean concentrations of benzene and other mobile source air toxics combined with soot to total carbon ratios that were high for an area with little residential or commercial development may be indicative of the impact of increased heavy-duty vehicle traffic related to gas production. Implications: Rapid and extensive development of shale gas resources in recent years has created concerns about potential environmental impacts on water and air quality. 
This study focused on directly measuring the ambient air pollutant levels occurring at residential properties located near natural gas extraction and processing facilities, and estimating the relative contributions from gas production and motor vehicle emissions to ambient VOC concentrations. Although only a small-scale case study, the results may be useful for guidance in planning future ambient air quality studies and human exposure estimates in areas of intensive shale gas production.
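The CMB receptor model mentioned above solves, at its core, a least-squares mass balance between measured ambient species concentrations and fixed source profiles. The sketch below uses hypothetical VOC profiles and non-negative least squares as a stand-in for the full effective-variance CMB solution (to which it reduces when all measurement uncertainties are equal):

```python
import numpy as np
from scipy.optimize import nnls

# hypothetical source profiles: mass fraction of each VOC species emitted
# per unit mass, for two source categories
species = ["propane", "n-butane", "benzene", "toluene"]
profiles = np.array([
    [0.45, 0.05],     # rows: species; columns: gas production, vehicle exhaust
    [0.30, 0.10],
    [0.05, 0.25],
    [0.20, 0.60],
])

true_contrib = np.array([8.0, 3.0])          # source contributions, ug/m^3
ambient = profiles @ true_contrib            # noise-free "measured" ambient mix

# with equal uncertainties, the effective-variance CMB fit reduces to
# non-negative least squares on the balance: ambient = profiles @ contrib
contrib, residual = nnls(profiles, ambient)
```

With noise-free synthetic data the contributions are recovered exactly; real applications weight each species by its measurement and profile uncertainty.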

  20. Comparison of the South Florida Natural System Model with Pre-canal Everglades Hydrology Estimated from Historical Sources

    USGS Publications Warehouse

    McVoy, Christopher; Park, Winifred A.; Obeysekera, Jayantha

    1996-01-01

Preservation and restoration of the remaining Everglades ecosystem is focussed on two aspects: improving upstream water quality and improving 'hydropatterns' - the timing, depth and flow of surface water. Restoration of hydropatterns requires knowledge of the original pre-canal drainage conditions as well as an understanding of the soil, topographic, and vegetation changes that have taken place since canal drainage began in the 1880's. The Natural System Model (NSM), developed by the South Florida Water Management District (SFWMD) and Everglades National Park, uses estimates of pre-drainage vegetation and topography to estimate the pre-drainage hydrologic response of the Everglades. Sources of model uncertainty include: (1) the algorithms, (2) the parameters (particularly those relating to vegetation roughness and evapotranspiration), and (3) errors in the assumed pre-drainage vegetation distribution and pre-drainage topography. Other studies are concentrating on algorithmic and parameter sources of uncertainty. In this study we focus on the NSM output -- predicted hydropattern -- and evaluate this by comparison with all available direct and indirect information on pre-drainage hydropatterns. The unpublished and published literature is being searched exhaustively for observations of water depth, flow direction, flow velocity and hydroperiod, during the period prior to and just after drainage (1840-1920). Additionally, a comprehensive map of soils in the Everglades region, prepared in the 1940's by personnel from the University of Florida Agricultural Experiment Station, the U.S. Soil Conservation Service, the U.S. Geological Survey, and the Everglades Drainage District, is being used to identify wetland soils and to infer the spatial distribution of pre-drainage hydrologic conditions. 
Detailed study of this map and other early soil and vegetation maps in light of the history of drainage activities will reveal patterns of change and possible errors in the input to the NSM. Changes in the wetland soils are important because of their effects on topography (soil subsidence) and in their role as indicators of hydropattern.

  1. Inverse modelling for real-time estimation of radiological consequences in the early stage of an accidental radioactivity release.

    PubMed

    Pecha, Petr; Šmídl, Václav

    2016-11-01

A stepwise sequential assimilation algorithm is proposed based on an optimisation approach for recursive parameter estimation and tracking of radioactive plume propagation in the early stage of a radiation accident. Predictions of the radiological situation in each time step of the plume propagation are driven by an existing short-term meteorological forecast, and the assimilation procedure manipulates the model parameters to match the observations incoming concurrently from the terrain. Mathematically, the task is a typical ill-posed inverse problem of estimating the parameters of the release. The proposed method is designed as a stepwise re-estimation of the source term release dynamics and an improvement of several input model parameters. It results in a more precise determination of the adversely affected areas in the terrain. The nonlinear least-squares regression methodology is applied for estimation of the unknowns. The fast and adequately accurate segmented Gaussian plume model (SGPM) is used in the first stage of direct (forward) modelling. The subsequent inverse procedure infers (re-estimates) the values of important model parameters from the actual observations. Accuracy and sensitivity of the proposed method for real-time forecasting of the accident propagation are studied. First, a twin experiment generating noiseless simulated "artificial" observations is studied to verify the minimisation algorithm. Second, the impact of the measurement noise on the re-estimated source release rate is examined. In addition, the presented method can be used as a proposal for more advanced statistical techniques using, e.g., importance sampling.

  2. Neutron generator for BNCT based on high current ECR ion source with gyrotron plasma heating.

    PubMed

    Skalyga, V; Izotov, I; Golubev, S; Razin, S; Sidorov, A; Maslennikova, A; Volovecky, A; Kalvas, T; Koivisto, H; Tarvainen, O

    2015-12-01

The development of BNCT is currently constrained by progress in neutron source design. A cheap and compact intense neutron source would significantly simplify trial treatments by avoiding the use of expensive and complicated nuclear reactors and accelerators. A D-D or D-T neutron generator is one alternative type of such a source. In the present work, a so-called high-current quasi-gasdynamic ECR ion source with plasma heating by millimeter-wave gyrotron radiation is suggested for use in a D-D neutron generator scheme. An ion source of this type was developed at the Institute of Applied Physics of the Russian Academy of Sciences (Nizhny Novgorod, Russia). It can produce deuteron ion beams with current densities up to 700-800 mA/cm^2. A neutron flux density at the level of 7-8·10^10 s^-1 cm^-2 at the target surface could be obtained by bombarding a TiD2 target with a deuteron beam accelerated to 100 keV. Estimates show that this is sufficient to form an epithermal neutron flux with a density higher than 10^9 s^-1 cm^-2, suitable for BNCT. An important advantage of the described approach is the absence of tritium in the scheme. First experiments performed in pulsed regime with a 300 mA, 45 kV deuteron beam directed at a D2O target demonstrated a neutron flux of 10^9 s^-1. This value corresponds to the theoretical estimates and proves the prospects of developing a neutron generator based on a high-current quasi-gasdynamic ECR ion source.

  3. Finding and estimating chemical property data for environmental assessment.

    PubMed

    Boethling, Robert S; Howard, Philip H; Meylan, William M

    2004-10-01

    The ability to predict the behavior of a chemical substance in a biological or environmental system largely depends on knowledge of the physicochemical properties and reactivity of that substance. We focus here on properties, with the objective of providing practical guidance for finding measured values and using estimation methods when necessary. Because currently available computer software often makes it more convenient to estimate than to retrieve measured values, we try to discourage irrational exuberance for these tools by including comprehensive lists of Internet and hard-copy data resources. Guidance for assessors is presented in the form of a process to obtain data that includes establishment of chemical identity, identification of data sources, assessment of accuracy and reliability, substructure searching for analogs when experimental data are unavailable, and estimation from chemical structure. Regarding property estimation, we cover estimation from close structural analogs in addition to broadly applicable methods requiring only the chemical structure. For the latter, we list and briefly discuss the most widely used methods. Concluding thoughts are offered concerning appropriate directions for future work on estimation methods, again with an emphasis on practical applications.

  4. Rule-Based Flight Software Cost Estimation

    NASA Technical Reports Server (NTRS)

    Stukes, Sherry A.; Spagnuolo, John N. Jr.

    2015-01-01

This paper discusses the fundamental process for the computation of Flight Software (FSW) cost estimates. This process has been incorporated in a rule-based expert system [1] that can be used for Independent Cost Estimates (ICEs), proposals, and for the validation of Cost Analysis Data Requirements (CADRe) submissions. A high-level directed graph (referred to here as a decision graph) illustrates the steps taken in the production of these estimated costs and serves as a basis of design for the expert system described in this paper. Detailed discussions are subsequently given elaborating upon the methodology, tools, charts, and caveats related to the various nodes of the graph. We present general principles for the estimation of FSW, using SEER-SEM as an illustration of these principles where appropriate. Since Source Lines of Code (SLOC) is a major cost driver, we discuss various SLOC data sources for the preparation of the estimates, together with an explanation of how contractor SLOC estimates compare with the SLOC estimates used by JPL. Obtaining consistency in code counting is presented as well, along with factors used in reconciling SLOC estimates from different code counters. When sufficient data are obtained, a mapping from the SEER-SEM output into the JPL Work Breakdown Structure (WBS) is illustrated. For across-the-board FSW estimates, as was done for the NASA Discovery Mission proposal estimates performed at JPL, a comparative high-level summary sheet for all missions, with the SLOC, data description, brief mission description, and the most relevant SEER-SEM parameter values, is given to encapsulate the data used and calculated in the estimates. The rule-based expert system described provides the user with inputs useful or sufficient to run generic cost estimation programs. The system is implemented in the C Language Integrated Production System (CLIPS) and is addressed at the end of this paper.

  5. Contribution of Brown Carbon to Direct Radiative Forcing over the Indo-Gangetic Plain.

    PubMed

    Shamjad, P M; Tripathi, S N; Pathak, Ravi; Hallquist, M; Arola, Antti; Bergin, M H

    2015-09-01

The Indo-Gangetic Plain is a region of known high aerosol loading with substantial amounts of carbonaceous aerosols from a variety of sources, often dominated by biomass burning. Although black carbon has been shown to play an important role in the absorption of solar energy and hence direct radiative forcing (DRF), little is known regarding the influence of light-absorbing brown carbon (BrC) on the radiative balance in the region. With this in mind, a study was conducted over a one-month period during the winter-spring season of 2013 in Kanpur, India, measuring aerosol chemical and physical properties that were used to estimate the sources of carbonaceous aerosols, as well as parameters necessary to estimate direct forcing by aerosols and the contribution of BrC absorption to the atmospheric energy balance. Positive matrix factorization analyses, based on aerosol mass spectrometer measurements, resolved organic carbon into four factors: low-volatility oxygenated organic aerosols, semivolatile oxygenated organic aerosols, biomass burning, and hydrocarbon-like organic aerosols. Three-wavelength absorption and scattering coefficient measurements from a Photo Acoustic Soot Spectrometer were used to estimate aerosol optical properties and the relative contribution of BrC to atmospheric absorption. Mean ± standard deviation values of short-wave cloud-free clear-sky DRF exerted by total aerosols at the top of atmosphere, at the surface, and within the atmospheric column are -6.1 ± 3.2, -31.6 ± 11, and 25.5 ± 10.2 W/m^2, respectively. During days dominated by biomass burning, the absorption of solar energy by aerosols within the atmosphere increased by ∼35%, accompanied by a 25% increase in negative surface DRF. DRF at the top of atmosphere during biomass burning days decreased in negative magnitude by several W/m^2 due to enhanced atmospheric absorption by biomass aerosols, including BrC. 
The contribution of BrC to atmospheric absorption is estimated to range from 2.6 W/m^2 on average under typical ambient conditions to 3.6 W/m^2 during biomass burning days. This suggests that BrC accounts for 10-15% of the total aerosol absorption in the atmosphere, indicating that BrC likely plays an important role in surface and boundary temperature as well as climate.

  6. Laser-ablation-based ion source characterization and manipulation for laser-driven ion acceleration

    NASA Astrophysics Data System (ADS)

    Sommer, P.; Metzkes-Ng, J.; Brack, F.-E.; Cowan, T. E.; Kraft, S. D.; Obst, L.; Rehwald, M.; Schlenvoigt, H.-P.; Schramm, U.; Zeil, K.

    2018-05-01

For laser-driven ion acceleration from thin foils (∼10 μm–100 nm) in the target normal sheath acceleration regime, the hydrocarbon contaminant layer at the target surface generally serves as the ion source and hence determines the accelerated ion species, i.e. mainly protons, carbon and oxygen ions. The specific characteristics of the source layer—thickness and relevant lateral extent—as well as its manipulation have both been investigated since the first experiments on laser-driven ion acceleration, using a variety of techniques from direct source imaging to knife-edge or mesh imaging. In this publication, we present an experimental study in which laser ablation in two fluence regimes (low: F ∼ 0.6 J cm^-2, high: F ∼ 4 J cm^-2) was applied to characterize and manipulate the hydrocarbon source layer. The high-fluence ablation in combination with a timed laser pulse for particle acceleration allowed for an estimation of the relevant source layer thickness for proton acceleration. Moreover, from these data and independently from the low-fluence regime, the lateral extent of the ion source layer became accessible.

  7. Multitaper scan-free spectrum estimation using a rotational shear interferometer.

    PubMed

    Lepage, Kyle; Thomson, David J; Kraut, Shawn; Brady, David J

    2006-05-01

Multitaper methods for scan-free spectrum estimation using a rotational shear interferometer are investigated. Before source spectra can be estimated, the sources must be detected. A source detection algorithm based upon the multitaper F-test is proposed. The algorithm is simulated with additive, white Gaussian detector noise. A source with a signal-to-noise ratio (SNR) of 0.71 is detected 2.9 degrees from a source with an SNR of 70.1, with a significance level of 10^-4, approximately 4 orders of magnitude more significant than the source detection obtained with a standard detection algorithm. Interpolation and the use of prewhitening filters are investigated in the context of rotational shear interferometer (RSI) source spectra estimation. Finally, a multitaper spectrum estimator is proposed, simulated, and compared with untapered estimates. The multitaper estimate is found via simulation to distinguish a spectral feature with an SNR of 1.6 near a large spectral feature. This feature is not distinguished by the untapered spectrum estimate. The findings are consistent with the strong capability of the multitaper estimate to reduce out-of-band spectral leakage.
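A minimal multitaper spectrum estimate of the kind referred to in this record can be sketched with SciPy's DPSS (Slepian) tapers: the data are multiplied by each orthogonal taper, the resulting eigenspectra are averaged, and leakage outside the design bandwidth is strongly suppressed. The signal, sampling rate, and time-bandwidth product below are arbitrary choices for illustration, not values from the paper:

```python
import numpy as np
from scipy.signal.windows import dpss

fs = 100.0                            # sampling rate, Hz (arbitrary)
n = 1024
t = np.arange(n) / fs
rng = np.random.default_rng(3)
x = np.sin(2 * np.pi * 12.5 * t) + 0.5 * rng.standard_normal(n)

NW, K = 4, 7                          # time-bandwidth product, number of tapers
tapers = dpss(n, NW, K)               # (K, n) Slepian sequences

# eigenspectra: periodogram of each tapered copy of the data, then average
eigenspectra = np.abs(np.fft.rfft(tapers * x, axis=1)) ** 2
S_mt = eigenspectra.mean(axis=0) / fs
freqs = np.fft.rfftfreq(n, d=1.0 / fs)

peak_freq = freqs[np.argmax(S_mt)]    # should sit near the 12.5 Hz line
```

The multitaper F-test used for source detection builds on the same taper set, comparing the energy of a fitted line component against the locally smooth background.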

  8. Multitaper scan-free spectrum estimation using a rotational shear interferometer

    NASA Astrophysics Data System (ADS)

    Lepage, Kyle; Thomson, David J.; Kraut, Shawn; Brady, David J.

    2006-05-01

Multitaper methods for scan-free spectrum estimation using a rotational shear interferometer are investigated. Before source spectra can be estimated, the sources must be detected. A source detection algorithm based upon the multitaper F-test is proposed. The algorithm is simulated with additive, white Gaussian detector noise. A source with a signal-to-noise ratio (SNR) of 0.71 is detected 2.9° from a source with an SNR of 70.1, with a significance level of 10^-4, ~4 orders of magnitude more significant than the source detection obtained with a standard detection algorithm. Interpolation and the use of prewhitening filters are investigated in the context of rotational shear interferometer (RSI) source spectra estimation. Finally, a multitaper spectrum estimator is proposed, simulated, and compared with untapered estimates. The multitaper estimate is found via simulation to distinguish a spectral feature with an SNR of 1.6 near a large spectral feature. This feature is not distinguished by the untapered spectrum estimate. The findings are consistent with the strong capability of the multitaper estimate to reduce out-of-band spectral leakage.

  9. Visual perception enhancement for detection of cancerous oral tissue by multi-spectral imaging

    NASA Astrophysics Data System (ADS)

    Wang, Hsiang-Chen; Tsai, Meng-Tsan; Chiang, Chun-Ping

    2013-05-01

    Color reproduction systems based on the multi-spectral imaging technique (MSI) for both directly estimating reflection spectra and direct visualization of oral tissues using various light sources are proposed. Images from three oral cancer patients were taken as the experimental samples, and spectral differences between pre-cancerous and normal oral mucosal tissues were calculated at three time points during 5-aminolevulinic acid photodynamic therapy (ALA-PDT) to analyze whether they were consistent with disease processes. To check the successful treatment of oral cancer with ALA-PDT, oral cavity images by swept source optical coherence tomography (SS-OCT) are demonstrated. This system can also reproduce images under different light sources. For pre-cancerous detection, the oral images after the second ALA-PDT are assigned as the target samples. By using RGB LEDs with various correlated color temperatures (CCTs) for color difference comparison, the light source with a CCT of about 4500 K was found to have the best ability to enhance the color difference between pre-cancerous and normal oral mucosal tissues in the oral cavity. Compared with the fluorescent lighting commonly used today, the color difference can be improved by 39.2% from 16.5270 to 23.0023. Hence, this light source and spectral analysis increase the efficiency of the medical diagnosis of oral cancer and aid patients in receiving early treatment.

  10. Men and Arms in the Middle East: The Human Factor in Military Modernization

    DTIC Science & Technology

    1979-06-01

countries under study supports their abilities to wield military power effectively, their large-scale reliance on importation of military technologies...statistics, and on quality from area experts. In many cases, we were unable to arrive at numerical estimates of the sources of supply. ... Likely future...government agencies); on-the-job training (as in the case of counterpart programs); and the direct importation of both military and civilian labor

  11. Estimations of Atmospheric Conditions for Input to the Radar Performance Surface

    DTIC Science & Technology

    2007-12-01

timely atmospheric and ocean surface descriptions on features that impact radar and electro-optical sensor systems. The first part of this study is an...Navy's Coupled Ocean Atmosphere Mesoscale Prediction System (COAMPS®) are compared to in-situ data to assess the sensitivities of air-sea...temperature measurements to make direct comparisons to the Coupled Ocean Atmosphere Mesoscale Prediction System (COAMPS®) as a prime source of input to the

  12. The BlueSky Smoke Modeling Framework: Recent Developments

    NASA Astrophysics Data System (ADS)

    Sullivan, D. C.; Larkin, N.; Raffuse, S. M.; Strand, T.; ONeill, S. M.; Leung, F. T.; Qu, J. J.; Hao, X.

    2012-12-01

    BlueSky systems—a set of decision support tools including SmartFire and the BlueSky Framework—aid public policy decision makers and scientific researchers in evaluating the air quality impacts of fires. Smoke and fire managers use BlueSky systems in decisions about prescribed burns and wildland firefighting. Air quality agencies use BlueSky systems to support decisions related to air quality regulations. We will discuss a range of recent improvements to the BlueSky systems, as well as examples of applications and future plans. BlueSky systems have the flexibility to accept basic fire information from virtually any source and can reconcile multiple information sources so that duplication of fire records is eliminated. BlueSky systems currently apply information from (1) the National Oceanic and Atmospheric Administration's (NOAA) Hazard Mapping System (HMS), which represents remotely sensed data from the Moderate Resolution Imaging Spectroradiometer (MODIS), Advanced Very High Resolution Radiometer (AVHRR), and Geostationary Operational Environmental Satellites (GOES); (2) the Monitoring Trends in Burn Severity (MTBS) interagency project, which derives fire perimeters from Landsat 30-meter burn scars; (3) the Geospatial Multi-Agency Coordination Group (GeoMAC), which produces helicopter-flown burn perimeters; and (4) ground-based fire reports, such as the ICS-209 reports managed by the National Wildfire Coordinating Group. Efforts are currently underway to streamline the use of additional ground-based systems, such as states' prescribed burn databases. BlueSky systems were recently modified to address known uncertainties in smoke modeling associated with (1) estimates of biomass consumption derived from sparse fuel moisture data, and (2) models of plume injection heights. 
Additional sources of remotely sensed data are being applied to address these issues as follows: - The National Aeronautics and Space Administration's (NASA) Tropical Rainfall Measuring Mission (TRMM) Multi-satellite Precipitation Analysis Real-Time (TMPA-RT) data set is being used to improve dead fuel moisture estimates. - EastFire live fuel moisture estimates, which are derived from NASA's MODIS direct broadcast, are being used to improve live fuel moisture estimates. - NASA's Multi-angle Imaging Spectroradiometer (MISR) stereo heights are being used to improve estimates of plume injection heights. Further, the Fire Location and Modeling of Burning Emissions (FLAMBÉ) model was incorporated into the BlueSky Framework as an alternative means of calculating fire emissions. FLAMBÉ directly estimates emissions on the basis of fire detections and radiance measures from NASA's MODIS and NOAA's GOES satellites. (The authors gratefully acknowledge NASA's Applied Sciences Program [Grant Nos. NN506AB52A and NNX09AV76G], the USDA Forest Service, and the Joint Fire Science Program for their support.)

  13. Climatic Effects of 1950-2050 Changes in US Anthropogenic Aerosols. Part 1; Aerosol Trends and Radiative Forcing

    NASA Technical Reports Server (NTRS)

    Leibensperger, E. M.; Mickley, L. J.; Jacob, D. J.; Chen, W.-T.; Seinfeld, J. H.; Nenes, A.; Adams, P. J.; Streets, D. G.; Kumar, N.; Rind, D.

    2012-01-01

We calculate decadal aerosol direct and indirect (warm cloud) radiative forcings from US anthropogenic sources over the 1950-2050 period. Past and future aerosol distributions are constructed using GEOS-Chem and historical emission inventories and future projections from the IPCC A1B scenario. Aerosol simulations are evaluated with observed spatial distributions and 1980-2010 trends of aerosol concentrations and wet deposition in the contiguous US. Direct and indirect radiative forcing is calculated using the GISS general circulation model and monthly mean aerosol distributions from GEOS-Chem. The radiative forcing from US anthropogenic aerosols is strongly localized over the eastern US. We find that its magnitude peaked in 1970-1990, with values over the eastern US (east of 100 deg W) of -2.0 W m^-2 for direct forcing, including contributions from sulfate (-2.0 W m^-2), nitrate (-0.2 W m^-2), organic carbon (-0.2 W m^-2), and black carbon (+0.4 W m^-2). The uncertainties in radiative forcing due to aerosol radiative properties are estimated to be about 50%. The aerosol indirect effect is estimated to be of comparable magnitude to the direct forcing. We find that the magnitude of the forcing declined sharply from 1990 to 2010 (by 0.8 W m^-2 direct and 1.0 W m^-2 indirect), mainly reflecting decreases in SO2 emissions, and project that it will continue declining post-2010, but at a much slower rate since US SO2 emissions have already declined by almost 60% from their peak. This suggests that much of the warming effect of reducing US anthropogenic aerosol sources has already been realized. The small positive radiative forcing from US BC emissions (+0.3 W m^-2 over the eastern US in 2010; 5% of the global forcing from anthropogenic BC emissions worldwide) suggests that a US emission control strategy focused on BC would have only limited climate benefit.
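As a quick consistency check on the quoted component values, the species contributions do sum to the stated peak total direct forcing over the eastern US:

```python
# component direct forcings over the eastern US (W m^-2), as quoted in the record
components = {"sulfate": -2.0, "nitrate": -0.2,
              "organic carbon": -0.2, "black carbon": +0.4}
total_direct = round(sum(components.values()), 10)  # -2.0, matching the quoted total
```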

  14. Estimating Rupture Directivity of Aftershocks of the 2014 Mw8.1 Iquique Earthquake, Northern Chile

    NASA Astrophysics Data System (ADS)

    Folesky, Jonas; Kummerow, Jörn; Timann, Frederik; Shapiro, Serge

    2017-04-01

The 2014 Mw8.1 Iquique earthquake was accompanied by numerous fore- and aftershocks of magnitudes up to M ~ 7.6. While the rupture processes of the main event and its largest aftershock have already been analysed in great detail, this study focusses on the rupture processes of about 230 smaller aftershocks that occurred during the first two days after the main event. Since the events are of magnitudes 4.0 ≤ M ≤ 6.5, it is not trivial to decide which method is most suitable. We therefore apply and compare three different approaches, attempting to extract a possible rupture directivity for each single event. The seismic broadband recordings of the Integrated Plate Boundary Observatory Chile (IPOC) provide an excellent database for our analysis. Their high sampling rate (100 Hz) and a well-distributed station selection covering an aperture of about 180° are a great advantage for a thorough directivity analysis. First, we apply a P-wave polarization analysis (PPA), in which we reconstruct the direction of the incoming wavefield by covariance analysis of the first particle motions. Combined with a sliding time window, the results from different stations can identify the hypocentre of the events and also a migration of the rupture front, if the event is of unilateral character. A second approach is the back-projection imaging (BPI) technique, which illuminates the rupture path by back-projecting the recorded seismic energy to its source. A propagating rupture front would be reconstructed from the migration of the zone of high constructive amplitude stacks. In a third step we apply the empirical Green's function (EGF) method, in which events of high waveform similarity, hence co-located and of similar mechanisms, are selected in order to use the smaller event as the Green's function of the larger event. This approach results in an estimated source time function, which is compared station-wise and whose azimuthal variations are analysed for complexities and directivity.
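The covariance-based polarization step (PPA) described in this record can be sketched as follows: build the covariance matrix of the three-component particle motion and take its principal eigenvector as the polarization direction. The synthetic back-azimuth, noise level, and vertical amplitude below are assumed values; note also the 180° eigenvector sign ambiguity, which in practice is resolved using the vertical-component polarity:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 200
baz_true = 60.0                       # assumed back-azimuth of the P wave, degrees

# synthetic rectilinear P-wave particle motion on (E, N, Z) components
pulse = np.sin(np.linspace(0.0, 6.0 * np.pi, n))
az = np.radians(baz_true)
polarization = np.array([np.sin(az), np.cos(az), 0.8])   # E, N, Z (Z arbitrary)
data = np.outer(pulse, polarization) + 0.05 * rng.standard_normal((n, 3))

# covariance analysis of the particle motion (the PPA step)
cov = np.cov(data, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)
principal = eigvecs[:, np.argmax(eigvals)]    # direction of dominant motion

# horizontal projection gives the back-azimuth, modulo the 180-degree sign
# ambiguity of an eigenvector
baz_est = np.degrees(np.arctan2(principal[0], principal[1])) % 180.0
```

Sliding this analysis over short time windows, as the study does, turns a drift of the estimated direction into evidence for a migrating rupture front.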

  15. Use of direct versus indirect preparation data for assessing risk associated with airborne exposures at asbestos-contaminated sites.

    PubMed

    Goldade, Mary Patricia; O'Brien, Wendy Pott

    2014-01-01

    At asbestos-contaminated sites, exposure assessment requires measurement of airborne asbestos concentrations; however, the choice of preparation steps employed in the analysis has been debated vigorously among members of the asbestos exposure and risk assessment communities for many years. This study finds that the choice of preparation technique used in estimating airborne amphibole asbestos exposures for risk assessment is generally not a significant source of uncertainty. Conventionally, the indirect preparation method has been less preferred by some because it is purported to result in false elevations in airborne asbestos concentrations, when compared to direct analysis of air filters. However, airborne asbestos sampling in non-occupational settings is challenging because non-asbestos particles can interfere with the asbestos measurements, sometimes necessitating analysis via indirect preparation. To evaluate whether exposure concentrations derived from direct versus indirect preparation techniques differed significantly, paired measurements of airborne Libby-type amphibole, prepared using both techniques, were compared. For the evaluation, 31 paired direct and indirect preparations originating from the same air filters were analyzed for Libby-type amphibole using transmission electron microscopy. On average, the total Libby-type amphibole airborne exposure concentration was 3.3 times higher for indirect preparation analysis than for its paired direct preparation analysis (standard deviation = 4.1), a difference which is not statistically significant (p = 0.12, two-tailed, Wilcoxon signed rank test). The results suggest that the magnitude of the difference may be larger for shorter particles. Overall, neither preparation technique (direct or indirect) preferentially generates more precise and unbiased data for airborne Libby-type amphibole concentration estimates. 
The indirect preparation method is reasonable for estimating Libby-type amphibole exposure and may be necessary given the challenges of sampling in environmental settings. Relative to the larger context of uncertainties inherent in the risk assessment process, uncertainties associated with the use of airborne Libby-type amphibole exposure measurements derived from indirect preparation analysis are low. Use of exposure measurements generated by either direct or indirect preparation analyses is reasonable to estimate Libby-type Amphibole exposures in a risk assessment.
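The paired comparison described in this record, a Wilcoxon signed-rank test on direct versus indirect preparations of the same air filters, can be sketched as follows. The simulated concentrations are hypothetical and are not intended to reproduce the study's statistics:

```python
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(5)
n = 31                                   # number of paired filter preparations

# hypothetical airborne concentrations; indirect preps read higher on average
direct = rng.lognormal(mean=0.0, sigma=1.0, size=n)
indirect = direct * rng.lognormal(mean=np.log(3.3), sigma=1.0, size=n)

# two-tailed Wilcoxon signed-rank test on the paired differences
stat, p_value = wilcoxon(indirect - direct, alternative="two-sided")
mean_ratio = float((indirect / direct).mean())
```

The nonparametric signed-rank test is a natural choice here because paired concentration differences from environmental filters are typically skewed and far from normal.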

  16. Inverse modeling of the Chernobyl source term using atmospheric concentration and deposition measurements

    NASA Astrophysics Data System (ADS)

    Evangeliou, Nikolaos; Hamburger, Thomas; Cozic, Anne; Balkanski, Yves; Stohl, Andreas

    2017-07-01

    This paper describes the results of an inverse modeling study for the determination of the source term of the radionuclides 134Cs, 137Cs and 131I released after the Chernobyl accident. The accident occurred on 26 April 1986 in the former Soviet Union and released about 10^19 Bq of radioactive materials that were transported as far away as the USA and Japan. Thereafter, several attempts to assess the magnitude of the emissions were made, based on knowledge of the core inventory and the levels of the spent fuel. More recently, when modeling tools were further developed, inverse modeling techniques were applied to the Chernobyl case for source term quantification. However, because radioactivity is a sensitive topic for the public and attracts a lot of attention, high-quality measurements, which are essential for inverse modeling, were not made available except for a few sparse activity concentration measurements far from the source and far from the main direction of the radioactive fallout. For the first time, we apply Bayesian inversion to the Chernobyl source term using not only activity concentrations but also deposition measurements from the most recent public data set. These observations stem from a data rescue effort that started more than 10 years ago, with the final goal of making the available measurements accessible to anyone interested. Regarding our inverse modeling results, emissions of 134Cs were estimated to be 80 PBq, or 30-50 % higher than previously published values. Of the released amount of 134Cs, about 70 PBq were deposited all over Europe. Similar to 134Cs, emissions of 137Cs were estimated as 86 PBq, on the same order as previously reported results. Finally, 131I emissions of 1365 PBq were found, about 10 % less than the prior total release. 
The inversion pushes the injection heights of the three radionuclides to higher altitudes (up to about 3 km) than previously assumed (≈ 2.2 km) in order to better match both concentration and deposition observations over Europe. The results of the present inversion were confirmed using an independent Eulerian model, for which deposition patterns were also improved when using the estimated posterior releases. Although the independent model tends to underestimate deposition in countries that are not in the main direction of the plume, it reproduces country levels of deposition very efficiently. The results were also tested for robustness against different setups of the inversion through sensitivity runs. The source term data from this study are publicly available.
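
The Bayesian source-term estimation described above reduces, in its simplest Gaussian form, to a regularized least-squares update of a prior emission vector. The source-receptor matrix, prior, and "true" releases below are invented toy numbers, not the study's actual configuration.

```python
import numpy as np

# Gaussian Bayesian inversion sketch: observations y = G x + noise,
# prior x ~ N(x_a, B), observation error covariance R.
# Posterior mean: x_a + (G^T R^-1 G + B^-1)^-1 G^T R^-1 (y - G x_a)
def posterior_mean(G, y, x_a, B, R):
    Ri = np.linalg.inv(R)
    H = G.T @ Ri @ G + np.linalg.inv(B)      # posterior precision matrix
    return x_a + np.linalg.solve(H, G.T @ Ri @ (y - G @ x_a))

G = np.array([[1.0, 0.2], [0.3, 1.0], [0.5, 0.5],
              [0.8, 0.1], [0.2, 0.9]])       # toy source-receptor matrix
x_true = np.array([80.0, 86.0])              # "true" releases (PBq)
y = G @ x_true                               # perfect synthetic observations
x_a = np.array([60.0, 80.0])                 # prior (first-guess) releases
B = np.diag([400.0, 400.0])                  # prior error covariance
R = 1e-4 * np.eye(5)                         # small observation errors
x_hat = posterior_mean(G, y, x_a, B, R)      # pulled from the prior to the truth
```

With accurate observations, the posterior collapses onto the values implied by the data; with noisy or sparse observations, it stays closer to the prior, which is exactly the trade-off a source-term inversion negotiates.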

  17. Estimating forest and woodland aboveground biomass using active and passive remote sensing

    USGS Publications Warehouse

    Wu, Zhuoting; Dye, Dennis G.; Vogel, John M.; Middleton, Barry R.

    2016-01-01

    Aboveground biomass was estimated from active and passive remote sensing sources, including airborne lidar and Landsat-8 imagery, in an eastern Arizona (USA) study area comprised of forest and woodland ecosystems. Compared to field measurements, airborne lidar enabled direct estimation of individual tree height with a slope of 0.98 (R2 = 0.98). At the plot level, lidar-derived height and intensity metrics provided the most robust estimates of aboveground biomass, producing dominant-species-based aboveground biomass models with errors ranging from 4 to 14 Mg ha^-1 across all woodland and forest species. Landsat-8 imagery produced dominant-species-based aboveground biomass models with errors ranging from 10 to 28 Mg ha^-1. Thus, airborne lidar enabled fine-scale aboveground biomass mapping with low uncertainty, while Landsat-8 seems best suited for broader-scale products such as a national biomass essential climate variable (ECV) based on land cover types for the United States.
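
The plot-level step described above is, at heart, a regression of field-measured biomass on lidar metrics, scored by its error in Mg ha^-1. The synthetic data and the linear height/intensity relation below are assumptions for illustration, not the study's fitted model.

```python
import numpy as np

rng = np.random.default_rng(0)
h = rng.uniform(2, 20, 40)               # plot-level mean canopy height (m)
inten = rng.uniform(10, 60, 40)          # plot-level mean lidar return intensity
# hypothetical "true" relation plus measurement noise (Mg/ha)
agb = 5.0 + 2.0 * h + 0.5 * inten + rng.normal(0.0, 2.0, 40)

# ordinary least squares: biomass ~ 1 + height + intensity
X = np.column_stack([np.ones_like(h), h, inten])
coef, *_ = np.linalg.lstsq(X, agb, rcond=None)
pred = X @ coef
rmse = np.sqrt(np.mean((agb - pred) ** 2))   # model error in Mg/ha
```

In practice such models are fit per dominant species (as in the abstract), and the 4-14 Mg ha^-1 versus 10-28 Mg ha^-1 comparison is just this RMSE computed for lidar-based versus Landsat-based predictors.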

  18. Single-snapshot DOA estimation by using Compressed Sensing

    NASA Astrophysics Data System (ADS)

    Fortunati, Stefano; Grasso, Raffaele; Gini, Fulvio; Greco, Maria S.; LePage, Kevin

    2014-12-01

    This paper deals with the problem of estimating the directions of arrival (DOA) of multiple source signals from a single observation vector of array data. In particular, four estimation algorithms based on the theory of compressed sensing (CS) are analyzed: the classical ℓ1 minimization (or Least Absolute Shrinkage and Selection Operator, LASSO), the fast smoothed ℓ0 minimization, the Sparse Iterative Covariance-based Estimator (SPICE), and the Iterative Adaptive Approach for Amplitude and Phase Estimation (IAA-APES). Their statistical properties are investigated and compared with the classical Fourier beamformer (FB) in different simulated scenarios. We show that, unlike the classical FB, a CS-based beamformer (CSB) has some desirable properties typical of adaptive algorithms (e.g., Capon and MUSIC) even in the single-snapshot case. Particular attention is devoted to the super-resolution property. Theoretical arguments and simulation analysis provide evidence that a CS-based beamformer can achieve resolution beyond the classical Rayleigh limit. Finally, the theoretical findings are validated by processing a real sonar dataset.
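
The ℓ1/LASSO variant above can be sketched with an iterative shrinkage-thresholding (ISTA) solver on a steering-vector dictionary. The 16-element half-wavelength ULA, the 5° grid, the source angles, and the regularization weight are all illustrative assumptions, not the paper's simulation setup.

```python
import numpy as np

# Single-snapshot, on-grid l1 (LASSO) DOA estimation solved with ISTA.
m = 16                                      # sensors in a half-wavelength ULA
grid = np.arange(-90, 91, 5)                # candidate DOAs (deg)
n = np.arange(m)[:, None]
A = np.exp(1j * np.pi * n * np.sin(np.deg2rad(grid))[None, :]) / np.sqrt(m)

true_idx = [int(np.flatnonzero(grid == -20)[0]), int(np.flatnonzero(grid == 30)[0])]
y = A[:, true_idx] @ np.array([1.0, 0.8])   # one noiseless snapshot

L = np.linalg.norm(A, 2) ** 2               # Lipschitz constant of the gradient
lam = 0.05 * np.max(np.abs(A.conj().T @ y)) # l1 weight (heuristic choice)
x = np.zeros(len(grid), dtype=complex)
for _ in range(3000):                       # ISTA: gradient step + soft threshold
    g = x + A.conj().T @ (y - A @ x) / L
    mag = np.abs(g)
    shrink = np.maximum(mag - lam / L, 0.0)
    x = np.where(mag > 0, g / np.maximum(mag, 1e-12) * shrink, 0.0)

peaks = np.argsort(np.abs(x))[-2:]          # two largest spectrum entries
```

Unlike the Fourier beamformer's broad mainlobes, the sparse spectrum |x| concentrates on a few grid cells, which is the mechanism behind the super-resolution behavior the abstract discusses.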

  19. Modeling Global Biogenic Emission of Isoprene: Exploration of Model Drivers

    NASA Technical Reports Server (NTRS)

    Alexander, Susan E.; Potter, Christopher S.; Coughlan, Joseph C.; Klooster, Steven A.; Lerdau, Manuel T.; Chatfield, Robert B.; Peterson, David L. (Technical Monitor)

    1996-01-01

    Vegetation provides the major source of isoprene emission to the atmosphere. We present a modeling approach to estimate global biogenic isoprene emission. The isoprene flux model is linked to a process-based computer simulation model of biogenic trace-gas fluxes that operates on scales linking regional and global data sets and ecosystem nutrient transformations. Isoprene emission estimates are determined from estimates of ecosystem-specific biomass, emission factors, and algorithms based on light and temperature. Our approach differs from an existing modeling framework by including the process-based global model for terrestrial ecosystem production, satellite-derived ecosystem classification, and isoprene emission measurements from a tropical deciduous forest. We explore the sensitivity of model estimates to input parameters. The resulting emission products from the global 1° × 1° coverage provided by the satellite datasets and the process model allow flux estimation across large spatial scales and enable direct linkage to atmospheric models of trace-gas transport and transformation.
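
The "algorithms based on light and temperature" referred to above are commonly of the Guenther type: a basal emission rate scaled by dimensionless light and temperature activity factors. The coefficient values below are the widely cited standard ones; treat them, and the basal rate E_S, as illustrative rather than this paper's exact parameters.

```python
import numpy as np

R = 8.314                       # gas constant (J mol-1 K-1)
ALPHA, CL1 = 0.0027, 1.066      # light-response coefficients
CT1, CT2 = 95000.0, 230000.0    # temperature-response coefficients (J mol-1)
TS, TM = 303.0, 314.0           # standard and near-optimum temperatures (K)

def light_factor(Q):
    """Light activity factor; Q is PAR (umol m-2 s-1). Saturates at high Q."""
    return ALPHA * CL1 * Q / np.sqrt(1.0 + ALPHA ** 2 * Q ** 2)

def temp_factor(T):
    """Temperature activity factor; rises with T, then declines past ~TM."""
    num = np.exp(CT1 * (T - TS) / (R * TS * T))
    den = 1.0 + np.exp(CT2 * (T - TM) / (R * TS * T))
    return num / den

def isoprene_flux(E_S, Q, T):
    """Emission = ecosystem-specific basal rate x light x temperature factors."""
    return E_S * light_factor(Q) * temp_factor(T)
```

At standard conditions (Q = 1000 umol m-2 s-1, T = 303 K) the combined activity factor is close to 1, so E_S itself sets the magnitude; the biomass and emission-factor maps then distribute E_S across the 1° × 1° grid.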

  20. Burden of disease associated with cervical cancer in malaysia and potential costs and consequences of HPV vaccination.

    PubMed

    Aljunid, S; Zafar, A; Saperi, S; Amrizal, M

    2010-01-01

    An estimated 70% of cervical cancers worldwide are attributable to persistent infection with human papillomaviruses (HPV) 16 and 18. Vaccination against HPV 16/18 has been shown to dramatically reduce the incidence of associated precancerous and cancerous lesions. The aims of the present analyses were, firstly, to estimate the clinical and economic burden of disease attributable to HPV in Malaysia and secondly, to estimate long-term outcomes associated with HPV vaccination using a prevalence-based modeling approach. In the first part of the analysis costs attributable to cervical cancer and precancerous lesions were estimated; epidemiologic data were sourced from the WHO GLOBOCAN database and Malaysian national data sources. In the second part, a prevalence-based model was used to estimate the potential annual number of cases of cervical cancer and precancerous lesions that could be prevented and subsequent HPV-related treatment costs averted with the bivalent (HPV 16/18) and the quadrivalent (HPV 16/18/6/11) vaccines, at the population level, at steady state. A vaccine efficacy of 98% was assumed against HPV types included in both vaccines. Effectiveness against other oncogenic HPV types was based on the latest results from each vaccine's respective clinical trials. In Malaysia there are an estimated 4,696 prevalent cases of cervical cancer annually and 1,372 prevalent cases of precancerous lesions, which are associated with a total direct cost of RM 39.2 million with a further RM 12.4 million in indirect costs owing to lost productivity. At steady state, vaccination with the bivalent vaccine was estimated to prevent 4,199 cervical cancer cases per year versus 3,804 cases for the quadrivalent vaccine. Vaccination with the quadrivalent vaccine was projected to prevent 1,721 cases of genital warts annually, whereas the annual number of cases remained unchanged with the bivalent vaccine. 
Furthermore, vaccination with the bivalent vaccine was estimated to avert RM 45.4 million in annual HPV-related treatment costs (direct+indirect) compared with RM 42.9 million for the quadrivalent vaccine. This analysis showed that vaccination against HPV 16/18 can reduce the clinical and economic burden of cervical cancer and precancerous lesions in Malaysia. The greatest potential economic benefit was observed using the bivalent vaccine in preference to the quadrivalent vaccine.
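
The prevalence-based model described above multiplies the annual case burden by the fraction attributable to vaccine types and the vaccine efficacy. The sketch below uses the 4,696 cases, 70% HPV-16/18 attributable fraction, and 98% efficacy stated in the abstract, but ignores cross-protection against other oncogenic types, so its result is lower than the reported 4,199 cases for the bivalent vaccine.

```python
# Toy steady-state, prevalence-based estimate of annual cases averted.
def cases_averted(annual_cases, attributable_fraction, efficacy):
    return annual_cases * attributable_fraction * efficacy

averted = cases_averted(4696, 0.70, 0.98)   # HPV-16/18 only, no cross-protection
```

Cost savings follow the same pattern: cases averted times the per-case treatment cost, summed over cancer and precancerous-lesion categories.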

  1. Separation of simultaneous sources using a structural-oriented median filter in the flattened dimension

    NASA Astrophysics Data System (ADS)

    Gan, Shuwei; Wang, Shoudong; Chen, Yangkang; Chen, Xiaohong; Xiang, Kui

    2016-01-01

    Simultaneous-source shooting can dramatically shorten the acquisition period and improve the quality of seismic data for better subsalt seismic imaging, but at the expense of introducing strong interference (blending noise) into the acquired seismic data. We propose to use a structural-oriented median filter to attenuate the blending noise along the structural direction of seismic profiles. The principle of the proposed approach is to first flatten the seismic record in local spatial windows and then to apply a traditional median filter (MF) in the flattened (third) dimension. The key component of the proposed approach is the estimation of the local slope, which can be calculated by first scanning the NMO velocity and then transforming the velocity to the local slope. Both synthetic and field data examples show that the proposed approach can successfully separate simultaneous-source data into individual sources. We provide an open-source toy example to better demonstrate the proposed methodology.
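
The flatten-then-median idea can be sketched on a toy gather: shift each trace by its (assumed known) local slope so the event aligns, take a running median across traces to reject the spiky blending noise, then undo the shifts. Everything below (integer-sample shifts, one linear event, one spike per trace) is a simplification of the paper's windowed, slope-scanned workflow.

```python
import numpy as np

def structural_median_filter(d, shifts, nfilt=5):
    """Flatten each trace by integer sample shifts, median-filter across
    traces, then unflatten.  d: (nt, nx) gather; shifts[ix]: moveout of
    trace ix in samples relative to trace 0."""
    nt, nx = d.shape
    flat = np.column_stack([np.roll(d[:, ix], -shifts[ix]) for ix in range(nx)])
    half = nfilt // 2
    out = np.column_stack(
        [np.median(flat[:, max(0, ix - half):ix + half + 1], axis=1)
         for ix in range(nx)])
    return np.column_stack([np.roll(out[:, ix], shifts[ix]) for ix in range(nx)])

# synthetic gather: one dipping linear event plus spiky "blending noise"
nt, nx = 100, 30
shifts = [2 * ix for ix in range(nx)]       # 2 samples of moveout per trace
clean = np.zeros((nt, nx))
for ix in range(nx):
    clean[20 + shifts[ix], ix] = 1.0
rng = np.random.default_rng(1)
noisy = clean.copy()
for ix in range(nx):
    noisy[rng.integers(0, nt), ix] += 5.0   # one interfering spike per trace
denoised = structural_median_filter(noisy, shifts)
```

The median is effective here because, after flattening, the coherent event appears at the same time on neighboring traces while the blending noise is incoherent, so it falls outside the window's majority.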

  2. Small-Scale Gravity Waves in ER-2 MMS/MTP Wind and Temperature Measurements during CRYSTAL-FACE

    NASA Technical Reports Server (NTRS)

    Wang, L.; Alexander, M. J.; Bui, T. P.; Mahoney, M. J.

    2006-01-01

    Lower-stratospheric wind and temperature measurements made from NASA's high-altitude ER-2 research aircraft during the CRYSTAL-FACE campaign in July 2002 were analyzed to retrieve information on small-scale gravity waves (GWs) at the aircraft's flight level (typically approximately 20 km altitude). For a given flight segment, the S-transform (a Gaussian wavelet transform) was used to search for and identify small horizontal-scale GW events and to estimate their apparent horizontal wavelengths. The horizontal propagation directions of the events were determined using the Stokes parameter method combined with cross S-transform analysis. The vertical temperature gradient was used to determine the vertical wavelengths of the events. GW momentum fluxes were calculated from the cross S-transform. Other wave parameters, such as intrinsic frequencies, were calculated using the GW dispersion relation. More than 100 GW events were identified. They were generally high-frequency waves with vertical wavelengths of approximately 5 km and horizontal wavelengths generally shorter than 20 km. Their intrinsic propagation directions were predominantly toward the east, whereas their ground-based propagation directions were primarily toward the west. Approximately 20% of the events had very short horizontal wavelengths, very high intrinsic frequencies, and relatively small momentum fluxes, and thus were likely trapped in the lower stratosphere. Using the estimated GW parameters and the background winds and stabilities from the NCAR/NCEP reanalysis data, we were able to trace the sources of the events using simple reverse ray-tracing. More than 70% of the events were traced back to convective sources in the troposphere, and the sources were generally located upstream of the locations of the events observed at the aircraft level. 
Finally, a probability density function of the reversible cooling rate due to GWs was obtained in this study, which may be useful for cirrus cloud models.
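
The S-transform used above is a Stockwell transform: a short-time Fourier analysis whose Gaussian window narrows as frequency rises. A direct (slow, O(N^2) per frequency voice) implementation on a synthetic 2 Hz signal is sketched below; operational codes use an FFT-based formulation, and the test signal is of course an assumption.

```python
import numpy as np

def s_transform_power(x, t, freqs):
    """Direct Stockwell transform magnitude: each frequency f is analyzed
    with a Gaussian window of standard deviation 1/f, giving sharper time
    localization at higher frequencies."""
    dt = t[1] - t[0]
    S = np.zeros((len(freqs), len(t)), dtype=complex)
    for i, f in enumerate(freqs):
        # rows: window center tau; columns: time samples
        win = (f / np.sqrt(2.0 * np.pi)) * np.exp(
            -0.5 * (t[None, :] - t[:, None]) ** 2 * f ** 2)
        S[i] = (win * (x * np.exp(-2j * np.pi * f * t))[None, :]).sum(axis=1) * dt
    return np.abs(S)

t = np.arange(0.0, 10.0, 0.01)
x = np.sin(2.0 * np.pi * 2.0 * t)        # pure 2 Hz test signal
freqs = np.arange(0.5, 5.01, 0.25)
P = s_transform_power(x, t, freqs)
dominant = freqs[np.argmax(P[:, len(t) // 2])]
```

For flight-leg data, the analog of `dominant` picks out an event's apparent horizontal wavenumber, and ridges in P localize where along the track the wave packet lives.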

  3. Evapotranspiration Measurement and Estimation: Weighing Lysimeter and Neutron Probe Based Methods Compared with Eddy Covariance

    NASA Astrophysics Data System (ADS)

    Evett, S. R.; Gowda, P. H.; Marek, G. W.; Alfieri, J. G.; Kustas, W. P.; Brauer, D. K.

    2014-12-01

    Evapotranspiration (ET) may be measured by mass balance methods and estimated by flux sensing methods. The mass balance methods are typically restricted in terms of the area that can be represented (e.g., surface area of weighing lysimeter (LYS) or equivalent representative area of neutron probe (NP) and soil core sampling techniques), and can be biased with respect to ET from the surrounding area. The area represented by flux sensing methods such as eddy covariance (EC) is typically estimated with a flux footprint/source area model. The dimension, position of, and relative contribution of upwind areas within the source area are mainly influenced by sensor height, wind speed, atmospheric stability and wind direction. Footprints for EC sensors positioned several meters above the canopy are often larger than can be economically covered by mass balance methods. Moreover, footprints move with atmospheric conditions and wind direction to cover different field areas over time while mass balance methods are static in space. Thus, EC systems typically sample a much greater field area over time compared with mass balance methods. Spatial variability of surface cover can thus complicate interpretation of flux estimates from EC systems. The most commonly used flux estimation method is EC; and EC estimates of latent heat energy (representing ET) and sensible heat fluxes combined are typically smaller than the available energy from net radiation and soil heat flux (commonly referred to as lack of energy balance closure). Reasons for this are the subject of ongoing research. We compare ET from LYS, NP and EC methods applied to field crops for three years at Bushland, Texas (35° 11' N, 102° 06' W, 1170 m elevation above MSL) to illustrate the potential problems with and comparative advantages of all three methods. 
In particular, we examine how networks of neutron probe access tubes can be representative of field areas large enough to be equivalent in size to EC footprints, and how the ET data from these methods can address bias and accuracy issues.

  4. Searches for correlation between UHECR events and high-energy gamma-ray Fermi-LAT data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Álvarez, Ezequiel; Cuoco, Alessandro; Mirabal, Nestor

    The astrophysical sources responsible for ultra-high-energy cosmic rays (UHECRs) continue to be one of the most intriguing mysteries in astrophysics. We present a comprehensive search for correlations between high-energy (≳1 GeV) gamma-ray events from the Fermi Large Area Telescope (LAT) and UHECRs (≳60 EeV) detected by the Telescope Array and the Pierre Auger Observatory. We perform two separate searches. First, we conduct a standard cross-correlation analysis between the arrival directions of 148 UHECRs and 360 gamma-ray sources in the Second Catalog of Hard Fermi-LAT Sources (2FHL). Second, we search for a possible correlation between UHECR directions and unresolved Fermi-LAT gamma-ray emission. For the latter, we use three different methods: a stacking technique with both a model-dependent and a model-independent background estimate, and a cross-correlation function analysis. We also test for statistically significant excesses in gamma rays from signal regions centered on Cen A and the Telescope Array hotspot. No significant correlation is found in any of the analyses performed, except a weak (≲2σ) hint of signal with the correlation function method on scales of ∼1°. Upper limits on the flux of possible power-law gamma-ray sources of UHECRs are derived.
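
The core operation in an arrival-direction cross-correlation analysis is counting (event, source) pairs within a given angular separation and comparing that count against an isotropic expectation. The toy coordinates below are invented; only the pair-counting geometry is the point.

```python
import numpy as np

def unit_vectors(ra_deg, dec_deg):
    """Convert (RA, Dec) in degrees to 3-D unit vectors on the sphere."""
    ra, dec = np.deg2rad(ra_deg), np.deg2rad(dec_deg)
    return np.stack([np.cos(dec) * np.cos(ra),
                     np.cos(dec) * np.sin(ra),
                     np.sin(dec)], axis=-1)

def pairs_within(ra1, dec1, ra2, dec2, psi_deg):
    """Count (event, source) pairs separated by less than psi_deg."""
    u, v = unit_vectors(ra1, dec1), unit_vectors(ra2, dec2)
    cos_sep = np.clip(u @ v.T, -1.0, 1.0)   # dot products = cos(separation)
    return int(np.sum(cos_sep >= np.cos(np.deg2rad(psi_deg))))

# hypothetical directions (deg): three UHECR events vs two candidate sources
n_pairs = pairs_within([10.0, 100.0, 200.0], [0.0, 30.0, -45.0],
                       [10.5, 250.0], [0.0, 10.0], 5.0)
```

The significance at each scan angle psi then comes from repeating the count on many isotropic Monte Carlo event sets (weighted by detector exposure) and asking how often chance alone matches the observed excess.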

  5. Measurement of the Local Food Environment: A Comparison of Existing Data Sources

    PubMed Central

    Bader, Michael D. M.; Ailshire, Jennifer A.; Morenoff, Jeffrey D.; House, James S.

    2010-01-01

    Studying the relation between the residential environment and health requires valid, reliable, and cost-effective methods to collect data on residential environments. This 2002 study compared the level of agreement between measures of the presence of neighborhood businesses drawn from 2 common sources of data used for research on the built environment and health: listings of businesses from commercial databases and direct observations of city blocks by raters. Kappa statistics were calculated for 6 types of businesses—drugstores, liquor stores, bars, convenience stores, restaurants, and grocers—located on 1,663 city blocks in Chicago, Illinois. Logistic regressions estimated whether disagreement between measurement methods was systematically correlated with the socioeconomic and demographic characteristics of neighborhoods. Levels of agreement between the 2 sources were relatively high, with significant (P < 0.001) kappa statistics for each business type ranging from 0.32 to 0.70. Most business types were more likely to be reported by direct observations than in the commercial database listings. Disagreement between the 2 sources was not significantly correlated with the socioeconomic and demographic characteristics of neighborhoods. Results suggest that researchers should have reasonable confidence using whichever method (or combination of methods) is most cost-effective and theoretically appropriate for their research design. PMID:20123688
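
The agreement measure used above, Cohen's kappa, corrects raw percent agreement for the agreement expected by chance. A minimal implementation for two binary raters (e.g., commercial listing versus direct observation of whether a block has a given business type) follows; the example vectors are invented.

```python
def cohens_kappa(a, b):
    """Cohen's kappa for two binary raters: (po - pe) / (1 - pe), where
    po is observed agreement and pe is chance agreement from the margins."""
    assert len(a) == len(b)
    n = len(a)
    po = sum(x == y for x, y in zip(a, b)) / n
    pa, pb = sum(a) / n, sum(b) / n
    pe = pa * pb + (1 - pa) * (1 - pb)     # chance agreement (yes-yes + no-no)
    return (po - pe) / (1 - pe)

# e.g., 10 blocks: database listing vs direct observation of "has a grocer"
kappa = cohens_kappa([1, 1, 1, 0, 0, 0, 1, 0, 1, 0],
                     [1, 1, 0, 0, 0, 1, 1, 0, 1, 0])
```

Values around 0.32-0.70, as reported in the abstract, correspond to fair-to-substantial agreement under common interpretive scales.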

  6. Gridded anthropogenic emissions inventory and atmospheric transport of carbonyl sulfide in the U.S.

    NASA Astrophysics Data System (ADS)

    Zumkehr, Andrew; Hilton, Timothy W.; Whelan, Mary; Smith, Steve; Campbell, J. Elliott

    2017-02-01

    Carbonyl sulfide (COS or OCS), the most abundant sulfur-containing gas in the troposphere, has recently emerged as a potentially important atmospheric tracer for the carbon cycle. Atmospheric inverse modeling studies may be able to use existing tower, airborne, and satellite observations of COS to infer information about photosynthesis. However, such analysis relies on gridded anthropogenic COS source estimates that are largely based on industry activity data from over three decades ago. Here we use updated emission factor data and industry activity data to develop a gridded inventory with a 0.1° resolution for the U.S. domain. The inventory includes the primary anthropogenic COS sources including direct emissions from the coal and aluminum industries as well as indirect sources from industrial carbon disulfide emissions. Compared to the previously published inventory, we found that the total anthropogenic source (direct and indirect) is 47% smaller. Using this new gridded inventory to drive the Sulfur Transport and Deposition Model/Weather Research and Forecasting atmospheric transport model, we found that the anthropogenic contribution to COS variation in the troposphere is small relative to the biosphere influence, which is encouraging for carbon cycle applications in this region. Additional anthropogenic sectors with highly uncertain emission factors require further field measurements.

  7. Noninvasive Electromagnetic Source Imaging and Granger Causality Analysis: An Electrophysiological Connectome (eConnectome) Approach

    PubMed Central

    Sohrabpour, Abbas; Ye, Shuai; Worrell, Gregory A.; Zhang, Wenbo

    2016-01-01

    Objective: Combined source imaging techniques and directional connectivity analysis can provide useful information about the underlying brain networks in a noninvasive fashion. Source imaging techniques have previously been used successfully either to determine the source of activity or to extract source time-courses for Granger causality analysis. In this work, we utilize source imaging algorithms both to find the network nodes (regions of interest) and to extract the activation time series for further Granger causality analysis. The aim of this work is to find network nodes objectively from noninvasive electromagnetic signals, extract activation time-courses, and apply Granger analysis to the extracted series to study brain networks under realistic conditions. Methods: Source imaging methods are used to identify network nodes and extract time-courses, and then Granger causality analysis is applied to delineate the directional functional connectivity of the underlying brain networks. Computer simulation studies in which the underlying network (nodes and connectivity pattern) is known were performed; additionally, this approach was evaluated in partial epilepsy patients to study epilepsy networks from inter-ictal and ictal signals recorded by EEG and/or MEG. Results: Localization errors of network nodes were less than 5 mm, and normalized connectivity errors were ~20%, in estimating underlying brain networks in the simulation studies. Additionally, two focal epilepsy patients were studied, and the identified nodes driving the epileptic network were concordant with clinical findings from intracranial recordings or surgical resection. Conclusion: Our study indicates that combining source imaging algorithms with Granger causality analysis can identify underlying networks precisely (both in terms of network node location and internodal connectivity). 
Significance: The combined source imaging and Granger analysis technique is an effective tool for studying normal or pathological brain conditions. PMID:27740473
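
The Granger step above asks, for each pair of extracted source time-courses, whether past values of one series improve an autoregressive prediction of the other. A minimal pairwise version via nested least-squares models and an F-statistic is sketched below; the simulated driving relationship is an assumption for illustration.

```python
import numpy as np

def granger_f(x, y, p=2):
    """F-statistic: do p lags of x improve an AR(p) prediction of y?"""
    n = len(y)
    Y = y[p:]
    lags_y = np.column_stack([y[p - k:n - k] for k in range(1, p + 1)])
    lags_x = np.column_stack([x[p - k:n - k] for k in range(1, p + 1)])
    Xr = np.column_stack([np.ones(n - p), lags_y])           # restricted model
    Xf = np.column_stack([np.ones(n - p), lags_y, lags_x])   # full model
    rss = lambda X: np.sum((Y - X @ np.linalg.lstsq(X, Y, rcond=None)[0]) ** 2)
    rss_r, rss_f = rss(Xr), rss(Xf)
    return ((rss_r - rss_f) / p) / (rss_f / (n - p - Xf.shape[1]))

rng = np.random.default_rng(2)
n = 500
x = rng.standard_normal(n)
y = np.zeros(n)
for t in range(2, n):                  # y is driven by past x, not vice versa
    y[t] = 0.4 * y[t - 1] + 0.8 * x[t - 1] + 0.1 * rng.standard_normal()
f_xy = granger_f(x, y)                 # large: x Granger-causes y
f_yx = granger_f(y, x)                 # near 1: no influence in reverse
```

The asymmetry between f_xy and f_yx is what gives the estimated connectivity its direction; full toolchains extend this to multivariate (conditional) Granger models across all identified nodes.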

  8. Noninvasive Electromagnetic Source Imaging and Granger Causality Analysis: An Electrophysiological Connectome (eConnectome) Approach.

    PubMed

    Sohrabpour, Abbas; Ye, Shuai; Worrell, Gregory A; Zhang, Wenbo; He, Bin

    2016-12-01

    Combined source-imaging techniques and directional connectivity analysis can provide useful information about the underlying brain networks in a noninvasive fashion. Source-imaging techniques have previously been used successfully either to determine the source of activity or to extract source time-courses for Granger causality analysis. In this work, we utilize source-imaging algorithms both to find the network nodes [regions of interest (ROI)] and to extract the activation time series for further Granger causality analysis. The aim of this work is to find network nodes objectively from noninvasive electromagnetic signals, extract activation time-courses, and apply Granger analysis to the extracted series to study brain networks under realistic conditions. Source-imaging methods are used to identify network nodes and extract time-courses, and then Granger causality analysis is applied to delineate the directional functional connectivity of the underlying brain networks. Computer simulation studies in which the underlying network (nodes and connectivity pattern) is known were performed; additionally, this approach was evaluated in partial epilepsy patients to study epilepsy networks from interictal and ictal signals recorded by EEG and/or magnetoencephalography (MEG). Localization errors of network nodes were less than 5 mm, and normalized connectivity errors were ∼20%, in estimating underlying brain networks in the simulation studies. Additionally, two focal epilepsy patients were studied, and the identified nodes driving the epileptic network were concordant with clinical findings from intracranial recordings or surgical resection. Our study indicates that combining source-imaging algorithms with Granger causality analysis can identify underlying networks precisely (both in terms of network node location and internodal connectivity). The combined source imaging and Granger analysis technique is an effective tool for studying normal or pathological brain conditions.

  9. A 10^9 neutrons/pulse transportable pulsed D-D neutron source based on a flexible-head plasma focus unit

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Niranjan, Ram, E-mail: niranjan@barc.gov.in; Rout, R. K.; Srivastava, R.

    2016-03-15

    A 17 kJ transportable plasma focus (PF) device with flexible transmission lines is developed and characterized. Six custom-made capacitors are used for the capacitor bank (CB). The common high-voltage plate of the CB is fixed to a centrally triggered spark gap switch. The output of the switch is coupled to the PF head through forty-eight 5 m long RG213 cables. The CB has a quarter time-period of 4 μs, and an estimated current of 506 kA is delivered to the PF device at 17 kJ (60 μF, 24 kV) energy. The average neutron yield measured using a silver activation detector in the radial direction is (7.1 ± 1.4) × 10^8 neutrons/shot over 4π sr at the optimum D2 pressure of 5 mbar. The average neutron yield is higher in the axial direction, with an anisotropy factor of 1.33 ± 0.18. The average neutron energies estimated in the axial and radial directions are (2.90 ± 0.20) MeV and (2.58 ± 0.20) MeV, respectively. The flexibility of the PF head makes it useful for many applications where the source orientation and location are important factors. The influence of electromagnetic interference from the CB as well as from the spark gap on the application area can be avoided by placing a suitable barrier between the bank and the PF head.

  10. [Macroeconomic costs of eye diseases].

    PubMed

    Hirneiß, C; Kampik, A; Neubauer, A S

    2014-05-01

    Eye diseases that are relevant regarding their macroeconomic costs and their impact on society include cataract, diabetic retinopathy, age-related maculopathy, glaucoma and refractive errors. The aim of this article is to provide a comprehensive overview of the direct and indirect costs of the major eye disease categories for Germany, based on existing literature and data sources. A semi-structured literature search was performed in the databases Medline and Embase and in the search engine Google for relevant original papers and reviews on the costs of eye diseases with relevance for, or transferability to, Germany (last search date October 2013). In addition, manual searching was performed in important national databases and information sources, such as the Federal Office of Statistics and scientific societies. The direct costs for these diseases add up to approximately 2.6 billion Euros yearly for the Federal Republic of Germany, including out-of-pocket payments from patients but excluding optical aids (e.g., glasses). In addition to these direct costs there are also indirect costs, caused e.g. by loss of employment or productivity or by a reduction in health-related quality of life. These indirect costs can only be roughly estimated. Including the indirect costs for the eye diseases investigated, a total yearly macroeconomic cost of between 4 and 12 billion Euros is estimated for Germany. The costs of the eye diseases cataract, diabetic retinopathy, age-related maculopathy, glaucoma and refractive errors are of a macroeconomically relevant dimension. Based on the predicted demographic changes in an ageing society, an increase in the prevalence, and thus also in the costs, of eye diseases is expected in the future.

  11. Is there differential responsiveness to a future cigarette price increase depending on adolescents' source of cigarette access?

    PubMed

    Hwang, Jun Hyun; Park, Soon-Woo

    2017-06-01

    We examined whether the responsiveness to an increase in cigarettes price differed by adolescents' cigarette acquisition source. We analyzed data on 6134 youth smokers (grades 7-12) from a cross-sectional survey in Korea with national representativeness. The respondents were classified into one of the following according to their source of cigarette acquisition: commercial-source group, social-source group, and others. Multiple logistic regressions were performed to estimate the effects of an increase in cigarette price on the intention to quit smoking on the basis of the cigarette acquisition source. Of the 6134 youth smokers, 36.0% acquired cigarettes from social sources, compared to the 49.6% who purchased cigarettes directly from commercial sources. In response to a future cigarette price increase, regardless of an individual's smoking level, there was no statistically significant difference in the odds ratio for the intention to stop smoking in association with cigarette acquisition sources. The social-source group had nonsignificant, but consistently positive, odds ratios (1.07-1.30) as compared to that of the commercial-source group. Our findings indicate that the cigarette acquisition source does not affect the responsiveness to an increase in cigarette price. Therefore, a cigarette price policy is a comprehensive strategy to reduce smoking among youth smokers, regardless of their source.

  12. Frequency-dependent effects of rupture for the 2004 Parkfield mainshock, results from UPSAR

    USGS Publications Warehouse

    Fletcher, Jon B.

    2014-01-01

    The frequency-dependent effects of rupture propagation of the Parkfield, California earthquake (Sept. 28, 2004, M6) to the northwest along the San Andreas fault can be seen in acceleration records at UPSAR (USGS Parkfield Seismic Array) in at least two ways. First, we can see the effects of directivity in the acceleration traces at UPSAR, which is about 11.5 km from the epicenter. Directivity, the seismic equivalent of a Doppler shift, has been documented in many cases by comparing short-duration, high-amplitude pulses (P or S) in the forward direction with longer-duration body waves in the backward direction. In this case we detect a change from a relatively large-amplitude, coherent, high-frequency signal at the start of rupture to a low-amplitude, low-coherence, low-frequency signal at about the time the rupture front transfers from the forward azimuth to the back azimuth, at about 34-36 s (times are UTC, in seconds after day 272, 17:15; the S arrival is just after 30 s) for rays leaving the fault and propagating to UPSAR. The frequency change is obvious in the band from about 5 to 30 Hz, which is significantly above the corner frequency of the earthquake (about 0.11 Hz). From kinematic source models, the duration of faulting is about 9.2 s, and the change in frequency occurs during faulting as the rupture extends to the northwest. Understanding the systematic change in frequency and amplitude of seismic waves in relation to the propagation of the rupture front is important for predicting strong ground motion. Second, we can filter the acceleration records from the array to determine whether the low-frequency energy emerges from the same part of the fault as the high-frequency signal (e.g., has the same back azimuth and apparent velocity at UPSAR), an important clue to the dynamics of rupture. 
Analysis of sources of strong motion (characterized by relatively high frequencies) compared to kinematic slip models (relatively low frequency) for the March 11, 2011 Tohoku earthquake as well as Maule (Feb. 27, 2010) and Chi-Chi (Sept. 20, 1999) earthquakes show that high- and low-frequency sources do not have the same locations on the fault. In this paper we filter the accelerograms from UPSAR for the 2004 mainshock in various passbands and then re-compute the cross correlations to determine the vector slowness of the incoming waves. At Parkfield, it appears that for seismic waves with frequencies above 1 Hz there is no discernible frequency-dependent difference in source position (up to 8 Hz) based on estimates of back azimuth and apparent velocity. However at lower frequencies, sources appear to be from shallower depths and trail the high frequencies as the rupture proceeds down the fault. This result is greater than one standard deviation of an estimate of error, based on a new method of estimating error that is a measure of how broad the peak in correlation is and an estimate of the variance of the correlation values. These observations can be understood in terms of a rupture front that is more energetic and coherent near the front of rupture (radiating higher frequencies) and less coherent and less energetic (radiating in a lower frequency band) behind the initial rupture front. This result is a qualitative assessment of changes in azimuth and apparent velocity with frequency and time and does not include corrections to find the source location on the fault.
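
The back azimuth and apparent velocity referred to above come from fitting a plane wave to the relative arrival delays across the array. A minimal least-squares version is sketched below; the station geometry, delays, and the sign convention (delay increases in the propagation direction, back azimuth measured clockwise from north) are illustrative assumptions, not the UPSAR processing.

```python
import numpy as np

def plane_wave_slowness(coords, delays):
    """Least-squares plane-wave fit: delays ~ coords @ s, with s the
    horizontal slowness vector (s/km).  coords: (n, 2) station east/north
    offsets in km; delays: relative arrival times in s (e.g., from
    cross-correlation lags)."""
    s, *_ = np.linalg.lstsq(coords, delays, rcond=None)
    baz = np.degrees(np.arctan2(-s[0], -s[1])) % 360.0  # direction wave comes FROM
    v_app = 1.0 / np.linalg.norm(s)                     # apparent velocity (km/s)
    return s, baz, v_app

coords = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0],
                   [-1.0, 0.5], [0.6, -0.8]])           # toy 5-station array
s_true = np.array([0.1, 0.2])                           # slowness (s/km)
delays = coords @ s_true                                # noiseless synthetic delays
s_est, baz, v_app = plane_wave_slowness(coords, delays)
```

Repeating this fit in sliding time windows on band-pass-filtered records is what lets the study track how the apparent source position (back azimuth, apparent velocity) varies with frequency and time during rupture.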

  13. Do oceanic emissions account for the missing source of atmospheric carbonyl sulfide?

    NASA Astrophysics Data System (ADS)

    Lennartz, Sinikka; Marandino, Christa A.; von Hobe, Marc; Cortés, Pau; Simó, Rafel; Booge, Dennis; Quack, Birgit; Röttgers, Rüdiger; Ksionzek, Kerstin; Koch, Boris P.; Bracher, Astrid; Krüger, Kirstin

    2016-04-01

    Carbonyl sulfide (OCS) has a large potential to constrain terrestrial gross primary production (GPP), one of the largest carbon fluxes in the carbon cycle, because it is taken up by plants in a similar way to CO2. To estimate GPP in a global approach, the magnitude and seasonality of the sources and sinks of atmospheric OCS have to be well understood, in order to distinguish seasonal variation caused by vegetation uptake from that caused by other sources or sinks. However, the atmospheric budget is currently highly uncertain, and the oceanic source strength in particular is debated. Recent top-down studies suggest that a missing source of several hundred Gg of sulfur per year is located in the tropical ocean. Here, we present highly resolved OCS measurements from two cruises to the tropical Pacific and Indian Ocean as a bottom-up approach. The results from these cruises show that, contrary to the assumed ocean source, direct emissions of OCS from the tropical ocean are unlikely to account for the missing source. To reduce uncertainty in the global oceanic emission estimate, our understanding of the production and consumption processes of OCS and its precursors, dimethyl sulfide (DMS) and carbon disulfide (CS2), needs improvement. Therefore, we investigate the influence of dissolved organic matter (DOM) on the photochemical production of OCS in seawater by analyzing the composition of DOM from the two cruises. Additionally, we discuss the potential of oceanic emissions of DMS and CS2 to close the atmospheric OCS budget. The production and consumption processes of CS2 in the surface ocean are especially poorly known; we therefore evaluate possible photochemical or biological sources by analyzing its covariation with biological and photochemical parameters.

  14. Numerical simulations of the hard X-ray pulse intensity distribution at the Linac Coherent Light Source

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pardini, Tom; Aquila, Andrew; Boutet, Sebastien

    Numerical simulations of the current and future pulse intensity distributions at selected locations along the Far Experimental Hall, the hard X-ray section of the Linac Coherent Light Source (LCLS), are provided. Estimates are given for the pulse fluence, energy and size in and out of focus, taking into account effects due to the experimentally measured divergence of the X-ray beam, and measured figure errors of all X-ray optics in the beam path. Out-of-focus results are validated by comparison with experimental data. Previous work is expanded on, providing quantitatively correct predictions of the pulse intensity distribution. Numerical estimates in focus are particularly important given that the latter cannot be measured with direct imaging techniques due to detector damage. Finally, novel numerical estimates of improvements to the pulse intensity distribution expected as part of the on-going upgrade of the LCLS X-ray transport system are provided. As a result, we suggest how the new generation of X-ray optics to be installed would outperform the old one, satisfying the tight requirements imposed by X-ray free-electron laser facilities.

  15. Numerical simulations of the hard X-ray pulse intensity distribution at the Linac Coherent Light Source

    DOE PAGES

    Pardini, Tom; Aquila, Andrew; Boutet, Sebastien; ...

    2017-06-15

    Numerical simulations of the current and future pulse intensity distributions at selected locations along the Far Experimental Hall, the hard X-ray section of the Linac Coherent Light Source (LCLS), are provided. Estimates are given for the pulse fluence, energy and size in and out of focus, taking into account effects due to the experimentally measured divergence of the X-ray beam, and measured figure errors of all X-ray optics in the beam path. Out-of-focus results are validated by comparison with experimental data. Previous work is expanded on, providing quantitatively correct predictions of the pulse intensity distribution. Numerical estimates in focus are particularly important given that the latter cannot be measured with direct imaging techniques due to detector damage. Finally, novel numerical estimates of improvements to the pulse intensity distribution expected as part of the on-going upgrade of the LCLS X-ray transport system are provided. As a result, we suggest how the new generation of X-ray optics to be installed would outperform the old one, satisfying the tight requirements imposed by X-ray free-electron laser facilities.

  16. Improved Overpressure Recording and Modeling for Near-Surface Explosion Forensics

    NASA Astrophysics Data System (ADS)

    Kim, K.; Schnurr, J.; Garces, M. A.; Rodgers, A. J.

    2017-12-01

    The accurate recording and analysis of air-blast acoustic waveforms is a key component of the forensic analysis of explosive events. Smartphone apps can enhance traditional technologies by providing scalable, cost-effective, ubiquitous sensor solutions for monitoring blasts, undeclared activities, and inaccessible facilities. During a series of near-surface chemical high-explosive tests, iPhone 6 devices running the RedVox infrasound recorder app were co-located with high-fidelity Hyperion overpressure sensors, allowing for direct comparison of the resolution and frequency content of the devices. Data from the traditional sensors are used to characterize blast signatures and to determine relative iPhone microphone amplitude and phase responses. A Wiener-filter-based source deconvolution method is applied, using a parameterized source function estimated from traditional overpressure sensor data, to estimate system responses. In addition, progress on a new parameterized air-blast model is presented. The model is based on the analysis of a large set of overpressure waveforms from several surface explosion test series. An appropriate functional form, with parameters determined empirically from modern air-blast and acoustic data, will allow for better parameterization of signals and improved characterization of explosive sources.
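
    The Wiener-filter deconvolution step can be illustrated as a regularized spectral division: with a known (parameterized) source and a recorded trace, the system response is recovered as conj(S)R/(|S|^2 + eps). This is a generic sketch; the `noise_level` regularization and the function name are our assumptions, not the study's exact implementation.

```python
import numpy as np

def wiener_deconvolve(recorded, source, noise_level=0.01):
    """Estimate a system impulse response by Wiener deconvolution:
    H = conj(S) R / (|S|^2 + eps), a regularized spectral division that
    suppresses frequencies where the source has little energy."""
    n = len(recorded)
    S = np.fft.rfft(source, n)
    R = np.fft.rfft(recorded, n)
    eps = noise_level * np.max(np.abs(S)) ** 2  # water-level regularizer
    H = np.conj(S) * R / (np.abs(S) ** 2 + eps)
    return np.fft.irfft(H, n)
```

    Larger `noise_level` values trade resolution for stability when the recordings are noisy.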

  17. Modeling the utility of binaural cues for underwater sound localization.

    PubMed

    Schneider, Jennifer N; Lloyd, David R; Banks, Patchouly N; Mercado, Eduardo

    2014-06-01

    The binaural cues used by terrestrial animals for sound localization in azimuth may not always suffice for accurate sound localization underwater. The purpose of this research was to examine the theoretical limits of interaural timing and level differences available underwater using computational and physical models. A paired-hydrophone system was used to record sounds transmitted underwater and recordings were analyzed using neural networks calibrated to reflect the auditory capabilities of terrestrial mammals. Estimates of source direction based on temporal differences were most accurate for frequencies between 0.5 and 1.75 kHz, with greater resolution toward the midline (2°), and lower resolution toward the periphery (9°). Level cues also changed systematically with source azimuth, even at lower frequencies than expected from theoretical calculations, suggesting that binaural mechanical coupling (e.g., through bone conduction) might, in principle, facilitate underwater sound localization. Overall, the relatively limited ability of the model to estimate source position using temporal and level difference cues underwater suggests that animals such as whales may use additional cues to accurately localize conspecifics and predators at long distances. Copyright © 2014 Elsevier B.V. All rights reserved.
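
    As a rough illustration of the timing cue the model exploits, an interaural time difference can be estimated by cross-correlating the two receiver channels and converted to an azimuth via sin(theta) = c * ITD / spacing, with the nominal underwater sound speed of about 1500 m/s. The function and sign conventions here are illustrative, not the paper's neural-network method.

```python
import numpy as np

def itd_azimuth(left, right, fs, spacing_m, c=1500.0):
    """Estimate source azimuth from the inter-receiver time difference:
    sin(theta) = c * ITD / spacing. Positive angles mean the wavefront
    reaches `left` first."""
    xc = np.correlate(left, right, mode="full")
    lags = np.arange(-(len(right) - 1), len(left))
    itd = -lags[np.argmax(xc)] / fs  # positive when `left` leads
    return np.degrees(np.arcsin(np.clip(c * itd / spacing_m, -1.0, 1.0)))
```

    Because c is about 4.4 times faster underwater than in air, the same receiver spacing yields much smaller ITDs, one reason purely temporal cues resolve azimuth poorly underwater.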

  18. Substantial large-scale feedbacks between natural aerosols and climate

    NASA Astrophysics Data System (ADS)

    Scott, C. E.; Arnold, S. R.; Monks, S. A.; Asmi, A.; Paasonen, P.; Spracklen, D. V.

    2018-01-01

    The terrestrial biosphere is an important source of natural aerosol. Natural aerosol sources alter climate, but are also strongly controlled by climate, leading to the potential for natural aerosol-climate feedbacks. Here we use a global aerosol model to make an assessment of terrestrial natural aerosol-climate feedbacks, constrained by observations of aerosol number. We find that warmer-than-average temperatures are associated with higher-than-average number concentrations of large (>100 nm diameter) particles, particularly during the summer. This relationship is well reproduced by the model and is driven by both meteorological variability and variability in natural aerosol from biogenic and landscape fire sources. We find that the calculated extratropical annual mean aerosol radiative effect (both direct and indirect) is negatively related to the observed global temperature anomaly, and is driven by a positive relationship between temperature and the emission of natural aerosol. The extratropical aerosol-climate feedback is estimated to be -0.14 W m-2 K-1 for landscape fire aerosol, greater than the -0.03 W m-2 K-1 estimated for biogenic secondary organic aerosol. These feedbacks are comparable in magnitude to other biogeochemical feedbacks, highlighting the need for natural aerosol feedbacks to be included in climate simulations.

  19. Exploring the observational constraints on the simulation of brown carbon

    NASA Astrophysics Data System (ADS)

    Wang, Xuan; Heald, Colette L.; Liu, Jiumeng; Weber, Rodney J.; Campuzano-Jost, Pedro; Jimenez, Jose L.; Schwarz, Joshua P.; Perring, Anne E.

    2018-01-01

    Organic aerosols (OA) that strongly absorb solar radiation in the near-UV are referred to as brown carbon (BrC). The sources, evolution, and optical properties of BrC remain highly uncertain and contribute significantly to uncertainty in the estimate of the global direct radiative effect (DRE) of aerosols. Previous modeling studies of BrC optical properties and DRE have been unable to fully evaluate model performance due to the lack of direct measurements of BrC absorption. In this study, we develop a global model simulation (GEOS-Chem) of BrC and test it against BrC absorption measurements from two aircraft campaigns in the continental US (SEAC4RS and DC3). To the best of our knowledge, this is the first study to compare simulated BrC absorption with direct aircraft measurements. We show that BrC absorption properties estimated based on previous laboratory measurements agree with the aircraft measurements of freshly emitted BrC absorption but overestimate aged BrC absorption. In addition, applying a photochemical scheme to simulate bleaching/degradation of BrC improves model skill. The airborne observations are therefore consistent with a mass absorption coefficient (MAC) of freshly emitted biomass burning OA of 1.33 m2 g-1 at 365 nm coupled with a 1-day whitening e-folding time. Using the GEOS-Chem chemical transport model integrated with the RRTMG radiative transfer model, we estimate that the top-of-the-atmosphere all-sky direct radiative effect (DRE) of OA is -0.344 Wm-2, 10 % higher than that without consideration of BrC absorption. Therefore, our best estimate of the absorption DRE of BrC is +0.048 Wm-2. We suggest that the DRE of BrC has been overestimated previously due to the lack of observational constraints from direct measurements and omission of the effects of photochemical whitening.
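
    The whitening scheme implied by the best-fit values above (a fresh-emission MAC of 1.33 m2 g-1 at 365 nm with a 1-day e-folding time) amounts to simple exponential decay of absorption with plume age; a one-line sketch (the function name is ours):

```python
import math

def brc_mac_365(age_days, mac0=1.33, efold_days=1.0):
    """Brown-carbon mass absorption coefficient at 365 nm (m^2/g) after
    photochemical whitening, modeled as exponential decay of the fresh
    biomass-burning value with a 1-day e-folding time (best-fit values
    quoted in the abstract)."""
    return mac0 * math.exp(-age_days / efold_days)
```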

  20. [Comparison of Google and Yahoo applications for geocoding of postal addresses in epidemiological studies].

    PubMed

    Quesada, Jose Antonio; Nolasco, Andreu; Moncho, Joaquín

    2013-01-01

    Geocoding is the assignment of geographic coordinates to spatial points, which often are postal addresses. The error made in this process can introduce bias into estimates from spatiotemporal models in epidemiological studies. No studies were found that measure the error of this process in Spanish cities. The objective is to evaluate the errors, in magnitude and direction, of two free sources (Google and Yahoo) with respect to a GPS reference in two Spanish cities. Thirty addresses were geocoded with these two sources and with GPS in Santa Pola (Alicante) and in the city of Alicante. Distances in metres (median, 95% CI) between each source and the GPS were calculated, globally and according to the status reported by each source. The directionality of the error was evaluated by calculating the location quadrant and applying a chi-square test. The GPS error was evaluated by geocoding 11 addresses twice, 4 days apart. The overall median for Google-GPS was 23.2 metres (16.0-32.1) for Santa Pola and 21.4 metres (14.9-31.1) for Alicante. The overall median for Yahoo was 136.0 metres (19.2-318.5) for Santa Pola and 23.8 metres (13.6-29.2) for Alicante. Between 73% and 90% of addresses were geocoded with status "exact or interpolated" (minor error), for which Google and Yahoo had median errors between 19 and 23 metres in the two cities. The GPS had a median error of 13.8 metres (6.7-17.8). No directionality of the error was detected. The Google error is acceptable and stable across the two cities, so it is a reliable source for geocoding addresses in Spain in epidemiological studies.
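
    Distances between geocoded and GPS positions of the kind reported above can be computed with the standard haversine formula; a sketch (the study does not state which distance formula was used):

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two points given in
    decimal degrees; suitable for measuring geocoding error against
    a GPS reference."""
    r = 6371000.0  # mean Earth radius, metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = p2 - p1
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))
```

    At the tens-of-metres errors reported here, the spherical approximation is far more accurate than the geocoding error itself.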

  1. Nonspinning numerical relativity waveform surrogates: assessing the model

    NASA Astrophysics Data System (ADS)

    Field, Scott; Blackman, Jonathan; Galley, Chad; Scheel, Mark; Szilagyi, Bela; Tiglio, Manuel

    2015-04-01

    Recently, multi-modal gravitational waveform surrogate models have been built directly from data numerically generated by the Spectral Einstein Code (SpEC). I will describe ways in which the surrogate model error can be quantified. This task, in turn, requires (i) characterizing differences between waveforms computed by SpEC with those predicted by the surrogate model and (ii) estimating errors associated with the SpEC waveforms from which the surrogate is built. Both pieces can have numerous sources of numerical and systematic errors. We make an attempt to study the most dominant error sources and, ultimately, the surrogate model's fidelity. These investigations yield information about the surrogate model's uncertainty as a function of time (or frequency) and parameter, and could be useful in parameter estimation studies which seek to incorporate model error. Finally, I will conclude by comparing the numerical relativity surrogate model to other inspiral-merger-ringdown models. A companion talk will cover the building of multi-modal surrogate models.

  2. SOME APPLICATIONS OF SEISMIC SOURCE MECHANISM STUDIES TO ASSESSING UNDERGROUND HAZARD.

    USGS Publications Warehouse

    McGarr, A.; ,

    1984-01-01

    Various measures of the seismic source mechanism of mine tremors, such as magnitude, moment, stress drop, apparent stress, and seismic efficiency, can be related directly to several aspects of the problem of determining the underground hazard arising from strong ground motion of large seismic events. First, the relation between the sum of seismic moments of tremors and the volume of stope closure caused by mining during a given period can be used, in conjunction with magnitude-frequency statistics and an empirical relation between moment and magnitude, to estimate the maximum possible size of tremor for a given mining situation. Second, it is shown that the 'energy release rate,' a commonly used parameter for predicting underground seismic hazard, may be misleading in that the importance of overburden stress, or depth, is overstated. Third, results involving the relation between peak velocity and magnitude, magnitude-frequency statistics, and the maximum possible magnitude are applied to the problem of estimating the frequency at which the design limits of certain underground support equipment are likely to be exceeded.
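
    An empirical moment-magnitude relation of the kind the first application relies on is commonly written as log10(M0) = 1.5 M + 9.1 (the Hanks-Kanamori form, M0 in N m); the paper does not state which relation it uses, so this is illustrative:

```python
import math

def moment_from_magnitude(m):
    """Seismic moment M0 (N m) from moment magnitude via the standard
    Hanks-Kanamori relation log10(M0) = 1.5*M + 9.1."""
    return 10.0 ** (1.5 * m + 9.1)

def magnitude_from_moment(m0):
    """Inverse relation: moment magnitude from seismic moment (N m)."""
    return (math.log10(m0) - 9.1) / 1.5
```

    Summing moments of observed tremors and converting the total back to a magnitude gives one way to bound the maximum event consistent with the volume of stope closure.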

  3. Total and non-seasalt sulfate and chloride measured in bulk precipitation samples from the Kilauea Volcano area, Hawaii

    USGS Publications Warehouse

    Scholl, M.A.; Ingebritsen, S.E.

    1995-01-01

    Six-month cumulative precipitation samples provide estimates of bulk deposition of sulfate and chloride for the southeast part of the Island of Hawaii during four time periods: August 1991 to February 1992, February 1992 to September 1992, March 1993 to September 1993, and September 1993 to February 1994. Total estimated bulk deposition rates for sulfate ranged from 0.12 to 24 grams per square meter per 180 days, and non-seasalt sulfate deposition ranged from 0.06 to 24 grams per square meter per 180 days. Patterns of non-seasalt sulfate deposition were generally related to prevailing wind directions and the proximity of the collection site to large sources of sulfur gases, namely Kilauea Volcano's summit and East Rift Zone eruption. Total chloride deposition from bulk precipitation samples ranged from 0.01 to 17 grams per square meter per 180 days. Chloride appeared to be predominantly from oceanic sources, as non- seasalt chloride deposition was near zero for most sites.

  4. A comparison of skyshine computational methods.

    PubMed

    Hertel, Nolan E; Sweezy, Jeremy E; Shultis, J Kenneth; Warkentin, J Karl; Rose, Zachary J

    2005-01-01

    A variety of methods employing radiation transport and point-kernel codes have been used to model two skyshine problems. The first problem is a 1 MeV point source of photons on the surface of the earth inside a 2 m tall and 1 m radius silo having black walls. The skyshine radiation downfield from the point source was estimated with and without a 30-cm-thick concrete lid on the silo. The second benchmark problem is to estimate the skyshine radiation downfield from 12 cylindrical canisters emplaced in a low-level radioactive waste trench. The canisters are filled with ion-exchange resin with a representative radionuclide loading, largely 60Co, 134Cs and 137Cs. The solution methods include use of the MCNP code to solve the problem by directly employing variance reduction techniques, the single-scatter point kernel code GGG-GP, the QADMOD-GP point kernel code, the COHORT Monte Carlo code, the NAC International version of the SKYSHINE-III code, the KSU hybrid method and the associated KSU skyshine codes.

  5. High-resolution sampling and analysis of ambient particulate matter in the Pearl River Delta region of southern China: source apportionment and health risk implications

    NASA Astrophysics Data System (ADS)

    Zhou, Shengzhen; Davy, Perry K.; Huang, Minjuan; Duan, Jingbo; Wang, Xuemei; Fan, Qi; Chang, Ming; Liu, Yiming; Chen, Weihua; Xie, Shanju; Ancelet, Travis; Trompetter, William J.

    2018-02-01

    Hazardous air pollutants, such as trace elements in particulate matter (PM), are known or highly suspected to cause detrimental effects on human health. To understand the sources and associated risks of PM to human health, hourly time-integrated major trace elements in size-segregated coarse (PM2.5-10) and fine (PM2.5) particulate matter were collected at the industrial city of Foshan in the Pearl River Delta region, China. Receptor modeling of the data set by positive matrix factorization (PMF) was used to identify six sources contributing to PM2.5 and PM10 concentrations at the site. Dominant sources included industrial coal combustion, secondary inorganic aerosol, motor vehicles and construction dust along with two intermittent sources (biomass combustion and marine aerosol). The biomass combustion source was found to be a significant contributor to peak PM2.5 episodes along with motor vehicles and industrial coal combustion. Conditional probability function (CPF) analysis was applied to estimate the source locations using the PMF-resolved source contribution coupled with the surface wind direction data. Health exposure risk of hazardous trace elements (Pb, As, Si, Cr, Mn and Ni) and source-specific values were estimated. The total hazard quotient (HQ) of PM2.5 was 2.09, higher than the acceptable limit (HQ = 1). The total carcinogenic risk (CR) was 3.37 × 10-3 for PM2.5, which was 3 times higher than the least stringent limit (1.0 × 10-4). Among the selected trace elements, As and Pb posed the highest non-carcinogenic and carcinogenic risks to human health, respectively. In addition, our results show that the industrial coal combustion source is the dominant non-carcinogenic and carcinogenic risk contributor, highlighting the need for stringent control of this source. This study provides new insight for policy makers to prioritize sources in air quality management and health risk reduction.
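
    The screening-level risk metrics quoted above follow the usual definitions: a hazard quotient as exposure concentration divided by a reference concentration (HQ > 1 flags potential non-carcinogenic risk), and an inhalation cancer risk as concentration times an inhalation unit risk. A simplified sketch, ignoring exposure-duration adjustments the study may have applied:

```python
def hazard_quotient(conc_ug_m3, rfc_ug_m3):
    """Non-carcinogenic hazard quotient: exposure concentration divided
    by the reference concentration; quotients for individual elements
    (e.g. Pb, As, Mn) are summed into a total HQ."""
    return conc_ug_m3 / rfc_ug_m3

def cancer_risk(conc_ug_m3, iur_per_ug_m3):
    """Inhalation carcinogenic risk: exposure concentration times the
    inhalation unit risk (acceptable range typically 1e-6 to 1e-4)."""
    return conc_ug_m3 * iur_per_ug_m3
```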

  6. Using Model Comparisons to Understand Sources of Nitrogen Delivered to US Coastal Areas

    NASA Astrophysics Data System (ADS)

    McCrackin, M. L.; Harrison, J.; Compton, J. E.

    2011-12-01

    Nitrogen loading to water bodies can result in eutrophication-related hypoxia and degraded water quality. The relative contributions of different anthropogenic and natural sources of in-stream N cannot be directly measured at whole-watershed scales; hence, N source attribution estimates at scales beyond a small catchment must rely on models. Although such estimates have been accomplished using individual N loading models, there has not yet been a comparison of source attribution by multiple regional- and continental-scale models. We compared results from two models applied at large spatial scales: Nutrient Export from WatershedS (NEWS) and SPAtially Referenced Regressions On Watersheds (SPARROW). Despite widely divergent approaches to source attribution, NEWS and SPARROW identified the same dominant sources of N for 65% of the modeled drainage area of the continental US. Human activities accounted for over two-thirds of N delivered to the coastal zone. Regionally, the single largest sources of N predicted by both models reflect land-use patterns across the country. Sewage was an important source in densely populated regions along the east and west coasts of the US. Fertilizer and livestock manure were dominant in the Mississippi River Basin, where the bulk of agricultural areas are located. Run-off from undeveloped areas was the largest source of N delivered to coastal areas in the northwestern US. Our analysis shows that comparisons of source apportionment between models can increase confidence in modeled output by revealing areas of agreement and disagreement. We found predictions for agriculture and atmospheric deposition to be comparable between models; however, attribution to sewage was greater by SPARROW than by NEWS, while the reverse was true for natural N sources. Such differences in predictions resulted from differences in model structure and sources of input data. 
Nonetheless, model comparisons provide strong evidence that anthropogenic activities have a profound effect on N delivered to coastal areas of the US, especially along the Atlantic coast and Gulf of Mexico.

  7. Estimation of low-level neutron dose-equivalent rate by using extrapolation method for a curie level Am-Be neutron source.

    PubMed

    Li, Gang; Xu, Jiayun; Zhang, Jie

    2015-01-01

    Neutron radiation protection is an important research area because of the strong radiobiological effect of neutron fields. The radiation dose from neutrons is closely related to the neutron energy, and the relationship is a complex function of energy. For a low-level neutron radiation field (e.g. an Am-Be source), commonly used commercial neutron dosimeters cannot always resolve the low-level dose rate, being restricted by their sensitivity limits and measuring ranges. In this paper, the intensity distribution of the neutron field produced by a curie-level Am-Be neutron source was investigated by measuring the count rates obtained with a 3He proportional counter at different locations around the source. The results indicate that the count rates outside the source room are negligible compared with those measured in the source room. In the source room, a 3He proportional counter and a neutron dosimeter were used to measure count rates and dose rates, respectively, at different distances from the source. The results indicate that both the count rates and dose rates decrease exponentially with increasing distance, and the dose rates measured by a commercial dosimeter are in good agreement with the results calculated by Geant4 simulation, within the inherent errors recommended by ICRP and IEC. Further studies presented in this paper indicate that the low-level neutron dose-equivalent rates in the source room increase exponentially with the increasing low-energy neutron count rates when the source is lifted from its shield at different radiation intensities. Based on this relationship, together with count rates measured at larger distances from the source, the dose rates can be calculated approximately by extrapolation. This principle can be used to estimate low-level neutron dose values in the source room that cannot be measured directly by a commercial dosimeter. Copyright © 2014 Elsevier Ltd. All rights reserved.
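
    The extrapolation principle, a dose rate varying exponentially with the measured count rate, can be sketched as a log-linear calibration fit followed by evaluation at count rates where the dosimeter itself cannot resolve the dose (synthetic numbers below, not the paper's data):

```python
import numpy as np

def fit_exponential(x, y):
    """Fit y = a * exp(b * x) by linear least squares on log(y)."""
    b, ln_a = np.polyfit(np.asarray(x, float), np.log(np.asarray(y, float)), 1)
    return np.exp(ln_a), b

def dose_from_counts(count_rate, a, b):
    """Extrapolate the dose-equivalent rate from a measured low-energy
    neutron count rate using the fitted exponential calibration."""
    return a * np.exp(b * count_rate)
```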

  8. A Fuel-Based Assessment of On-Road and Off-Road Mobile Source Emission Trends

    NASA Astrophysics Data System (ADS)

    Dallmann, T. R.; Harley, R. A.

    2009-12-01

    Mobile sources contribute significantly to emissions of nitrogen oxides (NOx) and fine particulate matter (PM2.5) in the United States. These emissions lead to a variety of environmental concerns including adverse human health effects and climate change. In the electric power sector, sulfur dioxide (SO2) and NOx emissions from power plants are measured directly using continuous emission monitoring systems. In contrast, for mobile sources, statistical models are used to estimate average emissions from a very large and diverse population of engines. Despite much effort aimed at improving them, mobile source emission inventories continue to have large associated uncertainties. Alternate methods are needed to help evaluate estimates of mobile source emissions and quantify and reduce the associated uncertainties. In this study, a fuel-based approach is used to estimate emissions from mobile sources, including on-road and off-road gasoline and diesel engines. In this approach, engine activity is measured by fuel consumed (in contrast, EPA mobile source emission models are based on vehicle km of travel and total amount of engine work output for on-road and off-road engines, respectively). Fuel consumption is defined in this study based on highway fuel tax reports for on-road engines, and from surveys of fuel wholesalers who sell tax-exempt diesel fuel for use in various off-road sectors such as agriculture, construction, and mining. Over the decade-long time period (1996-2006) that is the focus of the present study, national sales of taxable gasoline and diesel fuel intended for on-road use increased by 15% and 43%, respectively. Diesel fuel use by off-road equipment increased by about 20% over the same time period. Growth in fuel consumption offset some of the reductions in pollutant emission factors that occurred during this period.
This study relies on in-use measurements of mobile source emission factors, for example from roadside and tunnel studies, remote sensing, and plume capture experiments. Extensive in-use emissions data are available for NOx, especially for on-road engines. Measurements of exhaust PM2.5 emission factors are sparse in comparison. For NOx, there have been dramatic (factor of 2) decreases in emission factors for on-road gasoline engines between 1996 and 2006, due to the use of improved catalytic converters on most engines. In contrast, diesel NOx emission factors decreased more gradually over the same time period. Exhaust PM2.5 emission factors appear to have decreased for most engine categories, but emission uncertainties are large for this pollutant. Pollutant emissions were estimated by combining fuel sales with emission factors expressed per unit of fuel burned. Diesel engines are the dominant mobile source of both NOx and PM2.5; the diesel contribution to NOx has increased over time as gasoline engine emissions have declined. Comparing fuel-based emission estimates with EPA's national emission inventory led to the following conclusions: (1) total emissions of both NOx and PM2.5 estimated by the two methods were similar; (2) the distributions of source contributions to these totals differ significantly, with higher relative contributions from on-road diesel engines in this study than in the EPA inventory.
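
    The core fuel-based bookkeeping is a multiplication of fuel burned by an emission factor expressed per unit of fuel, summed over sectors; a minimal sketch (the sector names, fuel totals, and emission factors below are hypothetical illustrations, not the study's values):

```python
def fuel_based_emissions_kg(fuel_kg, ef_g_per_kg_fuel):
    """Fuel-based inventory: emissions (kg) = fuel burned (kg) times an
    emission factor expressed per unit of fuel (g pollutant / kg fuel)."""
    return fuel_kg * ef_g_per_kg_fuel / 1000.0

# hypothetical sector totals: (kg fuel sold, g NOx per kg fuel)
sectors = {
    "on-road diesel": (2.0e9, 40.0),
    "off-road diesel": (5.0e8, 45.0),
}
total_nox_kg = sum(fuel_based_emissions_kg(f, ef) for f, ef in sectors.values())
```

    The appeal of the approach is that fuel sales are tracked for tax purposes, so the activity term is far better constrained than modeled vehicle-kilometers or engine work output.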

  9. Estimating the Earthquake Source Time Function by Markov Chain Monte Carlo Sampling

    NASA Astrophysics Data System (ADS)

    Dȩbski, Wojciech

    2008-07-01

    Many aspects of earthquake source dynamics, like dynamic stress drop, rupture velocity, and directivity, are currently inferred from source time functions obtained by deconvolving propagation and recording effects from seismograms. The question of the accuracy of the obtained results remains open. In this paper we address this issue by considering two aspects of the source time function deconvolution. First, we propose a new pseudo-spectral parameterization of the sought function which explicitly takes into account the physical constraints imposed on it. Such a parameterization automatically excludes non-physical solutions and so improves the stability and uniqueness of the deconvolution. Secondly, we demonstrate that the Bayesian approach to the inverse problem at hand, combined with an efficient Markov Chain Monte Carlo sampling technique, allows efficient estimation of the source time function uncertainties. The key point of the approach is the description of the solution of the inverse problem by the a posteriori probability density function constructed according to Bayesian (probabilistic) theory. Next, the Markov Chain Monte Carlo sampling technique is used to sample this function, so the statistical estimator of a posteriori errors can be easily obtained with minimal additional computational effort with respect to modern inversion (optimization) algorithms. The methodological considerations are illustrated by a case study of a mining-induced seismic event of magnitude ML ≈ 3.1 that occurred at the Rudna (Poland) copper mine. The seismic P-wave records were inverted for the source time functions, using the proposed algorithm and the empirical Green function technique to approximate Green functions. The obtained solutions seem to suggest some complexity of the rupture process, with double pulses of energy release.
However, the error analysis shows that the hypothesis of source complexity is not justified at the 95% confidence level. On the basis of the analyzed event we also show that the separation of the source inversion into two steps introduces limitations on the completeness of the a posteriori error analysis.
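
    The sampling step of the Bayesian approach can be sketched with a random-walk Metropolis sampler, which needs only the unnormalized log posterior; a posteriori error estimates then come from sample statistics. Shown here for a one-parameter toy posterior, not the full source-time-function parameterization:

```python
import numpy as np

def metropolis(log_post, x0, n_samples, step=1.0, seed=0):
    """Random-walk Metropolis sampler: draws from a target given only its
    unnormalized log-density, so a posteriori error statistics (mean,
    standard deviation, quantiles) can be read off the sample."""
    rng = np.random.default_rng(seed)
    x, lp = float(x0), log_post(x0)
    out = np.empty(n_samples)
    for i in range(n_samples):
        prop = x + step * rng.standard_normal()   # symmetric proposal
        lp_prop = log_post(prop)
        if np.log(rng.random()) < lp_prop - lp:   # accept with prob min(1, ratio)
            x, lp = prop, lp_prop
        out[i] = x
    return out
```

    In the full problem each sample is a whole source time function, and questions such as "is the double pulse significant?" are answered by checking whether single-pulse functions fall inside the sampled credible region.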

  10. Direct atmospheric deposition of 210Pb to rivers determined through 210Pb (210Po) disequilibrium and implications to sediment source apportionment

    NASA Astrophysics Data System (ADS)

    Blumentritt, D. J.; Shottler, S.; Engstrom, D. R.

    2011-12-01

    Atmospheric radioisotopes such as 210Pb can be an effective tool for determining sediment source types in rivers and streams. 210Pb is ubiquitous in deposition from atmospheric washout and is highly particle reactive, so sediments derived from a surface with prolonged exposure to rainfall, such as farm fields, are enriched in atmospheric 210Pb. Conversely, sediment sources that are not readily exposed to rainfall (e.g. streambanks) contain no appreciable 210Pb. Many sediment source apportionment studies have used 210Pb to quantify the proportion of sediment loads from the two source types, field and non-field. These studies, however, primarily take place in smaller watersheds, where 210Pb that falls directly onto the surface of the water is assumed negligible. Lake Pepin is a riverine lake located in southeastern Minnesota with a 122,000 km2 watershed composed of three major rivers: the Minnesota, the headwaters Mississippi, and the St. Croix. The sediment load in Lake Pepin has increased by an order of magnitude since Euro-American settlement in the region. Most of the sediment (>80%) is transported to Lake Pepin from the highly agricultural Minnesota River basin. Extensive sediment fingerprinting work has been done on Lake Pepin sediments, but a significant source of uncertainty still exists: how much of the 210Pb measured in Lake Pepin was deposited directly onto the surface of the contributing water bodies and did not enter on eroded particles? To answer this important question, we have developed a method to quantify the amount of directly deposited 210Pb. Alpha spectrometry is used to measure 210Po, a daughter product of 210Pb decay. Because 210Po has a short half-life (138 days), it takes approximately one year to reach equilibrium with 210Pb on sediment particles.
    If deposition of 210Pb directly from the atmosphere to the water surface is significant, there will be disequilibrium between the two radioisotopes, and the activity of 210Po will grow toward equilibrium as a saturating exponential function of time following sample collection. The magnitude of this increase is the amount of directly deposited 210Pb. Samples were collected at four locations, corresponding to gauging stations, on the Minnesota/Mississippi River system above Lake Pepin. Three measurements were made on each sample over the course of a year after collection. From those three measurements, the 210Po activity was modeled at time zero and at equilibrium, revealing the amount of 210Pb from direct atmospheric deposition. These estimates were flow-weighted over the course of the year, providing a critical correction to the source apportionment model.
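    The ingrowth described above follows A(t) = A_eq(1 - exp(-λt)) + A_0 exp(-λt), with λ fixed by the 138-day 210Po half-life. Because the model is linear in the two unknown activities, three measurements per sample can be fit by ordinary linear least squares. A minimal sketch of this fit (the function name and the synthetic activity values are illustrative, not data from the study):

```python
import numpy as np

HALF_LIFE_DAYS = 138.0               # 210Po half-life quoted in the abstract
LAM = np.log(2) / HALF_LIFE_DAYS

def fit_ingrowth(t_days, a_po):
    """Fit A(t) = A_eq*(1 - exp(-lam*t)) + A_0*exp(-lam*t) to measured
    210Po activities by linear least squares; the model is linear in the
    two unknown activities A_0 (time zero) and A_eq (equilibrium)."""
    t = np.asarray(t_days, dtype=float)
    g = np.exp(-LAM * t)
    X = np.column_stack([g, 1.0 - g])        # columns multiply A_0 and A_eq
    coef, *_ = np.linalg.lstsq(X, np.asarray(a_po, dtype=float), rcond=None)
    return coef[0], coef[1]                  # (A_0, A_eq)

# Synthetic sample: three measurements over the year after collection
t_meas = np.array([30.0, 180.0, 365.0])
true_a0, true_aeq = 2.0, 5.0                 # illustrative activities
obs = true_aeq * (1 - np.exp(-LAM * t_meas)) + true_a0 * np.exp(-LAM * t_meas)
a0, aeq = fit_ingrowth(t_meas, obs)
print(round(a0, 3), round(aeq, 3), round(aeq - a0, 3))  # recovers 2.0 5.0 3.0
```

    The difference A_eq - A_0 is the unsupported component, i.e. the portion of 210Pb attributable to direct atmospheric deposition onto the water surface.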

  11. An Empirical State Error Covariance Matrix Orbit Determination Example

    NASA Technical Reports Server (NTRS)

    Frisbee, Joseph H., Jr.

    2015-01-01

    State estimation techniques serve effectively to provide mean state estimates. However, the state error covariance matrices these techniques provide inspire limited confidence in their ability to adequately describe the uncertainty in the estimated states. A specific problem with the traditional form of state error covariance matrices is that they represent only a mapping of the assumed observation error characteristics into the state space. Any errors that arise from other sources (environment modeling, precision, etc.) are not directly represented in a traditional, theoretical state error covariance matrix. First, consider that an actual observation contains only measurement error and that an estimated observation contains all other errors, known and unknown. It then follows that a measurement residual (the difference between expected and observed measurements) contains all errors for that measurement. Therefore, a direct and appropriate inclusion of the actual measurement residuals in the state error covariance matrix of the estimate will result in an empirical state error covariance matrix. This empirical state error covariance matrix will fully include all of the errors in the state estimate. The empirical error covariance matrix is determined from a literal reinterpretation of the equations involved in the weighted least squares estimation algorithm. It is a formally correct, empirical state error covariance matrix obtained through use of the average form of the weighted measurement residual variance performance index rather than the usual total weighted residual form. Based on its formulation, this matrix will contain the total uncertainty in the state estimate, regardless of the source of the uncertainty and whether or not that source is anticipated.
    It is expected that the empirical error covariance matrix will give a better statistical representation of the state error in poorly modeled systems or when sensor performance is suspect. In its most straightforward form, the technique only requires supplemental calculations to be added to existing batch estimation algorithms. In the problem studied here, the truth model uses gravity with spherical, J2, and J4 terms plus a standard exponential-type atmosphere with simple diurnal and random walk components. The ability of the empirical state error covariance matrix to account for errors is investigated under four scenarios during orbit estimation: exact modeling under known measurement errors, exact modeling under corrupted measurement errors, inexact modeling under known measurement errors, and inexact modeling under corrupted measurement errors. For this problem a simple analog of a distributed space surveillance network is used. The sensors in this network make only range measurements, with simple normally distributed measurement errors, and are assumed to have full horizon-to-horizon viewing at any azimuth. For definiteness, an orbit at the approximate altitude and inclination of the International Space Station is used for the study. The comparison analyses involve only total vectors; no investigation of specific orbital elements is undertaken. The total vector analyses examine the chi-square values of the error in the difference between the estimated state and the true modeled state using both the empirical and theoretical error covariance matrices for each scenario.
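    The rescaling described in the abstract, replacing the assumed measurement variance with the average weighted residual variance, can be sketched for a batch weighted least squares estimator. This is a toy linear problem, not the orbit determination setup, and all values are illustrative:

```python
import numpy as np

def wls_with_empirical_cov(H, y, w):
    """Batch weighted least squares returning the state estimate, the
    theoretical covariance (H^T W H)^{-1}, and the empirical covariance,
    which rescales it by the average weighted residual variance."""
    W = np.diag(w)
    N = H.T @ W @ H
    x = np.linalg.solve(N, H.T @ W @ y)
    r = y - H @ x
    p_theo = np.linalg.inv(N)
    scale = (r @ W @ r) / len(y)      # average weighted residual variance
    return x, p_theo, scale * p_theo

rng = np.random.default_rng(0)
m = 200
H = np.column_stack([np.ones(m), np.linspace(0.0, 1.0, m)])
sigma_assumed = 0.1
# Actual noise is twice the assumed level: a deliberately mismodeled system
y = H @ np.array([1.0, 2.0]) + rng.normal(0.0, 2 * sigma_assumed, m)
x, p_theo, p_emp = wls_with_empirical_cov(H, y, np.full(m, sigma_assumed**-2))
ratio = p_emp[0, 0] / p_theo[0, 0]
print(round(ratio, 2))   # near 4: the empirical form absorbs the unmodeled error
```

    The theoretical covariance reflects only the assumed weights, so it stays optimistic when the assumed noise level is wrong; the empirical form inflates with the actual residuals, which is the behavior the abstract argues for.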

  12. Software risk management through independent verification and validation

    NASA Technical Reports Server (NTRS)

    Callahan, John R.; Zhou, Tong C.; Wood, Ralph

    1995-01-01

    Software project managers need tools to estimate and track project goals in a continuous fashion before, during, and after development of a system. In addition, they need the ability to compare the current project status with past project profiles to validate management intuition, identify problems, and then direct appropriate resources to the sources of those problems. This paper describes a measurement-based approach to calculating the risk inherent in meeting project goals that leverages past project metrics and existing estimation and tracking models. We introduce the IV&V Goal/Questions/Metrics model, explain its use in the software development life cycle, and describe our attempts to validate the model through the reverse engineering of existing projects.

  13. Sensitivity and systematics of calorimetric neutrino mass experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nucciotti, A.; Cremonesi, O.; Ferri, E.

    2009-12-16

    A large calorimetric neutrino mass experiment using thermal detectors is expected to play a crucial role in the challenge of directly assessing the neutrino mass. We discuss and compare here two approaches for estimating the experimental sensitivity of such an experiment. The first method uses an analytic formulation and readily yields a close estimate over a wide range of experimental configurations. The second method is based on a Monte Carlo technique and is more precise and reliable. The Monte Carlo approach is then exploited to study some sources of systematic uncertainty peculiar to calorimetric experiments. Finally, the tools are applied to investigate the optimal experimental configuration of the MARE project.
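    The contrast between the two approaches can be illustrated on a deliberately simple toy problem (not the calorimetric analysis itself): the spread of a sample mean has a closed-form analytic sensitivity, sigma/sqrt(n), and the same quantity can be obtained by generating many pseudo-experiments:

```python
import numpy as np

rng = np.random.default_rng(1)
sigma, n = 2.0, 400

# Analytic approach: closed-form spread of the estimator
analytic = sigma / np.sqrt(n)

# Monte Carlo approach: run many pseudo-experiments and measure the
# empirical spread of the same estimator
toys = rng.normal(0.0, sigma, size=(20000, n)).mean(axis=1)
mc = toys.std()

print(round(analytic, 4), round(mc, 4))
```

    The Monte Carlo route generalizes to configurations with no closed form, at the cost of computation, which is the trade-off the abstract describes between the two sensitivity estimates.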

  14. Estimation of light source colours for light pollution assessment.

    PubMed

    Ziou, D; Kerouh, F

    2018-05-01

    The concept of the smart city has raised several technological and scientific issues, including light pollution. Light pollution has various negative impacts on the economy, ecology, and health. This paper deals with the census of the colour of light emitted by lamps used in a city environment. To this end, we derive a light bulb colour estimator based on Bayesian reasoning, directional data, and an image formation model in which the usual concept of reflectance is not used. All the choices we made are devoted to designing an algorithm that can run almost in real time. Experimental results show the effectiveness of the proposed approach. Copyright © 2018 Elsevier Ltd. All rights reserved.
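    Why directional data matters here: hue is an angle, so averaging estimated lamp colours must respect the 0/360 degree wrap-around. A minimal sketch of a circular mean (illustrative only; the paper's estimator is a full Bayesian model, not this):

```python
import numpy as np

def circular_mean_hue(hue_deg):
    """Mean of hue angles treated as directional data: average the unit
    vectors, then take the angle of the resultant. A naive arithmetic
    mean fails for angles clustered across the 0/360 boundary."""
    ang = np.deg2rad(np.asarray(hue_deg, dtype=float))
    return np.rad2deg(np.arctan2(np.sin(ang).mean(), np.cos(ang).mean())) % 360

# Hues clustered around red (0 degrees) on both sides of the wrap-around
hues = [350.0, 355.0, 5.0, 10.0]
m = circular_mean_hue(hues)
print(round(m, 6))   # close to 0 (mod 360); the naive mean gives 180
```

    The same wrap-around reasoning motivates modeling angular colour quantities with directional distributions rather than ordinary ones.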

  15. Mineral Commodity Summaries 2008

    USGS Publications Warehouse

    ,

    2008-01-01

    Each chapter of the 2008 edition of the U.S. Geological Survey (USGS) Mineral Commodity Summaries (MCS) includes information on events, trends, and issues for each mineral commodity as well as discussions and tabular presentations on domestic industry structure, Government programs, tariffs, 5-year salient statistics, and world production and resources. The MCS is the earliest comprehensive source of 2007 mineral production data for the world. More than 90 individual minerals and materials are covered by two-page synopses. National reserves and reserve base information for most mineral commodities found in this report, including those for the United States, are derived from a variety of sources. The ideal source of such information would be comprehensive evaluations that apply the same criteria to deposits in different geographic areas and report the results by country. In the absence of such evaluations, national reserves and reserve base estimates compiled by countries for selected mineral commodities are a primary source of national reserves and reserve base information. Lacking national assessment information by governments, sources such as academic articles, company reports, common business practice, presentations by company representatives, and trade journal articles, or a combination of these, serve as the basis for national reserves and reserve base information reported in the mineral commodity sections of this publication. A national estimate may be assembled from the following: historically reported reserves and reserve base information carried for years without alteration because no new information is available; historically reported reserves and reserve base reduced by the amount of historical production; and company reported reserves. International minerals availability studies conducted by the U.S. 
    Bureau of Mines, before 1996, and estimates of identified resources by an international collaborative effort (the International Strategic Minerals Inventory) are the basis for some reserves and reserve base estimates. The USGS collects information about the quantity and quality of mineral resources but does not directly measure reserves, and companies or governments do not directly report reserves or reserve base to the USGS. Reassessment of reserves and reserve base is a continuing process, and the intensity of this process differs among mineral commodities, countries, and time periods. Abbreviations and units of measure, and definitions of selected terms used in the report, are in Appendix A and Appendix B, respectively. A resource/reserve classification for minerals, based on USGS Circular 831 (published with the U.S. Bureau of Mines), appears as Appendix C, and a directory of USGS minerals information country specialists and their responsibilities appears as Appendix D. The USGS continually strives to improve the value of its publications to users. Constructive comments and suggestions by readers of the MCS 2008 are welcomed.

  16. Local reconstruction in computed tomography of diffraction enhanced imaging

    NASA Astrophysics Data System (ADS)

    Huang, Zhi-Feng; Zhang, Li; Kang, Ke-Jun; Chen, Zhi-Qiang; Zhu, Pei-Ping; Yuan, Qing-Xi; Huang, Wan-Xia

    2007-07-01

    Computed tomography of diffraction enhanced imaging (DEI-CT) based on a synchrotron radiation source has extremely high sensitivity to weakly absorbing low-Z samples in medical and biological fields. The authors propose a modified backprojection filtration (BPF)-type algorithm based on PI-line segments to reconstruct a region of interest from truncated refraction-angle projection data in DEI-CT. The distribution of the refractive index decrement in the sample can be directly estimated from the reconstruction images, as verified by experiments at the Beijing Synchrotron Radiation Facility. The algorithm paves the way for local reconstruction of large-size samples using DEI-CT with a small field of view based on a synchrotron radiation source.

  17. [Biotechnology's macroeconomic impact].

    PubMed

    Dones Tacero, Milagros; Pérez García, Julián; San Román, Antonio Pulido

    2008-12-01

    This paper aims to provide an economic valuation of biotechnological activities in terms of aggregate production and employment. The valuation goes beyond direct estimation and includes the indirect effects derived from sectoral linkages between biotechnological activities and the rest of the economic system. To this end, several data sources have been used, including official data from the National Statistical Office (INE) such as national accounts, input-output tables, and innovation surveys, as well as firm-level balance sheets and income statements and specific information about research projects compiled by the Genoma Spain Foundation. The methodological approach is based on the estimation of a new input-output table that includes biotechnological activities as a specific branch. This table yields both the direct impact of these activities and the main parameters needed to obtain the induced effects on the rest of the economic system. According to the most up-to-date available figures, biotechnological activities directly generated almost 1,600 million euros in 2005 and employed more than 9,000 workers. Taking the full linkages with the rest of the system into account, the macroeconomic impact of biotechnological activities would reach around 5,000 million euros in production terms (0.6% of total GDP) and would account, directly or indirectly, for more than 44,000 jobs.
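    The induced-effects calculation described above is the standard Leontief input-output computation: given a matrix A of technical coefficients, the total output needed to satisfy final demand f is x = (I - A)^{-1} f, and the gap between x and the direct requirement is the indirect effect. A sketch with hypothetical sectors (the coefficients are invented, not the Spanish table):

```python
import numpy as np

# Illustrative 3-sector technical-coefficients matrix:
# A[i, j] = input from sector i needed per unit of sector j's output.
A = np.array([[0.10, 0.05, 0.02],
              [0.20, 0.15, 0.10],
              [0.05, 0.10, 0.08]])

final_demand = np.array([100.0, 50.0, 30.0])

# Total output including all induced requirements: x = (I - A)^{-1} f
leontief_inv = np.linalg.inv(np.eye(3) - A)
x_total = leontief_inv @ final_demand

direct = final_demand                # output needed ignoring linkages
indirect = x_total - direct          # extra output induced through the system
print(np.round(x_total, 1), np.round(indirect, 1))
```

    Isolating biotechnology as its own branch of the table, as the paper does, amounts to adding a row and column to A so that these direct and induced effects can be read off for that branch specifically.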

  18. Costs for Breast Cancer Care in the Military Health System: An Analysis by Benefit Type and Care Source.

    PubMed

    Eaglehouse, Yvonne L; Manjelievskaia, Janna; Shao, Stephanie; Brown, Derek; Hofmann, Keith; Richard, Patrick; Shriver, Craig D; Zhu, Kangmin

    2018-04-11

    Breast cancer care imposes a significant financial burden on U.S. healthcare systems. Health services factors, such as insurance benefit type and care source, may affect costs to the health system. Beneficiaries in the U.S. Military Health System (MHS) have universal healthcare coverage and access to a network of military facilities (direct care) and private practices (purchased care). This study aims to quantify and compare breast cancer care costs to the MHS by insurance benefit type and care source. We conducted a retrospective analysis of data linked between the MHS data repository administrative claims and central cancer registry databases. The institutional review boards of the Walter Reed National Military Medical Center, the Defense Health Agency, and the National Institutes of Health Office of Human Subjects Research reviewed and approved the data linkage. We used the linked data to identify records for women aged 40-64 yr who were diagnosed with breast cancer between 2003 and 2007 and to extract information on insurance benefit type, care source, and cost to the MHS for breast cancer treatment. We estimated per capita costs for breast cancer care by benefit type and care source in 2008 USD using generalized linear models, adjusted for demographic, pathologic, and treatment characteristics. The average per capita (n = 2,666) total cost for breast cancer care was $66,300 [standard error (SE) $9,200] over 3.31 (1.48) years of follow-up. Total costs were similar between benefit types but varied by care source: the average per capita cost was $34,500 ($3,000) for direct care (n = 924), $96,800 ($4,800) for purchased care (n = 622), and $60,700 ($3,900) for patients using both care sources (n = 1,120). Care source differences remained by tumor stage and for chemotherapy, radiation, and hormone therapy treatment types. Per capita costs to the MHS for breast cancer care were similar by benefit type and lower for direct care compared with purchased care.
Further research is needed in breast and other tumor sites to determine patterns and determinants of cancer care costs between benefit types and care sources within the MHS.

  19. EEG neural correlates of goal-directed movement intention.

    PubMed

    Pereira, Joana; Ofner, Patrick; Schwarz, Andreas; Sburlea, Andreea Ioana; Müller-Putz, Gernot R

    2017-04-01

    Using low-frequency time-domain electroencephalographic (EEG) signals, we show, for the same type of upper limb movement, that goal-directed movements have different neural correlates than movements without a particular goal. In a reach-and-touch task, we explored the differences in the movement-related cortical potentials (MRCPs) between goal-directed and non-goal-directed movements. We evaluated whether the detection of movement intention was influenced by the goal-directedness of the movement. In a single-trial classification procedure, we found that classification accuracies are enhanced if there is a goal-directed movement in mind. Furthermore, by using the classifier patterns and estimating the corresponding brain sources, we show the importance of motor areas and the additional involvement of the posterior parietal lobule in the discrimination between goal-directed and non-goal-directed movements. We then discuss the potential contribution of our results on goal-directed movements to more reliable brain-computer interface (BCI) control that facilitates recovery in spinal-cord-injured or stroke end-users. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.
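    The single-trial classification step can be sketched generically with a Fisher linear discriminant on synthetic feature vectors. This stands in for, and is not, the authors' EEG pipeline; the feature dimension, class-mean shift, and data are illustrative:

```python
import numpy as np

def fit_lda(X0, X1):
    """Fisher LDA for two classes: direction w = S_w^{-1} (mu1 - mu0),
    with the decision threshold at the midpoint of the projected means."""
    mu0, mu1 = X0.mean(axis=0), X1.mean(axis=0)
    Sw = np.cov(X0.T) + np.cov(X1.T)         # within-class scatter (pooled)
    w = np.linalg.solve(Sw, mu1 - mu0)
    b = -0.5 * w @ (mu0 + mu1)
    return w, b

rng = np.random.default_rng(3)
# Synthetic 'trials': 8 features per trial, small shift between classes
# (loosely analogous to an MRCP amplitude difference between conditions)
X0 = rng.normal(0.0, 1.0, (200, 8))          # e.g. non-goal-directed trials
X1 = rng.normal(0.4, 1.0, (200, 8))          # e.g. goal-directed trials
w, b = fit_lda(X0, X1)
acc = 0.5 * (((X1 @ w + b) > 0).mean() + ((X0 @ w + b) <= 0).mean())
print(round(acc, 2))                          # well above the 0.5 chance level
```

    In the study, a larger class separation for goal-directed movements is what yields the enhanced single-trial accuracies the abstract reports.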

  20. Distant Speech Recognition Using a Microphone Array Network

    NASA Astrophysics Data System (ADS)

    Nakano, Alberto Yoshihiro; Nakagawa, Seiichi; Yamamoto, Kazumasa

    In this work, spatial information consisting of the position and orientation angle of an acoustic source is estimated by an artificial neural network (ANN). The estimated position of a speaker in an enclosed space is used to refine the estimated time delays for a delay-and-sum beamformer, thus enhancing the output signal. The orientation angle, in turn, is used to restrict the lexicon used in the recognition phase, assuming that the speaker faces a particular direction while speaking. To compensate for the effect of the transmission channel within a short frame analysis window, a new cepstral mean normalization (CMN) method based on a Gaussian mixture model (GMM) is investigated; it shows better performance than conventional CMN for short utterances. The performance of the proposed method is evaluated through Japanese digit/command recognition experiments.
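    The delay-and-sum step can be sketched as follows, assuming a known (here, exactly estimated) source position; the array geometry, sample rate, and signal are illustrative, and delays are rounded to whole samples:

```python
import numpy as np

def delay_and_sum(signals, fs, mic_pos, src_pos, c=343.0):
    """Align each microphone signal by its propagation delay from the
    estimated source position (relative to the nearest mic), then average.
    Integer-sample delays only, for simplicity."""
    dists = np.linalg.norm(mic_pos - src_pos, axis=1)
    delays = np.round((dists - dists.min()) * fs / c).astype(int)
    out = np.zeros(signals.shape[1])
    for sig, d in zip(signals, delays):
        out += np.roll(sig, -d)       # advance later arrivals into alignment
    return out / len(signals)

fs, c = 16000, 343.0
mic_pos = np.array([[0.0, 0.0], [0.2, 0.0], [0.4, 0.0]])   # 3-mic line array
src_pos = np.array([2.0, 1.0])
t = np.arange(1024) / fs
s = np.sin(2 * np.pi * 440 * t)
# Simulate per-microphone arrivals delayed according to the geometry
dists = np.linalg.norm(mic_pos - src_pos, axis=1)
delays = np.round((dists - dists.min()) * fs / c).astype(int)
signals = np.stack([np.roll(s, d) for d in delays])
y = delay_and_sum(signals, fs, mic_pos, src_pos)
corr = float(np.corrcoef(y, s)[0, 1])
print(round(corr, 3))   # 1.0: with a correct position, the sum is coherent
```

    An error in the estimated source position shifts the computed delays, the channels add incoherently, and the output degrades; this is why the ANN position estimate is used to refine the delays before summing.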
