Sample records for estimate source parameters

  1. A new software for deformation source optimization, the Bayesian Earthquake Analysis Tool (BEAT)

    NASA Astrophysics Data System (ADS)

    Vasyura-Bathke, H.; Dutta, R.; Jonsson, S.; Mai, P. M.

    2017-12-01

    Modern studies of crustal deformation and the related source estimation, including magmatic and tectonic sources, increasingly use non-linear optimization strategies to estimate geometric and/or kinematic source parameters, often considering geodetic and seismic data jointly. Bayesian inference is increasingly being used for estimating posterior distributions of deformation source model parameters, given measured/estimated/assumed data and model uncertainties. For instance, some studies consider uncertainties of a layered medium and propagate these into source parameter uncertainties, while others use informative priors to reduce the model parameter space. In addition, innovative sampling algorithms have been developed to efficiently explore the high-dimensional parameter spaces. Compared to earlier studies, these improvements have resulted in overall more robust source model parameter estimates that include uncertainties. However, the computational burden of these methods is high and estimation codes are rarely made available along with the published results. Even if the codes are accessible, it is usually challenging to assemble them into a single optimization framework as they are typically coded in different programming languages. Therefore, further progress and future applications of these methods/codes are hampered, while reproducibility and validation of results have become essentially impossible. In the spirit of providing open-access and modular codes to facilitate progress and reproducible research in deformation source estimation, we undertook the effort of developing BEAT, a Python package that comprises all the above-mentioned features in one single programming environment. The package builds on the pyrocko seismological toolbox (www.pyrocko.org) and uses the pymc3 module for Bayesian statistical model fitting. BEAT is an open-source package (https://github.com/hvasbath/beat), and we encourage and solicit contributions to the project. Here, we present our strategy for developing BEAT and show application examples, in particular the effect of including the model prediction uncertainty of the velocity model in subsequent source optimizations: a full moment tensor, a Mogi source, and a moderate strike-slip earthquake.
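
    The abstract names pymc3 but shows no code; the core idea, sampling posterior distributions of deformation source parameters, can be sketched in a few lines. The following is a minimal illustration only, not BEAT's actual interface: it assumes a Mogi point source, synthetic vertical displacements, uniform priors, and a Gaussian likelihood with known data error.

    ```python
    # Minimal sketch (not BEAT's API): Bayesian estimation of Mogi point-source
    # parameters from synthetic vertical-displacement data, using pymc3.
    import numpy as np
    import pymc3 as pm

    nu = 0.25                          # Poisson ratio (assumed)
    r = np.linspace(500.0, 20e3, 40)   # radial distances of observation points (m)

    def mogi_uz(dV, depth, r, nu=0.25):
        """Vertical surface displacement of a Mogi (point pressure) source."""
        return (1.0 - nu) / np.pi * dV * depth / (r**2 + depth**2) ** 1.5

    # Synthetic "observed" data from a hypothetical source: 4 km deep, dV = 1e6 m^3.
    rng = np.random.default_rng(0)
    sigma = 0.002
    d_obs = mogi_uz(1e6, 4e3, r) + rng.normal(0.0, sigma, r.size)

    with pm.Model():
        depth = pm.Uniform("depth", 1e3, 10e3)   # prior on source depth (m)
        dV = pm.Uniform("dV", 1e4, 1e7)          # prior on volume change (m^3)
        uz = (1.0 - nu) / np.pi * dV * depth / (r**2 + depth**2) ** 1.5
        pm.Normal("obs", mu=uz, sigma=sigma, observed=d_obs)  # Gaussian likelihood
        trace = pm.sample(2000, tune=1000, cores=1)           # posterior sampling

    print(pm.summary(trace, var_names=["depth", "dV"]))
    ```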

  2. A new Bayesian Earthquake Analysis Tool (BEAT)

    NASA Astrophysics Data System (ADS)

    Vasyura-Bathke, Hannes; Dutta, Rishabh; Jónsson, Sigurjón; Mai, Martin

    2017-04-01

    Modern earthquake source estimation studies increasingly use non-linear optimization strategies to estimate kinematic rupture parameters, often considering geodetic and seismic data jointly. However, the optimization process is complex and consists of several steps that need to be followed in the earthquake parameter estimation procedure. These include pre-describing or modeling the fault geometry, calculating the Green's functions (often assuming a layered elastic half-space), and estimating the distributed final slip and possibly other kinematic source parameters. Recently, Bayesian inference has become popular for estimating posterior distributions of earthquake source model parameters given measured/estimated/assumed data and model uncertainties. For instance, some research groups consider uncertainties of the layered medium and propagate these to the source parameter uncertainties. Other groups make use of informative priors to reduce the model parameter space. In addition, innovative sampling algorithms have been developed that efficiently explore the often high-dimensional parameter spaces. Compared to earlier studies, these improvements have resulted in overall more robust source model parameter estimates that include uncertainties. However, the computational demands of these methods are high and estimation codes are rarely distributed along with the published results. Even if codes are made available, it is often difficult to assemble them into a single optimization framework as they are typically coded in different programming languages. Therefore, further progress and future applications of these methods/codes are hampered, while reproducibility and validation of results have become essentially impossible. In the spirit of providing open-access and modular codes to facilitate progress and reproducible research in earthquake source estimation, we undertook the effort of producing BEAT, a Python package that comprises all the above-mentioned features in one single programming environment. The package is built on top of the pyrocko seismological toolbox (www.pyrocko.org) and makes use of the pymc3 module for Bayesian statistical model fitting. BEAT is an open-source package (https://github.com/hvasbath/beat) and we encourage and solicit contributions to the project. In this contribution, we present our strategy for developing BEAT, show application examples, and discuss future developments.

  3. Flight parameter estimation using instantaneous frequency and direction of arrival measurements from a single acoustic sensor node.

    PubMed

    Lo, Kam W

    2017-03-01

    When an airborne sound source travels past a stationary ground-based acoustic sensor node in a straight line at constant altitude and constant speed that is not much less than the speed of sound in air, the movement of the source during the propagation of the signal from the source to the sensor node (commonly referred to as the "retardation effect") enables the full set of flight parameters of the source to be estimated by measuring the direction of arrival (DOA) of the signal at the sensor node over a sufficiently long period of time. This paper studies the possibility of using instantaneous frequency (IF) measurements from the sensor node to improve the precision of the flight parameter estimates when the source spectrum contains a harmonic line of constant frequency. A simplified Cramer-Rao lower bound analysis shows that the standard deviations in the estimates of the flight parameters can be reduced when IF measurements are used together with DOA measurements. Two flight parameter estimation algorithms that utilize both IF and DOA measurements are described and their performances are evaluated using both simulated data and real data.
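
    A sketch of the estimation idea described above, under assumed geometry and made-up values: the retardation effect is handled by solving for the emission time with a fixed-point iteration, and the flight parameters (tone frequency f0, speed v, altitude h, closest-approach time tc, all hypothetical names) are recovered from noisy instantaneous-frequency data by nonlinear least squares. The paper's two algorithms differ in detail; this shows only the forward IF model plus a generic fit.

    ```python
    # Sketch (assumed geometry): level, constant-speed flyover of a ground sensor.
    import numpy as np
    from scipy.optimize import least_squares

    C = 340.0  # speed of sound in air (m/s), assumed constant

    def instantaneous_frequency(t, f0, v, h, tc):
        """Received IF at reception times t for a tone f0 emitted by a source
        with ground speed v, altitude h, and closest-approach time tc."""
        tau = t.copy()                        # emission times: fixed-point iteration
        for _ in range(50):
            R = np.hypot(h, v * (tau - tc))   # slant range at emission time
            tau = t - R / C
        R = np.hypot(h, v * (tau - tc))
        rdot = v**2 * (tau - tc) / R          # range rate at emission time
        return f0 / (1.0 + rdot / C)          # Doppler-shifted frequency

    # Synthetic measurements from "true" parameters, plus noise.
    t = np.linspace(0.0, 60.0, 300)
    true = (120.0, 80.0, 400.0, 30.0)         # f0 (Hz), v (m/s), h (m), tc (s)
    rng = np.random.default_rng(1)
    f_meas = instantaneous_frequency(t, *true) + rng.normal(0.0, 0.05, t.size)

    fit = least_squares(
        lambda p: instantaneous_frequency(t, *p) - f_meas,
        x0=(100.0, 60.0, 300.0, 25.0),        # rough initial guess
        bounds=([1.0, 1.0, 10.0, 0.0], [1000.0, 300.0, 5000.0, 60.0]),
    )
    print("estimated f0, v, h, tc:", fit.x)
    ```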

  4. Comparing Parameter Estimation Techniques for an Electrical Power Transformer Oil Temperature Prediction Model

    NASA Technical Reports Server (NTRS)

    Morris, A. Terry

    1999-01-01

    This paper examines various sources of error in MIT's improved top oil temperature rise over ambient temperature model and estimation process. The sources of error are the current parameter estimation technique, quantization noise, and post-processing of the transformer data. Results from this paper will show that an output error parameter estimation technique should be selected to replace the current least squares estimation technique. The output error technique obtained accurate predictions of transformer behavior, revealed the best error covariance, obtained consistent parameter estimates, and provided for valid and sensible parameters. This paper will also show that the output error technique should be used to minimize errors attributed to post-processing (decimation) of the transformer data. Models used in this paper are validated using data from a large transformer in service.
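
    The distinction drawn above between the current least-squares (equation-error) technique and the recommended output-error technique can be sketched on a generic first-order thermal model; this is not MIT's exact top-oil formulation, and the coefficients are made up. Equation-error regression uses the noisy measurement as a regressor and is biased by that noise; output error simulates the model from the parameters and fits the simulated output to the data.

    ```python
    import numpy as np
    from scipy.optimize import least_squares

    rng = np.random.default_rng(2)
    a_true, b_true = 0.95, 0.8                     # assumed "true" model coefficients
    u = (1.0 + 0.3 * np.sin(np.arange(500) / 20.0)) ** 2   # load current squared
    theta = np.zeros(501)
    for k in range(500):                           # theta[k+1] = a*theta[k] + b*u[k]
        theta[k + 1] = a_true * theta[k] + b_true * u[k]
    y = theta + rng.normal(0.0, 2.0, theta.size)   # noisy temperature measurements

    # Equation-error: ordinary least squares on the regression form (noisy regressor).
    A = np.column_stack([y[:-1], u])
    a_ls, b_ls = np.linalg.lstsq(A, y[1:], rcond=None)[0]

    # Output-error: simulate from candidate parameters, fit the simulated output.
    def simulate(p):
        a, b = p
        th = np.zeros(theta.size)
        for k in range(500):
            th[k + 1] = a * th[k] + b * u[k]
        return th

    fit = least_squares(lambda p: simulate(p) - y, x0=(0.9, 1.0))
    print("equation error:", a_ls, b_ls)           # typically biased toward smaller a
    print("output error:  ", *fit.x)               # close to (0.95, 0.8)
    ```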

  5. Parameter Estimation of Multiple Frequency-Hopping Signals with Two Sensors

    PubMed Central

    Pan, Jin; Ma, Boyuan

    2018-01-01

    This paper focuses on parameter estimation for multiple wideband emitting sources with time-varying frequencies, namely two-dimensional (2-D) direction-of-arrival (DOA) estimation and signal sorting, with a low-cost circular synthetic array (CSA) consisting of only two rotating sensors. Our basic idea is to decompose the received data, which is a superposition of phase measurements from multiple sources, into separate groups and to estimate the DOA associated with each source separately. Motivated by joint parameter estimation, we adopt the expectation-maximization (EM) algorithm in this paper; our method involves two steps, namely the expectation step (E-step) and the maximization step (M-step). In the E-step, the correspondence of each signal with its emitting source is found. Then, in the M-step, the maximum-likelihood (ML) estimates of the DOA parameters are obtained. These two steps are executed iteratively and alternately to jointly determine the DOAs and sort multiple signals. Closed-form DOA estimation formulae are developed by ML estimation based on phase data, which also realize an optimal estimation. Directional ambiguity is also addressed by another ML estimation method based on received complex responses. The Cramer-Rao lower bound is derived for understanding the estimation accuracy and for performance comparison. The proposed method is verified with simulations. PMID:29617323
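
    A much-simplified sketch of the E-step/M-step loop described above, assuming plain Gaussian azimuth noise with known spread in place of the paper's two-sensor phase model: interleaved measurements are softly assigned to sources (E-step) while each source's azimuth is re-estimated as a weighted mean (M-step).

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    true_az = np.array([40.0, 95.0, 160.0])               # hypothetical DOAs (deg)
    z = np.concatenate([az + rng.normal(0, 3.0, 200) for az in true_az])
    rng.shuffle(z)                                        # measurements arrive mixed

    K, sigma = 3, 3.0
    mu = np.array([20.0, 90.0, 140.0])                    # initial azimuth guesses
    for _ in range(30):
        # E-step: responsibility of source k for each measurement.
        d2 = (z[:, None] - mu[None, :]) ** 2
        w = np.exp(-0.5 * d2 / sigma**2)
        w /= w.sum(axis=1, keepdims=True)
        # M-step: ML (weighted-mean) update of each source azimuth.
        mu = (w * z[:, None]).sum(axis=0) / w.sum(axis=0)

    print("estimated azimuths:", np.sort(mu))             # ~ [40, 95, 160]
    ```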

  6. Photutils: Photometry tools

    NASA Astrophysics Data System (ADS)

    Bradley, Larry; Sipocz, Brigitta; Robitaille, Thomas; Tollerud, Erik; Deil, Christoph; Vinícius, Zè; Barbary, Kyle; Günther, Hans Moritz; Bostroem, Azalee; Droettboom, Michael; Bray, Erik; Bratholm, Lars Andersen; Pickering, T. E.; Craig, Matt; Pascual, Sergio; Greco, Johnny; Donath, Axel; Kerzendorf, Wolfgang; Littlefair, Stuart; Barentsen, Geert; D'Eugenio, Francesco; Weaver, Benjamin Alan

    2016-09-01

    Photutils provides tools for detecting and performing photometry of astronomical sources. It can estimate the background and background rms in astronomical images, detect sources in astronomical images, estimate morphological parameters of those sources (e.g., centroid and shape parameters), and perform aperture and PSF photometry. Written in Python, it is an affiliated package of Astropy (ascl:1304.002).
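
    A typical workflow sketched from the photutils documentation (exact import paths vary between versions): background statistics, source detection, and aperture photometry on a 2-D image array.

    ```python
    import numpy as np
    from astropy.stats import sigma_clipped_stats
    from photutils import DAOStarFinder, CircularAperture, aperture_photometry

    # Stand-in image: flat noisy background plus one synthetic star.
    rng = np.random.default_rng(4)
    data = rng.normal(100.0, 5.0, (256, 256))
    yy, xx = np.mgrid[0:256, 0:256]
    data += 500.0 * np.exp(-((xx - 100) ** 2 + (yy - 150) ** 2) / (2 * 2.0**2))

    mean, median, std = sigma_clipped_stats(data, sigma=3.0)   # background estimate
    finder = DAOStarFinder(fwhm=3.0, threshold=5.0 * std)      # detect >5-sigma peaks
    sources = finder(data - median)

    if sources is not None:
        positions = np.transpose((sources["xcentroid"], sources["ycentroid"]))
        apertures = CircularAperture(positions, r=4.0)
        table = aperture_photometry(data - median, apertures)  # flux per source
        print(table)
    ```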

  7. Evaluation of deep moonquake source parameters: Implication for fault characteristics and thermal state

    NASA Astrophysics Data System (ADS)

    Kawamura, Taichi; Lognonné, Philippe; Nishikawa, Yasuhiro; Tanaka, Satoshi

    2017-07-01

    While deep moonquakes are seismic events commonly observed on the Moon, their source mechanism is still unexplained. The two main issues are poorly constrained source parameters and incompatibilities between the thermal profiles suggested by many studies and the apparent need for brittle properties at these depths. In this study, we reinvestigated the deep moonquake data to reestimate their source parameters and uncover the characteristics of deep moonquake faults that differ from those on Earth. We first improve the estimation of source parameters through spectral analysis using "new" broadband seismic records made by combining those of the Apollo long- and short-period seismometers. We use the broader frequency band of the combined spectra to estimate corner frequencies and DC levels of the spectra, which are important parameters for constraining the source parameters. We further use the spectral features to estimate seismic moments and stress drops for more than 100 deep moonquake events from three different source regions. This study revealed that deep moonquake faults are extremely smooth compared to terrestrial faults. Second, we reevaluate the brittle-ductile transition temperature that is consistent with the obtained source parameters. We show that the source parameters imply that tidal stress is the main source of the stress glut causing deep moonquakes and that the large strain rate from tides raises the brittle-ductile transition temperature. Higher transition temperatures open a new possibility to construct a thermal model that is consistent with deep moonquake occurrence and pressure conditions and thereby improve our understanding of the deep moonquake source mechanism.
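
    For orientation, the standard relations behind this kind of analysis (a Brune-type omega-square model) convert the two spectral observables named above, the low-frequency level and the corner frequency, into seismic moment and stress drop. The constants below (density, shear velocity, distance, radiation coefficient, k factor) are generic assumptions, not the paper's calibration.

    ```python
    import numpy as np

    def source_parameters(omega0, fc, rho=3300.0, beta=4500.0, R=1000e3,
                          radiation=0.63, k=0.37):
        """Seismic moment and stress drop from the low-frequency displacement
        spectral level omega0 (m*s) and corner frequency fc (Hz)."""
        M0 = 4.0 * np.pi * rho * beta**3 * R * omega0 / radiation  # moment (N*m)
        r = k * beta / fc                                          # source radius (m)
        stress_drop = 7.0 * M0 / (16.0 * r**3)                     # Brune model (Pa)
        return M0, stress_drop

    M0, dsigma = source_parameters(omega0=1e-9, fc=2.0)            # made-up values
    print(f"M0 = {M0:.2e} N*m, stress drop = {dsigma / 1e6:.4f} MPa")
    ```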

  8. Earthquake source parameters determined by the SAFOD Pilot Hole seismic array

    USGS Publications Warehouse

    Imanishi, K.; Ellsworth, W.L.; Prejean, S.G.

    2004-01-01

    We estimate the source parameters of #3 microearthquakes by jointly analyzing seismograms recorded by the 32-level, 3-component seismic array installed in the SAFOD Pilot Hole. We applied an inversion procedure to the displacement amplitude spectra to estimate the spectral parameters of the omega-square model (spectral level and corner frequency) together with Q. Because we expect the spectral parameters and Q to vary slowly with depth in the well, we impose a smoothness constraint on those parameters as a function of depth using a linear first-difference operator. This method correctly resolves corner frequency and Q, which leads to a more accurate estimation of source parameters than can be obtained from single sensors. The stress drop of one example of the SAFOD target repeating earthquake falls in the range of typical tectonic earthquakes. Copyright 2004 by the American Geophysical Union.
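
    The smoothness constraint described above amounts to damped least squares: parameters that vary with depth are tied together by a first-difference operator D, and the stacked system [G; lam*D] m = [d; 0] is solved in one shot. G, d, and lam below are toy stand-ins, not the paper's spectral system.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    n = 32                                   # e.g., one Q value per array level
    m_true = 200.0 + 50.0 * np.sin(np.linspace(0, np.pi, n))   # smooth profile
    G = np.eye(n)                            # toy forward operator
    d = G @ m_true + rng.normal(0.0, 20.0, n)

    D = np.diff(np.eye(n), axis=0)           # (n-1) x n first-difference operator
    lam = 5.0                                # smoothing weight

    A = np.vstack([G, lam * D])
    b = np.concatenate([d, np.zeros(n - 1)])
    m_damped = np.linalg.lstsq(A, b, rcond=None)[0]
    m_plain = np.linalg.lstsq(G, d, rcond=None)[0]
    print("rms error, damped:  ", np.sqrt(np.mean((m_damped - m_true) ** 2)))
    print("rms error, undamped:", np.sqrt(np.mean((m_plain - m_true) ** 2)))
    ```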

  9. REVIEW OF INDOOR EMISSION SOURCE MODELS: PART 2. PARAMETER ESTIMATION

    EPA Science Inventory

    This review consists of two sections. Part 1 provides an overview of 46 indoor emission source models. Part 2 (this paper) focuses on parameter estimation, a topic that is critical to modelers but has never been systematically discussed. A perfectly valid model may not be a usefu...

  10. Volcano deformation source parameters estimated from InSAR: Sensitivities to uncertainties in seismic tomography

    USGS Publications Warehouse

    Masterlark, Timothy; Donovan, Theodore; Feigl, Kurt L.; Haney, Matt; Thurber, Clifford H.; Tung, Sui

    2016-01-01

    The eruption cycle of a volcano is controlled in part by the upward migration of magma. The characteristics of the magma flux produce a deformation signature at the Earth's surface. Inverse analyses use geodetic data to estimate strategic controlling parameters that describe the position and pressurization of a magma chamber at depth. The specific distribution of material properties controls how observed surface deformation translates to source parameter estimates. Seismic tomography models describe the spatial distributions of material properties that are necessary for accurate models of volcano deformation. This study investigates how uncertainties in seismic tomography models propagate into variations in the estimates of volcano deformation source parameters inverted from geodetic data. We conduct finite element model-based nonlinear inverse analyses of interferometric synthetic aperture radar (InSAR) data for Okmok volcano, Alaska, as an example. We then analyze the estimated parameters and their uncertainties to characterize the magma chamber. Analyses are performed separately for models simulating a pressurized chamber embedded in a homogeneous domain as well as for a domain having a heterogeneous distribution of material properties according to seismic tomography. The estimated depth of the source is sensitive to the distribution of material properties. The estimated depths for the homogeneous and heterogeneous domains are 2666 ± 42 and 3527 ± 56 m below mean sea level, respectively (99% confidence). A Monte Carlo analysis indicates that uncertainties of the seismic tomography cannot account for this discrepancy at the 99% confidence level. Accounting for the spatial distribution of elastic properties according to seismic tomography significantly improves the fit of the deformation model predictions and significantly influences estimates for parameters that describe the location of a pressurized magma chamber.

  11. Application of the Approximate Bayesian Computation methods in the stochastic estimation of atmospheric contamination parameters for mobile sources

    NASA Astrophysics Data System (ADS)

    Kopka, Piotr; Wawrzynczak, Anna; Borysiewicz, Mieczyslaw

    2016-11-01

    In this paper the Bayesian methodology known as Approximate Bayesian Computation (ABC) is applied to the problem of atmospheric contamination source identification. The algorithm input data are the concentrations of the released substance arriving on-line from the distributed sensor network. This paper presents the Sequential ABC algorithm in detail and tests its efficiency in estimating the probability distributions of the atmospheric release parameters of a mobile contamination source. The developed algorithms are tested using data from the Over-Land Atmospheric Diffusion (OLAD) field tracer experiment. The paper demonstrates estimation of seven parameters characterizing the contamination source, i.e., the source starting position (x, y), the direction of motion of the source (d), its velocity (v), the release rate (q), the start time of the release (ts) and its duration (td). Newly arriving concentrations dynamically update the probability distributions of the search parameters. The atmospheric dispersion Second-order Closure Integrated PUFF (SCIPUFF) model is used as the forward model to predict the concentrations at the sensor locations.
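
    The ABC idea is easy to sketch: draw parameters from priors, run the forward model, and keep the draws whose predictions land close to the observations. Below is a plain rejection-ABC toy with a steady Gaussian plume standing in for SCIPUFF and a stationary (not mobile) source, so only (q, x0, y0) are estimated; all values are made up.

    ```python
    import numpy as np

    rng = np.random.default_rng(6)
    sensors = rng.uniform(0.0, 2000.0, size=(20, 2))        # sensor (x, y) grid (m)

    def plume(q, x0, y0, u=3.0):
        """Toy steady Gaussian plume, wind along +x (stand-in for SCIPUFF)."""
        dx = sensors[:, 0] - x0
        dy = sensors[:, 1] - y0
        with np.errstate(invalid="ignore", divide="ignore"):
            sig = 0.08 * np.where(dx > 1.0, dx, np.nan)     # growth downwind only
            c = q / (2 * np.pi * u * sig**2) * np.exp(-0.5 * (dy / sig) ** 2)
        return np.nan_to_num(c)

    obs = plume(5.0, 300.0, 900.0)                          # "measured" concentrations

    lo = np.array([0.0, 0.0, 0.0])                          # uniform priors
    hi = np.array([10.0, 2000.0, 2000.0])
    samples = rng.uniform(lo, hi, size=(100_000, 3))
    dist = np.array([np.linalg.norm(plume(*s) - obs) for s in samples])
    eps = np.quantile(dist, 0.001)                          # adaptive tolerance
    post = samples[dist <= eps]                             # approximate posterior
    print("posterior mean (q, x0, y0):", post.mean(axis=0))
    ```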

  12. MHODE: a local-homogeneity theory for improved source-parameter estimation of potential fields

    NASA Astrophysics Data System (ADS)

    Fedi, Maurizio; Florio, Giovanni; Paoletti, Valeria

    2015-08-01

    We describe a multihomogeneity theory for source-parameter estimation of potential fields. Similar to what happens for random source models, where the monofractal scaling law has been generalized into a multifractal law, we propose to generalize the homogeneity law into a multihomogeneity law. This allows a theoretically correct approach to studying real-world potential fields, which are inhomogeneous and so do not show scale invariance, except in the asymptotic regions (very near to or very far from their sources). Since the scaling properties of inhomogeneous fields change with the scale of observation, we show that they may be better studied at a set of scales than at a single scale, and that a multihomogeneous model is needed to explain their complex scaling behaviour. To perform this task, we first introduce fractional-degree homogeneous fields and show that: (i) homogeneous potential fields may have fractional or integer degree; (ii) the source distributions for a fractional degree are not confined to a bounded region, similarly to some integer-degree models such as the infinite line mass; and (iii) differently from the integer-degree case, fractional-degree source distributions are no longer uniform density functions. Using this enlarged set of homogeneous fields, real-world anomaly fields are studied at different scales by a simple search, in any local window W, for the best homogeneous field of either integer or fractional degree, yielding a multiscale set of local homogeneity degrees and depth estimates that we call a multihomogeneous model. This defines a new technique of source-parameter estimation (Multi-HOmogeneity Depth Estimation, MHODE) that permits retrieval of the source parameters of complex sources. We test the method with inhomogeneous fields of finite sources, such as faults or cylinders, and show its effectiveness in a real-case example as well. These applications show the usefulness of the new concepts, multihomogeneity and fractional homogeneity degree, for obtaining valid estimates of the source parameters in a consistent theoretical framework, thus overcoming the limitations imposed by global homogeneity on widespread methods such as Euler deconvolution.
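
    MHODE generalizes the global-homogeneity assumption behind methods such as Euler deconvolution. For orientation, here is standard Euler deconvolution on a synthetic point-mass profile (a degree -2 field, structural index N = 2), with made-up geometry: the field of a source at (x0, z0) satisfies (x - x0) df/dx + (z - z0) df/dz = -N f, which is linear in (x0, z0).

    ```python
    import numpy as np

    x = np.linspace(-500.0, 500.0, 201)
    x0_true, z0_true = 40.0, 120.0            # hypothetical source position, depth

    def gz(xo, zo):
        """Vertical attraction of a point mass (degree -2 homogeneous field)."""
        return (z0_true - zo) / ((xo - x0_true) ** 2 + (z0_true - zo) ** 2) ** 1.5

    f = gz(x, 0.0)                            # field observed on the surface z = 0
    dfdx = np.gradient(f, x)                  # horizontal derivative
    dfdz = (gz(x, 1.0) - gz(x, -1.0)) / 2.0   # vertical derivative (central diff.)

    # At z = 0, Euler's equation rearranges to x0*df/dx + z0*df/dz = x*df/dx + N*f,
    # solved by least squares inside a window around the anomaly.
    N = 2.0
    center = x[np.argmax(np.abs(f))]          # window centered on the anomaly peak
    win = np.abs(x - center) < 200.0
    A = np.column_stack([dfdx[win], dfdz[win]])
    b = x[win] * dfdx[win] + N * f[win]
    x0_est, z0_est = np.linalg.lstsq(A, b, rcond=None)[0]
    print("estimated x0, z0:", x0_est, z0_est)   # ~ (40, 120)
    ```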

  13. Assessment of source-specific health effects associated with an unknown number of major sources of multiple air pollutants: a unified Bayesian approach.

    PubMed

    Park, Eun Sug; Hopke, Philip K; Oh, Man-Suk; Symanski, Elaine; Han, Daikwon; Spiegelman, Clifford H

    2014-07-01

    There has been increasing interest in assessing health effects associated with multiple air pollutants emitted by specific sources. A major difficulty with achieving this goal is that the pollution source profiles are unknown and source-specific exposures cannot be measured directly; rather, they need to be estimated by decomposing ambient measurements of multiple air pollutants. This estimation process, called multivariate receptor modeling, is challenging because of the unknown number of sources and unknown identifiability conditions (model uncertainty). The uncertainty in source-specific exposures (source contributions) as well as uncertainty in the number of major pollution sources and identifiability conditions have been largely ignored in previous studies. A multipollutant approach that can deal with model uncertainty in multivariate receptor models while simultaneously accounting for parameter uncertainty in estimated source-specific exposures in the assessment of source-specific health effects is presented in this paper. The methods are applied to daily ambient air measurements of the chemical composition of fine particulate matter (PM2.5), weather data, and counts of cardiovascular deaths from 1995 to 1997 for Phoenix, AZ, USA. Our approach for evaluating source-specific health effects yields not only estimates of source contributions along with their uncertainties and associated health effects estimates but also estimates of model uncertainty (posterior model probabilities) that have been ignored in previous studies. The results from our methods agreed in general with those from the previously conducted workshop/studies on the source apportionment of PM health effects in terms of the number of major contributing sources, estimated source profiles, and contributions. However, some of the adverse source-specific health effects identified in the previous studies were not statistically significant in our analysis, probably because we incorporated into the estimation of the health effects parameters the uncertainty in estimated source contributions, which had been ignored in the previous studies.

  14. Estimation and applicability of attenuation characteristics for source parameters and scaling relations in the Garhwal Kumaun Himalaya region, India

    NASA Astrophysics Data System (ADS)

    Singh, Rakesh; Paul, Ajay; Kumar, Arjun; Kumar, Parveen; Sundriyal, Y. P.

    2018-06-01

    Source parameters of small to moderate earthquakes are significant for understanding the dynamic rupture process and the scaling relations of earthquakes, and for assessing the seismic hazard potential of a region. In this study, source parameters were determined for 58 small to moderate earthquakes (3.0 ≤ Mw ≤ 5.0) that occurred during 2007-2015 in the Garhwal-Kumaun region. The estimated shear wave quality factor (Qβ(f)) values for each station at different frequencies have been applied to eliminate any bias in the determination of source parameters. The Qβ(f) values have been estimated using the coda wave normalization method in the frequency range 1.5-16 Hz. A frequency-dependent S-wave quality factor relation, Qβ(f) = (152.9 ± 7) f^(0.82±0.005), is obtained by fitting a power-law frequency-dependence model to the estimated values over the whole study region. The spectral parameters (low-frequency spectral level and corner frequency) and source parameters (static stress drop, seismic moment, apparent stress and radiated energy) are obtained assuming an ω^-2 source model. The displacement spectra are corrected for the estimated frequency-dependent attenuation and for site effects using the spectral decay parameter kappa. The finite-bandwidth effect was addressed by quantifying the bias it introduces in the corner frequency, stress drop and radiated energy estimates. The data show that the region hosts shallow-focus earthquakes with low stress drops. Estimation of the Zúñiga parameter (ε) suggests a partial stress drop mechanism in the region. The observed low stress drops and apparent stresses can be explained by a partial stress drop and a low effective stress model. The presence of subsurface fluids at seismogenic depth evidently influences the dynamics of the region. However, the limited event selection may strongly bias the scaling relation, even after taking every possible precaution regarding the effects of finite bandwidth, attenuation and site corrections. The scaling could be improved further by integrating a large dataset of microearthquakes and using a stable and robust approach.
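
    Fitting the power-law relation Qβ(f) = Q0 f^η quoted above is a straight line in log-log coordinates. The values below are synthetic, generated to mimic the reported relation, not the paper's data.

    ```python
    import numpy as np

    f = np.array([1.5, 3.0, 6.0, 9.0, 12.0, 16.0])   # frequency bands (Hz)
    Q = 152.9 * f**0.82 * np.exp(np.random.default_rng(7).normal(0, 0.05, f.size))

    eta, log_q0 = np.polyfit(np.log(f), np.log(Q), 1)  # linear fit in log-log
    print(f"Q(f) = {np.exp(log_q0):.1f} * f^{eta:.2f}")  # ~ 152.9 * f^0.82
    ```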

  15. Estimating and Testing the Sources of Evoked Potentials in the Brain.

    ERIC Educational Resources Information Center

    Huizenga, Hilde M.; Molenaar, Peter C. M.

    1994-01-01

    The source of an event-related brain potential (ERP) is estimated from multivariate measures of ERP on the head under several mathematical and physical constraints on the parameters of the source model. Statistical aspects of estimation are discussed, and new tests are proposed. (SLD)

  16. A probabilistic approach for the estimation of earthquake source parameters from spectral inversion

    NASA Astrophysics Data System (ADS)

    Supino, M.; Festa, G.; Zollo, A.

    2017-12-01

    The amplitude spectrum of a seismic signal related to an earthquake source carries information about the size of the rupture, and about the moment, stress and energy release. Furthermore, it can be used to characterize the Green's function of the medium crossed by the seismic waves. We describe the earthquake amplitude spectrum assuming a generalized Brune (1970) source model, and direct P- and S-waves propagating in a layered velocity model characterized by a frequency-independent Q attenuation factor. The observed displacement spectrum then depends on three source parameters: the seismic moment (through the low-frequency spectral level), the corner frequency (a proxy for the fault length) and the high-frequency decay parameter. These parameters are strongly correlated with each other and with the quality factor Q; a rigorous estimation of the associated uncertainties and parameter resolution is thus needed to obtain reliable estimates. In this work, the uncertainties are characterized by adopting a probabilistic approach to the parameter estimation. Assuming an L2-norm based misfit function, we perform a global exploration of the parameter space to find the absolute minimum of the cost function, and we then explore the associated joint a-posteriori probability density function around that minimum to extract the correlation matrix of the parameters. The global exploration relies on building a Markov chain in the parameter space and on combining deterministic minimization with a random exploration of the space (basin-hopping technique). The joint pdf is built from the misfit function using the maximum likelihood principle and assuming a Gaussian-like distribution of the parameters. It is then computed on a grid centered at the global minimum of the cost function. The numerical integration of the pdf finally provides the mean, variance and correlation matrix associated with the set of best-fit parameters describing the model. Synthetic tests are performed to investigate the robustness of the method and the uncertainty propagation from the data space to the parameter space. Finally, the method is applied to characterize the source parameters of the earthquakes occurring during the 2016-2017 Central Italy sequence, with the goal of investigating the source parameter scaling with magnitude.
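
    The global-exploration step can be sketched with scipy's basin-hopping, which alternates deterministic minimization with random jumps, exactly the combination named above. For brevity a fixed omega-square falloff is assumed, and attenuation is folded into a single whole-path parameter t* = T/Q; all numbers are synthetic.

    ```python
    import numpy as np
    from scipy.optimize import basinhopping

    f = np.logspace(-0.5, 1.5, 80)                       # ~0.3-30 Hz

    def log_spectrum(p, f):
        """Log displacement spectrum: Brune source times whole-path attenuation."""
        log_omega0, fc, tstar = p
        return log_omega0 - np.log(1.0 + (f / fc) ** 2) - np.pi * f * tstar

    rng = np.random.default_rng(8)
    obs = log_spectrum((-18.0, 3.0, 0.02), f) + rng.normal(0.0, 0.1, f.size)

    cost = lambda p: np.sum((log_spectrum(p, f) - obs) ** 2)   # L2-norm misfit
    result = basinhopping(cost, x0=[-15.0, 1.0, 0.05], niter=100, seed=9)
    print("log Omega0, fc, t*:", result.x)                     # ~ (-18, 3, 0.02)
    ```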

  17. Multi-scale comparison of source parameter estimation using empirical Green's function approach

    NASA Astrophysics Data System (ADS)

    Chen, X.; Cheng, Y.

    2015-12-01

    Analysis of earthquake source parameters requires correction for path effects, site responses, and instrument responses. The empirical Green's function (EGF) method is one of the most effective methods for removing path effects and station responses by taking the spectral ratio between a larger and a smaller event. The traditional EGF method requires identifying suitable event pairs and analyzing each event individually. This allows high-quality estimates for strictly selected events; however, the quantity of resolvable source parameters is limited, which challenges the interpretation of spatial-temporal coherency. On the other hand, methods that exploit the redundancy of event-station pairs have been proposed, which utilize stacking to obtain systematic source parameter estimates for a large quantity of events at the same time. This allows us to examine large quantities of events systematically, facilitating analysis of spatial-temporal patterns and scaling relationships. However, it is unclear how much resolution is sacrificed in this process. In addition to the empirical Green's function calculation, the choice of model parameters and fitting methods also leads to biases. Here, using two regionally focused arrays, the OBS array in the Mendocino region and the borehole array in the Salton Sea geothermal field, we compare the results from large-scale stacking analysis, small-scale cluster analysis, and single event-pair analysis with different fitting methods within completely different tectonic environments, in order to quantify the consistency and inconsistency in source parameter estimates and the associated problems.

  18. Acoustic source localization in mixed field using spherical microphone arrays

    NASA Astrophysics Data System (ADS)

    Huang, Qinghua; Wang, Tong

    2014-12-01

    Spherical microphone arrays have recently been used for source localization in three-dimensional space. In this paper, a two-stage algorithm is developed to localize mixed far-field and near-field acoustic sources in a free-field environment. In the first stage, an array signal model is constructed in the spherical harmonics domain. The recurrent relation of spherical harmonics is independent of the far-field and near-field mode strengths. Therefore, it is used to develop a spherical estimation of signal parameters via rotational invariance techniques (ESPRIT)-like approach to estimate directions of arrival (DOAs) for both far-field and near-field sources. In the second stage, based on the estimated DOAs, a simple one-dimensional MUSIC spectrum is exploited to distinguish far-field from near-field sources and to estimate the ranges of the near-field sources. The proposed algorithm avoids multidimensional search and parameter pairing. Simulation results demonstrate good performance in localizing far-field, near-field, and mixed-field sources.
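
    A compact illustration of the 1-D MUSIC scan used in the second stage, reduced to a uniform linear array for clarity (the paper works in the spherical harmonics domain): eigendecompose the sample covariance, take the noise subspace, and scan a one-dimensional spectrum. Array size, spacing, and angles are assumptions.

    ```python
    import numpy as np
    from scipy.signal import find_peaks

    M, d, K = 8, 0.5, 2                        # sensors, spacing (wavelengths), sources
    angles = np.deg2rad([20.0, 55.0])          # hypothetical true DOAs
    rng = np.random.default_rng(10)

    steer = lambda th: np.exp(2j * np.pi * d * np.arange(M)[:, None] * np.sin(th))
    A = steer(angles)                          # M x K steering matrix
    S = rng.normal(size=(K, 500)) + 1j * rng.normal(size=(K, 500))
    X = A @ S + 0.1 * (rng.normal(size=(M, 500)) + 1j * rng.normal(size=(M, 500)))

    R = X @ X.conj().T / X.shape[1]            # sample covariance
    w, V = np.linalg.eigh(R)
    En = V[:, : M - K]                         # noise subspace (smallest eigenvalues)

    scan = np.deg2rad(np.linspace(-90.0, 90.0, 721))
    P = 1.0 / np.linalg.norm(En.conj().T @ steer(scan), axis=0) ** 2
    peaks, _ = find_peaks(P)
    top = peaks[np.argsort(P[peaks])[-K:]]
    print("estimated DOAs (deg):", np.sort(np.rad2deg(scan[top])))   # ~ [20, 55]
    ```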

  19. Automated source term and wind parameter estimation for atmospheric transport and dispersion applications

    NASA Astrophysics Data System (ADS)

    Bieringer, Paul E.; Rodriguez, Luna M.; Vandenberghe, Francois; Hurst, Jonathan G.; Bieberbach, George; Sykes, Ian; Hannan, John R.; Zaragoza, Jake; Fry, Richard N.

    2015-12-01

    Accurate simulations of the atmospheric transport and dispersion (AT&D) of hazardous airborne materials rely heavily on the source term parameters necessary to characterize the initial release and the meteorological conditions that drive the downwind dispersion. In many cases the source parameters are not known and are consequently based on rudimentary assumptions. This is particularly true of accidental releases and the intentional releases associated with terrorist incidents. When available, meteorological observations are often not representative of the conditions at the location of the release, and the use of these non-representative meteorological conditions can result in significant errors in the hazard assessments downwind of the sensors, even when the other source parameters are accurately characterized. Here, we describe a computationally efficient methodology to characterize both the release source parameters and the low-level winds (e.g., winds near the surface) required to produce a refined downwind hazard. This methodology, known as the Variational Iterative Refinement Source Term Estimation (STE) Algorithm (VIRSA), consists of a combination of modeling systems. These systems include a back-trajectory based source inversion method, a forward Gaussian puff dispersion model, and a variational refinement algorithm that uses both a simple forward AT&D model that is a surrogate for the more complex Gaussian puff model and a formal adjoint of this surrogate model. The back-trajectory based method is used to calculate a "first guess" source estimate based on the available observations of the airborne contaminant plume and atmospheric conditions. The variational refinement algorithm is then used to iteratively refine the first-guess STE parameters and meteorological variables. The algorithm has been evaluated across a wide range of scenarios of varying complexity. It has been shown to improve the source location estimate by several hundred percent (normalized by the distance from the source to the closest sampler) and to improve mass estimates by several orders of magnitude. Furthermore, it is also able to operate in scenarios with inconsistencies between the wind and airborne contaminant sensor observations, adjusting the wind to provide a better match between the hazard prediction and the observations.

  20. The Exponent of High-frequency Source Spectral Falloff and Contribution to Source Parameter Estimates

    NASA Astrophysics Data System (ADS)

    Kiuchi, R.; Mori, J. J.

    2015-12-01

    As a way to understand the characteristics of the earthquake source, studies of source parameters (such as radiated energy and stress drop) and their scaling are important. In order to estimate source parameters reliably, we must often use appropriate source spectrum models, of which the omega-square model is the most frequently used. In this model, the spectrum is flat at lower frequencies and the falloff is proportional to the angular frequency squared. However, some studies (e.g. Allmann and Shearer, 2009; Yagi et al., 2012) reported that the exponent of the high-frequency falloff is other than -2. Therefore, in this study we estimate the source parameters using a spectral model in which the falloff exponent is not fixed. We analyze the mainshock and larger aftershocks of the 2008 Iwate-Miyagi Nairiku earthquake. First, we calculate the P-wave and SH-wave spectra using empirical Green's functions (EGF) to remove path effects (such as attenuation) and site effects. For the EGF event, we select a smaller earthquake that is highly correlated with the target event. To obtain stable results, we calculate the spectral ratios using a multitaper spectrum analysis (Prieto et al., 2009) and take the geometric mean over multiple stations. Finally, using the obtained spectral ratios, we perform a grid search to determine the high-frequency falloff exponents as well as the corner frequencies of both events. Our results indicate that the high-frequency falloff exponent is often less than 2.0. We do not observe any dependence of the falloff exponent on region, focal mechanism, or depth. In addition, our estimated corner frequencies and falloff exponents are consistent between the P-wave and SH-wave analyses. In our presentation, we show differences in the source parameters estimated using a fixed omega-square model and a model allowing variable high-frequency falloff.
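
    A sketch of the grid search described above, assuming an omega-n spectral-ratio model for a target/EGF pair (synthetic values throughout): the ratio shape is R(f) = (M01/M02) * (1 + (f/fc2)^n) / (1 + (f/fc1)^n), and the corner frequencies fc1, fc2 and the falloff exponent n are scanned jointly while the moment ratio is absorbed analytically.

    ```python
    import numpy as np

    f = np.logspace(-1.0, 1.5, 100)            # 0.1 - ~30 Hz

    def ratio_shape(fc1, fc2, n):
        """Spectral-ratio shape for omega-n spectra of a target/EGF pair."""
        return (1.0 + (f / fc2) ** n) / (1.0 + (f / fc1) ** n)

    rng = np.random.default_rng(11)
    obs = 1000.0 * ratio_shape(0.8, 8.0, 1.7) * np.exp(rng.normal(0, 0.05, f.size))

    best, best_cost = None, np.inf
    for fc1 in np.logspace(-1.0, 1.0, 40):       # target corner frequency
        for fc2 in np.logspace(0.0, 1.3, 40):    # EGF corner frequency
            for n in np.arange(1.0, 3.01, 0.1):  # high-frequency falloff exponent
                resid = np.log(obs) - np.log(ratio_shape(fc1, fc2, n))
                resid -= resid.mean()            # moment ratio absorbed analytically
                cost = np.sum(resid**2)
                if cost < best_cost:
                    best, best_cost = (fc1, fc2, n), cost
    print("fc1, fc2, n:", best)                  # ~ (0.8, 8.0, 1.7)
    ```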

  1. Evaluation for relationship among source parameters of underground nuclear tests in Northern Korean Peninsula

    NASA Astrophysics Data System (ADS)

    Kim, G.; Che, I. Y.

    2017-12-01

    We evaluated the relationships among source parameters of underground nuclear tests in the northern Korean Peninsula using regional seismic data. Dense global and regional seismic networks were incorporated to measure locations and origin times precisely. Location analyses show that the distances among the locations are tiny on a regional scale. These tiny location differences validate a linear model assumption. We estimated source spectral ratios by excluding path effects based on spectral ratios of the observed seismograms. We then estimated empirical relationships among depths of burial and yields based on theoretical source models.

  2. Bayesian estimation of source parameters and associated Coulomb failure stress changes for the 2005 Fukuoka (Japan) Earthquake

    NASA Astrophysics Data System (ADS)

    Dutta, Rishabh; Jónsson, Sigurjón; Wang, Teng; Vasyura-Bathke, Hannes

    2018-04-01

    Several researchers have studied the source parameters of the 2005 Fukuoka (northwestern Kyushu Island, Japan) earthquake (Mw 6.6) using teleseismic, strong motion and geodetic data. However, in all previous studies, errors of the estimated fault solutions have been neglected, making it impossible to assess the reliability of the reported solutions. We use Bayesian inference to estimate the location, geometry and slip parameters of the fault and their uncertainties using Interferometric Synthetic Aperture Radar and Global Positioning System data. The offshore location of the earthquake makes the fault parameter estimation challenging, with geodetic data coverage mostly to the southeast of the earthquake. To constrain the fault parameters, we use a priori constraints on the magnitude of the earthquake and the location of the fault with respect to the aftershock distribution and find that the estimated fault slip ranges from 1.5 to 2.5 m with decreasing probability. The marginal distributions of the source parameters show that the location of the western end of the fault is poorly constrained by the data whereas that of the eastern end, located closer to the shore, is better resolved. We propagate the uncertainties of the fault model and calculate the variability of Coulomb failure stress changes for the nearby Kego fault, located directly below Fukuoka city, showing that the main shock increased stress on the fault and brought it closer to failure.
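
    The Coulomb failure stress change referred to above combines the shear stress change resolved in the receiver fault's slip direction with the normal stress change through an effective friction coefficient, dCFS = d_tau + mu_eff * d_sigma_n (unclamping positive). A one-line helper follows; the stress changes themselves come from a dislocation model and are not computed here, and mu_eff = 0.4 is a common assumption rather than this paper's value.

    ```python
    def coulomb_stress_change(d_tau, d_sigma_n, mu_eff=0.4):
        """d_tau: shear stress change in the slip direction (Pa);
        d_sigma_n: normal stress change, tension positive (Pa)."""
        return d_tau + mu_eff * d_sigma_n

    print(coulomb_stress_change(0.05e6, 0.02e6))  # Pa; positive -> closer to failure
    ```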

  3. Approaches in highly parameterized inversion - PEST++, a Parameter ESTimation code optimized for large environmental models

    USGS Publications Warehouse

    Welter, David E.; Doherty, John E.; Hunt, Randall J.; Muffels, Christopher T.; Tonkin, Matthew J.; Schreuder, Willem A.

    2012-01-01

    An object-oriented parameter estimation code was developed to incorporate benefits of object-oriented programming techniques for solving large parameter estimation modeling problems. The code is written in C++ and is a formulation and expansion of the algorithms included in PEST, a widely used parameter estimation code written in Fortran. The new code is called PEST++ and is designed to lower the barriers of entry for users and developers while providing efficient algorithms that can accommodate large, highly parameterized problems. This effort has focused on (1) implementing the most popular features of PEST in a fashion that is easy for novice or experienced modelers to use and (2) creating a software design that is easy to extend; that is, this effort provides a documented object-oriented framework designed from the ground up to be modular and extensible. In addition, all PEST++ source code and its associated libraries, as well as the general run manager source code, have been integrated in the Microsoft Visual Studio® 2010 integrated development environment. The PEST++ code is designed to provide a foundation for an open-source development environment capable of producing robust and efficient parameter estimation tools for the environmental modeling community into the future.

  4. Determination of Destress Blasting Effectiveness Using Seismic Source Parameters

    NASA Astrophysics Data System (ADS)

    Wojtecki, Łukasz; Mendecki, Maciej J.; Zuberek, Wacław M.

    2017-12-01

    Underground mining of coal seams in the Upper Silesian Coal Basin is currently performed under difficult geological and mining conditions. The mining depth, dislocations (faults and folds) and mining remnants are chiefly responsible for the rockburst hazard. This hazard can be minimized by active rockburst prevention, in which destress blastings play an important role. Destress blastings in coal seams aim to relieve local stress concentrations. These blastings are usually performed from the longwall face to decrease the stress level ahead of the longwall. An accurate estimation of the effectiveness of active rockburst prevention is important during mining under disadvantageous geological and mining conditions, which affect the risk of rockburst. Seismic source parameters characterize the focus of a tremor and may be useful in estimating destress blasting effects. The investigated destress blastings were performed in coal seam no. 507 during its longwall mining in one of the coal mines in the Upper Silesian Coal Basin under difficult geological and mining conditions. The seismic source parameters of the provoked tremors were calculated. The presented preliminary investigations enable a rapid estimation of destress blasting effectiveness using seismic source parameters, but further analysis under other geological and mining conditions and with other blasting parameters is required.

  5. Supernovae as probes of cosmic parameters: estimating the bias from under-dense lines of sight

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Busti, V.C.; Clarkson, C.; Holanda, R.F.L., E-mail: vinicius.busti@uct.ac.za, E-mail: holanda@uepb.edu.br, E-mail: chris.clarkson@uct.ac.za

    2013-11-01

    Correctly interpreting observations of sources such as type Ia supernovae (SNe Ia) requires knowledge of the power spectrum of matter on AU scales — which is very hard to model accurately. Because under-dense regions account for much of the volume of the universe, light from a typical source probes a mean density significantly below the cosmic mean. The relative sparsity of sources implies that there could be a significant bias when inferring distances of SNe Ia, and consequently a bias in cosmological parameter estimation. While the weak lensing approximation should in principle give the correct prediction for this, linear perturbation theory predicts an effectively infinite variance in the convergence for ultra-narrow beams. We attempt to quantify the effect typically under-dense lines of sight might have on parameter estimation by considering three alternative methods for estimating distances, in addition to the usual weak lensing approximation. We find that in each case this not only increases the errors in the inferred density parameters, but also introduces a bias in the posterior value.

  6. Minimization of model representativity errors in identification of point source emission from atmospheric concentration measurements

    NASA Astrophysics Data System (ADS)

    Sharan, Maithili; Singh, Amit Kumar; Singh, Sarvesh Kumar

    2017-11-01

    Estimation of an unknown atmospheric release from a finite set of concentration measurements is considered an ill-posed inverse problem. Besides ill-posedness, the estimation process is influenced by instrumental errors in the measured concentrations and by model representativity errors. This study highlights the effect of minimizing model representativity errors on the source estimation. This is described in an adjoint modelling framework and carried out in three steps. First, an estimation of the point source parameters (location and intensity) is carried out using an inversion technique. Second, a linear regression relationship is established between the measured concentrations and the corresponding concentrations predicted using the retrieved source parameters. Third, this relationship is utilized to modify the adjoint functions. Source estimation is then carried out using these modified adjoint functions to analyse the effect of such modifications. The process is tested for two well-known inversion techniques, renormalization and least squares. The proposed methodology and inversion techniques are evaluated for a real scenario using concentration measurements from the Idaho diffusion experiment in low-wind stable conditions. With both inversion techniques, a significant improvement is observed in the source estimates after minimizing the representativity errors.

  7. MEG Connectivity and Power Detections with Minimum Norm Estimates Require Different Regularization Parameters.

    PubMed

    Hincapié, Ana-Sofía; Kujala, Jan; Mattout, Jérémie; Daligault, Sebastien; Delpuech, Claude; Mery, Domingo; Cosmelli, Diego; Jerbi, Karim

    2016-01-01

    Minimum Norm Estimation (MNE) is an inverse solution method widely used to reconstruct the source time series that underlie magnetoencephalography (MEG) data. MNE addresses the ill-posed nature of MEG source estimation through regularization (e.g., Tikhonov regularization). Selecting the best regularization parameter is a critical step. Generally, once set, it is common practice to keep the same coefficient throughout a study. However, it is not yet known whether the optimal lambda for spectral power analysis of MEG source data coincides with the optimal regularization for source-level oscillatory coupling analysis. We addressed this question via extensive Monte-Carlo simulations of MEG data, where we generated 21,600 configurations of pairs of coupled sources with varying sizes, signal-to-noise ratio (SNR), and coupling strengths. Then, we searched for the Tikhonov regularization coefficients (lambda) that maximize detection performance for (a) power and (b) coherence. For coherence, the optimal lambda was two orders of magnitude smaller than the best lambda for power. Moreover, we found that the spatial extent of the interacting sources and SNR, but not the extent of coupling, were the main parameters affecting the best choice for lambda. Our findings suggest using less regularization when measuring oscillatory coupling compared to power estimation.
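
    To show where the lambda discussed above enters, here is the Tikhonov-regularized minimum-norm inverse operator in plain numpy, W = L^T (L L^T + lambda^2 I)^{-1}, applied with two lambda values. The lead field is a random stand-in, not a real MEG forward model, and the specific lambdas are illustrative only.

    ```python
    import numpy as np

    rng = np.random.default_rng(12)
    n_sensors, n_sources = 64, 500
    L = rng.normal(size=(n_sensors, n_sources))     # stand-in lead-field matrix

    def mne_operator(L, lam):
        """Tikhonov-regularized minimum-norm inverse operator."""
        G = L @ L.T
        return L.T @ np.linalg.inv(G + lam**2 * np.eye(L.shape[0]))

    b = rng.normal(size=n_sensors)                  # one time sample of MEG data
    for lam in (1e-2, 1.0):                         # e.g., coupling vs power regimes
        s_hat = mne_operator(L, lam) @ b
        print(f"lambda={lam:g}: ||s_hat|| = {np.linalg.norm(s_hat):.3f}")
    ```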

  8. MEG Connectivity and Power Detections with Minimum Norm Estimates Require Different Regularization Parameters

    PubMed Central

    Hincapié, Ana-Sofía; Kujala, Jan; Mattout, Jérémie; Daligault, Sebastien; Delpuech, Claude; Mery, Domingo; Cosmelli, Diego; Jerbi, Karim

    2016-01-01

    Minimum Norm Estimation (MNE) is an inverse solution method widely used to reconstruct the source time series that underlie magnetoencephalography (MEG) data. MNE addresses the ill-posed nature of MEG source estimation through regularization (e.g., Tikhonov regularization). Selecting the best regularization parameter is a critical step. Generally, once set, it is common practice to keep the same coefficient throughout a study. However, it is not yet known whether the optimal lambda for spectral power analysis of MEG source data coincides with the optimal regularization for source-level oscillatory coupling analysis. We addressed this question via extensive Monte-Carlo simulations of MEG data, where we generated 21,600 configurations of pairs of coupled sources with varying sizes, signal-to-noise ratio (SNR), and coupling strengths. Then, we searched for the Tikhonov regularization coefficients (lambda) that maximize detection performance for (a) power and (b) coherence. For coherence, the optimal lambda was two orders of magnitude smaller than the best lambda for power. Moreover, we found that the spatial extent of the interacting sources and SNR, but not the extent of coupling, were the main parameters affecting the best choice for lambda. Our findings suggest using less regularization when measuring oscillatory coupling compared to power estimation. PMID:27092179

  9. EVALUATING SOIL EROSION PARAMETER ESTIMATES FROM DIFFERENT DATA SOURCES

    EPA Science Inventory

    Topographic factors and soil loss estimates that were derived from three data sources (STATSGO, 30-m DEM, and 3-arc second DEM) were compared. Slope magnitudes derived from the three data sources were consistently different. Slopes from the DEMs tended to provide a flattened sur...

  10. Parameter estimation for compact binary coalescence signals with the first generation gravitational-wave detector network

    NASA Astrophysics Data System (ADS)

    Aasi, J.; Abadie, J.; Abbott, B. P.; Abbott, R.; Abbott, T. D.; Abernathy, M.; Accadia, T.; Acernese, F.; Adams, C.; Adams, T.; Addesso, P.; Adhikari, R.; Affeldt, C.; Agathos, M.; Agatsuma, K.; Ajith, P.; Allen, B.; Allocca, A.; Amador Ceron, E.; Amariutei, D.; Anderson, S. B.; Anderson, W. G.; Arai, K.; Araya, M. C.; Ast, S.; Aston, S. M.; Astone, P.; Atkinson, D.; Aufmuth, P.; Aulbert, C.; Aylott, B. E.; Babak, S.; Baker, P.; Ballardin, G.; Ballmer, S.; Bao, Y.; Barayoga, J. C. B.; Barker, D.; Barone, F.; Barr, B.; Barsotti, L.; Barsuglia, M.; Barton, M. A.; Bartos, I.; Bassiri, R.; Bastarrika, M.; Basti, A.; Batch, J.; Bauchrowitz, J.; Bauer, Th. S.; Bebronne, M.; Beck, D.; Behnke, B.; Bejger, M.; Beker, M. G.; Bell, A. S.; Bell, C.; Belopolski, I.; Benacquista, M.; Berliner, J. M.; Bertolini, A.; Betzwieser, J.; Beveridge, N.; Beyersdorf, P. T.; Bhadbade, T.; Bilenko, I. A.; Billingsley, G.; Birch, J.; Biswas, R.; Bitossi, M.; Bizouard, M. A.; Black, E.; Blackburn, J. K.; Blackburn, L.; Blair, D.; Bland, B.; Blom, M.; Bock, O.; Bodiya, T. P.; Bogan, C.; Bond, C.; Bondarescu, R.; Bondu, F.; Bonelli, L.; Bonnand, R.; Bork, R.; Born, M.; Boschi, V.; Bose, S.; Bosi, L.; Bouhou, B.; Braccini, S.; Bradaschia, C.; Brady, P. R.; Braginsky, V. B.; Branchesi, M.; Brau, J. E.; Breyer, J.; Briant, T.; Bridges, D. O.; Brillet, A.; Brinkmann, M.; Brisson, V.; Britzger, M.; Brooks, A. F.; Brown, D. A.; Bulik, T.; Bulten, H. J.; Buonanno, A.; Burguet–Castell, J.; Buskulic, D.; Buy, C.; Byer, R. L.; Cadonati, L.; Cagnoli, G.; Calloni, E.; Camp, J. B.; Campsie, P.; Cannon, K.; Canuel, B.; Cao, J.; Capano, C. D.; Carbognani, F.; Carbone, L.; Caride, S.; Caudill, S.; Cavaglià, M.; Cavalier, F.; Cavalieri, R.; Cella, G.; Cepeda, C.; Cesarini, E.; Chalermsongsak, T.; Charlton, P.; Chassande-Mottin, E.; Chen, W.; Chen, X.; Chen, Y.; Chincarini, A.; Chiummo, A.; Cho, H. S.; Chow, J.; Christensen, N.; Chua, S. S. Y.; Chung, C. T. Y.; Chung, S.; Ciani, G.; Clara, F.; Clark, D. E.; Clark, J. A.; Clayton, J. H.; Cleva, F.; Coccia, E.; Cohadon, P.-F.; Colacino, C. N.; Colla, A.; Colombini, M.; Conte, A.; Conte, R.; Cook, D.; Corbitt, T. R.; Cordier, M.; Cornish, N.; Corsi, A.; Costa, C. A.; Coughlin, M.; Coulon, J.-P.; Couvares, P.; Coward, D. M.; Cowart, M.; Coyne, D. C.; Creighton, J. D. E.; Creighton, T. D.; Cruise, A. M.; Cumming, A.; Cunningham, L.; Cuoco, E.; Cutler, R. M.; Dahl, K.; Damjanic, M.; Danilishin, S. L.; D'Antonio, S.; Danzmann, K.; Dattilo, V.; Daudert, B.; Daveloza, H.; Davier, M.; Daw, E. J.; Dayanga, T.; De Rosa, R.; DeBra, D.; Debreczeni, G.; Degallaix, J.; Del Pozzo, W.; Dent, T.; Dergachev, V.; DeRosa, R.; Dhurandhar, S.; Di Fiore, L.; Di Lieto, A.; Di Palma, I.; Di Paolo Emilio, M.; Di Virgilio, A.; Díaz, M.; Dietz, A.; Donovan, F.; Dooley, K. L.; Doravari, S.; Dorsher, S.; Drago, M.; Drever, R. W. P.; Driggers, J. C.; Du, Z.; Dumas, J.-C.; Dwyer, S.; Eberle, T.; Edgar, M.; Edwards, M.; Effler, A.; Ehrens, P.; Endrőczi, G.; Engel, R.; Etzel, T.; Evans, K.; Evans, M.; Evans, T.; Factourovich, M.; Fafone, V.; Fairhurst, S.; Farr, B. F.; Farr, W. M.; Favata, M.; Fazi, D.; Fehrmann, H.; Feldbaum, D.; Feroz, F.; Ferrante, I.; Ferrini, F.; Fidecaro, F.; Finn, L. S.; Fiori, I.; Fisher, R. P.; Flaminio, R.; Foley, S.; Forsi, E.; Forte, L. A.; Fotopoulos, N.; Fournier, J.-D.; Franc, J.; Franco, S.; Frasca, S.; Frasconi, F.; Frede, M.; Frei, M. A.; Frei, Z.; Freise, A.; Frey, R.; Fricke, T. T.; Friedrich, D.; Fritschel, P.; Frolov, V. V.; Fujimoto, M.-K.; Fulda, P. 
J.; Fyffe, M.; Gair, J.; Galimberti, M.; Gammaitoni, L.; Garcia, J.; Garufi, F.; Gáspár, M. E.; Gelencser, G.; Gemme, G.; Genin, E.; Gennai, A.; Gergely, L. Á.; Ghosh, S.; Giaime, J. A.; Giampanis, S.; Giardina, K. D.; Giazotto, A.; Gil-Casanova, S.; Gill, C.; Gleason, J.; Goetz, E.; González, G.; Gorodetsky, M. L.; Goßler, S.; Gouaty, R.; Graef, C.; Graff, P. B.; Granata, M.; Grant, A.; Gray, C.; Greenhalgh, R. J. S.; Gretarsson, A. M.; Griffo, C.; Grote, H.; Grover, K.; Grunewald, S.; Guidi, G. M.; Guido, C.; Gupta, R.; Gustafson, E. K.; Gustafson, R.; Hallam, J. M.; Hammer, D.; Hammond, G.; Hanks, J.; Hanna, C.; Hanson, J.; Harms, J.; Harry, G. M.; Harry, I. W.; Harstad, E. D.; Hartman, M. T.; Haster, C.-J.; Haughian, K.; Hayama, K.; Hayau, J.-F.; Heefner, J.; Heidmann, A.; Heintze, M. C.; Heitmann, H.; Hello, P.; Hemming, G.; Hendry, M. A.; Heng, I. S.; Heptonstall, A. W.; Herrera, V.; Heurs, M.; Hewitson, M.; Hild, S.; Hoak, D.; Hodge, K. A.; Holt, K.; Holtrop, M.; Hong, T.; Hooper, S.; Hough, J.; Howell, E. J.; Hughey, B.; Husa, S.; Huttner, S. H.; Huynh-Dinh, T.; Ingram, D. R.; Inta, R.; Isogai, T.; Ivanov, A.; Izumi, K.; Jacobson, M.; James, E.; Jang, Y. J.; Jaranowski, P.; Jesse, E.; Johnson, W. W.; Jones, D. I.; Jones, R.; Jonker, R. J. G.; Ju, L.; Kalmus, P.; Kalogera, V.; Kandhasamy, S.; Kang, G.; Kanner, J. B.; Kasprzack, M.; Kasturi, R.; Katsavounidis, E.; Katzman, W.; Kaufer, H.; Kaufman, K.; Kawabe, K.; Kawamura, S.; Kawazoe, F.; Keitel, D.; Kelley, D.; Kells, W.; Keppel, D. G.; Keresztes, Z.; Khalaidovski, A.; Khalili, F. Y.; Khazanov, E. A.; Kim, B. K.; Kim, C.; Kim, H.; Kim, K.; Kim, N.; Kim, Y. M.; King, P. J.; Kinzel, D. L.; Kissel, J. S.; Klimenko, S.; Kline, J.; Kokeyama, K.; Kondrashov, V.; Koranda, S.; Korth, W. Z.; Kowalska, I.; Kozak, D.; Kringel, V.; Krishnan, B.; Królak, A.; Kuehn, G.; Kumar, P.; Kumar, R.; Kurdyumov, R.; Kwee, P.; Lam, P. K.; Landry, M.; Langley, A.; Lantz, B.; Lastzka, N.; Lawrie, C.; Lazzarini, A.; Le Roux, A.; Leaci, P.; Lee, C. H.; Lee, H. K.; Lee, H. M.; Leong, J. R.; Leonor, I.; Leroy, N.; Letendre, N.; Lhuillier, V.; Li, J.; Li, T. G. F.; Lindquist, P. E.; Litvine, V.; Liu, Y.; Liu, Z.; Lockerbie, N. A.; Lodhia, D.; Logue, J.; Lorenzini, M.; Loriette, V.; Lormand, M.; Losurdo, G.; Lough, J.; Lubinski, M.; Lück, H.; Lundgren, A. P.; Macarthur, J.; Macdonald, E.; Machenschalk, B.; MacInnis, M.; Macleod, D. M.; Mageswaran, M.; Mailand, K.; Majorana, E.; Maksimovic, I.; Malvezzi, V.; Man, N.; Mandel, I.; Mandic, V.; Mantovani, M.; Marchesoni, F.; Marion, F.; Márka, S.; Márka, Z.; Markosyan, A.; Maros, E.; Marque, J.; Martelli, F.; Martin, I. W.; Martin, R. M.; Marx, J. N.; Mason, K.; Masserot, A.; Matichard, F.; Matone, L.; Matzner, R. A.; Mavalvala, N.; Mazzolo, G.; McCarthy, R.; McClelland, D. E.; McGuire, S. C.; McIntyre, G.; McIver, J.; Meadors, G. D.; Mehmet, M.; Meier, T.; Melatos, A.; Melissinos, A. C.; Mendell, G.; Menéndez, D. F.; Mercer, R. A.; Meshkov, S.; Messenger, C.; Meyer, M. S.; Miao, H.; Michel, C.; Milano, L.; Miller, J.; Minenkov, Y.; Mingarelli, C. M. F.; Mitrofanov, V. P.; Mitselmakher, G.; Mittleman, R.; Moe, B.; Mohan, M.; Mohapatra, S. R. P.; Moraru, D.; Moreno, G.; Morgado, N.; Morgia, A.; Mori, T.; Morriss, S. R.; Mosca, S.; Mossavi, K.; Mours, B.; Mow–Lowry, C. M.; Mueller, C. L.; Mueller, G.; Mukherjee, S.; Mullavey, A.; Müller-Ebhardt, H.; Munch, J.; Murphy, D.; Murray, P. 
G.; Mytidis, A.; Nash, T.; Naticchioni, L.; Necula, V.; Nelson, J.; Neri, I.; Newton, G.; Nguyen, T.; Nishizawa, A.; Nitz, A.; Nocera, F.; Nolting, D.; Normandin, M. E.; Nuttall, L.; Ochsner, E.; O'Dell, J.; Oelker, E.; Ogin, G. H.; Oh, J. J.; Oh, S. H.; Oldenberg, R. G.; O'Reilly, B.; O'Shaughnessy, R.; Osthelder, C.; Ott, C. D.; Ottaway, D. J.; Ottens, R. S.; Overmier, H.; Owen, B. J.; Page, A.; Palladino, L.; Palomba, C.; Pan, Y.; Pankow, C.; Paoletti, F.; Paoletti, R.; Papa, M. A.; Parisi, M.; Pasqualetti, A.; Passaquieti, R.; Passuello, D.; Pedraza, M.; Penn, S.; Perreca, A.; Persichetti, G.; Phelps, M.; Pichot, M.; Pickenpack, M.; Piergiovanni, F.; Pierro, V.; Pihlaja, M.; Pinard, L.; Pinto, I. M.; Pitkin, M.; Pletsch, H. J.; Plissi, M. V.; Poggiani, R.; Pöld, J.; Postiglione, F.; Poux, C.; Prato, M.; Predoi, V.; Prestegard, T.; Price, L. R.; Prijatelj, M.; Principe, M.; Privitera, S.; Prodi, G. A.; Prokhorov, L. G.; Puncken, O.; Punturo, M.; Puppo, P.; Quetschke, V.; Quitzow-James, R.; Raab, F. J.; Rabeling, D. S.; Rácz, I.; Radkins, H.; Raffai, P.; Rakhmanov, M.; Ramet, C.; Rankins, B.; Rapagnani, P.; Raymond, V.; Re, V.; Reed, C. M.; Reed, T.; Regimbau, T.; Reid, S.; Reitze, D. H.; Ricci, F.; Riesen, R.; Riles, K.; Roberts, M.; Robertson, N. A.; Robinet, F.; Robinson, C.; Robinson, E. L.; Rocchi, A.; Roddy, S.; Rodriguez, C.; Rodruck, M.; Rolland, L.; Rollins, J. G.; Romano, R.; Romie, J. H.; Rosińska, D.; Röver, C.; Rowan, S.; Rüdiger, A.; Ruggi, P.; Ryan, K.; Salemi, F.; Sammut, L.; Sandberg, V.; Sankar, S.; Sannibale, V.; Santamaría, L.; Santiago-Prieto, I.; Santostasi, G.; Saracco, E.; Sassolas, B.; Sathyaprakash, B. S.; Saulson, P. R.; Savage, R. L.; Schilling, R.; Schnabel, R.; Schofield, R. M. S.; Schulz, B.; Schutz, B. F.; Schwinberg, P.; Scott, J.; Scott, S. M.; Seifert, F.; Sellers, D.; Sentenac, D.; Sergeev, A.; Shaddock, D. A.; Shaltev, M.; Shapiro, B.; Shawhan, P.; Shoemaker, D. H.; Sidery, T. L.; Siemens, X.; Sigg, D.; Simakov, D.; Singer, A.; Singer, L.; Sintes, A. M.; Skelton, G. R.; Slagmolen, B. J. J.; Slutsky, J.; Smith, J. R.; Smith, M. R.; Smith, R. J. E.; Smith-Lefebvre, N. D.; Somiya, K.; Sorazu, B.; Speirits, F. C.; Sperandio, L.; Stefszky, M.; Steinert, E.; Steinlechner, J.; Steinlechner, S.; Steplewski, S.; Stochino, A.; Stone, R.; Strain, K. A.; Strigin, S. E.; Stroeer, A. S.; Sturani, R.; Stuver, A. L.; Summerscales, T. Z.; Sung, M.; Susmithan, S.; Sutton, P. J.; Swinkels, B.; Szeifert, G.; Tacca, M.; Taffarello, L.; Talukder, D.; Tanner, D. B.; Tarabrin, S. P.; Taylor, R.; ter Braack, A. P. M.; Thomas, P.; Thorne, K. A.; Thorne, K. S.; Thrane, E.; Thüring, A.; Titsler, C.; Tokmakov, K. V.; Tomlinson, C.; Toncelli, A.; Tonelli, M.; Torre, O.; Torres, C. V.; Torrie, C. I.; Tournefier, E.; Travasso, F.; Traylor, G.; Tse, M.; Ugolini, D.; Vahlbruch, H.; Vajente, G.; van den Brand, J. F. J.; Van Den Broeck, C.; van der Putten, S.; van Veggel, A. A.; Vass, S.; Vasuth, M.; Vaulin, R.; Vavoulidis, M.; Vecchio, A.; Vedovato, G.; Veitch, J.; Veitch, P. J.; Venkateswara, K.; Verkindt, D.; Vetrano, F.; Viceré, A.; Villar, A. E.; Vinet, J.-Y.; Vitale, S.; Vocca, H.; Vorvick, C.; Vyatchanin, S. P.; Wade, A.; Wade, L.; Wade, M.; Waldman, S. J.; Wallace, L.; Wan, Y.; Wang, M.; Wang, X.; Wanner, A.; Ward, R. L.; Was, M.; Weinert, M.; Weinstein, A. J.; Weiss, R.; Welborn, T.; Wen, L.; Wessels, P.; West, M.; Westphal, T.; Wette, K.; Whelan, J. T.; Whitcomb, S. E.; White, D. J.; Whiting, B. F.; Wiesner, K.; Wilkinson, C.; Willems, P. 
A.; Williams, L.; Williams, R.; Willke, B.; Wimmer, M.; Winkelmann, L.; Winkler, W.; Wipf, C. C.; Wiseman, A. G.; Wittel, H.; Woan, G.; Wooley, R.; Worden, J.; Yablon, J.; Yakushin, I.; Yamamoto, H.; Yamamoto, K.; Yancey, C. C.; Yang, H.; Yeaton-Massey, D.; Yoshida, S.; Yvert, M.; Zadrożny, A.; Zanolin, M.; Zendri, J.-P.; Zhang, F.; Zhang, L.; Zhao, C.; Zotov, N.; Zucker, M. E.; Zweizig, J.

    2013-09-01

    Compact binary systems with neutron stars or black holes are among the most promising sources for ground-based gravitational-wave detectors. Gravitational radiation encodes rich information about source physics; parameter estimation and model selection are therefore crucial analysis steps for any candidate detection event. Detailed models of the anticipated waveforms enable inference on several parameters, such as component masses, spins, sky location and distance, that are essential for new astrophysical studies of these sources. However, accurate measurements of these parameters and discrimination between models describing the underlying physics are complicated by artifacts in the data, uncertainties in the waveform models and uncertainties in the calibration of the detectors. Here we report such measurements on a selection of simulated signals added either in hardware or in software to the data collected by the two LIGO instruments and the Virgo detector during their most recent joint science run, including a “blind injection” where the signal was not initially revealed to the collaboration. We demonstrate the ability to extract information about the source physics from signals covering the neutron-star and black-hole binary parameter space over the component mass range 1-25 M⊙ and the full range of spin parameters. The cases reported in this study provide a snapshot of the status of parameter estimation in preparation for the operation of advanced detectors.

  11. Linear and Nonlinear Time-Frequency Analysis for Parameter Estimation of Resident Space Objects

    DTIC Science & Technology

    2017-02-22

    AFRL-AFOSR-UK-TR-2017-0023: Linear and Nonlinear Time-Frequency Analysis for Parameter Estimation of Resident Space Objects. Marco Martorella (grant FA9550-14-1-0183).

  12. Performance of maximum likelihood mixture models to estimate nursery habitat contributions to fish stocks: a case study on sea bream Sparus aurata

    PubMed Central

    Darnaude, Audrey M.

    2016-01-01

    Background Mixture models (MM) can be used to describe mixed stocks considering three sets of parameters: the total number of contributing sources, their chemical baseline signatures and their mixing proportions. When all nursery sources have been previously identified and sampled for juvenile fish to produce baseline nursery-signatures, the mixing proportions are the only unknown set of parameters to be estimated from the mixed-stock data. Otherwise, the number of sources, as well as some or all nursery-signatures, may also need to be estimated from the mixed-stock data. Our goal was to assess bias and uncertainty in these MM parameters when estimated using unconditional maximum likelihood approaches (ML-MM), under several incomplete-sampling and nursery-signature separation scenarios. Methods We used a comprehensive dataset containing otolith elemental signatures of 301 juvenile Sparus aurata, sampled in three contrasting years (2008, 2010, 2011) from four distinct nursery habitats (Mediterranean lagoons). Artificial nursery-source and mixed-stock datasets were produced considering five sampling scenarios, in which 0-4 lagoons were excluded from the nursery-source dataset, and six nursery-signature separation scenarios, simulating data separated by 0.5, 1.5, 2.5, 3.5, 4.5 and 5.5 standard deviations among nursery-signature centroids. Bias (BI) and uncertainty (SE) were computed to assess reliability for each of the three sets of MM parameters. Results Both bias and uncertainty in mixing proportion estimates were low (BI ≤ 0.14, SE ≤ 0.06) when all nursery sources were sampled, but they exhibited large variability among cohorts and increased with the number of non-sampled sources, up to BI = 0.24 and SE = 0.11. Baseline signature estimates also degraded with the number of non-sampled sources, but they tended to be less biased and more uncertain than the mixing proportion estimates across all sampling scenarios (BI < 0.13, SE < 0.29). Increasing separation among nursery signatures improved the reliability of mixing proportion estimates but led to non-linear responses in the baseline signature parameters. Low uncertainty, but a consistent underestimation bias, affected the estimated number of nursery sources across all incomplete-sampling scenarios. Discussion ML-MM produced reliable estimates of mixing proportions and nursery-signatures under a wide range of incomplete-sampling and nursery-signature separation scenarios. The method failed, however, to estimate the true number of nursery sources, reflecting a pervasive issue affecting mixture models within and beyond the ML framework. Large differences in bias and uncertainty among cohorts were linked to differences in the separation of chemical signatures among nursery habitats. Simulation approaches such as those presented here could be useful for evaluating the sensitivity of MM results to separation and variability in nursery-signatures for other species, habitats or cohorts. PMID:27761305
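
    As a concrete illustration of the simplest case above, in which the baseline nursery-signatures are known and only the mixing proportions must be estimated by maximum likelihood, the following sketch runs an EM loop over Gaussian source components. It is a minimal sketch and not the authors' code; the Gaussian baselines, the function name and the data layout are all assumptions.

```python
import numpy as np
from scipy.stats import multivariate_normal

def em_mixing_proportions(X, means, covs, n_iter=200, tol=1e-8):
    """ML estimate of mixing proportions when the baseline (source)
    signatures are known; X is (n_fish, n_elements)."""
    K = len(means)
    # Likelihood of each observation under each known source signature
    L = np.column_stack([multivariate_normal.pdf(X, means[k], covs[k])
                         for k in range(K)])          # shape (n, K)
    pi = np.full(K, 1.0 / K)                          # uniform start
    for _ in range(n_iter):
        resp = L * pi                                 # E-step: responsibilities
        resp /= resp.sum(axis=1, keepdims=True)
        pi_new = resp.mean(axis=0)                    # M-step: update proportions
        if np.max(np.abs(pi_new - pi)) < tol:
            break
        pi = pi_new
    return pi
```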

  13. Estimation of hyper-parameters for a hierarchical model of combined cortical and extra-brain current sources in the MEG inverse problem.

    PubMed

    Morishige, Ken-ichi; Yoshioka, Taku; Kawawaki, Dai; Hiroe, Nobuo; Sato, Masa-aki; Kawato, Mitsuo

    2014-11-01

    One of the major obstacles in estimating cortical currents from MEG signals is the disturbance caused by magnetic artifacts from extra-cortical current sources such as heartbeats and eye movements. To remove the effect of such extra-brain sources, we improved the hybrid hierarchical variational Bayesian method (hyVBED) proposed by Fujiwara et al. (NeuroImage, 2009). hyVBED simultaneously estimates cortical and extra-brain source currents by placing dipoles on cortical surfaces as well as at extra-brain sources. That method requires EOG data for an EOG forward model describing the relationship between eye dipoles and electric potentials. In contrast, our improved approach requires no EOG data and less a priori knowledge about the current variance of extra-brain sources. We propose a new method, "extra-dipole," that optimally selects hyper-parameter values for the current variances of the cortical surface and extra-brain source dipoles. With the selected parameter values, the cortical and extra-brain dipole currents were accurately estimated from simulated MEG data. The performance of this method was demonstrated to be better than that of conventional approaches, such as principal component analysis and independent component analysis, which use only the statistical properties of MEG signals. Furthermore, we applied the proposed method to MEG data measured during covert pursuit of a smoothly moving target and confirmed its effectiveness.

  14. Model parameter estimations from residual gravity anomalies due to simple-shaped sources using Differential Evolution Algorithm

    NASA Astrophysics Data System (ADS)

    Ekinci, Yunus Levent; Balkaya, Çağlayan; Göktürkler, Gökhan; Turan, Seçil

    2016-06-01

    An efficient approach to estimating model parameters from residual gravity data based on differential evolution (DE), a stochastic vector-based metaheuristic algorithm, is presented. We show the applicability and effectiveness of this algorithm on both synthetic and field anomalies. To our knowledge, this is the first attempt at applying DE to parameter estimation from residual gravity anomalies due to isolated causative sources embedded in the subsurface. The model parameters dealt with here are the amplitude coefficient (A), the depth and exact origin of the causative source (z0 and x0, respectively) and the shape factors (q and η). Error energy maps generated for some parameter pairs successfully reveal the nature of the parameter estimation problem under consideration. Noise-free and noisy synthetic single gravity anomalies are evaluated successfully via DE/best/1/bin, a widely used strategy in DE. Additionally, some complicated gravity anomalies caused by multiple source bodies are considered, and the results demonstrate the efficiency of the algorithm. Using the strategy applied in the synthetic examples, field anomalies observed in various mineral explorations, namely a chromite deposit (Camaguey district, Cuba), a manganese deposit (Nagpur, India) and a base metal sulphide deposit (Quebec, Canada), are then considered to estimate the model parameters of the ore bodies. The applications show that the obtained results, such as the depths and shapes of the ore bodies, are quite consistent with those published in the literature. Uncertainty in the solutions obtained from the DE algorithm is also investigated with a Metropolis-Hastings (M-H) sampling algorithm based on simulated annealing without a cooling schedule. Based on the resulting histogram reconstructions for both the synthetic and field data examples, the algorithm provides reliable parameter estimates within the sampling limits of the M-H sampler. Although DE is not yet a common inversion technique in geophysics, its good accuracy, low computational cost (in the present problem) and freedom from the need for a well-constructed initial guess to reach the global minimum make it worth further interest for parameter estimation from potential field data.
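
    The DE/best/1/bin strategy named above is available directly in SciPy, so a minimal sketch of the kind of inversion described, fitting amplitude, origin, depth and shape factor to a single synthetic anomaly, might look as follows. The closed-form anomaly used here (a common idealized-body expression) and all bounds and values are illustrative assumptions, not the authors' setup.

```python
import numpy as np
from scipy.optimize import differential_evolution

# Hypothetical forward model: residual gravity anomaly of an idealized
# buried body at depth z0 below origin x0, with amplitude A and shape
# factor q (q = 1.5 for a sphere, 1.0 for a horizontal cylinder).
def forward(x, A, x0, z0, q):
    return A * z0 / ((x - x0) ** 2 + z0 ** 2) ** q

np.random.seed(0)
x = np.linspace(-100.0, 100.0, 81)               # profile coordinates (m)
true = (250.0, 10.0, 25.0, 1.5)                  # synthetic "unknown" source
data = forward(x, *true) + np.random.normal(0, 0.02, x.size)

def misfit(p):                                   # error energy to minimize
    return np.sum((data - forward(x, *p)) ** 2)

bounds = [(1.0, 1e4), (-50.0, 50.0), (1.0, 100.0), (0.5, 2.5)]
result = differential_evolution(misfit, bounds, strategy='best1bin',
                                popsize=30, tol=1e-10, seed=1)
print(result.x)                                  # estimated (A, x0, z0, q)
```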

  15. ADVANCED WAVEFORM SIMULATION FOR SEISMIC MONITORING EVENTS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Helmberger, Donald V.; Tromp, Jeroen; Rodgers, Arthur J.

    Earthquake source parameters underpin several aspects of nuclear explosion monitoring: calibration of moment magnitudes (including coda magnitudes) and magnitude and distance amplitude corrections (MDAC); source depths; discrimination by isotropic moment tensor components; and waveform modeling for structure (including waveform tomography). This project seeks to improve methods for, and broaden the applicability of, estimating source parameters from broadband waveforms using the Cut-and-Paste (CAP) methodology. The CAP method uses a library of Green’s functions for a one-dimensional (1D, depth-varying) seismic velocity model. The method separates the main arrivals of the regional waveform into five windows: Pnl (vertical and radial components), Rayleigh (vertical and radial components) and Love (transverse component). Source parameters are estimated by a grid search over strike, dip, rake and depth, while the seismic moment (or, equivalently, the moment magnitude MW) is adjusted to fit the amplitudes. Key to the CAP method is allowing the synthetic seismograms to shift in time relative to the data in order to account for path-propagation errors (delays) in the 1D seismic velocity model used to compute the Green’s functions. The CAP method has been shown to improve estimates of source parameters, especially when delay and amplitude biases are calibrated using high signal-to-noise data from moderate earthquakes (CAP+).
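
    A minimal sketch of the core CAP idea, a grid search over strike, dip, rake and depth in which each synthetic window may shift in time to absorb 1D-model path delays, is given below. The `synth_fn` generator, the grid spacing and the omission of the moment/amplitude scaling step are all assumptions for illustration; a real implementation would wrap a Green's-function library.

```python
import numpy as np
from itertools import product

def shifted_misfit(d, s, max_shift):
    """L2 misfit after the best time shift (in samples) of synthetic s
    against data d, absorbing 1D velocity-model path delays."""
    best = np.inf
    for lag in range(-max_shift, max_shift + 1):
        best = min(best, np.sum((d - np.roll(s, lag)) ** 2))
    return best

def cap_grid_search(data_windows, synth_fn, max_shift=20):
    """data_windows: list of windowed records (e.g., Pnl and surface waves).
    synth_fn(strike, dip, rake, depth) -> matching list of synthetics
    (hypothetical; it would wrap a 1D Green's-function library)."""
    grid = product(range(0, 360, 10),     # strike (deg)
                   range(0, 91, 5),       # dip (deg)
                   range(-90, 91, 10),    # rake (deg)
                   (2, 5, 8, 11, 15))     # depth (km)
    best_m, best_err = None, np.inf
    for strike, dip, rake, depth in grid:
        synths = synth_fn(strike, dip, rake, depth)
        err = sum(shifted_misfit(d, s, max_shift)
                  for d, s in zip(data_windows, synths))
        if err < best_err:
            best_m, best_err = (strike, dip, rake, depth), err
    return best_m, best_err
```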

  16. Assessing the Uncertainties on Seismic Source Parameters: Towards Realistic Estimates of Moment Tensor Determinations

    NASA Astrophysics Data System (ADS)

    Magnoni, F.; Scognamiglio, L.; Tinti, E.; Casarotti, E.

    2014-12-01

    The seismic moment tensor is one of the most important source parameters, defining the earthquake size and the style of the activated fault. Moment tensor catalogues are routinely used by geoscientists; however, few attempts have been made to assess the possible impact of moment magnitude uncertainties on their analyses. The 2012 May 20 Emilia mainshock is a representative event, since the moment magnitude (Mw) values reported in the literature span from 5.63 to 6.12. An uncertainty of ~0.5 magnitude units leaves the real size of the event poorly known. The uncertainty associated with this estimate can be critical for the inference of other seismological parameters, suggesting caution in seismic hazard assessment, Coulomb stress transfer determination and other analyses where self-consistency is important. In this work, we focus on the variability of the moment tensor solution, highlighting the effects of four different velocity models, of different types and ranges of filtering, and of two different methodologies. Using a larger dataset, to better quantify the source parameter uncertainty, we also analyze the variability of the moment tensor solutions depending on the number, epicentral distance and azimuth of the stations used. We stress that the seismic moment estimated from moment tensor solutions, like the other kinematic source parameters, cannot be considered an absolute value: it should be reported together with its uncertainties, in a reproducible framework with disclosed assumptions and explicit processing workflows.

  17. SP Response to a Line Source Infiltration for Characterizing the Vadose Zone: Forward Modeling and Inversion

    NASA Astrophysics Data System (ADS)

    Sailhac, P.

    2004-05-01

    Field estimation of soil water flux has direct application to water resource management. Standard hydrologic methods like tensiometry or TDR are often difficult to apply because of the heterogeneity of the subsurface, and non-invasive tools like ERT, NMR or GPR are limited to estimating the water content. Electrical streaming potential (SP) monitoring can provide a cost-effective tool to help determine the nature of the hydraulic transfers (infiltration or evaporation) in the vadose zone. Indeed, this technique has improved during the last decade and has been shown to be useful for quantitative groundwater flow characterization (see the poster of Marquis et al. for a review). Here we report our latest developments on using SP to estimate the hydraulic parameters of unsaturated soils from in situ SP measurements during infiltration experiments. The proposed method consists of SP profiling perpendicular to a line source of steady-state infiltration. Analytic expressions for the forward modeling show sensitivity to six parameters: the electrokinetic coupling parameter at saturation CS, the soil sorptive number α, the ratio of the constant source strength to the hydraulic conductivity at saturation q/KS, the soil effective water saturation prior to the infiltration experiment Se0, the Mualem parameter m, and the Archie law exponent n. In applications, all these parameters could be constrained by inverting electrokinetic data obtained during a series of infiltration experiments with varying source strength q.

  18. Volcanic eruption source parameters from active and passive microwave sensors

    NASA Astrophysics Data System (ADS)

    Montopoli, Mario; Marzano, Frank S.; Cimini, Domenico; Mereu, Luigi

    2016-04-01

    It is well known in the volcanology community that precise information on the source parameters characterising an eruption is of predominant interest for the initialization of Volcanic Transport and Dispersion Models (VTDM). The source parameters of main interest are the top altitude of the volcanic plume; the flux of the mass ejected at the emission source, which is strictly related to the cloud top altitude; the distribution of volcanic mass concentration along the vertical column; and the duration of the eruption and the erupted volume. Usually, the combination of a-posteriori field and numerical studies allows constraining the eruption source parameters for a given volcanic event, thus making possible the forecast of ash dispersion and deposition from future volcanic eruptions. So far, remote sensors working at visible and infrared channels (cameras and radiometers) have mainly been used to detect and track ash clouds and to estimate the concentration and prevailing size of the particles propagating within them, up to several thousands of kilometres from the source, as well as to check, a-posteriori, the accuracy of the VTDM outputs and hence the initial choice of the source parameters. Acoustic waves (infrasound) and microwave fixed-scan radar (Voldorad) have also been used to infer source parameters. In this work we draw attention to the role of sensors operating at microwave wavelengths as complementary tools for the real-time estimation of source parameters. Microwaves benefit from operability during night and day and from a relatively negligible sensitivity to the presence of (non-precipitating) weather clouds, at the cost of limited coverage and coarser spatial resolution compared with infrared sensors. Thanks to these advantages, the products of microwave sensors are expected to be sensitive to the whole path traversed along the tephra cloud, making microwaves particularly appealing for estimates close to the volcano emission source. Near the source, the cloud optical thickness is expected to be large enough to saturate infrared sensor receivers, defeating the brightness temperature difference methods for ash cloud identification. In this light, case studies at Eyjafjallajökull (Iceland), Etna (Italy) and Calbuco (Chile), on 5-10 May 2010, 23 November 2013 and 23 April 2015, respectively, are analysed in terms of source parameter estimates (mainly the cloud top and the mass flux rate) from ground-based microwave weather radar (9.6 GHz) and satellite low-Earth-orbit microwave radiometers (50-183 GHz). A special highlight is given to the advantages and limitations of microwave-derived products with respect to more conventional tools.

  19. Annual survival of Snail Kites in Florida: Radio telemetry versus capture-resighting data

    USGS Publications Warehouse

    Bennetts, R.E.; Dreitz, V.J.; Kitchens, W.M.; Hines, J.E.; Nichols, J.D.

    1999-01-01

    We estimated annual survival of Snail Kites (Rostrhamus sociabilis) in Florida using the Kaplan-Meier estimator with data from 271 radio-tagged birds over a three-year period and capture-recapture (resighting) models with data from 1,319 banded birds over a six-year period. We tested the hypothesis that survival differed among three age classes using both data sources. We tested additional hypotheses about spatial and temporal variation using a combination of data from radio telemetry and single- and multistrata capture-recapture models. Results from these data sets were similar in their indications of the sources of variation in survival, but they differed in some parameter estimates. Both data sources indicated that survival was higher for adults than for juveniles, but they did not support delineation of a subadult age class. Our data also indicated that survival differed among years and regions for juveniles but not for adults. Estimates of juvenile survival using radio telemetry data were higher than estimates using capture-recapture models for two of three years (1992 and 1993). Ancillary evidence based on censored birds indicated that some mortality of radio-tagged juveniles went undetected during those years, resulting in biased estimates. Thus, we have greater confidence in our estimates of juvenile survival using capture-recapture models. Precision of estimates reflected the number of parameters estimated and was surprisingly similar between radio telemetry and single-stratum capture-recapture models, given the substantial differences in sample sizes. Not having to estimate resighting probability likely offsets, to some degree, the smaller sample sizes from our radio telemetry data. Precision of capture-recapture models was lower using multistrata models where region-specific parameters were estimated than using single-stratum models, where spatial variation in parameters was not taken into account.
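
    For readers unfamiliar with the Kaplan-Meier estimator used here, the following generic product-limit sketch (not the authors' code) shows how survival is estimated from follow-up times with censoring, e.g. radio-tagged birds whose signals are lost.

```python
import numpy as np

def kaplan_meier(times, event):
    """Product-limit survival estimate. times: follow-up time for each
    animal; event: 1 if death observed, 0 if censored (e.g., lost signal)."""
    order = np.argsort(times)
    times, event = np.asarray(times)[order], np.asarray(event)[order]
    n_at_risk = len(times)
    S, surv_times, surv = 1.0, [], []
    for t in np.unique(times):
        at_t = times == t
        deaths = int(event[at_t].sum())
        if deaths:
            S *= 1.0 - deaths / n_at_risk   # survival drops at each death time
            surv_times.append(t)
            surv.append(S)
        n_at_risk -= int(at_t.sum())        # deaths and censored leave risk set
    return np.array(surv_times), np.array(surv)
```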

  20. Examples of Nonconservatism in the CARE 3 Program

    NASA Technical Reports Server (NTRS)

    Dotson, Kelly J.

    1988-01-01

    This paper presents parameter regions in the CARE 3 (Computer-Aided Reliability Estimation, version 3) computer program where the program overestimates the reliability of a modeled system without warning the user. Five simple models of fault-tolerant computer systems are analyzed, and the parameter regions where reliability is overestimated are given. The source of the error in the reliability estimates for models incorporating transient fault occurrences was not readily apparent. However, much of the error for models with permanent and intermittent faults can be attributed to the choice of values for the run-time parameters of the program.

  1. Approaches in highly parameterized inversion—PEST++ Version 3, a Parameter ESTimation and uncertainty analysis software suite optimized for large environmental models

    USGS Publications Warehouse

    Welter, David E.; White, Jeremy T.; Hunt, Randall J.; Doherty, John E.

    2015-09-18

    The PEST++ Version 3 software suite can be compiled for Microsoft Windows® and Linux® operating systems; the source code is available in a Microsoft Visual Studio® 2013 solution, and Linux Makefiles are also provided. PEST++ Version 3 continues to build a foundation for an open-source framework capable of producing robust and efficient parameter estimation tools for large environmental models.

  2. Efficient Bayesian experimental design for contaminant source identification

    NASA Astrophysics Data System (ADS)

    Zhang, J.; Zeng, L.

    2013-12-01

    In this study, an efficient full Bayesian approach is developed for optimal sampling-well location design and source parameter identification of groundwater contaminants. An information measure, the relative entropy, is employed to quantify the information gained from indirect concentration measurements when identifying unknown source parameters such as the release time, strength and location. In this approach, the sampling location that gives the maximum relative entropy is selected as the optimal one. Once the sampling location is determined, a Bayesian approach based on Markov chain Monte Carlo (MCMC) is used to estimate the unknown source parameters. In both the design and the estimation, the contaminant transport equation must be solved many times to evaluate the likelihood. To reduce the computational burden, an interpolation method based on an adaptive sparse grid is used to construct a surrogate for the contaminant transport model. The approximate likelihood can be evaluated directly from the surrogate, which greatly accelerates the design and estimation process. The accuracy and efficiency of the approach are demonstrated through numerical case studies. Compared with the traditional optimal design, which is based on the Gaussian linear assumption, the method developed in this study can cope with arbitrary nonlinearity. It can be used to assist in groundwater monitoring network design and in the identification of unknown contaminant sources. (Figure captions: contours of the expected information gain, with the optimal observing location at the maximum; posterior marginal probability densities of the unknown parameters at the designed location versus seven randomly chosen locations, with the true values marked, showing that the unknown parameters are estimated better with the designed location.)
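
    A minimal sketch of the design criterion described above, the expected information gain (relative entropy) of a candidate sampling location, can be written with the standard nested Monte Carlo estimator. The surrogate `predict`, the Gaussian noise model and the sample sizes are assumptions for illustration, not the study's implementation.

```python
import numpy as np

def expected_information_gain(prior_samples, predict, loc, sigma, n_outer=500):
    """Nested Monte Carlo estimate of the expected KL divergence from
    prior to posterior for a candidate sampling location `loc`.
    predict(theta, loc) is a (hypothetical) transport surrogate returning
    the modeled concentration; sigma is the measurement noise std."""
    theta = prior_samples                       # (N, n_params) from the prior
    g = np.array([predict(t, loc) for t in theta])
    idx = np.random.choice(len(theta), n_outer)
    eig = 0.0
    for i in idx:
        y = g[i] + sigma * np.random.randn()    # simulated measurement
        logp = -0.5 * ((y - g) / sigma) ** 2    # log-likelihood, up to a constant
        # EIG contribution: log p(y|theta_i) - log of the evidence p(y)
        eig += logp[i] - (np.logaddexp.reduce(logp) - np.log(len(theta)))
    return eig / n_outer

# The design step would evaluate this for each candidate well and pick
# the location with the maximum expected information gain.
```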

  3. Added-value joint source modelling of seismic and geodetic data

    NASA Astrophysics Data System (ADS)

    Sudhaus, Henriette; Heimann, Sebastian; Walter, Thomas R.; Krueger, Frank

    2013-04-01

    In tectonically active regions, earthquake source studies strongly support the analysis of current faulting processes, as they reveal the location and geometry of active faults, the average slip released, and more. For source modelling of shallow, moderate to large earthquakes, a combination of geodetic (GPS, InSAR) and seismic data is often used. A truly joint use of these data, however, usually takes place only at a higher modelling level, where some of the first-order characteristics (time, centroid location, fault orientation, moment) have already been fixed. These basic model parameters have to be given, assumed or inferred in a previous, separate and highly non-linear modelling step using one of these data sets alone. We present a new earthquake rupture model implementation that realizes a fully combined integration of surface displacement measurements and seismic data in a non-linear optimization of simple but extended planar ruptures. The implementation allows fast forward calculations of full seismograms and surface deformation and therefore enables us to use Monte Carlo global search algorithms. Furthermore, we benefit from the complementary character of seismic and geodetic data, e.g. the precise constraint on source location from geodetic data and the sensitivity of seismic data to moment release at greater depth. These increased constraints from the combined dataset make optimizations efficient, even for larger model parameter spaces and with very limited a priori assumptions on the source. A vital part of our approach is rigorous data weighting based on empirically estimated data errors. We construct full data error variance-covariance matrices for the geodetic data to account for correlated data noise, and we weight the seismic data based on their signal-to-noise ratio. The estimation of the data errors and the fast forward modelling open the door for Bayesian inference of the source model parameters. The source model product then features parameter uncertainty estimates and reveals parameter trade-offs that arise from imperfect data coverage and data errors. We applied our new source modelling approach to the 2010 Haiti earthquake, for which a number of apparently different seismic, geodetic and joint source models have already been reported, mostly without model parameter uncertainty estimates. We show that the variability of all these source models seems to arise from inherent model parameter trade-offs and mostly has little statistical significance; e.g., even using a large dataset comprising seismic and geodetic data, the confidence interval of the fault dip remains as wide as about 20 degrees.
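
    The data weighting described above, a misfit under a full error variance-covariance matrix, reduces to whitening the residuals with a Cholesky factor. A minimal sketch (the function name and the per-trace seismic weighting remark are illustrative assumptions):

```python
import numpy as np

# Error-weighted data fitting with a full variance-covariance matrix C
# for correlated (e.g., InSAR) noise: chi^2 = r^T C^{-1} r, evaluated
# stably through the Cholesky factor of C.
def weighted_misfit(residuals, C):
    Lc = np.linalg.cholesky(C)              # C = Lc @ Lc.T
    w = np.linalg.solve(Lc, residuals)      # "whitened" residuals
    return float(w @ w)                     # equals r^T C^{-1} r

# Seismic traces could instead be weighted by a diagonal covariance whose
# entries are per-trace noise variances estimated from signal-to-noise.
```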

  4. A general model for attitude determination error analysis

    NASA Technical Reports Server (NTRS)

    Markley, F. Landis; Seidewitz, ED; Nicholson, Mark

    1988-01-01

    An overview is given of a comprehensive approach to filter and dynamics modeling for attitude determination error analysis. The models presented include both batch least-squares and sequential attitude estimation processes, for both spin-stabilized and three-axis stabilized spacecraft. The discussion includes a brief description of a dynamics model of strapdown gyros, but it does not cover other sensor models. Model parameters can be chosen to be solve-for parameters, which are estimated as part of the determination process, or consider parameters, which are assumed to have errors but are not estimated. The only restriction on this choice is that the time evolution of the consider parameters must not depend on any of the solve-for parameters. The result of an error analysis is an indication of the contributions of the various error sources to the uncertainties in the determination of the spacecraft solve-for parameters. The model presented gives the uncertainty due to errors in the a priori estimates of the solve-for parameters, the uncertainty due to measurement noise, the uncertainty due to dynamic noise (also known as process noise or plant noise), the uncertainty due to the consider parameters, and the overall uncertainty due to all these sources of error.

  5. Grid-search Moment Tensor Estimation: Implementation and CTBT-related Application

    NASA Astrophysics Data System (ADS)

    Stachnik, J. C.; Baker, B. I.; Rozhkov, M.; Friberg, P. A.; Leifer, J. M.

    2017-12-01

    This abstract reviews work on moment tensor estimation for Expert Technical Analysis at the Comprehensive Test Ban Treaty Organization. In this context of event characterization, estimates of key source parameters provide important insights into the nature of failure in the earth. For example, if the recovered source parameters indicate a shallow source with a large isotropic component, then one conclusion is that it is a human-triggered explosive event. An important follow-up question in this application is: does an alternative hypothesis, such as a deeper source with a large double-couple component, explain the data approximately as well as the best solution? Here we address both finding the most likely source and assessing its uncertainty. Using the uniform moment tensor discretization of Tape and Tape (2015), we exhaustively interrogate and tabulate the source eigenvalue distribution (i.e., the source characterization), tensor orientation, magnitude, and source depth. The benefit of the grid search is that we can quantitatively assess the extent to which model parameters are resolved, which provides a valuable opportunity during the assessment phase to focus interpretation on source parameters that are well resolved. Another benefit of the grid search is that it is a flexible framework into which different pieces of information can easily be incorporated. To this end, this work is particularly interested in fitting teleseismic body waves and regional surface waves, as well as incorporating teleseismic first motions when available. Since the moment tensor search methodology is well established, we focus primarily on the implementation and application. We present a highly scalable strategy for systematically inspecting the entire model parameter space. We then focus on applications to regional and teleseismic data recorded during a handful of natural and anthropogenic events, report on the grid-search optimum, and discuss the resolution of interesting and/or important recovered source properties.
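
    One way to realize the quantitative resolution assessment described above is to convert the grid-search misfits into relative posterior probabilities and then marginalize over the parameters of interest. The Gaussian error model and the array layout below are assumptions for illustration.

```python
import numpy as np

def grid_probabilities(misfit, sigma):
    """Convert grid-search waveform misfits into relative posterior
    probabilities under Gaussian errors (sigma: data noise level)."""
    logp = -0.5 * misfit / sigma ** 2
    logp -= logp.max()                  # guard against numerical underflow
    p = np.exp(logp)
    return p / p.sum()

# With a structured grid, marginals quantify how well a parameter is
# resolved; e.g., for a misfit array of shape (n_sources, n_depths):
# p = grid_probabilities(misfit, sigma); depth_marginal = p.sum(axis=0)
```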

  6. Simultaneous Estimation of Microphysical Parameters and Atmospheric State Variables With Radar Data and Ensemble Square-root Kalman Filter

    NASA Astrophysics Data System (ADS)

    Tong, M.; Xue, M.

    2006-12-01

    An important source of model error for convective-scale data assimilation and prediction is the microphysical parameterization. This study investigates the possibility of estimating up to five fundamental microphysical parameters, which are closely involved in the definition of the drop size distributions of microphysical species in a commonly used single-moment ice microphysics scheme, using radar observations and the ensemble Kalman filter method. The five parameters are the intercept parameters for rain, snow and hail/graupel, and the bulk densities of hail/graupel and snow. Parameter sensitivity and identifiability are first examined. The ensemble square-root Kalman filter (EnSRF) is employed for simultaneous state and parameter estimation. Observing system simulation experiments are performed for a model-simulated supercell storm, in which the five microphysical parameters are estimated individually or in different combinations, starting from different initial guesses. When error exists in only one of the microphysical parameters, the parameter can be successfully estimated without exception. The estimation of multiple parameters is found to be less robust, with the end results being sensitive to the realization of the initial parameter perturbation; this is believed to be due to reduced parameter identifiability and the existence of non-unique solutions. The results of state estimation are, however, always improved when simultaneous parameter estimation is performed, even when the estimated parameter values are not accurate.
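
    Simultaneous state and parameter estimation of this kind is commonly implemented by augmenting the state vector with the parameters and letting the ensemble update both. The sketch below uses a perturbed-observation EnKF analysis step for brevity rather than the square-root (EnSRF) variant used in the study; all names and shapes are illustrative assumptions.

```python
import numpy as np

def enkf_augmented_update(X, P, H, y, R):
    """One perturbed-observation EnKF analysis step on an augmented
    ensemble: X is (n_state, N), P is (n_params, N); the parameters are
    updated purely through their sampled covariance with the observed
    state, since they have no dynamics of their own. H: (n_obs, n_state)
    observation operator; y: (n_obs,) observations; R: obs-error cov."""
    Z = np.vstack([X, P])                            # augmented ensemble
    N = Z.shape[1]
    Za = Z - Z.mean(axis=1, keepdims=True)
    Hx = H @ X
    Ha = Hx - Hx.mean(axis=1, keepdims=True)
    Pzy = Za @ Ha.T / (N - 1)                        # cross covariance
    Pyy = Ha @ Ha.T / (N - 1) + R
    K = Pzy @ np.linalg.inv(Pyy)                     # Kalman gain
    Y = y[:, None] + np.linalg.cholesky(R) @ np.random.randn(len(y), N)
    Z = Z + K @ (Y - Hx)                             # analysis ensemble
    return Z[:X.shape[0]], Z[X.shape[0]:]            # new state, new parameters
```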

  7. Effects of structural error on the estimates of parameters of dynamical systems

    NASA Technical Reports Server (NTRS)

    Hadaegh, F. Y.; Bekey, G. A.

    1986-01-01

    In this paper, the notion of 'near-equivalence in probability' is introduced for identifying a system in the presence of several error sources. Following some basic definitions, necessary and sufficient conditions for the identifiability of parameters are given. The effects of structural error on the parameter estimates for both the deterministic and stochastic cases are considered.

  8. Density estimation in tiger populations: combining information for strong inference

    USGS Publications Warehouse

    Gopalaswamy, Arjun M.; Royle, J. Andrew; Delampady, Mohan; Nichols, James D.; Karanth, K. Ullas; Macdonald, David W.

    2012-01-01

    A productive way forward in studies of animal populations is to efficiently make use of all the information available, either as raw data or as published sources, on critical parameters of interest. In this study, we demonstrate two approaches to the use of multiple sources of information on a parameter of fundamental interest to ecologists: animal density. The first approach produces estimates simultaneously from two different sources of data. The second approach was developed for situations in which initial data collection and analysis are followed up by subsequent data collection and prior knowledge is updated with new data using a stepwise process. Both approaches are used to estimate density of a rare and elusive predator, the tiger, by combining photographic and fecal DNA spatial capture–recapture data. The model, which combined information, provided the most precise estimate of density (8.5 ± 1.95 tigers/100 km2 [posterior mean ± SD]) relative to a model that utilized only one data source (photographic, 12.02 ± 3.02 tigers/100 km2 and fecal DNA, 6.65 ± 2.37 tigers/100 km2). Our study demonstrates that, by accounting for multiple sources of available information, estimates of animal density can be significantly improved.

  10. Parameter estimation accuracies of Galactic binaries with eLISA

    NASA Astrophysics Data System (ADS)

    Błaut, Arkadiusz

    2018-09-01

    We study the parameter estimation accuracy of nearly monochromatic sources of gravitational waves with future eLISA-like detectors. eLISA will be capable of observing millions of such signals, generated by orbiting pairs of compact objects (white dwarfs, neutron stars or black holes), and of resolving and estimating the parameters of several thousand of them, providing crucial information on their orbital dynamics, formation rates and evolutionary paths. Using the Fisher matrix analysis, we compare the accuracies of the estimated parameters for different mission designs defined by the GOAT advisory team, which was established to assess the scientific capabilities and technological issues of the eLISA-like missions.
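
    As a reminder of how such Fisher-matrix accuracy estimates are computed, the toy sketch below builds the Fisher matrix of a nearly monochromatic signal in white noise by numerical differentiation and inverts it for the Cramer-Rao parameter accuracies. The unit-free signal model and noise level are assumptions, not the eLISA response.

```python
import numpy as np

def fisher_matrix(h, theta, sigma, dtheta=1e-7):
    """F_ij = (1/sigma^2) * sum_t (dh/dtheta_i)(dh/dtheta_j) for a signal
    model h(theta) in white Gaussian noise, via central differences."""
    derivs = []
    for i in range(len(theta)):
        tp = np.array(theta, float); tm = np.array(theta, float)
        tp[i] += dtheta; tm[i] -= dtheta
        derivs.append((h(tp) - h(tm)) / (2 * dtheta))
    return np.array([[di @ dj for dj in derivs] for di in derivs]) / sigma**2

# Toy nearly monochromatic source: amplitude, frequency, phase (unit-free)
t = np.arange(0.0, 1000.0, 1.0)
model = lambda p: p[0] * np.sin(2 * np.pi * p[1] * t + p[2])
F = fisher_matrix(model, [1.0, 0.1, 0.3], sigma=1.0)
print(np.sqrt(np.diag(np.linalg.inv(F))))   # Cramer-Rao 1-sigma accuracies
```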

  11. Quantum metrology of spatial deformation using arrays of classical and quantum light emitters

    NASA Astrophysics Data System (ADS)

    Sidhu, Jasminder S.; Kok, Pieter

    2017-06-01

    We introduce spatial deformations to an array of light sources and study how the estimation precision of the interspacing distance d changes with the sources of light used. The quantum Fisher information (QFI) is used as the figure of merit in this work to quantify the amount of information we have on the estimation parameter. We derive the generator of translations Ĝ in d due to an arbitrary homogeneous deformation applied to the array. We show how the variance of the generator can be used to easily assess how different deformations and light sources affect the estimation precision. The single-parameter estimation problem is applied to the array, and we report the optimal state that maximizes the QFI for d. Contrary to what might have been expected, classical states with higher average mode occupancy perform better in estimating d than single-photon emitters (SPEs). The optimal entangled state is constructed from the eigenvectors of the generator and is found to outperform all these states. We also find the existence of multiple optimal estimators for the measurement of d. Our results find applications in evaluating stresses and strains, fracture prevention in materials expressing great sensitivity to deformations, and selecting frequency-distinguished quantum sources from an array of reference sources.
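
    For a pure probe state under unitary encoding of d, the QFI reduces to four times the variance of the generator, which is why the variance of Ĝ is such a convenient handle. A minimal sketch (the three-level toy generator is an assumption, not the paper's array generator):

```python
import numpy as np

def qfi_pure_state(psi, G):
    """Quantum Fisher information of a pure state under unitary encoding
    U = exp(-i G d): F_Q = 4 * (<G^2> - <G>^2)."""
    psi = psi / np.linalg.norm(psi)
    mean_G = np.vdot(psi, G @ psi).real
    mean_G2 = np.vdot(psi, G @ (G @ psi)).real
    return 4.0 * (mean_G2 - mean_G ** 2)

# Toy generator with eigenvalues g: an equal superposition of the extreme
# eigenvectors maximizes Var(G), giving F_Q = (g_max - g_min)^2.
g = np.array([-1.0, 0.0, 1.0])
G = np.diag(g)
psi_opt = np.zeros(3); psi_opt[[0, 2]] = 1 / np.sqrt(2)
print(qfi_pure_state(psi_opt, G))   # prints 4.0 = (1 - (-1))^2
```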

  12. Uncertainties for seismic moment tensors and applications to nuclear explosions, volcanic events, and earthquakes

    NASA Astrophysics Data System (ADS)

    Tape, C.; Alvizuri, C. R.; Silwal, V.; Tape, W.

    2017-12-01

    When considered as a point source, a seismic source can be characterized in terms of its origin time, hypocenter, moment tensor, and source time function. The seismologist's task is to estimate these parameters--and their uncertainties--from three-component ground motion recorded at irregularly spaced stations. We will focus on one portion of this problem: the estimation of the moment tensor and its uncertainties. With magnitude estimated separately, we are left with five parameters describing the normalized moment tensor. A lune of normalized eigenvalue triples can be used to visualize the two parameters (lune longitude and lune latitude) describing the source type, while the conventional strike, dip, and rake angles can be used to characterize the orientation. Slight modifications of these five parameters lead to a uniform parameterization of moment tensors--uniform in the sense that equal volumes in the coordinate domain of the parameterization correspond to equal volumes of moment tensors. For a moment tensor m that we have inferred from seismic data for an earthquake, we define P(V) to be the probability that the true moment tensor for the earthquake lies in the neighborhood of m that has fractional volume V. The average value of P(V) is then a measure of our confidence in our inference of m. The calculation of P(V) requires knowing both the probability P(w) and the fractional volume V(w) of the set of moment tensors within a given angular radius w of m. We apply this approach to several different data sets, including nuclear explosions from the Nevada Test Site, volcanic events from Uturuncu (Bolivia), and earthquakes. Several challenges remain: choosing an appropriate misfit function, handling time shifts between data and synthetic waveforms, and extending the uncertainty estimation to include more source parameters (e.g., hypocenter and source time function).
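
    The confidence measure P(V) described above can be tabulated directly from a grid search: sort the grid points by angular distance from the preferred solution m and accumulate probability against fractional volume. A minimal sketch, under the assumption of a uniform moment-tensor grid (so fractional volume is just the fraction of grid points):

```python
import numpy as np

def confidence_curve(p, w):
    """Given posterior probabilities p of moment tensors on a uniform grid
    and their angular distances w from the preferred solution m, build
    P(V): the probability that the true tensor lies in the neighborhood
    of m with fractional volume V."""
    order = np.argsort(w)
    P = np.cumsum(p[order]) / p.sum()       # P(w): cumulative probability
    V = np.arange(1, len(p) + 1) / len(p)   # V(w): fractional volume
    return V, P

# The average of P(V) over V (area under the curve) is a scalar confidence
# measure: near 1 means concentrated around m, near 0.5 means diffuse.
# V, P = confidence_curve(p, w); conf = np.trapz(P, V)
```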

  13. SENSITIVITY OF STRUCTURAL RESPONSE TO GROUND MOTION SOURCE AND SITE PARAMETERS.

    USGS Publications Warehouse

    Safak, Erdal; Brebbia, C.A.; Cakmak, A.S.; Abdel Ghaffar, A.M.

    1985-01-01

    Designing structures to withstand earthquakes requires an accurate estimation of the expected ground motion. While engineers use the peak ground acceleration (PGA) to model the strong ground motion, seismologists use physical characteristics of the source and the rupture mechanism, such as fault length, stress drop, shear wave velocity, seismic moment, distance, and attenuation. This study presents a method for calculating response spectra from seismological models using random vibration theory. It then investigates the effect of various source and site parameters on peak response. Calculations are based on a nonstationary stochastic ground motion model, which can incorporate all the parameters both in frequency and time domains. The estimation of the peak response accounts for the effects of the non-stationarity, bandwidth and peak correlations of the response.

  14. Shallow seismic source parameter determination using intermediate-period surface wave amplitude spectra

    NASA Astrophysics Data System (ADS)

    Fox, Benjamin D.; Selby, Neil D.; Heyburn, Ross; Woodhouse, John H.

    2012-09-01

    Estimating reliable depths for shallow seismic sources is important in both seismo-tectonic studies and seismic discrimination studies. Surface wave excitation is sensitive to source depth, especially at intermediate and short periods, owing to the approximately exponential decay of surface wave displacements with depth. A new method is presented here to retrieve earthquake source parameters from regional and teleseismic intermediate-period (100-15 s) fundamental-mode surface wave recordings. This method makes use of advances in mapping global dispersion, allowing higher-frequency surface wave recordings at regional and teleseismic distances to be used with more confidence than in previous studies and hence improving the resolution of depth estimates. Synthetic amplitude spectra are generated using surface wave theory combined with a great circle path approximation, and a grid of double-couple sources is compared with the data. Source parameters producing the best-fitting amplitude spectra are identified by minimizing the least-squares misfit in logarithmic amplitude space. The F-test is used to search the solution space for statistically acceptable parameters, and the ranges of these variables are used to place constraints on the best-fitting source. Estimates of focal mechanism, depth and scalar seismic moment are determined for 20 small to moderate sized (4.3 ≤ Mw ≤ 6.4) earthquakes. These earthquakes span a wide range of geographic and tectonic locations and describe a range of faulting styles over the depth range 4-29 km. For the larger earthquakes, comparisons with other studies are favourable; however, existing source determination procedures, such as the CMT technique, cannot be applied to the smaller events. By reducing the magnitude threshold at which robust source parameters can be determined, this methodology can improve the accuracy of seismo-tectonic studies, seismic hazard assessments and seismic discrimination investigations, especially at shallow depths.

  15. About the Modeling of Radio Source Time Series as Linear Splines

    NASA Astrophysics Data System (ADS)

    Karbon, Maria; Heinkelmann, Robert; Mora-Diaz, Julian; Xu, Minghui; Nilsson, Tobias; Schuh, Harald

    2016-12-01

    Many of the time series of radio sources observed in geodetic VLBI show variations, caused mainly by changes in source structure. However, until now it has been common practice to consider source positions as invariant, or to exclude known misbehaving sources from the datum conditions. This may lead to a degradation of the estimated parameters, as unmodeled apparent source position variations can propagate to the other parameters through the least squares adjustment. In this paper we will introduce an automated algorithm capable of parameterizing the radio source coordinates as linear splines.

  16. Estimating Convection Parameters in the GFDL CM2.1 Model Using Ensemble Data Assimilation

    NASA Astrophysics Data System (ADS)

    Li, Shan; Zhang, Shaoqing; Liu, Zhengyu; Lu, Lv; Zhu, Jiang; Zhang, Xuefeng; Wu, Xinrong; Zhao, Ming; Vecchi, Gabriel A.; Zhang, Rong-Hua; Lin, Xiaopei

    2018-04-01

    Parametric uncertainty in convection parameterization is one major source of model errors that cause model climate drift. Convection parameter tuning has been widely studied in atmospheric models to help mitigate the problem. However, in a fully coupled general circulation model (CGCM), convection parameters which impact the ocean as well as the climate simulation may have different optimal values. This study explores the possibility of estimating convection parameters with an ensemble coupled data assimilation method in a CGCM. Impacts of the convection parameter estimation on climate analysis and forecast are analyzed. In a twin experiment framework, five convection parameters in the GFDL coupled model CM2.1 are estimated individually and simultaneously under both perfect and imperfect model regimes. Results show that the ensemble data assimilation method can help reduce the bias in convection parameters. With estimated convection parameters, the analyses and forecasts for both the atmosphere and the ocean are generally improved. It is also found that information in low latitudes is relatively more important for estimating convection parameters. This study further suggests that when important parameters in appropriate physical parameterizations are identified, incorporating their estimation into traditional ensemble data assimilation procedure could improve the final analysis and climate prediction.

  17. Undersampling power-law size distributions: effect on the assessment of extreme natural hazards

    USGS Publications Warehouse

    Geist, Eric L.; Parsons, Thomas E.

    2014-01-01

    The effect of undersampling on estimating the size of extreme natural hazards from historical data is examined. Tests using synthetic catalogs indicate that the tail of an empirical size distribution sampled from a pure Pareto probability distribution can range from having one-to-several unusually large events to appearing depleted, relative to the parent distribution. Both of these effects are artifacts caused by limited catalog length. It is more difficult to diagnose the artificially depleted empirical distributions, since one expects that a pure Pareto distribution is physically limited in some way. Using maximum likelihood methods and the method of moments, we estimate the power-law exponent and the corner size parameter of tapered Pareto distributions for several natural hazard examples: tsunamis, floods, and earthquakes. Each of these examples has varying catalog lengths and measurement thresholds, relative to the largest event sizes. In many cases where there are only several orders of magnitude between the measurement threshold and the largest events, joint two-parameter estimation techniques are necessary to account for estimation dependence between the power-law scaling exponent and the corner size parameter. Results indicate that whereas the corner size parameter of a tapered Pareto distribution can be estimated, its upper confidence bound cannot be determined, and the estimate itself is often unstable with time. Correspondingly, one cannot statistically reject a pure Pareto null hypothesis using natural hazard catalog data. Although physical limits on the hazard source size, together with attenuation mechanisms from source to site, constrain the maximum hazard size, historical data alone often cannot reliably determine the corner size parameter. Probabilistic assessments incorporating theoretical constraints on source size and propagation effects are preferred over deterministic assessments of extreme natural hazards based on historic data.
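
    As an illustration of the joint two-parameter estimation mentioned above, the sketch below writes the tapered Pareto log-likelihood (in the commonly used form with survival function S(x) = (t/x)^β exp((t−x)/θ)) and maximizes it numerically. The starting values and optimizer choice are assumptions, not the authors' procedure.

```python
import numpy as np
from scipy.optimize import minimize

def neg_loglik(params, x, t):
    """Negative log-likelihood of the tapered Pareto distribution with
    survival function S(x) = (t/x)^beta * exp((t - x)/theta) for x >= t,
    where beta is the power-law exponent and theta the corner size.
    The density is f(x) = (beta/x + 1/theta) * S(x)."""
    beta, theta = params
    if beta <= 0 or theta <= 0:
        return np.inf
    return -np.sum(np.log(beta / x + 1.0 / theta)
                   + beta * np.log(t / x) + (t - x) / theta)

def fit_tapered_pareto(x, t):
    """x: observed event sizes above the measurement threshold t."""
    res = minimize(neg_loglik, x0=[1.0, np.max(x)], args=(x, t),
                   method='Nelder-Mead')
    return res.x   # joint estimate of (beta, theta)
```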

  18. Impact of earthquake source complexity and land elevation data resolution on tsunami hazard assessment and fatality estimation

    NASA Astrophysics Data System (ADS)

    Muhammad, Ario; Goda, Katsuichiro

    2018-03-01

    This study investigates the impact of model complexity in source characterization and digital elevation model (DEM) resolution on the accuracy of tsunami hazard assessment and fatality estimation through a case study in Padang, Indonesia. Two types of earthquake source models, i.e. complex and uniform slip models, are adopted by considering three resolutions of DEMs, i.e. 150 m, 50 m, and 10 m. For each of the three grid resolutions, 300 complex source models are generated using new statistical prediction models of earthquake source parameters developed from extensive finite-fault models of past subduction earthquakes, whilst 100 uniform slip models are constructed with variable fault geometry without slip heterogeneity. The results highlight that significant changes to tsunami hazard and fatality estimates are observed with regard to earthquake source complexity and grid resolution. Coarse resolution (i.e. 150 m) leads to inaccurate tsunami hazard prediction and fatality estimation, whilst 50-m and 10-m resolutions produce similar results. However, velocity and momentum flux are sensitive to the grid resolution and hence, at least 10-m grid resolution needs to be implemented when considering flow-based parameters for tsunami hazard and risk assessments. In addition, the results indicate that the tsunami hazard parameters and fatality number are more sensitive to the complexity of earthquake source characterization than the grid resolution. Thus, the uniform models are not recommended for probabilistic tsunami hazard and risk assessments. Finally, the findings confirm that uncertainties of tsunami hazard level and fatality in terms of depth, velocity and momentum flux can be captured and visualized through the complex source modeling approach. From tsunami risk management perspectives, this indeed creates big data, which are useful for making effective and robust decisions.

  19. Features of Radiation and Propagation of Seismic Waves in the Northern Caucasus: Manifestations in Regional Coda

    NASA Astrophysics Data System (ADS)

    Kromskii, S. D.; Pavlenko, O. V.; Gabsatarova, I. P.

    2018-03-01

    Based on records from the Anapa (ANN) seismic station of 40 earthquakes (MW > 3.9) that occurred within 300 km of the station from 2002 to the present, the source parameters and the quality factor Q(f) of the Earth's crust and upper mantle are estimated for S-waves in the 1-8 Hz frequency band. Regional coda analysis techniques are employed, which allow separating the effects associated with the seismic source (source effects) from those associated with the propagation path of seismic waves (path effects). The Q-factor estimates are obtained in the form Q(f) = 90 × f^0.7 for epicentral distances r < 120 km and Q(f) = 90 × f^1.0 for r > 120 km. The established Q(f) and source parameters are close to the estimates for Central Japan, which is probably due to the similar tectonic structure of the regions. The shapes of the source spectra are found to be independent of earthquake magnitude in the range 3.9-5.6; however, the radiation of the high-frequency components (f > 4-5 Hz) is enhanced with source depth (down to h ≈ 60 km). The Q(f) estimates determined from the records of the Sochi, Anapa, and Kislovodsk seismic stations allowed a more accurate determination of the seismic moments and magnitudes of the Caucasian earthquakes. The studies will be continued to obtain Q(f) estimates, geometrical spreading functions, and the frequency-dependent amplification of seismic waves in the Earth's crust in other regions of the Northern Caucasus.
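
    A minimal sketch of the standard single-backscattering coda method that underlies Q(f) estimates of this kind is given below; the envelope model with t^-1 geometrical spreading is a common assumption, not necessarily the authors' exact procedure.

```python
import numpy as np

def coda_q(t, A, f):
    """Estimate coda Q at centre frequency f from smoothed coda envelope
    amplitudes A(t) (t measured from origin time), using the single-
    backscattering model A(t) = S(f) * t^-1 * exp(-pi * f * t / Q):
    ln(A * t) is linear in t with slope -pi * f / Q."""
    slope, _ = np.polyfit(t, np.log(A * t), 1)
    return -np.pi * f / slope

# Repeating this per frequency band and fitting log Q against log f
# yields the Q(f) = Q0 * f^n form quoted above (e.g., Q0 = 90, n = 0.7).
```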

  20. Wearable Sensor Localization Considering Mixed Distributed Sources in Health Monitoring Systems

    PubMed Central

    Wan, Liangtian; Han, Guangjie; Wang, Hao; Shu, Lei; Feng, Nanxing; Peng, Bao

    2016-01-01

    In health monitoring systems, the base station (BS) and the wearable sensors communicate with each other to construct a virtual multiple input and multiple output (VMIMO) system. In real applications, the signal that the BS receives is a distributed source because of scattering, reflection, diffraction and refraction in the propagation path. In this paper, a 2D direction-of-arrival (DOA) estimation algorithm for incoherently-distributed (ID) and coherently-distributed (CD) sources is proposed based on multiple VMIMO systems. ID and CD sources are separated through the second-order blind identification (SOBI) algorithm. The traditional estimating signal parameters via the rotational invariance technique (ESPRIT)-based algorithm is valid only for one-dimensional (1D) DOA estimation for the ID source. By constructing the signal subspace, two rotational invariance relationships are constructed, and we extend ESPRIT to estimate 2D DOAs for ID sources. For DOA estimation of CD sources, two rotational invariance relationships are constructed based on the application of generalized steering vectors (GSVs); the ESPRIT-based algorithm is then used to estimate the eigenvalues of the two rotational invariance matrices, which contain the angular parameters. The expressions for azimuth and elevation for ID and CD sources have closed forms, which means that spectrum peak searching is avoided. Therefore, compared to traditional 2D DOA estimation algorithms, the proposed algorithm has significantly lower computational complexity. The intersection point of two rays, which come from two different directions measured by two uniform rectangular arrays (URA), can be regarded as the location of the biosensor (wearable sensor). Three BSs adopting the smart antenna (SA) technique cooperate with each other to locate the wearable sensors using the angulation positioning method. Simulation results demonstrate the effectiveness of the proposed algorithm. PMID:26985896
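
    For orientation, the classic point-source 1D ESPRIT that the proposed method generalizes can be sketched in a few lines; the uniform-linear-array geometry and half-wavelength spacing are assumptions for illustration.

```python
import numpy as np

def esprit_doa(X, n_sources, d=0.5):
    """1D ESPRIT for a uniform linear array (spacing d in wavelengths).
    X: (n_sensors, n_snapshots) complex array data. Returns DOAs in degrees."""
    R = X @ X.conj().T / X.shape[1]            # sample covariance
    eigval, eigvec = np.linalg.eigh(R)
    Es = eigvec[:, -n_sources:]                # signal subspace
    # Rotational invariance between the two staggered subarrays:
    # Es[1:] ~ Es[:-1] @ Phi, with eig(Phi) = exp(j 2 pi d sin(theta))
    Phi = np.linalg.lstsq(Es[:-1], Es[1:], rcond=None)[0]
    phases = np.angle(np.linalg.eigvals(Phi))
    return np.degrees(np.arcsin(phases / (2 * np.pi * d)))
```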

  2. Audio visual speech source separation via improved context dependent association model

    NASA Astrophysics Data System (ADS)

    Kazemi, Alireza; Boostani, Reza; Sobhanmanesh, Fariborz

    2014-12-01

    In this paper, we exploit the non-linear relation between a speech source and its associated lip video as a source of extra information to propose an improved audio-visual speech source separation (AVSS) algorithm. The audio-visual association is modeled using a neural associator which estimates the visual lip parameters from a temporal context of acoustic observation frames. We define an objective function based on the mean square error (MSE) between estimated and target visual parameters, and minimize it to estimate the de-mixing vector/filters that separate the relevant source from linear instantaneous or time-domain convolutive mixtures. We also propose a hybrid criterion which uses AV coherency together with kurtosis as a non-Gaussianity measure. Experimental results are presented and compared in terms of visually relevant speech detection accuracy and output signal-to-interference ratio (SIR) of source separation. The suggested audio-visual model significantly improves relevant speech classification accuracy compared to the existing GMM-based model, and the proposed AVSS algorithm improves speech separation quality compared to reference ICA- and AVSS-based methods.
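
    A toy sketch of the kurtosis half of the paper's hybrid criterion is given below, assuming a 2x2 instantaneous mixture: a de-mixing vector is found by maximizing the absolute kurtosis of its output after whitening. The audio-visual coherency (MSE) term supplied by the neural associator is omitted, and the Laplacian stand-in for speech is purely illustrative.

      import numpy as np
      from scipy.optimize import minimize
      from scipy.stats import kurtosis

      rng = np.random.default_rng(1)
      n = 20000
      speech = rng.laplace(size=n)          # super-Gaussian stand-in for speech
      noise = rng.standard_normal(n)        # Gaussian interferer
      X = np.array([[0.8, 0.6], [0.4, 0.9]]) @ np.vstack([speech, noise])

      # Whiten the mixture (usual preprocessing for kurtosis contrasts)
      X = X - X.mean(axis=1, keepdims=True)
      dvals, E = np.linalg.eigh(np.cov(X))
      Z = (E * dvals ** -0.5) @ E.T @ X     # E diag(1/sqrt(d)) E^T x

      def contrast(w):
          w = w / np.linalg.norm(w)         # remove the scale ambiguity
          return -abs(kurtosis(w @ Z))      # maximize non-Gaussianity

      w = minimize(contrast, x0=np.array([1.0, 0.2]), method="Nelder-Mead").x
      w /= np.linalg.norm(w)
      y = w @ Z                             # recovered source (up to sign/scale)
      print(abs(np.corrcoef(y, speech)[0, 1]))   # close to 1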

  3. Sparse Method for Direction of Arrival Estimation Using Denoised Fourth-Order Cumulants Vector.

    PubMed

    Fan, Yangyu; Wang, Jianshu; Du, Rui; Lv, Guoyun

    2018-06-04

    Fourth-order cumulants (FOCs) vector-based direction of arrival (DOA) estimation methods for non-Gaussian sources may suffer from poor performance with limited snapshots or from difficulty in setting parameters. In this paper, a novel FOCs vector-based sparse DOA estimation method is proposed. Firstly, by utilizing the concept of a fourth-order difference co-array (FODCA), an advanced FOCs vector denoising or dimension reduction procedure is presented for arbitrary array geometries. Then, a novel single measurement vector (SMV) model is established from the denoised FOCs vector and efficiently solved by an off-grid sparse Bayesian inference (OGSBI) method. The estimation errors of the FOCs are integrated into the SMV model and approximately estimated in a simple way. A necessary condition on the number of identifiable sources is presented: in order to uniquely identify all sources, the number of sources K must satisfy K ≤ (M^4 - 2M^3 + 7M^2 - 6M)/8. The proposed method suits any array geometry, does not need prior knowledge of the number of sources, is insensitive to the associated parameters, and has maximum identifiability O(M^4), where M is the number of sensors in the array. Numerical simulations illustrate the superior performance of the proposed method.
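
    The identifiability bound quoted above is easy to evaluate; a short check of the O(M^4) growth, with illustrative sensor counts:

      # Upper bound quoted in the abstract: K <= (M^4 - 2M^3 + 7M^2 - 6M) / 8.
      def max_sources(M: int) -> int:
          return (M**4 - 2 * M**3 + 7 * M**2 - 6 * M) // 8

      for M in (4, 6, 8):
          print(M, max_sources(M))   # 4 -> 27, 6 -> 135, 8 -> 434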

  4. Near real-time estimation of the seismic source parameters in a compressed domain

    NASA Astrophysics Data System (ADS)

    Rodriguez, Ismael A. Vera

    Seismic events can be characterized by their origin time, location and moment tensor. Fast estimation of these source parameters is important in areas of geophysics like earthquake seismology and the monitoring of seismic activity produced by volcanoes, mining operations and hydraulic injections in geothermal and oil and gas reservoirs. Most available monitoring systems estimate the source parameters in a sequential procedure: first determining origin time and location (e.g., epicentre, hypocentre or centroid of the stress glut density), and then using this information to initialize the evaluation of the moment tensor. A more efficient estimation of the source parameters requires a concurrent evaluation of the three variables. The main objective of the present thesis is to address the simultaneous estimation of origin time, location and moment tensor of seismic events. The proposed method has the benefits of being: 1) automatic, 2) continuous and, depending on the scale of application, 3) capable of providing results in real-time or near real-time. The inversion algorithm is based on theoretical results from sparse representation theory and compressive sensing. The feasibility of implementation is determined through the analysis of synthetic and real data examples. The numerical experiments focus on the microseismic monitoring of hydraulic fractures in oil and gas wells; however, an example using real earthquake data is also presented for validation. The thesis is complemented with a resolvability analysis of the moment tensor, targeting common monitoring geometries employed in hydraulic fracturing in oil wells. Additionally, an application of sparse representation theory to the denoising of one-component and three-component microseismicity records is presented, together with an algorithm for improved automatic time-picking using non-linear inversion constraints.

  5. Centroid estimation for a Shack-Hartmann wavefront sensor based on stream processing.

    PubMed

    Kong, Fanpeng; Polo, Manuel Cegarra; Lambert, Andrew

    2017-08-10

    When using the center of gravity to estimate the centroid of the spot in a Shack-Hartmann wavefront sensor, the measurement is corrupted by photon and detector noise. Parameters like the window size often require careful optimization to balance noise error, dynamic range, and linearity of the response coefficient under different photon fluxes, and the method needs to be substituted by the correlation method for extended sources. We propose a centroid estimator based on stream processing, where the center-of-gravity calculation window floats with the incoming pixel stream from the detector. In comparison with conventional methods, we show that the proposed estimator simplifies the choice of optimized parameters, provides a unit linear response coefficient, and reduces the influence of background and noise. It is shown that the stream-based centroid estimator also works well for extended sources of limited size. A hardware implementation of the proposed estimator is discussed.
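
    For reference, a minimal numpy version of the conventional windowed center-of-gravity estimator that the stream-processing design improves on (the stream estimator instead floats this window with the incoming pixels); the spot, window and threshold values are illustrative:

      import numpy as np

      def cog_centroid(img, window=None, threshold=0.0):
          """Windowed center of gravity; window = (r0, r1, c0, c1)."""
          off = np.zeros(2)
          if window is not None:
              r0, r1, c0, c1 = window
              img = img[r0:r1, c0:c1]
              off = np.array([r0, c0], dtype=float)
          img = np.clip(img - threshold, 0.0, None)   # suppress background
          rows, cols = np.indices(img.shape)
          total = img.sum()
          return off + np.array([(rows * img).sum(), (cols * img).sum()]) / total

      # Gaussian spot centered at (12.3, 9.7) plus a little noise
      y, x = np.mgrid[0:24, 0:24]
      spot = np.exp(-((y - 12.3) ** 2 + (x - 9.7) ** 2) / 4.0)
      spot += 0.01 * np.random.default_rng(2).standard_normal(spot.shape)
      print(cog_centroid(spot, window=(6, 18, 4, 16), threshold=0.02))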

  6. Simultaneous emission and transmission scanning in PET oncology: the effect on parameter estimation

    NASA Astrophysics Data System (ADS)

    Meikle, S. R.; Eberl, S.; Hooper, P. K.; Fulham, M. J.

    1997-02-01

    The authors investigated potential sources of bias due to simultaneous emission and transmission (SET) scanning and their effect on parameter estimation in dynamic positron emission tomography (PET) oncology studies. The sources of bias considered include: i) variation in transmission spillover (into the emission window) throughout the field of view, ii) increased scatter arising from rod sources, and iii) inaccurate deadtime correction. Net bias was calculated as a function of the emission count rate and used to predict distortion in [18F]2-fluoro-2-deoxy-D-glucose (FDG) and [11C]thymidine tissue curves simulating the normal liver and metastatic involvement of the liver. The effect on parameter estimates was assessed by spectral analysis and compartmental modeling. The various sources of bias approximately cancel during the early part of the study when the count rate is maximal. Scatter dominates in the latter part of the study, causing apparently decreased tracer clearance which is more marked for thymidine than for FDG. The irreversible disposal rate constant, K_i, was overestimated by <10% for FDG and >30% for thymidine. The authors conclude that SET has a potential role in dynamic FDG PET but is not suitable for 11C-labeled compounds.

  7. Quasar microlensing models with constraints on the Quasar light curves

    NASA Astrophysics Data System (ADS)

    Tie, S. S.; Kochanek, C. S.

    2018-01-01

    Quasar microlensing analyses implicitly generate a model of the variability of the source quasar. The implied source variability may be unrealistic, yet its likelihood is generally not evaluated. We used the damped random walk (DRW) model for quasar variability to evaluate the likelihood of the source variability and applied the revised algorithm to a microlensing analysis of the lensed quasar RX J1131-1231. We compared estimates of the size of the quasar disc and the average stellar mass of the lens galaxy with and without applying the DRW likelihoods for the source variability model and found no significant effect on the estimated physical parameters. The most likely explanation is that unrealistic source light-curve models are generally associated with poor microlensing fits that already make a negligible contribution to the probability distributions of the derived parameters.
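
    The DRW is an Ornstein-Uhlenbeck process, so a light curve can be scored or simulated with exact Gaussian transition densities between irregular epochs. A minimal simulation sketch, with tau and SF_inf as illustrative values rather than fits to RX J1131-1231:

      import numpy as np

      def simulate_drw(t, tau=200.0, sf_inf=0.3, mean_mag=19.0, seed=3):
          """Exact OU updates between (possibly irregular) epochs t (days)."""
          rng = np.random.default_rng(seed)
          x = np.empty_like(t)
          x[0] = mean_mag + sf_inf / np.sqrt(2) * rng.standard_normal()
          for i in range(1, len(t)):
              a = np.exp(-(t[i] - t[i - 1]) / tau)    # decay toward the mean
              var = (sf_inf ** 2 / 2) * (1 - a ** 2)  # exact transition variance
              x[i] = (mean_mag + a * (x[i - 1] - mean_mag)
                      + np.sqrt(var) * rng.standard_normal())
          return x

      t = np.sort(np.random.default_rng(4).uniform(0, 2000, 300))
      lc = simulate_drw(t)   # the same densities give the DRW likelihood of a light curve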

  8. Yield and depth Estimation of Selected NTS Nuclear and SPE Chemical Explosions Using Source Equalization by modeling Local and Regional Seismograms (Invited)

    NASA Astrophysics Data System (ADS)

    Saikia, C. K.; Roman-nieves, J. I.; Woods, M. T.

    2013-12-01

    Source parameters of nuclear and chemical explosions are often estimated by matching either the corner frequency and spectral level of a single event, or the spectral ratio when spectra from two events are available and the source parameters of one are known. In this study, we propose an alternative method in which waveforms from two or more events are simultaneously equalized by setting the difference of the processed seismograms recorded at one station from any two individual events to zero. The method involves convolving the equivalent Mueller-Murphy displacement source time function (MMDSTF) of one event with the seismogram of the second event and vice versa, and then computing their difference seismogram. The MMDSTF is computed at the elastic radius, including both near- and far-field terms. For this method to yield accurate source parameters, an inherent assumption is that the Green's functions from the source to a receiver are the same for any paired events. In the frequency limit of the seismic data this is a reasonable assumption, based on the comparison of Green's functions computed for flat-earth models at source depths ranging from 100 m to 1 km. Frequency-domain analysis of the initial P wave is, however, sensitive to the depth-phase interaction, and if tracked meticulously it can help estimate the event depth. We applied this method to the local waveforms recorded from the three SPE shots and precisely determined their yields. These high-frequency seismograms exhibit significant lateral path effects in spectrogram analysis and 3D numerical computations, but the source equalization technique is independent of any such variation as long as the instrument characteristics are well preserved. We are currently estimating the uncertainty in the derived source parameters, treating the yields of the SPE shots as unknown. We also collected regional waveforms from 95 NTS explosions at regional stations ALQ, ANMO, CMB, COR, JAS, LON, PAS, PFO and RSSD. We are currently employing a station-based analysis using the equalization technique to estimate depths and yields of many explosions relative to those of the announced explosions, and to develop their relationship with Mw and Mo for the NTS explosions.
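
    The equalization step reduces to convolution algebra: with a shared Green's function G, (G * s_b) * s_a equals (G * s_a) * s_b, so the difference seismogram vanishes when the trial source time functions are correct. A minimal numpy sketch, with Hanning pulses standing in for the Mueller-Murphy source time functions:

      import numpy as np

      def equalization_misfit(seis_a, seis_b, stf_a, stf_b):
          ua = np.convolve(seis_b, stf_a)   # (G * s_b) * s_a
          ub = np.convolve(seis_a, stf_b)   # (G * s_a) * s_b
          n = min(len(ua), len(ub))
          return np.sum((ua[:n] - ub[:n]) ** 2)

      rng = np.random.default_rng(5)
      G = rng.standard_normal(200)          # shared path response
      stf_a_true = 2.0 * np.hanning(20)     # toy source pulses (stand-ins
      stf_b_true = np.hanning(35)           # for the MMDSTFs)
      seis_a = np.convolve(G, stf_a_true)
      seis_b = np.convolve(G, stf_b_true)
      print(equalization_misfit(seis_a, seis_b, stf_a_true, stf_b_true))  # ~ 0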

  9. Assimilation of concentration measurements for retrieving multiple point releases in atmosphere: A least-squares approach to inverse modelling

    NASA Astrophysics Data System (ADS)

    Singh, Sarvesh Kumar; Rani, Raj

    2015-10-01

    The study addresses the identification of multiple point sources, emitting the same tracer, from a limited set of merged concentration measurements. The identification, here, refers to the estimation of locations and strengths of a known number of simultaneous point releases. The source-receptor relationship is described in the framework of adjoint modelling by using an analytical Gaussian dispersion model. A least-squares minimization framework, free from any initialization of the release parameters (locations and strengths), is presented to estimate the release parameters. This utilizes the distributed source information observable from the given monitoring design and number of measurements. The technique leads to an exact retrieval of the true release parameters when measurements are noise-free and exactly described by the dispersion model. The inversion algorithm is evaluated using real data from multiple (two, three and four) releases conducted during the Fusion Field Trials in September 2007 at Dugway Proving Ground, Utah. The release locations are retrieved, on average, within 25-45 m of the true sources, with the distance from retrieved to true source ranging from 0 to 130 m. The release strengths are estimated within a factor of three of the true release rates. The average deviations in the retrieval of source locations are relatively large in the two-release trials in comparison to the three- and four-release trials.
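
    Because the release strength enters the Gaussian dispersion model linearly once a candidate location is fixed, an initialization-free least-squares retrieval can be sketched for the single-source case: scan a location grid, solve for the strength in closed form, and keep the smallest residual. The plume constants and geometry below are illustrative, not those of the Fusion Field Trials analysis:

      import numpy as np

      def gauss_plume(src, q, receptors, u=3.0):
          """Crude ground-level Gaussian plume, wind along +x (illustrative)."""
          dx = receptors[:, 0] - src[0]
          dy = receptors[:, 1] - src[1]
          c = np.zeros(len(receptors))
          down = dx > 1.0                   # only downwind receptors see the plume
          sy, sz = 0.08 * dx[down], 0.06 * dx[down]
          c[down] = q / (np.pi * u * sy * sz) * np.exp(-dy[down] ** 2 / (2 * sy ** 2))
          return c

      rng = np.random.default_rng(6)
      receptors = rng.uniform([200.0, -150.0], [900.0, 150.0], size=(30, 2))
      obs = gauss_plume((50.0, 20.0), 5.0, receptors)   # synthetic truth

      best = None
      for gx in np.linspace(0.0, 150.0, 31):
          for gy in np.linspace(-60.0, 60.0, 25):
              h = gauss_plume((gx, gy), 1.0, receptors) # unit-strength response
              if h @ h == 0.0:
                  continue
              q = max(obs @ h / (h @ h), 0.0)           # closed-form strength
              r = np.sum((obs - q * h) ** 2)
              if best is None or r < best[0]:
                  best = (r, gx, gy, q)
      print(best)   # residual ~ 0 at location (50, 20), strength 5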

  10. Part 2. Development of Enhanced Statistical Methods for Assessing Health Effects Associated with an Unknown Number of Major Sources of Multiple Air Pollutants.

    PubMed

    Park, Eun Sug; Symanski, Elaine; Han, Daikwon; Spiegelman, Clifford

    2015-06-01

    A major difficulty with assessing source-specific health effects is that source-specific exposures cannot be measured directly; rather, they need to be estimated by a source-apportionment method such as multivariate receptor modeling. The uncertainty in source apportionment (uncertainty in source-specific exposure estimates and model uncertainty due to the unknown number of sources and identifiability conditions) has been largely ignored in previous studies. Also, spatial dependence of multipollutant data collected from multiple monitoring sites has not yet been incorporated into multivariate receptor modeling. The objectives of this project are (1) to develop a multipollutant approach that incorporates both sources of uncertainty in source apportionment into the assessment of source-specific health effects and (2) to develop enhanced multivariate receptor models that can account for spatial correlations in the multipollutant data collected from multiple sites. We employed a Bayesian hierarchical modeling framework consisting of multivariate receptor models, health-effects models, and a hierarchical model on latent source contributions. For the health model, we focused on the time-series design in this project. Each combination of number of sources and identifiability conditions (additional constraints on model parameters) defines a different model. We built a set of plausible models with extensive exploratory data analyses and with information from previous studies, and then computed posterior model probability to estimate model uncertainty. Parameter estimation and model uncertainty estimation were implemented simultaneously by Markov chain Monte Carlo (MCMC) methods. We validated the methods using simulated data. We illustrated the methods using PM2.5 (particulate matter ≤ 2.5 μm in aerodynamic diameter) speciation data and mortality data from Phoenix, Arizona, and Houston, Texas. The Phoenix data included counts of cardiovascular deaths and daily PM2.5 speciation data from 1995 to 1997. The Houston data included respiratory mortality data and 24-hour PM2.5 speciation data sampled every six days from a region near the Houston Ship Channel during 2002-2005. We also developed a Bayesian spatial multivariate receptor modeling approach that, while simultaneously dealing with the unknown number of sources and identifiability conditions, incorporated spatial correlations in the multipollutant data collected from multiple sites into the estimation of source profiles and contributions, based on the discrete process convolution model for multivariate spatial processes. This new modeling approach was applied to 24-hour ambient air concentrations of 17 volatile organic compounds (VOCs) measured at nine monitoring sites in Harris County, Texas, during 2000 to 2005. Simulation results indicated that our methods were accurate in identifying the true model and that estimated parameters were close to the true values. The results from our methods agreed in general with previous studies on the source apportionment of the Phoenix data in terms of estimated source profiles and contributions. However, we had a greater number of statistically insignificant findings, which was likely a natural consequence of incorporating uncertainty in the estimated source contributions into the health-effects parameter estimation.
For the Houston data, a model with five sources (that seemed to be Sulfate-Rich Secondary Aerosol, Motor Vehicles, Industrial Combustion, Soil/Crustal Matter, and Sea Salt) showed the highest posterior model probability among the candidate models considered when fitted simultaneously to the PM2.5 and mortality data. There was a statistically significant positive association between respiratory mortality and same-day PM2.5 concentrations attributed to one of the sources (probably industrial combustion). The Bayesian spatial multivariate receptor modeling approach applied to the VOC data led to a highest posterior model probability for a model with five sources (that seemed to be refinery, petrochemical production, gasoline evaporation, natural gas, and vehicular exhaust) among several candidate models, with the number of sources varying between three and seven and with different identifiability conditions. Our multipollutant approach assessing source-specific health effects is more advantageous than a single-pollutant approach in that it can estimate total health effects from multiple pollutants and can also identify emission sources that are responsible for adverse health effects. Our Bayesian approach can incorporate not only uncertainty in the estimated source contributions, but also model uncertainty that has not been addressed in previous studies on assessing source-specific health effects. The new Bayesian spatial multivariate receptor modeling approach enables predictions of source contributions at unmonitored sites, minimizing exposure misclassification and providing improved exposure estimates along with their uncertainty estimates, as well as accounting for uncertainty in the number of sources and identifiability conditions.

  11. Non-linear Parameter Estimates from Non-stationary MEG Data

    PubMed Central

    Martínez-Vargas, Juan D.; López, Jose D.; Baker, Adam; Castellanos-Dominguez, German; Woolrich, Mark W.; Barnes, Gareth

    2016-01-01

    We demonstrate a method to estimate key electrophysiological parameters from resting state data. In this paper, we focus on the estimation of head-position parameters. The recovery of these parameters is especially challenging as they are non-linearly related to the measured field. To do this, we use an empirical Bayesian scheme to estimate the cortical current distribution due to a range of laterally shifted head-models. We compare different methods of approaching this problem, from dividing the M/EEG data into stationary sections and performing separate source inversions, to explaining all of the M/EEG data with a single inversion. We demonstrate this through estimation of head position in both simulated and empirical resting state MEG data collected using a head-cast. PMID:27597815

  12. Dynamic Statistical Characterization of Variation in Source Processes of Microseismic Events

    NASA Astrophysics Data System (ADS)

    Smith-Boughner, L.; Viegas, G. F.; Urbancic, T.; Baig, A. M.

    2015-12-01

    During a hydraulic fracture, water is pumped at high pressure into a formation. A proppant, typically sand, is later injected in the hope that it will make its way into a fracture, keep it open and provide a path for the hydrocarbon to enter the well. This injection can create micro-earthquakes, generated by deformation within the reservoir during treatment. When these injections are monitored, thousands of microseismic events are recorded within several hundred cubic meters. For each well-located event, many source parameters are estimated, e.g., stress drop, Savage-Wood efficiency and apparent stress. However, because we are evaluating outputs from a power-law process, the extent to which the failure is impacted by fluid injection or stress triggering is not immediately clear. To better detect differences in source processes, we use a set of dynamic statistical parameters which characterize various force-balance assumptions using the average distance to the nearest event, the event rate, the volume enclosed by the events, and the cumulative moment and energy from a group of events. One parameter, the Fracability index, approximates the ratio of viscous to elastic forcing and highlights differences in the response time of a rock to changes in stress. These dynamic parameters are applied to a database of more than 90,000 events in a shale-gas play in the Horn River Basin to characterize spatial-temporal variations in the source processes. In order to resolve these differences, a moving-window, nearest-neighbour approach was used. First, the center of mass of the local distribution was estimated for several source parameters. Then, a set of dynamic parameters which characterize the response of the rock was estimated. These techniques reveal changes in seismic efficiency and apparent stress that often coincide with marked changes in the Fracability index and other dynamic statistical parameters. Utilizing these approaches allowed for the characterization of fluid-injection-related processes.

  13. Strong Ground Motion Simulation and Source Modeling of the April 1, 2006 Tai-Tung Earthquake Using Empirical Green's Function Method

    NASA Astrophysics Data System (ADS)

    Huang, H.; Lin, C.

    2010-12-01

    The Tai-Tung earthquake (ML = 6.2) occurred in the southeastern part of Taiwan on April 1, 2006. We examine the source model of this event using the seismograms observed by the CWBSN at five stations surrounding the source area. An objective estimation method was used to obtain the parameters N and C which are needed for the empirical Green's function method of Irikura (1986). This method, called the "source spectral ratio fitting method", gives estimates of the seismic moment ratio between a large and a small event and their corner frequencies by fitting the observed source spectral ratio with the ratio of source spectra that follows the omega-square source model (Miyake et al., 1999). The method has the advantage of removing site effects in evaluating the parameters. The best source model of the 2006 Tai-Tung mainshock was estimated by comparing the observed waveforms with synthetics computed using the empirical Green's function method. The size of the asperity is about 3.5 km in length along the strike direction by 7.0 km in width along the dip direction. The rupture started at the left-bottom of the asperity and extended radially to the right-upper direction.
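
    A minimal version of the spectral ratio fit can be sketched with scipy, assuming the omega-square ratio form of Miyake et al. (1999); the synthetic ratio below stands in for the CWBSN observations, and the EGF scaling parameters follow from the fitted moment ratio and corner frequencies:

      import numpy as np
      from scipy.optimize import curve_fit

      def spectral_ratio(f, m0_ratio, fc_main, fc_egf):
          # Omega-square source spectral ratio (Miyake et al., 1999)
          return m0_ratio * (1 + (f / fc_egf) ** 2) / (1 + (f / fc_main) ** 2)

      f = np.logspace(-1, 1.3, 80)                 # 0.1-20 Hz
      obs = spectral_ratio(f, 400.0, 0.5, 4.0)     # synthetic "observed" ratio
      obs *= np.exp(0.1 * np.random.default_rng(7).standard_normal(f.size))

      (m0_ratio, fc_main, fc_egf), _ = curve_fit(spectral_ratio, f, obs,
                                                 p0=(100.0, 1.0, 5.0))
      N = round(fc_egf / fc_main)                  # Irikura scaling: M0L/M0S = C N^3
      C = m0_ratio / N ** 3
      print(m0_ratio, fc_main, fc_egf, N, C)       # ~ 400, 0.5, 4.0, 8, 0.78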

  14. Strong Ground Motion Simulation and Source Modeling of the December 16, 1993 Tapu Earthquake, Taiwan, Using Empirical Green's Function Method

    NASA Astrophysics Data System (ADS)

    Huang, H.-C.; Lin, C.-Y.

    2012-04-01

    The Tapu earthquake (ML 5.7) occurred in the southwestern part of Taiwan on December 16, 1993. We examine the source model of this event using the seismograms observed by the CWBSN at eight stations surrounding the source area. An objective estimation method is used to obtain the parameters N and C which are needed for the empirical Green's function method of Irikura (1986). This method, called the "source spectral ratio fitting method", gives estimates of the seismic moment ratio between a large and a small event and their corner frequencies by fitting the observed source spectral ratio with the ratio of source spectra that follows the omega-square source model (Miyake et al., 1999). The method has the advantage of removing site effects in evaluating the parameters. The best source model of the 1993 Tapu mainshock is estimated by comparing the observed waveforms with the synthetic ones computed using the empirical Green's function method. The size of the asperity is about 2.1 km in length along the strike direction by 1.5 km in width along the dip direction. The rupture started at the right-bottom of the asperity and extended radially to the left-upper direction.

  15. Strong Ground Motion Simulation and Source Modeling of the December 16, 1993 Tapu Earthquake, Taiwan, Using Empirical Green's Function Method

    NASA Astrophysics Data System (ADS)

    Huang, H.; Lin, C.

    2012-12-01

    The Tapu earthquake (ML 5.7) occurred in the southwestern part of Taiwan on December 16, 1993. We examine the source model of this event using the seismograms observed by the CWBSN at eight stations surrounding the source area. An objective estimation method is used to obtain the parameters N and C which are needed for the empirical Green's function method of Irikura (1986). This method, called the "source spectral ratio fitting method", gives estimates of the seismic moment ratio between a large and a small event and their corner frequencies by fitting the observed source spectral ratio with the ratio of source spectra that follows the omega-square source model (Miyake et al., 1999). The method has the advantage of removing site effects in evaluating the parameters. The best source model of the 1993 Tapu mainshock is estimated by comparing the observed waveforms with the synthetic ones computed using the empirical Green's function method. The size of the asperity is about 2.1 km in length along the strike direction by 1.5 km in width along the dip direction. The rupture started at the right-bottom of the asperity and extended radially to the left-upper direction.

  16. Optimization of Transmit Parameters in Cardiac Strain Imaging With Full and Partial Aperture Coherent Compounding.

    PubMed

    Sayseng, Vincent; Grondin, Julien; Konofagou, Elisa E

    2018-05-01

    Coherent compounding methods using the full or partial transmit aperture have been investigated as a possible means of increasing strain measurement accuracy in cardiac strain imaging; however, the optimal transmit parameters in either compounding approach have yet to be determined. The relationship between strain estimation accuracy and transmit parameters, specifically the subaperture, angular aperture, tilt angle, number of virtual sources, and frame rate, in partial-aperture (subaperture compounding) and full-aperture (steered compounding) fundamental-mode cardiac imaging was thus investigated and compared. A Field II simulation of a 3-D cylindrical annulus undergoing deformation and twist was developed to evaluate the accuracy of 2-D strain estimation in cross-sectional views. The tradeoff between frame rate and number of virtual sources was then investigated via transthoracic imaging in the parasternal short-axis view of five healthy human subjects, using the strain filter to quantify estimation precision. Finally, the optimized subaperture compounding sequence (25-element subaperture, 90° angular aperture, 10 virtual sources, 300-Hz frame rate) was compared to the optimized steered compounding sequence (60° angular aperture, 15° tilt, 10 virtual sources, 300-Hz frame rate) via transthoracic imaging of five healthy subjects. Both approaches were determined to estimate cumulative radial strain with statistically equivalent precision (subaperture compounding E(SNRe %) = 3.56, and steered compounding E(SNRe %) = 4.26).

  17. Coda Q Attenuation and Source Parameters Analysis in North East India Using Local Earthquakes

    NASA Astrophysics Data System (ADS)

    Mohapatra, A. K.; Mohanty, W. K.; Earthquake Seismology

    2010-12-01

    In the present study, the quality factor of coda waves (Qc) and the source parameters have been estimated for Northeastern India using digital data from ten local earthquakes recorded between April 2001 and November 2002, with magnitudes ranging from 3.8 to 4.9. The time-domain coda decay method of a single back-scattering model is used to calculate frequency-dependent values of coda Q (Qc), whereas the source parameters, such as seismic moment (Mo), stress drop, source radius (r), radiant energy (Wo) and strain drop, are estimated from the displacement amplitude spectra of body waves using Brune's model. Qc is estimated at six central frequencies: 1.5, 3.0, 6.0, 9.0, 12.0 and 18.0 Hz. The Qc values of local earthquakes are estimated to understand the attenuation characteristics, source parameters and tectonic activity of the region. Based on criteria of homogeneity in the geological characteristics and the constraints imposed by the distribution of available events, the study region has been classified into three zones: the Tibetan Plateau Zone (TPZ), the Bengal Alluvium and Arakan-Yuma Zone (BAZ), and the Shillong Plateau Zone (SPZ). Qc follows the power law Qc = Q0 (f/f0)^n, where Q0 is the quality factor at the reference frequency f0 (1 Hz) and n is the frequency parameter, which varies from region to region. The mean values of Qc reveal a dependence on frequency, varying from 292.9 at 1.5 Hz to 4880.1 at 18 Hz. The average frequency-dependent relationship obtained for Northeastern India is Qc = 198 f^1.035, while the relationship varies from region to region: TPZ, Qc = 226 f^1.11; BAZ, Qc = 301 f^0.87; SPZ, Qc = 126 f^0.85. This indicates that Northeastern India is seismically active; comparing the three zones, the Shillong Plateau Zone (Qc = 126 f^0.85) is the most seismically active, the Bengal Alluvium and Arakan-Yuma Zone is the least active, and the Tibetan Plateau Zone is intermediate. This study may be useful for seismic hazard assessment. The estimated seismic moments (Mo) range from 5.98×10^20 to 3.88×10^23 dyne-cm, the source radii (r) are confined between 152 and 1750 m, the stress drops range from 0.0003×10^3 to 1.04×10^3 bar, the average radiant energy is 82.57×10^18 ergs, and the strain drops range from 0.00602×10^-9 to 2.48×10^-9. The estimated stress drop values for NE India are scattered for larger seismic moments, whereas they show a more systematic behaviour for smaller seismic moments. The estimated source parameters are in agreement with previous work in this type of tectonic setting. Key words: coda waves, seismic source parameters, lapse time, single back-scattering model, Brune's model, stress drop, Northeast India.
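
    The power law Qc = Q0 (f/f0)^n is a straight line in log-log space, so Q0 and n follow from a linear fit; a minimal sketch with illustrative Qc values chosen to match the reported average relationship:

      import numpy as np

      f = np.array([1.5, 3.0, 6.0, 9.0, 12.0, 18.0])          # center freqs, Hz
      qc = np.array([301.0, 617.0, 1265.0, 1921.0, 2592.0, 3940.0])

      n, log_q0 = np.polyfit(np.log10(f), np.log10(qc), 1)    # line in log-log
      print(f"Qc = {10 ** log_q0:.0f} f^{n:.3f}")             # ~ Qc = 198 f^1.035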

  18. An almost-parameter-free harmony search algorithm for groundwater pollution source identification.

    PubMed

    Jiang, Simin; Zhang, Yali; Wang, Pei; Zheng, Maohui

    2013-01-01

    The spatiotemporal characterization of unknown sources of groundwater pollution is frequently encountered in environmental problems. This study adopts a simulation-optimization approach that combines a contaminant transport simulation model with a heuristic harmony search algorithm to identify unknown pollution sources. In the proposed methodology, an almost-parameter-free harmony search algorithm is developed. The performance of this methodology is evaluated on an illustrative groundwater pollution source identification problem, and the results indicate that the proposed optimization model based on the almost-parameter-free harmony search algorithm can give satisfactory estimates, even when irregular geometry, erroneous monitoring data, and a shortage of prior information on potential locations are considered.
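
    A standard harmony search loop is easy to sketch; the variant in the paper makes the algorithm almost parameter-free, whereas the rates below are fixed and the quadratic misfit merely stands in for the contaminant transport simulation:

      import numpy as np

      rng = np.random.default_rng(8)
      lo = np.array([0.0, 0.0, 0.1])        # source x, y, strength bounds
      hi = np.array([100.0, 100.0, 50.0])
      truth = np.array([37.0, 62.0, 12.5])

      def misfit(p):
          # Stand-in for the misfit between simulated and observed
          # concentrations at the monitoring wells.
          return np.sum((p - truth) ** 2)

      hms, hmcr, par = 20, 0.9, 0.3         # memory size, memory rate, pitch rate
      bw = 0.05 * (hi - lo)                 # pitch-adjustment bandwidth
      hm = rng.uniform(lo, hi, size=(hms, 3))
      fit = np.array([misfit(h) for h in hm])

      for _ in range(5000):
          new = np.empty(3)
          for j in range(3):
              if rng.random() < hmcr:
                  new[j] = hm[rng.integers(hms), j]     # memory consideration
                  if rng.random() < par:                # pitch adjustment
                      new[j] += bw[j] * rng.uniform(-1, 1)
              else:
                  new[j] = rng.uniform(lo[j], hi[j])    # random consideration
          new = np.clip(new, lo, hi)
          worst = int(np.argmax(fit))
          f = misfit(new)
          if f < fit[worst]:                            # replace worst harmony
              hm[worst], fit[worst] = new, f

      print(hm[np.argmin(fit)])   # approaches (37, 62, 12.5)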

  19. Using Satellite Observations to Evaluate the AeroCOM Volcanic Emissions Inventory and the Dispersal of Volcanic SO2 Clouds in MERRA

    NASA Technical Reports Server (NTRS)

    Hughes, Eric J.; Krotkov, Nickolay; da Silva, Arlindo; Colarco, Peter

    2015-01-01

    Simulation of volcanic emissions in climate models requires information that describes the eruption of the emissions into the atmosphere. While the total amount of gases and aerosols released from a volcanic eruption can be readily estimated from satellite observations, information about the source parameters, like injection altitude, eruption time and duration, is often not directly known. The AeroCOM volcanic emissions inventory provides estimates of eruption source parameters and has been used to initialize volcanic emissions in reanalysis projects like MERRA. The AeroCOM volcanic emission inventory provides an eruption's daily SO2 flux and plume-top altitude, yet an eruption can be very short lived, lasting only a few hours, and can emit clouds at multiple altitudes. Case studies comparing the satellite-observed dispersal of volcanic SO2 clouds to simulations in MERRA have shown mixed results. Some cases, such as Okmok (2008), show good agreement with observations, while for other eruptions, such as Sierra Negra (2005), the observed initial SO2 mass is half of that in the simulations. In still other cases, such as Soufriere Hills (2006), the initial SO2 amount agrees with the observations but shows very different dispersal rates. In the aviation hazards community, deriving accurate source terms is crucial for monitoring and short-term (24-h) forecasting of volcanic clouds. Back-trajectory methods have been developed which use satellite observations and transport models to estimate the injection altitude, eruption time, and eruption duration of observed volcanic clouds. These methods can provide eruption timing estimates at a 2-hour temporal resolution and can estimate the altitude and depth of a volcanic cloud. To better understand the differences between MERRA simulations and volcanic SO2 observations, back-trajectory methods are used to estimate the source term parameters for a few volcanic eruptions, which are compared to their corresponding entries in the AeroCOM volcanic emission inventory. The nature of the mixed results is discussed with respect to the source term estimates.

  20. Efficient Bayesian experimental design for contaminant source identification

    NASA Astrophysics Data System (ADS)

    Zhang, Jiangjiang; Zeng, Lingzao; Chen, Cheng; Chen, Dingjiang; Wu, Laosheng

    2015-01-01

    In this study, an efficient full Bayesian approach is developed for the optimal sampling well location design and source parameter identification of groundwater contaminants. An information measure, i.e., the relative entropy, is employed to quantify the information gain from concentration measurements in identifying unknown parameters. In this approach, the sampling locations that give the maximum expected relative entropy are selected as the optimal design. After the sampling locations are determined, a Bayesian approach based on Markov Chain Monte Carlo (MCMC) is used to estimate unknown parameters. In both the design and estimation, the contaminant transport equation is required to be solved many times to evaluate the likelihood. To reduce the computational burden, an interpolation method based on the adaptive sparse grid is utilized to construct a surrogate for the contaminant transport equation. The approximated likelihood can be evaluated directly from the surrogate, which greatly accelerates the design and estimation process. The accuracy and efficiency of our approach are demonstrated through numerical case studies. It is shown that the methods can be used to assist in both single sampling location and monitoring network design for contaminant source identification in groundwater.

  1. Gravitational waves: search results, data analysis and parameter estimation: Amaldi 10 Parallel session C2.

    PubMed

    Astone, Pia; Weinstein, Alan; Agathos, Michalis; Bejger, Michał; Christensen, Nelson; Dent, Thomas; Graff, Philip; Klimenko, Sergey; Mazzolo, Giulio; Nishizawa, Atsushi; Robinet, Florent; Schmidt, Patricia; Smith, Rory; Veitch, John; Wade, Madeline; Aoudia, Sofiane; Bose, Sukanta; Calderon Bustillo, Juan; Canizares, Priscilla; Capano, Colin; Clark, James; Colla, Alberto; Cuoco, Elena; Da Silva Costa, Carlos; Dal Canton, Tito; Evangelista, Edgar; Goetz, Evan; Gupta, Anuradha; Hannam, Mark; Keitel, David; Lackey, Benjamin; Logue, Joshua; Mohapatra, Satyanarayan; Piergiovanni, Francesco; Privitera, Stephen; Prix, Reinhard; Pürrer, Michael; Re, Virginia; Serafinelli, Roberto; Wade, Leslie; Wen, Linqing; Wette, Karl; Whelan, John; Palomba, C; Prodi, G

    The Amaldi 10 Parallel Session C2 on gravitational wave (GW) search results, data analysis and parameter estimation included three lively sessions of lectures by 13 presenters, and 34 posters. The talks and posters covered a huge range of material, including results and analysis techniques for ground-based GW detectors, targeting anticipated signals from different astrophysical sources: compact binary inspiral, merger and ringdown; GW bursts from intermediate mass binary black hole mergers, cosmic string cusps, core-collapse supernovae, and other unmodeled sources; continuous waves from spinning neutron stars; and a stochastic GW background. There was considerable emphasis on Bayesian techniques for estimating the parameters of coalescing compact binary systems from the gravitational waveforms extracted from the data from the advanced detector network. This included methods to distinguish deviations of the signals from what is expected in the context of General Relativity.

  2. Gravitational Waves: Search Results, Data Analysis and Parameter Estimation. Amaldi 10 Parallel Session C2

    NASA Technical Reports Server (NTRS)

    Astone, Pia; Weinstein, Alan; Agathos, Michalis; Bejger, Michal; Christensen, Nelson; Dent, Thomas; Graff, Philip; Klimenko, Sergey; Mazzolo, Giulio; Nishizawa, Atsushi

    2015-01-01

    The Amaldi 10 Parallel Session C2 on gravitational wave(GW) search results, data analysis and parameter estimation included three lively sessions of lectures by 13 presenters, and 34 posters. The talks and posters covered a huge range of material, including results and analysis techniques for ground-based GW detectors, targeting anticipated signals from different astrophysical sources: compact binary inspiral, merger and ringdown; GW bursts from intermediate mass binary black hole mergers, cosmic string cusps, core-collapse supernovae, and other unmodeled sources; continuous waves from spinning neutron stars; and a stochastic GW background. There was considerable emphasis on Bayesian techniques for estimating the parameters of coalescing compact binary systems from the gravitational waveforms extracted from the data from the advanced detector network. This included methods to distinguish deviations of the signals from what is expected in the context of General Relativity.

  3. Using Indirect Turbulence Measurements for Real-Time Parameter Estimation in Turbulent Air

    NASA Technical Reports Server (NTRS)

    Martos, Borja; Morelli, Eugene A.

    2012-01-01

    The use of indirect turbulence measurements for real-time estimation of parameters in a linear longitudinal dynamics model in atmospheric turbulence was studied. It is shown that measuring the atmospheric turbulence makes it possible to treat the turbulence as a measured explanatory variable in the parameter estimation problem. Commercial off-the-shelf sensors were researched and evaluated, then compared to air data booms. Sources of colored noise in the explanatory variables resulting from typical turbulence measurement techniques were identified and studied; a major source was identified as frequency-dependent upwash and time delay. The resulting upwash and time delay corrections were analyzed and compared to previous time-shift dynamic modeling research. Simulation data as well as flight test data in atmospheric turbulence were used to verify the time delay behavior. Recommendations are given for follow-on flight research and instrumentation.

  4. Hydrological parameter estimations from a conservative tracer test with variable-density effects at the Boise Hydrogeophysical Research Site

    NASA Astrophysics Data System (ADS)

    Dafflon, B.; Barrash, W.; Cardiff, M.; Johnson, T. C.

    2011-12-01

    Reliable predictions of groundwater flow and solute transport require an estimation of the detailed distribution of the parameters (e.g., hydraulic conductivity, effective porosity) controlling these processes. However, such parameters are difficult to estimate because of the inaccessibility and complexity of the subsurface. In this regard, developments in parameter estimation techniques and investigations of field experiments are still challenging and necessary to improve our understanding and the prediction of hydrological processes. Here we analyze a conservative tracer test conducted at the Boise Hydrogeophysical Research Site in 2001 in a heterogeneous unconfined fluvial aquifer. Some relevant characteristics of this test include: variable-density (sinking) effects because of the injection concentration of the bromide tracer, the relatively small size of the experiment, and the availability of various sources of geophysical and hydrological information. The information contained in this experiment is evaluated through several parameter estimation approaches, including a grid-search-based strategy, stochastic simulation of hydrological property distributions, and deterministic inversion using regularization and pilot-point techniques. Doing this allows us to investigate hydraulic conductivity and effective porosity distributions and to compare the effects of assumptions from several methods and parameterizations. Our results provide new insights into the understanding of variable-density transport processes and the hydrological relevance of incorporating various sources of information in parameter estimation approaches. Among others, the variable-density effect and the effective porosity distribution, as well as their coupling with the hydraulic conductivity structure, are seen to be significant in the transport process. The results also show that assumed prior information can strongly influence the estimated distributions of hydrological properties.

  5. Getting Astrophysical Information from LISA Data

    NASA Technical Reports Server (NTRS)

    Stebbins, R. T.; Bender, P. L.; Folkner, W. M.

    1997-01-01

    Gravitational wave signals from a large number of astrophysical sources will be present in the LISA data. Information about as many sources as possible must be estimated from time series of strain measurements. Several types of signals are expected to be present: simple periodic signals from relatively stable binary systems, chirped signals from coalescing binary systems, complex waveforms from highly relativistic binary systems, stochastic backgrounds from galactic and extragalactic binary systems, and possibly stochastic backgrounds from the early Universe. The orbital motion of the LISA antenna will modulate the phase and amplitude of all these signals except the isotropic backgrounds, and thereby give information on the directions of sources. Here we describe a candidate process for disentangling the gravitational wave signals and estimating the relevant astrophysical parameters from one year of LISA data. Nearly all of the sources will be identified by searching with templates based on source parameters and directions.

  6. Inverse modelling for real-time estimation of radiological consequences in the early stage of an accidental radioactivity release.

    PubMed

    Pecha, Petr; Šmídl, Václav

    2016-11-01

    A stepwise sequential assimilation algorithm is proposed based on an optimisation approach for recursive parameter estimation and tracking of radioactive plume propagation in the early stage of a radiation accident. Predictions of the radiological situation in each time step of the plume propagation are driven by an existing short-term meteorological forecast and the assimilation procedure manipulates the model parameters to match the observations incoming concurrently from the terrain. Mathematically, the task is a typical ill-posed inverse problem of estimating the parameters of the release. The proposed method is designated as a stepwise re-estimation of the source term release dynamics and an improvement of several input model parameters. It results in a more precise determination of the adversely affected areas in the terrain. The nonlinear least-squares regression methodology is applied for estimation of the unknowns. The fast and adequately accurate segmented Gaussian plume model (SGPM) is used in the first stage of direct (forward) modelling. The subsequent inverse procedure infers (re-estimates) the values of important model parameters from the actual observations. Accuracy and sensitivity of the proposed method for real-time forecasting of the accident propagation is studied. First, a twin experiment generating noiseless simulated "artificial" observations is studied to verify the minimisation algorithm. Second, the impact of the measurement noise on the re-estimated source release rate is examined. In addition, the presented method can be used as a proposal for more advanced statistical techniques using, e.g., importance sampling.
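
    The re-estimation step is a nonlinear least-squares fit of release parameters to the incoming measurements. A minimal scipy sketch, with a crude lobe-shaped plume standing in for the segmented Gaussian plume model (SGPM) and illustrative receptor geometry:

      import numpy as np
      from scipy.optimize import least_squares

      def forward(params, receptors):
          q, ddir = params                            # release rate, direction bias
          ang = np.deg2rad(receptors[:, 1] - ddir)    # receptors: (range, bearing)
          return q / receptors[:, 0] * np.exp(-(ang / 0.2) ** 2)

      rng = np.random.default_rng(9)
      receptors = np.column_stack([np.linspace(1.0, 10.0, 25),
                                   rng.uniform(-30.0, 30.0, 25)])
      obs = forward([5.0e3, 10.0], receptors)
      obs *= 1 + 0.05 * rng.standard_normal(obs.size) # measurement noise

      res = least_squares(lambda p: forward(p, receptors) - obs,
                          x0=[1.0e3, 0.0], bounds=([0.0, -45.0], [1.0e5, 45.0]))
      print(res.x)   # close to [5000, 10]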

  7. Sample Size Estimation: The Easy Way

    ERIC Educational Resources Information Center

    Weller, Susan C.

    2015-01-01

    This article presents a simple approach to making quick sample size estimates for basic hypothesis tests. Although there are many sources available for estimating sample sizes, methods are not often integrated across statistical tests, levels of measurement of variables, or effect sizes. A few parameters are required to estimate sample sizes and…

  8. A covariance correction that accounts for correlation estimation to improve finite-sample inference with generalized estimating equations: A study on its applicability with structured correlation matrices

    PubMed Central

    Westgate, Philip M.

    2016-01-01

    When generalized estimating equations (GEE) incorporate an unstructured working correlation matrix, the variances of regression parameter estimates can inflate due to the estimation of the correlation parameters. In previous work, an approximation for this inflation that results in a corrected version of the sandwich formula for the covariance matrix of regression parameter estimates was derived. Use of this correction for correlation structure selection also reduces the over-selection of the unstructured working correlation matrix. In this manuscript, we conduct a simulation study to demonstrate that an increase in variances of regression parameter estimates can occur when GEE incorporates structured working correlation matrices as well. Correspondingly, we show the ability of the corrected version of the sandwich formula to improve the validity of inference and correlation structure selection. We also study the relative influences of two popular corrections to a different source of bias in the empirical sandwich covariance estimator. PMID:27818539

  9. A covariance correction that accounts for correlation estimation to improve finite-sample inference with generalized estimating equations: A study on its applicability with structured correlation matrices.

    PubMed

    Westgate, Philip M

    2016-01-01

    When generalized estimating equations (GEE) incorporate an unstructured working correlation matrix, the variances of regression parameter estimates can inflate due to the estimation of the correlation parameters. In previous work, an approximation for this inflation that results in a corrected version of the sandwich formula for the covariance matrix of regression parameter estimates was derived. Use of this correction for correlation structure selection also reduces the over-selection of the unstructured working correlation matrix. In this manuscript, we conduct a simulation study to demonstrate that an increase in variances of regression parameter estimates can occur when GEE incorporates structured working correlation matrices as well. Correspondingly, we show the ability of the corrected version of the sandwich formula to improve the validity of inference and correlation structure selection. We also study the relative influences of two popular corrections to a different source of bias in the empirical sandwich covariance estimator.

  10. Estimating winter wheat phenological parameters: Implications for crop modeling

    USDA-ARS?s Scientific Manuscript database

    Crop parameters, such as the timing of developmental events, are critical for accurate simulation results in crop simulation models, yet uncertainty often exists in determining the parameters. Factors contributing to the uncertainty include: a) sources of variation within a plant (i.e., within diffe...

  11. LS-APC v1.0: a tuning-free method for the linear inverse problem and its application to source-term determination

    NASA Astrophysics Data System (ADS)

    Tichý, Ondřej; Šmídl, Václav; Hofman, Radek; Stohl, Andreas

    2016-11-01

    Estimation of pollutant releases into the atmosphere is an important problem in the environmental sciences. It is typically formalized as an inverse problem using a linear model that can explain observable quantities (e.g., concentrations or deposition values) as a product of the source-receptor sensitivity (SRS) matrix obtained from an atmospheric transport model multiplied by the unknown source-term vector. Since this problem is typically ill-posed, current state-of-the-art methods are based on regularization and solution of a formulated optimization problem. This procedure depends on manual settings of uncertainties that are often very poorly quantified, effectively making them tuning parameters. We formulate a probabilistic model that has the same maximum likelihood solution as the conventional method using pre-specified uncertainties. Replacement of the maximum likelihood solution by full Bayesian estimation allows all tuning parameters to be estimated from the measurements. The estimation procedure is based on the variational Bayes approximation, which is evaluated by an iterative algorithm. The resulting method is thus very similar to the conventional approach, but with the possibility to also estimate all tuning parameters from the observations. The proposed algorithm is tested and compared with the standard methods on data from the European Tracer Experiment (ETEX), where the advantages of the new method are demonstrated. A MATLAB implementation of the proposed algorithm is available for download.
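
    For contrast, the conventional tuning-dependent baseline that LS-APC replaces can be written in a few lines: Tikhonov-regularized least squares for y = Mx, where the weight alpha is exactly the kind of manually set uncertainty parameter that LS-APC instead estimates from the data. The SRS matrix and source pulse below are synthetic:

      import numpy as np

      rng = np.random.default_rng(11)
      n_obs, n_t = 60, 40
      M = np.abs(rng.standard_normal((n_obs, n_t)))   # toy SRS matrix
      x_true = np.zeros(n_t)
      x_true[12:18] = [1, 4, 9, 7, 3, 1]              # pulse-like release
      y = M @ x_true + 0.05 * rng.standard_normal(n_obs)

      alpha = 1.0                                     # the manual tuning parameter
      A = np.vstack([M, np.sqrt(alpha) * np.eye(n_t)])
      b = np.concatenate([y, np.zeros(n_t)])
      x_hat = np.linalg.lstsq(A, b, rcond=None)[0]    # min ||Mx-y||^2 + alpha||x||^2
      print(np.round(x_hat[10:20], 1))                # recovers the pulse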

  12. Parameter estimation method that directly compares gravitational wave observations to numerical relativity

    NASA Astrophysics Data System (ADS)

    Lange, J.; O'Shaughnessy, R.; Boyle, M.; Calderón Bustillo, J.; Campanelli, M.; Chu, T.; Clark, J. A.; Demos, N.; Fong, H.; Healy, J.; Hemberger, D. A.; Hinder, I.; Jani, K.; Khamesra, B.; Kidder, L. E.; Kumar, P.; Laguna, P.; Lousto, C. O.; Lovelace, G.; Ossokine, S.; Pfeiffer, H.; Scheel, M. A.; Shoemaker, D. M.; Szilagyi, B.; Teukolsky, S.; Zlochower, Y.

    2017-11-01

    We present and assess a Bayesian method to interpret gravitational wave signals from binary black holes. Our method directly compares gravitational wave data to numerical relativity (NR) simulations. In this study, we present a detailed investigation of the systematic and statistical parameter estimation errors of this method. This procedure bypasses approximations used in semianalytical models for compact binary coalescence. In this work, we use the full posterior parameter distribution only for generic nonprecessing binaries, drawing inferences away from the set of NR simulations used via interpolation of a single scalar quantity (the marginalized log likelihood, ln L) evaluated by comparing data to nonprecessing binary black hole simulations. We also compare the data to generic simulations and discuss the effectiveness of this procedure for generic sources. We specifically assess the impact of higher-order modes, repeating our interpretation with both l ≤ 2 and l ≤ 3 harmonic modes. Using the l ≤ 3 higher modes, we gain more information from the signal and can better constrain the parameters of the gravitational wave signal. We assess and quantify several sources of systematic error that our procedure could introduce, including simulation resolution and duration; most are negligible. We show through examples that our method can recover the parameters for equal-mass, zero-spin, GW150914-like, and unequal-mass, precessing-spin sources. Our study of this new parameter estimation method demonstrates that we can quantify and understand the systematic and statistical error. This method allows us to use higher-order modes from numerical relativity simulations to better constrain the black hole binary parameters.

  13. Improved Analysis of GW150914 Using a Fully Spin-Precessing Waveform Model

    NASA Astrophysics Data System (ADS)

    Abbott, B. P.; Abbott, R.; Abbott, T. D.; Abernathy, M. R.; Acernese, F.; Ackley, K.; Adams, C.; Adams, T.; Addesso, P.; Adhikari, R. X.; Adya, V. B.; Affeldt, C.; Agathos, M.; Agatsuma, K.; Aggarwal, N.; Aguiar, O. D.; Aiello, L.; Ain, A.; Ajith, P.; Allen, B.; Allocca, A.; Altin, P. A.; Anderson, S. B.; Anderson, W. G.; Arai, K.; Araya, M. C.; Arceneaux, C. C.; Areeda, J. S.; Arnaud, N.; Arun, K. G.; Ascenzi, S.; Ashton, G.; Ast, M.; Aston, S. M.; Astone, P.; Aufmuth, P.; Aulbert, C.; Babak, S.; Bacon, P.; Bader, M. K. M.; Baker, P. T.; Baldaccini, F.; Ballardin, G.; Ballmer, S. W.; Barayoga, J. C.; Barclay, S. E.; Barish, B. C.; Barker, D.; Barone, F.; Barr, B.; Barsotti, L.; Barsuglia, M.; Barta, D.; Bartlett, J.; Bartos, I.; Bassiri, R.; Basti, A.; Batch, J. C.; Baune, C.; Bavigadda, V.; Bazzan, M.; Bejger, M.; Bell, A. S.; Berger, B. K.; Bergmann, G.; Berry, C. P. L.; Bersanetti, D.; Bertolini, A.; Betzwieser, J.; Bhagwat, S.; Bhandare, R.; Bilenko, I. A.; Billingsley, G.; Birch, J.; Birney, R.; Birnholtz, O.; Biscans, S.; Bisht, A.; Bitossi, M.; Biwer, C.; Bizouard, M. A.; Blackburn, J. K.; Blair, C. D.; Blair, D. G.; Blair, R. M.; Bloemen, S.; Bock, O.; Boer, M.; Bogaert, G.; Bogan, C.; Bohe, A.; Bond, C.; Bondu, F.; Bonnand, R.; Boom, B. A.; Bork, R.; Boschi, V.; Bose, S.; Bouffanais, Y.; Bozzi, A.; Bradaschia, C.; Brady, P. R.; Braginsky, V. B.; Branchesi, M.; Brau, J. E.; Briant, T.; Brillet, A.; Brinkmann, M.; Brisson, V.; Brockill, P.; Broida, J. E.; Brooks, A. F.; Brown, D. A.; Brown, D. D.; Brown, N. M.; Brunett, S.; Buchanan, C. C.; Buikema, A.; Bulik, T.; Bulten, H. J.; Buonanno, A.; Buskulic, D.; Buy, C.; Byer, R. L.; Cabero, M.; Cadonati, L.; Cagnoli, G.; Cahillane, C.; Calderón Bustillo, J.; Callister, T.; Calloni, E.; Camp, J. B.; Cannon, K. C.; Cao, J.; Capano, C. D.; Capocasa, E.; Carbognani, F.; Caride, S.; Casanueva Diaz, C.; Casentini, J.; Caudill, S.; Cavaglià, M.; Cavalier, F.; Cavalieri, R.; Cella, G.; Cepeda, C. B.; Cerboni Baiardi, L.; Cerretani, G.; Cesarini, E.; Chamberlin, S. J.; Chan, M.; Chao, S.; Charlton, P.; Chassande-Mottin, E.; Cheeseboro, B. D.; Chen, H. Y.; Chen, Y.; Cheng, C.; Chincarini, A.; Chiummo, A.; Cho, H. S.; Cho, M.; Chow, J. H.; Christensen, N.; Chu, Q.; Chua, S.; Chung, S.; Ciani, G.; Clara, F.; Clark, J. A.; Cleva, F.; Coccia, E.; Cohadon, P.-F.; Colla, A.; Collette, C. G.; Cominsky, L.; Constancio, M.; Conte, A.; Conti, L.; Cook, D.; Corbitt, T. R.; Cornish, N.; Corsi, A.; Cortese, S.; Costa, C. A.; Coughlin, M. W.; Coughlin, S. B.; Coulon, J.-P.; Countryman, S. T.; Couvares, P.; Cowan, E. E.; Coward, D. M.; Cowart, M. J.; Coyne, D. C.; Coyne, R.; Craig, K.; Creighton, J. D. E.; Cripe, J.; Crowder, S. G.; Cumming, A.; Cunningham, L.; Cuoco, E.; Dal Canton, T.; Danilishin, S. L.; D'Antonio, S.; Danzmann, K.; Darman, N. S.; Dasgupta, A.; Da Silva Costa, C. F.; Dattilo, V.; Dave, I.; Davier, M.; Davies, G. S.; Daw, E. J.; Day, R.; De, S.; DeBra, D.; Debreczeni, G.; Degallaix, J.; De Laurentis, M.; Deléglise, S.; Del Pozzo, W.; Denker, T.; Dent, T.; Dergachev, V.; De Rosa, R.; DeRosa, R. T.; DeSalvo, R.; Devine, R. C.; Dhurandhar, S.; Díaz, M. C.; Di Fiore, L.; Di Giovanni, M.; Di Girolamo, T.; Di Lieto, A.; Di Pace, S.; Di Palma, I.; Di Virgilio, A.; Dolique, V.; Donovan, F.; Dooley, K. L.; Doravari, S.; Douglas, R.; Downes, T. P.; Drago, M.; Drever, R. W. P.; Driggers, J. C.; Ducrot, M.; Dwyer, S. E.; Edo, T. B.; Edwards, M. C.; Effler, A.; Eggenstein, H.-B.; Ehrens, P.; Eichholz, J.; Eikenberry, S. S.; Engels, W.; Essick, R. 
C.; Etienne, Z.; Etzel, T.; Evans, M.; Evans, T. M.; Everett, R.; Factourovich, M.; Fafone, V.; Fair, H.; Fairhurst, S.; Fan, X.; Fang, Q.; Farinon, S.; Farr, B.; Farr, W. M.; Fauchon-Jones, E.; Favata, M.; Fays, M.; Fehrmann, H.; Fejer, M. M.; Fenyvesi, E.; Ferrante, I.; Ferreira, E. C.; Ferrini, F.; Fidecaro, F.; Fiori, I.; Fiorucci, D.; Fisher, R. P.; Flaminio, R.; Fletcher, M.; Fournier, J.-D.; Frasca, S.; Frasconi, F.; Frei, Z.; Freise, A.; Frey, R.; Frey, V.; Fritschel, P.; Frolov, V. V.; Fulda, P.; Fyffe, M.; Gabbard, H. A. G.; Gaebel, S.; Gair, J. R.; Gammaitoni, L.; Gaonkar, S. G.; Garufi, F.; Gaur, G.; Gehrels, N.; Gemme, G.; Geng, P.; Genin, E.; Gennai, A.; George, J.; Gergely, L.; Germain, V.; Ghosh, Abhirup; Ghosh, Archisman; Ghosh, S.; Giaime, J. A.; Giardina, K. D.; Giazotto, A.; Gill, K.; Glaefke, A.; Goetz, E.; Goetz, R.; Gondan, L.; González, G.; Gonzalez Castro, J. M.; Gopakumar, A.; Gordon, N. A.; Gorodetsky, M. L.; Gossan, S. E.; Gosselin, M.; Gouaty, R.; Grado, A.; Graef, C.; Graff, P. B.; Granata, M.; Grant, A.; Gras, S.; Gray, C.; Greco, G.; Green, A. C.; Groot, P.; Grote, H.; Grunewald, S.; Guidi, G. M.; Guo, X.; Gupta, A.; Gupta, M. K.; Gushwa, K. E.; Gustafson, E. K.; Gustafson, R.; Hacker, J. J.; Hall, B. R.; Hall, E. D.; Hammond, G.; Haney, M.; Hanke, M. M.; Hanks, J.; Hanna, C.; Hannam, M. D.; Hanson, J.; Hardwick, T.; Harms, J.; Harry, G. M.; Harry, I. W.; Hart, M. J.; Hartman, M. T.; Haster, C.-J.; Haughian, K.; Healy, J.; Heidmann, A.; Heintze, M. C.; Heitmann, H.; Hello, P.; Hemming, G.; Hendry, M.; Heng, I. S.; Hennig, J.; Henry, J.; Heptonstall, A. W.; Heurs, M.; Hild, S.; Hoak, D.; Hofman, D.; Holt, K.; Holz, D. E.; Hopkins, P.; Hough, J.; Houston, E. A.; Howell, E. J.; Hu, Y. M.; Huang, S.; Huerta, E. A.; Huet, D.; Hughey, B.; Husa, S.; Huttner, S. H.; Huynh-Dinh, T.; Indik, N.; Ingram, D. R.; Inta, R.; Isa, H. N.; Isac, J.-M.; Isi, M.; Isogai, T.; Iyer, B. R.; Izumi, K.; Jacqmin, T.; Jang, H.; Jani, K.; Jaranowski, P.; Jawahar, S.; Jian, L.; Jiménez-Forteza, F.; Johnson, W. W.; Johnson-McDaniel, N. K.; Jones, D. I.; Jones, R.; Jonker, R. J. G.; Ju, L.; K, Haris; Kalaghatgi, C. V.; Kalogera, V.; Kandhasamy, S.; Kang, G.; Kanner, J. B.; Kapadia, S. J.; Karki, S.; Karvinen, K. S.; Kasprzack, M.; Katsavounidis, E.; Katzman, W.; Kaufer, S.; Kaur, T.; Kawabe, K.; Kéfélian, F.; Kehl, M. S.; Keitel, D.; Kelley, D. B.; Kells, W.; Kennedy, R.; Key, J. S.; Khalili, F. Y.; Khan, I.; Khan, S.; Khan, Z.; Khazanov, E. A.; Kijbunchoo, N.; Kim, Chi-Woong; Kim, Chunglee; Kim, J.; Kim, K.; Kim, N.; Kim, W.; Kim, Y.-M.; Kimbrell, S. J.; King, E. J.; King, P. J.; Kissel, J. S.; Klein, B.; Kleybolte, L.; Klimenko, S.; Koehlenbeck, S. M.; Koley, S.; Kondrashov, V.; Kontos, A.; Korobko, M.; Korth, W. Z.; Kowalska, I.; Kozak, D. B.; Kringel, V.; Królak, A.; Krueger, C.; Kuehn, G.; Kumar, P.; Kumar, R.; Kuo, L.; Kutynia, A.; Lackey, B. D.; Landry, M.; Lange, J.; Lantz, B.; Lasky, P. D.; Laxen, M.; Lazzarini, A.; Lazzaro, C.; Leaci, P.; Leavey, S.; Lebigot, E. O.; Lee, C. H.; Lee, H. K.; Lee, H. M.; Lee, K.; Lenon, A.; Leonardi, M.; Leong, J. R.; Leroy, N.; Letendre, N.; Levin, Y.; Lewis, J. B.; Li, T. G. F.; Libson, A.; Littenberg, T. B.; Lockerbie, N. A.; Lombardi, A. L.; London, L. T.; Lord, J. E.; Lorenzini, M.; Loriette, V.; Lormand, M.; Losurdo, G.; Lough, J. D.; Lousto, C. O.; Lovelace, G.; Lück, H.; Lundgren, A. P.; Lynch, R.; Ma, Y.; Machenschalk, B.; MacInnis, M.; Macleod, D. M.; Magaña-Sandoval, F.; Magaña Zertuche, L.; Magee, R. 
M.; Majorana, E.; Maksimovic, I.; Malvezzi, V.; Man, N.; Mandic, V.; Mangano, V.; Mansell, G. L.; Manske, M.; Mantovani, M.; Marchesoni, F.; Marion, F.; Márka, S.; Márka, Z.; Markosyan, A. S.; Maros, E.; Martelli, F.; Martellini, L.; Martin, I. W.; Martynov, D. V.; Marx, J. N.; Mason, K.; Masserot, A.; Massinger, T. J.; Masso-Reid, M.; Mastrogiovanni, S.; Matichard, F.; Matone, L.; Mavalvala, N.; Mazumder, N.; McCarthy, R.; McClelland, D. E.; McCormick, S.; McGuire, S. C.; McIntyre, G.; McIver, J.; McManus, D. J.; McRae, T.; McWilliams, S. T.; Meacher, D.; Meadors, G. D.; Meidam, J.; Melatos, A.; Mendell, G.; Mercer, R. A.; Merilh, E. L.; Merzougui, M.; Meshkov, S.; Messenger, C.; Messick, C.; Metzdorff, R.; Meyers, P. M.; Mezzani, F.; Miao, H.; Michel, C.; Middleton, H.; Mikhailov, E. E.; Milano, L.; Miller, A. L.; Miller, A.; Miller, B. B.; Miller, J.; Millhouse, M.; Minenkov, Y.; Ming, J.; Mirshekari, S.; Mishra, C.; Mitra, S.; Mitrofanov, V. P.; Mitselmakher, G.; Mittleman, R.; Moggi, A.; Mohan, M.; Mohapatra, S. R. P.; Montani, M.; Moore, B. C.; Moore, C. J.; Moraru, D.; Moreno, G.; Morriss, S. R.; Mossavi, K.; Mours, B.; Mow-Lowry, C. M.; Mueller, G.; Muir, A. W.; Mukherjee, Arunava; Mukherjee, D.; Mukherjee, S.; Mukund, N.; Mullavey, A.; Munch, J.; Murphy, D. J.; Murray, P. G.; Mytidis, A.; Nardecchia, I.; Naticchioni, L.; Nayak, R. K.; Nedkova, K.; Nelemans, G.; Nelson, T. J. N.; Neri, M.; Neunzert, A.; Newton, G.; Nguyen, T. T.; Nielsen, A. B.; Nissanke, S.; Nitz, A.; Nocera, F.; Nolting, D.; Normandin, M. E. N.; Nuttall, L. K.; Oberling, J.; Ochsner, E.; O'Dell, J.; Oelker, E.; Ogin, G. H.; Oh, J. J.; Oh, S. H.; Ohme, F.; Oliver, M.; Oppermann, P.; Oram, Richard J.; O'Reilly, B.; O'Shaughnessy, R.; Ottaway, D. J.; Overmier, H.; Owen, B. J.; Pai, A.; Pai, S. A.; Palamos, J. R.; Palashov, O.; Palomba, C.; Pal-Singh, A.; Pan, H.; Pankow, C.; Pannarale, F.; Pant, B. C.; Paoletti, F.; Paoli, A.; Papa, M. A.; Paris, H. R.; Parker, W.; Pascucci, D.; Pasqualetti, A.; Passaquieti, R.; Passuello, D.; Patricelli, B.; Patrick, Z.; Pearlstone, B. L.; Pedraza, M.; Pedurand, R.; Pekowsky, L.; Pele, A.; Penn, S.; Perreca, A.; Perri, L. M.; Pfeiffer, H. P.; Phelps, M.; Piccinni, O. J.; Pichot, M.; Piergiovanni, F.; Pierro, V.; Pillant, G.; Pinard, L.; Pinto, I. M.; Pitkin, M.; Poe, M.; Poggiani, R.; Popolizio, P.; Post, A.; Powell, J.; Prasad, J.; Predoi, V.; Prestegard, T.; Price, L. R.; Prijatelj, M.; Principe, M.; Privitera, S.; Prix, R.; Prodi, G. A.; Prokhorov, L.; Puncken, O.; Punturo, M.; Puppo, P.; Pürrer, M.; Qi, H.; Qin, J.; Qiu, S.; Quetschke, V.; Quintero, E. A.; Quitzow-James, R.; Raab, F. J.; Rabeling, D. S.; Radkins, H.; Raffai, P.; Raja, S.; Rajan, C.; Rakhmanov, M.; Rapagnani, P.; Raymond, V.; Razzano, M.; Re, V.; Read, J.; Reed, C. M.; Regimbau, T.; Rei, L.; Reid, S.; Reitze, D. H.; Rew, H.; Reyes, S. D.; Ricci, F.; Riles, K.; Rizzo, M.; Robertson, N. A.; Robie, R.; Robinet, F.; Rocchi, A.; Rolland, L.; Rollins, J. G.; Roma, V. J.; Romano, R.; Romanov, G.; Romie, J. H.; Rosińska, D.; Rowan, S.; Rüdiger, A.; Ruggi, P.; Ryan, K.; Sachdev, S.; Sadecki, T.; Sadeghian, L.; Sakellariadou, M.; Salconi, L.; Saleem, M.; Salemi, F.; Samajdar, A.; Sammut, L.; Sanchez, E. J.; Sandberg, V.; Sandeen, B.; Sanders, J. R.; Sassolas, B.; Sathyaprakash, B. S.; Saulson, P. R.; Sauter, O. E. S.; Savage, R. L.; Sawadsky, A.; Schale, P.; Schilling, R.; Schmidt, J.; Schmidt, P.; Schnabel, R.; Schofield, R. M. S.; Schönbeck, A.; Schreiber, E.; Schuette, D.; Schutz, B. F.; Scott, J.; Scott, S. 
M.; Sellers, D.; Sengupta, A. S.; Sentenac, D.; Sequino, V.; Sergeev, A.; Setyawati, Y.; Shaddock, D. A.; Shaffer, T.; Shahriar, M. S.; Shaltev, M.; Shapiro, B.; Shawhan, P.; Sheperd, A.; Shoemaker, D. H.; Shoemaker, D. M.; Siellez, K.; Siemens, X.; Sieniawska, M.; Sigg, D.; Silva, A. D.; Singer, A.; Singer, L. P.; Singh, A.; Singh, R.; Singhal, A.; Sintes, A. M.; Slagmolen, B. J. J.; Smith, J. R.; Smith, N. D.; Smith, R. J. E.; Son, E. J.; Sorazu, B.; Sorrentino, F.; Souradeep, T.; Srivastava, A. K.; Staley, A.; Steinke, M.; Steinlechner, J.; Steinlechner, S.; Steinmeyer, D.; Stephens, B. C.; Stevenson, S. P.; Stone, R.; Strain, K. A.; Straniero, N.; Stratta, G.; Strauss, N. A.; Strigin, S.; Sturani, R.; Stuver, A. L.; Summerscales, T. Z.; Sun, L.; Sunil, S.; Sutton, P. J.; Swinkels, B. L.; Szczepańczyk, M. J.; Tacca, M.; Talukder, D.; Tanner, D. B.; Tápai, M.; Tarabrin, S. P.; Taracchini, A.; Taylor, R.; Theeg, T.; Thirugnanasambandam, M. P.; Thomas, E. G.; Thomas, M.; Thomas, P.; Thorne, K. A.; Thorne, K. S.; Thrane, E.; Tiwari, S.; Tiwari, V.; Tokmakov, K. V.; Toland, K.; Tomlinson, C.; Tonelli, M.; Tornasi, Z.; Torres, C. V.; Torrie, C. I.; Töyrä, D.; Travasso, F.; Traylor, G.; Trifirò, D.; Tringali, M. C.; Trozzo, L.; Tse, M.; Turconi, M.; Tuyenbayev, D.; Ugolini, D.; Unnikrishnan, C. S.; Urban, A. L.; Usman, S. A.; Vahlbruch, H.; Vajente, G.; Valdes, G.; Vallisneri, M.; van Bakel, N.; van Beuzekom, M.; van den Brand, J. F. J.; Van Den Broeck, C.; Vander-Hyde, D. C.; van der Schaaf, L.; van der Sluys, M. V.; van Heijningen, J. V.; Vano-Vinuales, A.; van Veggel, A. A.; Vardaro, M.; Vass, S.; Vasúth, M.; Vaulin, R.; Vecchio, A.; Vedovato, G.; Veitch, J.; Veitch, P. J.; Venkateswara, K.; Verkindt, D.; Vetrano, F.; Viceré, A.; Vinciguerra, S.; Vine, D. J.; Vinet, J.-Y.; Vitale, S.; Vo, T.; Vocca, H.; Vorvick, C.; Voss, D. V.; Vousden, W. D.; Vyatchanin, S. P.; Wade, A. R.; Wade, L. E.; Wade, M.; Walker, M.; Wallace, L.; Walsh, S.; Wang, G.; Wang, H.; Wang, M.; Wang, X.; Wang, Y.; Ward, R. L.; Warner, J.; Was, M.; Weaver, B.; Wei, L.-W.; Weinert, M.; Weinstein, A. J.; Weiss, R.; Wen, L.; Weßels, P.; Westphal, T.; Wette, K.; Whelan, J. T.; Whiting, B. F.; Williams, R. D.; Williamson, A. R.; Willis, J. L.; Willke, B.; Wimmer, M. H.; Winkler, W.; Wipf, C. C.; Wittel, H.; Woan, G.; Woehler, J.; Worden, J.; Wright, J. L.; Wu, D. S.; Wu, G.; Yablon, J.; Yam, W.; Yamamoto, H.; Yancey, C. C.; Yu, H.; Yvert, M.; ZadroŻny, A.; Zangrando, L.; Zanolin, M.; Zendri, J.-P.; Zevin, M.; Zhang, L.; Zhang, M.; Zhang, Y.; Zhao, C.; Zhou, M.; Zhou, Z.; Zhu, X. J.; Zucker, M. E.; Zuraw, S. E.; Zweizig, J.; Boyle, M.; Brügmann, B.; Campanelli, M.; Chu, T.; Clark, M.; Haas, R.; Hemberger, D.; Hinder, I.; Kidder, L. E.; Kinsey, M.; Laguna, P.; Ossokine, S.; Pan, Y.; Röver, C.; Scheel, M.; Szilagyi, B.; Teukolsky, S.; Zlochower, Y.; LIGO Scientific Collaboration; Virgo Collaboration

    2016-10-01

    This paper presents updated estimates of source parameters for GW150914, a binary black-hole coalescence event detected by the Laser Interferometer Gravitational-wave Observatory (LIGO) in 2015 [Abbott et al., Phys. Rev. Lett. 116, 061102 (2016)]. Abbott et al. [Phys. Rev. Lett. 116, 241102 (2016)] presented parameter estimation of the source using a 13-dimensional, phenomenological precessing-spin model (precessing IMRPhenom) and an 11-dimensional nonprecessing effective-one-body (EOB) model calibrated to numerical-relativity simulations, which forces spin alignment (nonprecessing EOBNR). Here, we present new results that include a 15-dimensional precessing-spin waveform model (precessing EOBNR) developed within the EOB formalism. We find good agreement with the parameters estimated previously [Abbott et al., Phys. Rev. Lett. 116, 241102 (2016)], and we quote updated component masses of 35^{+5}_{-3} M⊙ and 30^{+3}_{-4} M⊙ (where errors correspond to 90% symmetric credible intervals). We also present slightly tighter constraints on the dimensionless spin magnitudes of the two black holes, with a primary spin estimate <0.65 and a secondary spin estimate <0.75 at 90% probability. Abbott et al. [Phys. Rev. Lett. 116, 241102 (2016)] estimated the systematic parameter-extraction errors due to waveform-model uncertainty by combining the posterior probability densities of precessing IMRPhenom and nonprecessing EOBNR. Here, we find that the two precessing-spin models are in closer agreement, suggesting that these systematic errors are smaller than previously quoted.
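
    The 90% symmetric credible intervals quoted above are straightforward to read off posterior samples. A minimal Python sketch, assuming generic sampler output; the synthetic samples below are illustrative stand-ins, not GW150914 posteriors:

      import numpy as np

      rng = np.random.default_rng(0)
      # Hypothetical posterior samples for a component mass (a stand-in for
      # real sampler output, not actual GW150914 posteriors).
      mass_samples = rng.normal(loc=35.0, scale=2.5, size=100_000)

      median = np.percentile(mass_samples, 50)
      lo, hi = np.percentile(mass_samples, [5, 95])  # 90% symmetric credible interval
      print(f"m = {median:.1f} -{median - lo:.1f} +{hi - median:.1f} Msun")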

  14. Medical costs and quality-adjusted life years associated with smoking: a systematic review.

    PubMed

    Feirman, Shari P; Glasser, Allison M; Teplitskaya, Lyubov; Holtgrave, David R; Abrams, David B; Niaura, Raymond S; Villanti, Andrea C

    2016-07-27

    Estimated medical costs ("T") and QALYs ("Q") associated with smoking are frequently used in cost-utility analyses of tobacco control interventions. The goal of this study was to understand how researchers have addressed the methodological challenges involved in estimating these parameters. Data were collected as part of a systematic review of tobacco modeling studies. We searched five electronic databases on July 1, 2013 with no date restrictions and synthesized studies qualitatively. Studies were eligible for the current analysis if they were U.S.-based, provided an estimate for Q, and used a societal perspective and lifetime analytic horizon to estimate T. We identified common methods and frequently cited sources used to obtain these estimates. Across all 18 studies included in this review, 50% cited a 1992 source to estimate the medical costs associated with smoking and 56% cited a 1996 study to derive the estimate for QALYs saved by quitting or preventing smoking. Approaches for estimating T varied dramatically among the studies included in this review: T was valued as a positive number, a negative number, and $0, and five studies did not include estimates for T in their analyses. The most commonly cited source for Q based its estimate on the Health Utilities Index (HUI). Several papers also cited sources that based their estimates for Q on the Quality of Well-Being Scale and the EuroQol five dimensions questionnaire (EQ-5D). Current estimates of the lifetime medical care costs and the QALYs associated with smoking are dated and do not reflect the latest evidence on the health effects of smoking, nor the current costs and benefits of smoking cessation and prevention. Given these limitations, we recommend that researchers conducting economic evaluations of tobacco control interventions perform extensive sensitivity analyses around these parameter estimates.
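
    The review's call for extensive sensitivity analyses around T and Q can be illustrated with a toy cost-utility calculation. A minimal sketch; the intervention cost and the swept values of T and Q are invented placeholders, not estimates from the reviewed studies:

      def icer(intervention_cost, t_saved, q_gained):
          """Incremental cost-effectiveness ratio: (cost - averted medical
          costs T) per QALY gained Q; negative values are cost-saving."""
          return (intervention_cost - t_saved) / q_gained

      # Sweep invented values of T and Q around a hypothetical $1,500 program.
      for t in (5_000.0, 10_000.0, 20_000.0):   # averted lifetime medical costs, $
          for q in (0.5, 1.0, 1.5):             # QALYs gained per quitter
              print(f"T=${t:>8,.0f}  Q={q:.1f}  ICER=${icer(1_500.0, t, q):>10,.0f}/QALY")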

  15. A fully Bayesian method for jointly fitting instrumental calibration and X-ray spectral models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xu, Jin; Yu, Yaming; Van Dyk, David A.

    2014-10-20

    Owing to a lack of robust principled methods, systematic instrumental uncertainties have generally been ignored in astrophysical data analysis despite wide recognition of the importance of including them. Ignoring calibration uncertainty can cause bias in the estimation of source model parameters and can lead to underestimation of the variance of these estimates. We previously introduced a pragmatic Bayesian method to address this problem. The method is 'pragmatic' in that it introduced an ad hoc technique that simplified computation by neglecting the potential information in the data for narrowing the uncertainty of the calibration product. Following that work, we use a principal component analysis to efficiently represent the uncertainty of the effective area of an X-ray (or γ-ray) telescope. Here, however, we leverage this representation to enable a principled, fully Bayesian method that coherently accounts for the calibration uncertainty in high-energy spectral analysis. In this setting, the method is compared with standard analysis techniques and the pragmatic Bayesian method. The advantage of the fully Bayesian method is that it allows the data to provide information not only for estimation of the source parameters but also for the calibration product—here the effective area, conditional on the adopted spectral model. In this way, it can yield more accurate and efficient estimates of the source parameters along with valid estimates of their uncertainty. Provided that the source spectrum can be accurately described by a parameterized model, this method allows rigorous inference about the effective area by quantifying which possible curves are most consistent with the data.
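
    The principal-component representation of effective-area uncertainty can be sketched with a plain SVD of an ensemble of calibration curves. The energy grid, nominal curve, and perturbation model below are invented for illustration and are not actual calibration products:

      import numpy as np

      rng = np.random.default_rng(1)
      energies = np.linspace(0.3, 8.0, 200)  # keV grid (illustrative)
      nominal = 400 * np.exp(-0.5 * ((energies - 1.5) / 1.2) ** 2)  # fake nominal curve

      # Hypothetical ensemble of calibration realizations of the effective area.
      ensemble = nominal * (1 + 0.05 * rng.standard_normal((500, 1))
                              * np.sin(energies / 2.0))

      mean = ensemble.mean(axis=0)
      u, s, vt = np.linalg.svd(ensemble - mean, full_matrices=False)

      # Keep the first few principal components; any realization is then
      # mean + sum_j c_j * vt[j], with the coefficients c_j treated as free
      # parameters in the fully Bayesian fit.
      n_comp = 3
      explained = (s[:n_comp] ** 2).sum() / (s ** 2).sum()
      print(f"first {n_comp} components capture {explained:.1%} of the variance")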

  16. Distance biases in the estimation of the physical properties of Hi-GAL compact sources - I. Clump properties and the identification of high-mass star-forming candidates

    NASA Astrophysics Data System (ADS)

    Baldeschi, Adriano; Elia, D.; Molinari, S.; Pezzuto, S.; Schisano, E.; Gatti, M.; Serra, A.; Merello, M.; Benedettini, M.; Di Giorgio, A. M.; Liu, J. S.

    2017-04-01

    The degradation of spatial resolution in star-forming regions observed at large distances (d ≳ 1 kpc) with Herschel can lead to estimates of the physical parameters of the detected compact sources (clumps) that do not necessarily mirror the properties of the original population of cores. This paper aims at quantifying the bias introduced in the estimation of these parameters by the distance effect. To do so, we consider Herschel maps of nearby star-forming regions taken from the Herschel Gould Belt survey and simulate the effect of increased distance to understand how much information is lost when a distant star-forming region is observed at Herschel resolution. In the maps displaced to different distances we extract compact sources and derive their physical parameters as if the displaced maps were original Herschel infrared Galactic Plane Survey maps. In this way, we are able to discuss how the main physical properties change with distance. In particular, we discuss the ability of clumps to form massive stars: we estimate the fraction of distant sources that are classified as high-mass star-forming objects, based on their position in the mass versus radius diagram, but are only 'false positives'. We also give a threshold for high-mass star formation, M > 1282 (r/[pc])^{1.42} M⊙. In conclusion, this paper provides the astronomer dealing with Herschel maps of distant star-forming regions with a set of prescriptions to partially recover the character of the core population in unresolved clumps.
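
    The quoted threshold is easy to apply directly. A minimal sketch of the mass-radius check, assuming mass in solar masses and radius in parsecs; the example clump values are invented:

      def is_high_mass_candidate(mass_msun, radius_pc):
          """Mass-radius threshold quoted above: M > 1282 (r/pc)^1.42 Msun."""
          return mass_msun > 1282.0 * radius_pc ** 1.42

      print(is_high_mass_candidate(500.0, 0.3))  # example clump (invented values)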

  17. Bayesian estimation of a source term of radiation release with approximately known nuclide ratios

    NASA Astrophysics Data System (ADS)

    Tichý, Ondřej; Šmídl, Václav; Hofman, Radek

    2016-04-01

    We are concerned with estimation of a source term in the case of an accidental release from a known location, e.g. a power plant. Usually, the source term of an accidental release of radiation comprises a mixture of nuclides. Gamma dose rate measurements do not provide direct information on the source term composition. However, physical properties of the respective nuclides (deposition properties, decay half-life) can be used when uncertain information on nuclide ratios is available, e.g. from a known reactor inventory. The proposed method is based on a linear inverse model where the observation vector y arises as a linear combination y = Mx of a source-receptor-sensitivity (SRS) matrix M and the source term x. The task is to estimate the unknown source term x. The problem is ill-conditioned, and regularization is needed to obtain a reasonable solution. In this contribution, we assume that the nuclide ratios of the release are known with some degree of uncertainty. This knowledge is used to form the prior covariance matrix of the source term x. Due to uncertainty in the ratios, the diagonal elements of the covariance matrix are considered unknown. Positivity of the source term estimate is guaranteed by using a multivariate truncated Gaussian distribution. Following the Bayesian approach, we estimate all parameters of the model from the data, so that y, M, and the known ratios are the only inputs of the method. Since the inference of the model is intractable, we follow the Variational Bayes method, yielding an iterative algorithm for estimation of all model parameters. Performance of the method is studied on a simulated 6-hour power plant release where 3 nuclides are released and 2 nuclide ratios are approximately known. A comparison with a method with unknown nuclide ratios is given to demonstrate the usefulness of the proposed approach. This research is supported by the EEA/Norwegian Financial Mechanism under project MSMT-28477/2014, Source-Term Determination of Radionuclide Releases by Inverse Atmospheric Dispersion Modelling (STRADI).
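
    A toy version of the linear inverse step can make the role of the prior concrete. The sketch below computes a MAP estimate for y = Mx under a Gaussian prior, with positivity enforced by simple clipping; the paper's actual method uses a truncated multivariate Gaussian and Variational Bayes, and the SRS matrix and data here are synthetic:

      import numpy as np

      rng = np.random.default_rng(2)
      n_obs, n_src = 40, 12
      M = rng.uniform(0, 1, (n_obs, n_src))        # synthetic SRS matrix
      x_true = np.abs(rng.normal(5, 2, n_src))     # synthetic source term
      y = M @ x_true + rng.normal(0, 0.5, n_obs)   # noisy dose-rate data

      sigma2 = 0.25                                # observation noise variance
      prior_cov = np.diag(np.full(n_src, 10.0))    # prior built from (uncertain) nuclide ratios

      # MAP estimate for a Gaussian prior; positivity enforced by clipping,
      # a crude stand-in for the truncated Gaussian used in the paper.
      A = M.T @ M / sigma2 + np.linalg.inv(prior_cov)
      x_map = np.clip(np.linalg.solve(A, M.T @ y / sigma2), 0, None)
      print(np.round(x_map - x_true, 2))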

  18. Parameter estimation for groundwater models under uncertain irrigation data

    USGS Publications Warehouse

    Demissie, Yonas; Valocchi, Albert J.; Cai, Ximing; Brozovic, Nicholas; Senay, Gabriel; Gebremichael, Mekonnen

    2015-01-01

    The success of groundwater modeling is strongly influenced by the accuracy of the model parameters that are used to characterize the subsurface system. However, the presence of uncertainty, and possibly bias, in groundwater model source/sink terms may lead to biased estimates of model parameters and model predictions when standard regression-based inverse modeling techniques are used. This study first quantifies the levels of bias in groundwater model parameters and predictions due to the presence of errors in irrigation data. Then, a new inverse modeling technique called input uncertainty weighted least-squares (IUWLS) is presented for unbiased estimation of the parameters when pumping and other source/sink data are uncertain. The approach uses the concept of the generalized least-squares method, with the weight of the objective function depending on the level of pumping uncertainty and iteratively adjusted during the parameter optimization process. We have conducted both analytical and numerical experiments, using irrigation pumping data from the Republican River Basin in Nebraska, to evaluate the performance of ordinary least-squares (OLS) and IUWLS calibration methods under different levels of uncertainty of irrigation data and calibration conditions. The result from the OLS method shows the presence of statistically significant (p < 0.05) bias in estimated parameters and model predictions that persists despite calibrating the models to different calibration data and sample sizes. However, by directly accounting for the irrigation pumping uncertainties during the calibration procedures, the proposed IUWLS is able to minimize the bias effectively without adding significant computational burden to the calibration processes.
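
    A schematic reading of the IUWLS idea, for a toy one-parameter regression: fold the per-well pumping uncertainty into the residual weights and iterate as the parameter estimate changes. This down-weights the most uncertain wells; it is a sketch of the weighting concept, not the authors' full implementation, and all data below are synthetic:

      import numpy as np

      rng = np.random.default_rng(3)
      n = 100
      pumping_true = rng.uniform(10.0, 50.0, n)      # true irrigation pumping
      sigma_pump = rng.uniform(1.0, 4.0, n)          # per-well pumping uncertainty (std)
      pumping_obs = pumping_true + sigma_pump * rng.standard_normal(n)
      theta_true = -0.8                              # drawdown per unit pumping
      head = theta_true * pumping_true + rng.normal(0.0, 0.5, n)

      sigma_obs2 = 0.25                              # head-measurement noise variance
      theta = -1.0                                   # initial parameter guess
      for _ in range(20):
          # Weight each well by total variance: measurement noise plus pumping
          # uncertainty propagated through the current parameter estimate.
          w = 1.0 / (sigma_obs2 + theta**2 * sigma_pump**2)
          theta = np.sum(w * pumping_obs * head) / np.sum(w * pumping_obs**2)
      print(f"IUWLS-style estimate: {theta:.3f} (true {theta_true})")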

  19. A novel integrated approach for the hazardous radioactive dust source terms estimation in future nuclear fusion power plants.

    PubMed

    Poggi, L A; Malizia, A; Ciparisse, J F; Gaudio, P

    2016-10-01

    An open issue still under investigation by several international entities working in the safety and security field for the foreseen nuclear fusion reactors is the estimation of source terms that are a hazard for operators and the public, and for the machine itself in terms of efficiency and integrity in case of severe accident scenarios. Source term estimation is a crucial safety issue to be addressed in future reactor safety assessments, and the estimates available at present are not sufficiently satisfactory. The lack of neutronic data, along with the insufficiently accurate methodologies used until now, calls for an integrated methodology for source term estimation that can provide predictions with adequate accuracy. This work proposes a complete methodology to estimate dust source terms, starting from a broad information gathering. The wide number of parameters that can influence dust source term production is reduced with statistical tools using a combination of screening, sensitivity analysis, and uncertainty analysis. Finally, a preliminary and simplified methodology for predicting dust source term production in future devices is presented.

  20. Partitioning the Uncertainty in Estimates of Mean Basal Area Obtained from 10-year Diameter Growth Model Predictions

    Treesearch

    Ronald E. McRoberts

    2005-01-01

    Uncertainty in model-based predictions of individual tree diameter growth is attributed to three sources: measurement error for predictor variables, residual variability around model predictions, and uncertainty in model parameter estimates. Monte Carlo simulations are used to propagate the uncertainty from the three sources through a set of diameter growth models to...
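
    The Monte Carlo propagation described above can be sketched in a few lines: draw model parameters, perturb the predictor measurements, and add residual noise on every pass. The toy growth model and all variances below are invented placeholders, not values from the study:

      import numpy as np

      rng = np.random.default_rng(4)

      # Toy 10-year diameter growth model: growth = b0 + b1 * dbh
      # (illustrative form, not the actual model in the paper).
      b_mean = np.array([2.0, 0.05])
      b_cov = np.array([[0.04, 0.0], [0.0, 1e-5]])   # parameter uncertainty
      dbh_measured = np.array([20.0, 31.0, 27.5])    # cm, subject to measurement error

      n_sim = 10_000
      growth = np.empty((n_sim, dbh_measured.size))
      for i in range(n_sim):
          b = rng.multivariate_normal(b_mean, b_cov)                   # parameter draw
          dbh = dbh_measured + rng.normal(0, 0.5, dbh_measured.size)   # measurement error
          eps = rng.normal(0, 1.0, dbh_measured.size)                  # residual variability
          growth[i] = b[0] + b[1] * dbh + eps

      print("mean growth:", growth.mean(axis=0).round(2))
      print("sd growth:  ", growth.std(axis=0).round(2))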

  1. An Applied Framework for Incorporating Multiple Sources of Uncertainty in Fisheries Stock Assessments.

    PubMed

    Scott, Finlay; Jardim, Ernesto; Millar, Colin P; Cerviño, Santiago

    2016-01-01

    Estimating fish stock status is very challenging given the many sources and high levels of uncertainty surrounding the biological processes (e.g. natural variability in the demographic rates), model selection (e.g. choosing growth or stock assessment models) and parameter estimation. Incorporating multiple sources of uncertainty in a stock assessment allows advice to better account for the risks associated with proposed management options, promoting decisions that are more robust to such uncertainty. However, a typical assessment only reports the model fit and variance of estimated parameters, thereby underreporting the overall uncertainty. Additionally, although multiple candidate models may be considered, only one is selected as the 'best' result, effectively rejecting the plausible assumptions behind the other models. We present an applied framework to integrate multiple sources of uncertainty in the stock assessment process. The first step is the generation and conditioning of a suite of stock assessment models that contain different assumptions about the stock and the fishery. The second step is the estimation of parameters, including fitting of the stock assessment models. The final step integrates across all of the results to reconcile the multi-model outcome. The framework is flexible enough to be tailored to particular stocks and fisheries and can draw on information from multiple sources to implement a broad variety of assumptions, making it applicable to stocks with varying levels of data availability. The Iberian hake stock in International Council for the Exploration of the Sea (ICES) Divisions VIIIc and IXa is used to demonstrate the framework, starting from length-based stock and index data. Process and model uncertainty are considered through the growth, natural mortality, fishing mortality, survey catchability and stock-recruitment relationship. Estimation uncertainty is included as part of the fitting process. Simple model averaging is used to integrate across the results and produce a single assessment that considers the multiple sources of uncertainty.
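
    The final integration step can be illustrated with equal-weight model averaging, the simplest case of the approach named above: pool draws from each conditioned model so the combined interval carries both within- and between-model uncertainty. All numbers are invented, not Iberian hake results:

      import numpy as np

      rng = np.random.default_rng(5)

      # Hypothetical spawning-stock-biomass estimates (with uncertainty) from
      # three assessment models conditioned on different assumptions.
      draws_per_model = 5_000
      model_draws = [
          rng.normal(52_000, 4_000, draws_per_model),   # model A: low natural mortality
          rng.normal(60_000, 6_000, draws_per_model),   # model B: alternative growth
          rng.normal(48_000, 5_000, draws_per_model),   # model C: different catchability
      ]

      # Equal-weight model averaging: pool the draws so the final interval
      # reflects both within- and between-model uncertainty.
      pooled = np.concatenate(model_draws)
      lo, med, hi = np.percentile(pooled, [2.5, 50, 97.5])
      print(f"SSB = {med:,.0f} t (95% interval {lo:,.0f}-{hi:,.0f})")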

  2. An architecture for efficient gravitational wave parameter estimation with multimodal linear surrogate models

    NASA Astrophysics Data System (ADS)

    O'Shaughnessy, Richard; Blackman, Jonathan; Field, Scott E.

    2017-07-01

    The recent direct observation of gravitational waves has further emphasized the desire for fast, low-cost, and accurate methods to infer the parameters of gravitational wave sources. Due to the expense of waveform generation and data handling, the cost of evaluating the likelihood function limits the computational performance of these calculations. Building on recently developed surrogate models and a novel parameter estimation pipeline, we show how to quickly generate the likelihood function as an analytic, closed-form expression. Using a straightforward variant of a production-scale parameter estimation code, we demonstrate our method using surrogate models of effective-one-body and numerical relativity waveforms. Our study is the first time these models have been used for parameter estimation and one of the first ever parameter estimation calculations with multimodal numerical relativity waveforms, which include all ℓ ≤ 4 modes. Our grid-free method enables rapid parameter estimation for any waveform with a suitable reduced-order model. The methods described in this paper may also find use in other data analysis studies, such as vetting coincident events or the computation of the coalescing-compact-binary detection statistic.

  3. Geometric Characterization of Multi-Axis Multi-Pinhole SPECT

    PubMed Central

    DiFilippo, Frank P.

    2008-01-01

    A geometric model and calibration process are developed for SPECT imaging with multiple pinholes and multiple mechanical axes. Unlike the typical situation where pinhole collimators are mounted directly to rotating gamma ray detectors, this geometric model allows for independent rotation of the detectors and pinholes, for the case where the pinhole collimator is physically detached from the detectors. This geometric model is applied to a prototype small animal SPECT device with a total of 22 pinholes, which uses dual clinical SPECT detectors. All free parameters in the model are estimated from a calibration scan of point sources and without the need for a precision point source phantom. For a full calibration of this device, a scan of four point sources with 360° rotation is suitable for estimating all 95 free parameters of the geometric model. After a full calibration, a rapid calibration scan of two point sources with 180° rotation is suitable for estimating the subset of 22 parameters associated with repositioning the collimation device relative to the detectors. The high accuracy of the calibration process is validated experimentally. Residual differences between predicted and measured coordinates are normally distributed with 0.8 mm full width at half maximum and are estimated to contribute 0.12 mm root mean square to the reconstructed spatial resolution. Since this error is small compared to other contributions arising from the pinhole diameter and the detector, the accuracy of the calibration is sufficient for high resolution small animal SPECT imaging. PMID:18293574
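
    The calibration step, estimating geometric parameters from point-source projections by nonlinear least squares, can be sketched on a toy pinhole model. The three-parameter projection below (a focal length and two detector offsets) is a deliberately simplified stand-in for the paper's 95-parameter model, and all coordinates are invented:

      import numpy as np
      from scipy.optimize import least_squares

      rng = np.random.default_rng(6)

      # Known 3D point-source positions (mm) in the room frame (invented).
      pts = np.array([[30, 10, 200], [-20, 25, 240], [15, -30, 220], [-25, -15, 260]], float)
      true = np.array([180.0, 2.0, -3.0])  # focal length, u0, v0 (illustrative)

      def project(params, pts):
          # Toy pinhole projection of 3D points onto detector coordinates.
          f, u0, v0 = params
          return np.column_stack((f * pts[:, 0] / pts[:, 2] + u0,
                                  f * pts[:, 1] / pts[:, 2] + v0))

      measured = project(true, pts) + rng.normal(0, 0.1, (len(pts), 2))

      def residuals(params):
          return (project(params, pts) - measured).ravel()

      fit = least_squares(residuals, x0=[150.0, 0.0, 0.0])
      print(fit.x.round(2))  # recovered geometric parameters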

  4. 40 CFR 98.235 - Procedures for estimating missing data.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 40 Protection of Environment 22 2013-07-01 2013-07-01 false Procedures for estimating missing data... Procedures for estimating missing data. A complete record of all estimated and/or measured parameters used in... sources as soon as possible, including in the subsequent calendar year if missing data are not discovered...

  5. 40 CFR 98.235 - Procedures for estimating missing data.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 40 Protection of Environment 21 2014-07-01 2014-07-01 false Procedures for estimating missing data... Procedures for estimating missing data. A complete record of all estimated and/or measured parameters used in... sources as soon as possible, including in the subsequent calendar year if missing data are not discovered...

  6. 40 CFR 98.235 - Procedures for estimating missing data.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 40 Protection of Environment 22 2012-07-01 2012-07-01 false Procedures for estimating missing data... Procedures for estimating missing data. A complete record of all estimated and/or measured parameters used in... sources as soon as possible, including in the subsequent calendar year if missing data are not discovered...

  7. 40 CFR 98.235 - Procedures for estimating missing data.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 40 Protection of Environment 21 2011-07-01 2011-07-01 false Procedures for estimating missing data... Procedures for estimating missing data. A complete record of all estimated and/or measured parameters used in... sources as soon as possible, including in the subsequent calendar year if missing data are not discovered...

  8. Atmospheric dispersion prediction and source estimation of hazardous gas using artificial neural network, particle swarm optimization and expectation maximization

    NASA Astrophysics Data System (ADS)

    Qiu, Sihang; Chen, Bin; Wang, Rongxiao; Zhu, Zhengqiu; Wang, Yuan; Qiu, Xiaogang

    2018-04-01

    Hazardous gas leak accidents pose a potential threat to human beings. Predicting atmospheric dispersion and estimating its source have become increasingly important in emergency management. Current dispersion prediction and source estimation models cannot satisfy the requirements of emergency management because they do not offer both high efficiency and high accuracy at the same time. In this paper, we develop a fast and accurate dispersion prediction and source estimation method based on an artificial neural network (ANN), particle swarm optimization (PSO) and expectation maximization (EM). The novel method uses a large number of pre-determined scenarios to train the ANN for dispersion prediction, so that the ANN can predict the concentration distribution accurately and efficiently. PSO and EM are applied for estimating the source parameters, which effectively accelerates convergence. The method is verified on the Indianapolis field study with an SF6 release source. The results demonstrate the effectiveness of the method.
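
    The PSO stage can be sketched against a toy forward model. Below, a bare-bones particle swarm searches for a release rate and source position that minimize the misfit to synthetic sensor data; the Gaussian-plume function is a crude stand-in for the trained ANN, and all constants are invented:

      import numpy as np

      rng = np.random.default_rng(7)

      def plume(q, x0, xs, ys):
          """Toy ground-level Gaussian plume (stand-in for the trained ANN)."""
          dx = np.maximum(xs - x0, 1e-3)
          sy, sz = 0.08 * dx, 0.06 * dx
          return q / (2 * np.pi * 3.0 * sy * sz) * np.exp(-0.5 * (ys / sy) ** 2)

      xs = np.linspace(100, 900, 25)
      ys = rng.uniform(-40, 40, 25)
      obs = plume(2.0, 0.0, xs, ys)  # synthetic sensor readings (q=2, x0=0)

      # Bare-bones PSO over (release rate q, source x-position x0).
      n_p = 40
      pos = rng.uniform([0.1, -200.0], [10.0, 200.0], (n_p, 2))
      vel = np.zeros_like(pos)
      pbest = pos.copy()
      pcost = np.array([np.sum((plume(q, x0, xs, ys) - obs) ** 2) for q, x0 in pos])
      for _ in range(200):
          g = pbest[pcost.argmin()]  # global best particle
          vel = (0.7 * vel
                 + 1.5 * rng.random((n_p, 2)) * (pbest - pos)
                 + 1.5 * rng.random((n_p, 2)) * (g - pos))
          pos += vel
          cost = np.array([np.sum((plume(q, x0, xs, ys) - obs) ** 2) for q, x0 in pos])
          better = cost < pcost
          pbest[better], pcost[better] = pos[better], cost[better]
      print(pbest[pcost.argmin()].round(2))  # should approach [2.0, 0.0]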

  9. PSO (Particle Swarm Optimization) for Interpretation of Magnetic Anomalies Caused by Simple Geometrical Structures

    NASA Astrophysics Data System (ADS)

    Essa, Khalid S.; Elhussein, Mahmoud

    2018-04-01

    A new efficient approach is presented for estimating the parameters that control source dimensions from magnetic anomaly profile data using the particle swarm optimization (PSO) algorithm. The PSO algorithm is applied to interpret magnetic anomaly profile data with a new formula for isolated sources embedded in the subsurface. The model parameters estimated here are the depth of the body, the amplitude coefficient, the angle of effective magnetization, the shape factor, and the horizontal coordinates of the source. The model parameters evaluated by the present technique, particularly the depth of the buried structures, were observed to be in excellent agreement with the true parameters. The root mean square (RMS) error is used as a criterion for the misfit between the observed and computed anomalies. Inversion of noise-free synthetic data, noisy synthetic data containing different levels of random noise (5, 10, 15 and 20%), multiple structures, and in addition two real-field datasets from the USA and Egypt demonstrates the viability of the approach. The final estimates of the different parameters agree with those given in the published literature and with geologic results.

  10. Adaptive Estimation and Heuristic Optimization of Nonlinear Spacecraft Attitude Dynamics

    DTIC Science & Technology

    2016-09-15

    Measurement sources ranging from inertial measurement units to star sensors are used to construct observations for attitude estimation algorithms. A single vector measurement provides two independent parameters, as the unit-vector constraint removes a degree of freedom, making the problem underdetermined.

  11. Modified method for estimating petroleum source-rock potential using wireline logs, with application to the Kingak Shale, Alaska North Slope

    USGS Publications Warehouse

    Rouse, William A.; Houseknecht, David W.

    2016-02-11

    In 2012, the U.S. Geological Survey completed an assessment of undiscovered, technically recoverable oil and gas resources in three source rocks of the Alaska North Slope, including the lower part of the Jurassic to Lower Cretaceous Kingak Shale. In order to identify organic shale potential in the absence of a robust geochemical dataset from the lower Kingak Shale, we introduce two quantitative parameters, $\Delta DT_{\bar{x}}$ and $\Delta DT_z$, estimated from wireline logs from exploration wells and based in part on the commonly used delta-log resistivity ($\Delta \log R$) technique. Calculation of $\Delta DT_{\bar{x}}$ and $\Delta DT_z$ is intended to produce objective parameters that may be proportional to the quality and volume, respectively, of potential source rocks penetrated by a well and can be used as mapping parameters to convey the spatial distribution of source-rock potential. Both the $\Delta DT_{\bar{x}}$ and $\Delta DT_z$ mapping parameters show increased source-rock potential from north to south across the North Slope, with the largest values at the toe of clinoforms in the lower Kingak Shale. Because thermal maturity is not considered in the calculation of $\Delta DT_{\bar{x}}$ or $\Delta DT_z$, total organic carbon values for individual wells cannot be calculated on the basis of $\Delta DT_{\bar{x}}$ or $\Delta DT_z$ alone. Therefore, the $\Delta DT_{\bar{x}}$ and $\Delta DT_z$ mapping parameters should be viewed as first-step reconnaissance tools for identifying source-rock potential.
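
    The classic $\Delta \log R$ overlay that the $\Delta DT$ parameters build on is a one-line computation. A sketch of the standard Passey formulation with the conventional 0.02 sonic scaling; the log samples are illustrative, not Kingak Shale data, and the $\Delta DT$ definitions themselves are given in the paper:

      import numpy as np

      def delta_log_r(resistivity, sonic, r_baseline, dt_baseline):
          """Classic Passey delta-log-R overlay: separation between deep
          resistivity (ohm-m) and sonic transit time (us/ft), relative to a
          fine-grained, non-source baseline interval."""
          return np.log10(resistivity / r_baseline) + 0.02 * (sonic - dt_baseline)

      # Illustrative log samples (not Kingak Shale data).
      res = np.array([2.0, 8.0, 30.0])      # deep resistivity, ohm-m
      son = np.array([90.0, 105.0, 120.0])  # sonic transit time, us/ft
      print(delta_log_r(res, son, r_baseline=2.0, dt_baseline=90.0))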

  12. Accurate estimation of seismic source parameters of induced seismicity by a combined approach of generalized inversion and genetic algorithm: Application to The Geysers geothermal area, California

    NASA Astrophysics Data System (ADS)

    Picozzi, M.; Oth, A.; Parolai, S.; Bindi, D.; De Landro, G.; Amoroso, O.

    2017-05-01

    The accurate determination of stress drop, seismic efficiency, and how source parameters scale with earthquake size is an important issue for seismic hazard assessment of induced seismicity. We propose an improved nonparametric, data-driven strategy suitable for monitoring induced seismicity, which combines the generalized inversion technique with genetic algorithms. In the first step of the analysis, the generalized inversion technique allows for an effective correction of waveforms for attenuation and site contributions. Then, the retrieved source spectra are inverted by a nonlinear sensitivity-driven inversion scheme that allows accurate estimation of source parameters. We investigate the earthquake source characteristics of 633 induced earthquakes (Mw 2-3.8) recorded at The Geysers geothermal field (California) by a dense seismic network (i.e., 32 stations, more than 17,000 velocity records). We find non-self-similar behavior, empirical source spectra that require an ω^{-γ} source model with γ > 2 to be well fit, and small radiation efficiency η_SW. All these findings suggest different dynamic rupture processes for smaller and larger earthquakes, and that the proportion of high-frequency energy radiation and the amount of energy required to overcome friction or to create new fracture surface change with earthquake size. Furthermore, we observe two distinct families of events with peculiar source parameters: one suggests the reactivation of deep structures linked to the regional tectonics, while the other supports the idea of an important role of steeply dipping faults in the fluid pressure diffusion.
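
    Fitting a generalized ω^{-γ} spectral model to a retrieved source spectrum, as described above, reduces to a three-parameter nonlinear fit. A minimal sketch with scipy; the spectrum below is synthetic, not a Geysers record:

      import numpy as np
      from scipy.optimize import curve_fit

      rng = np.random.default_rng(8)

      def source_spectrum(f, omega0, fc, gamma):
          # Generalized omega-gamma spectral shape; gamma = 2 recovers Brune.
          return omega0 / (1.0 + (f / fc) ** gamma)

      f = np.logspace(-0.5, 1.5, 60)  # roughly 0.3-30 Hz
      obs = source_spectrum(f, 1.0, 4.0, 2.6) * np.exp(rng.normal(0.0, 0.05, f.size))

      popt, _ = curve_fit(source_spectrum, f, obs, p0=[0.5, 1.0, 2.0])
      print(f"fc = {popt[1]:.2f} Hz, gamma = {popt[2]:.2f}")  # gamma > 2, as in the study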

  13. Study of the Seismic Source in the Jalisco Block

    NASA Astrophysics Data System (ADS)

    Gutierrez, Q. J.; Escudero, C. R.; Nunez-Cornu, F. J.; Ochoa, J.; Cruz, L. H.

    2013-05-01

    Directly measuring the dimension and orientation of an earthquake fault, as well as the direction of slip, is a complicated task; a better approach is to use the seismic wave spectrum and the direction of P-wave first motions observed at each station. With these methods we can estimate seismic source parameters such as the stress drop, the corner frequency (which is linked to the rupture duration), the fault radius (for the particular case of a circular fault), the rupture area, the seismic moment, the moment magnitude, and the focal mechanisms. The study area where the source parameters were estimated comprises the complex tectonic configuration of the Jalisco block, which is delimited by the Middle America Trench to the west, the Colima graben to the south, and the Tepic-Zacoalco rift to the north. The data were recorded by the MARS (Mapping the Riviera Subduction Zone) and RESAJ networks. MARS had 50 stations deployed in the Jalisco block from January 1, 2006 until June 2007, recording events with magnitudes between 3 and 6.5 mb. RESAJ has 10 stations within the state of Jalisco and has been recording since October 2011. Before applying the method, we remove the trend, the mean, and the instrument response, and correct for attenuation; the S wave is then picked manually, and the multitaper method is used to obtain its spectrum, from which we estimate the corner frequency and the spectral level. These values are substituted into the equations of the Brune model to calculate the source parameters. Focal mechanisms are computed with the HASH software, which determines the most likely mechanism. The main purpose of this study is to estimate earthquake source parameters in order to help understand the physics of the earthquake rupture mechanism in the area.
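
    The Brune-model step, going from corner frequency and seismic moment to source parameters, uses standard relations: source radius a = 2.34β/(2πf_c), stress drop Δσ = 7M0/(16a^3), and Mw from the moment. A sketch assuming an S-wave velocity of 3.5 km/s; the input values are illustrative:

      import numpy as np

      def brune_source_parameters(m0_nm, fc_hz, beta_ms=3500.0):
          """Standard Brune-model relations: source radius, stress drop, Mw.
          m0_nm: seismic moment in N*m; fc_hz: S-wave corner frequency."""
          radius = 2.34 * beta_ms / (2.0 * np.pi * fc_hz)   # m
          stress_drop = 7.0 * m0_nm / (16.0 * radius ** 3)  # Pa
          mw = (2.0 / 3.0) * (np.log10(m0_nm) - 9.1)        # moment magnitude
          return radius, stress_drop, mw

      r, ds, mw = brune_source_parameters(1.1e17, 1.0)  # illustrative inputs
      print(f"radius {r / 1000:.2f} km, stress drop {ds / 1e6:.1f} MPa, Mw {mw:.1f}")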

  14. Fundamental Rotorcraft Acoustic Modeling From Experiments (FRAME)

    NASA Technical Reports Server (NTRS)

    Greenwood, Eric

    2011-01-01

    A new methodology is developed for constructing helicopter source noise models for use in mission planning tools from experimental measurements of helicopter external noise radiation. The models are constructed by applying a parameter identification method to an assumed analytical model of the rotor harmonic noise sources. This new method allows individual rotor harmonic noise sources to be identified and characterized in terms of their individual nondimensional governing parameters. The method is applied to both wind tunnel measurements and ground noise measurements of two-bladed rotors. The method is shown to match the parametric trends of main rotor harmonic noise, allowing accurate estimates of the dominant rotorcraft noise sources to be made for operating conditions based on a small number of measurements taken at different operating conditions. The ability of this method to estimate changes in noise radiation due to changes in ambient conditions is also demonstrated.

  15. A study of Guptkashi, Uttarakhand earthquake of 6 February 2017 (Mw 5.3) in the Himalayan arc and implications for ground motion estimation

    NASA Astrophysics Data System (ADS)

    Srinagesh, Davuluri; Singh, Shri Krishna; Suresh, Gaddale; Srinivas, Dakuri; Pérez-Campos, Xyoli; Suresh, Gudapati

    2018-05-01

    The 2017 Guptkashi earthquake occurred in a segment of the Himalayan arc with high potential for a strong earthquake in the near future. In this context, a careful analysis of the earthquake is important as it may shed light on source and ground motion characteristics during future earthquakes. Using the earthquake recording on a single broadband strong-motion seismograph installed at the epicenter, we estimate the earthquake's location (30.546° N, 79.063° E), depth (H = 19 km), the seismic moment (M_0 = 1.12×10^17 Nm, Mw 5.3), the focal mechanism (φ = 280°, δ = 14°, λ = 84°), the source radius (a = 1.3 km), and the static stress drop (Δσ_s ≈ 22 MPa). The event occurred just above the Main Himalayan Thrust. S-wave spectra of the earthquake at hard sites in the arc are well approximated (assuming an ω^{-2} source model) by attenuation parameters Q(f) = 500f^{0.9}, κ = 0.04 s, and f_max = infinite, and a stress drop of Δσ = 70 MPa. Observed and computed peak ground motions, using the stochastic method along with parameters inferred from spectral analysis, agree well with each other. These attenuation parameters are also reasonable for the observed spectra and/or peak ground motion parameters in the arc at distances ≤ 200 km during five other earthquakes in the region (4.6 ≤ Mw ≤ 6.9). The estimated stress drop of the six events ranges from 20 to 120 MPa. Our analysis suggests that the attenuation parameters given above may be used for ground motion estimation at hard sites in the Himalayan arc via the stochastic method.
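
    The attenuation model quoted above (Q(f) = 500f^{0.9}, κ = 0.04 s) is simple to apply to a source spectrum in the stochastic-method style. A sketch with a generic ω^{-2} source shape and an assumed 1/R geometric spreading; the distance and velocity values are illustrative:

      import numpy as np

      def attenuated_spectrum(f, source_spec, r_km, beta_kms=3.5, kappa=0.04):
          """Apply the quoted attenuation model to a source spectrum:
          anelastic decay with Q(f) = 500 f^0.9, a near-site kappa filter,
          and an assumed 1/R geometric spreading."""
          q = 500.0 * f ** 0.9
          path = np.exp(-np.pi * f * r_km / (q * beta_kms)) / r_km
          site = np.exp(-np.pi * kappa * f)
          return source_spec * path * site

      f = np.logspace(-1.0, 1.5, 100)     # 0.1-30 Hz
      src = 1.0 / (1.0 + (f / 1.0) ** 2)  # generic omega-squared source shape
      print(attenuated_spectrum(f, src, r_km=100.0)[:5])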

  16. Ambiguity Resolution for Phase-Based 3-D Source Localization under Fixed Uniform Circular Array.

    PubMed

    Chen, Xin; Liu, Zhen; Wei, Xizhang

    2017-05-11

    Under a fixed uniform circular array (UCA), 3-D parameter estimation of a source whose half-wavelength is smaller than the array aperture suffers from a serious phase ambiguity problem, which also appears in a recently proposed phase-based algorithm. In this paper, by using the centro-symmetry of a UCA with an even number of sensors, the source's angles and range can be decoupled, and a novel algorithm named subarray grouping and ambiguity searching (SGAS) is introduced to resolve the angle ambiguity. In the SGAS algorithm, each subarray, formed by two couples of centro-symmetric sensors, obtains a batch of results under different ambiguities; by searching for the nearest value among subarrays, which always corresponds to the correct ambiguity, rough angle estimation with no ambiguity is achieved. The unambiguous angles are then employed to resolve phase ambiguity in a phase-based 3-D parameter estimation algorithm, and the source's range, as well as more precise angles, can be obtained. Moreover, to improve the practical performance of SGAS, the optimal structure of subarrays and subarray selection criteria are further investigated. Simulation results demonstrate the satisfying performance of the proposed method in 3-D source localization.

  17. Selection of Common Items as an Unrecognized Source of Variability in Test Equating: A Bootstrap Approximation Assuming Random Sampling of Common Items

    ERIC Educational Resources Information Center

    Michaelides, Michalis P.; Haertel, Edward H.

    2014-01-01

    The standard error of equating quantifies the variability in the estimation of an equating function. Because common items for deriving equated scores are treated as fixed, the only source of variability typically considered arises from the estimation of common-item parameters from responses of samples of examinees. Use of alternative, equally…

  18. Solution of the weighted symmetric similarity transformations based on quaternions

    NASA Astrophysics Data System (ADS)

    Mercan, H.; Akyilmaz, O.; Aydin, C.

    2017-12-01

    A new method based on the Gauss-Helmert model of adjustment is presented for the solution of similarity transformations, either 3D or 2D, in the frame of the errors-in-variables (EIV) model. The EIV model assumes that all variables in the mathematical model are contaminated by random errors. The total least squares estimation technique may be used to solve the EIV model. Accounting for heteroscedastic uncertainty in both the target and the source coordinates, which is the more common and general case in practice, leads to a more realistic estimation of the transformation parameters. The presented algorithm can handle heteroscedastic transformation problems, i.e., the positions of both target and source points may have full covariance matrices; therefore, there is no limitation such as isotropic or homogeneous accuracy for the reference point coordinates. The developed algorithm takes advantage of the quaternion definition, which uniquely represents a 3D rotation matrix. The transformation parameters, namely scale, translations, and the quaternion (and thus the rotation matrix), along with their covariances, are iteratively estimated with rapid convergence. Moreover, a prior least squares (LS) estimate of the unknown transformation parameters is not required to start the iterations. We also show that the developed method can be used to estimate 2D similarity transformation parameters by simply treating the problem as a 3D transformation with zero values assigned to the z-components of both target and source points. The efficiency of the new algorithm is demonstrated with numerical examples and comparisons with the results of previous studies that use the same data set. Simulation experiments for the evaluation and comparison of the proposed method and the conventional weighted LS (WLS) method are also presented.
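
    The quaternion-to-rotation-matrix step at the core of the method uses the standard formula. A minimal sketch, including the 2D-as-3D trick mentioned above; the scale and translation values in the usage lines are invented:

      import numpy as np

      def quat_to_rot(q):
          """Unit quaternion (w, x, y, z) -> 3x3 rotation matrix."""
          w, x, y, z = q / np.linalg.norm(q)
          return np.array([
              [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
              [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
              [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
          ])

      # 2D similarity treated as 3D: rotate about z, zero the z-components.
      theta = np.deg2rad(30.0)
      q = np.array([np.cos(theta / 2), 0.0, 0.0, np.sin(theta / 2)])
      src_pt = np.array([1.0, 0.0, 0.0])
      scale, translation = 2.0, np.array([10.0, 5.0, 0.0])  # invented values
      print(scale * quat_to_rot(q) @ src_pt + translation)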

  19. A line-source method for aligning on-board and other pinhole SPECT systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yan, Susu; Bowsher, James; Yin, Fang-Fang

    2013-12-15

    Purpose: In order to achieve functional and molecular imaging as patients are in position for radiation therapy, a robotic multipinhole SPECT system is being developed. Alignment of the SPECT system—to the linear accelerator (LINAC) coordinate frame and to the coordinate frames of other on-board imaging systems such as cone-beam CT (CBCT)—is essential for target localization and image reconstruction. An alignment method that utilizes line sources and one pinhole projection is proposed and investigated to achieve this goal. Potentially, this method could also be applied to the calibration of other pinhole SPECT systems. Methods: An alignment model consisting of multiple alignment parameters was developed which maps line sources in three-dimensional (3D) space to their two-dimensional (2D) projections on the SPECT detector. In a computer-simulation study, 3D coordinates of line sources were defined in a reference room coordinate frame, such as the LINAC coordinate frame. Corresponding 2D line-source projections were generated by computer simulation that included SPECT blurring and noise effects. The Radon transform was utilized to detect angles (α) and offsets (ρ) of the line-source projections. Alignment parameters were then estimated by a nonlinear least squares method, based on the α and ρ values and the alignment model. Alignment performance was evaluated as a function of number of line sources, Radon transform accuracy, finite line-source width, intrinsic camera resolution, Poisson noise, and acquisition geometry. Experimental evaluations were performed using a physical line-source phantom and a pinhole-collimated gamma camera attached to a robot. Results: In computer-simulation studies, when there was no error in determining angles (α) and offsets (ρ) of the measured projections, six alignment parameters (three translational and three rotational) were estimated perfectly using three line sources. When angles (α) and offsets (ρ) were provided by the Radon transform, estimation accuracy was reduced. The estimation error was associated with rounding errors of the Radon transform, finite line-source width, Poisson noise, number of line sources, intrinsic camera resolution, and detector acquisition geometry. Statistically, the estimation accuracy was significantly improved by using four line sources rather than three and by thinner line-source projections (obtained by better intrinsic detector resolution). With five line sources, median errors were 0.2 mm for the detector translations, 0.7 mm for the detector radius of rotation, and less than 0.5° for detector rotation, tilt, and twist. In experimental evaluations, average errors relative to a different, independent registration technique were about 1.8 mm for detector translations, 1.1 mm for the detector radius of rotation (ROR), 0.5° and 0.4° for detector rotation and tilt, respectively, and 1.2° for detector twist. Conclusions: Alignment parameters can be estimated using one pinhole projection of line sources. Alignment errors are largely associated with limited accuracy of the Radon transform in determining angles (α) and offsets (ρ) of the line-source projections. This alignment method may be important for multipinhole SPECT, where relative pinhole alignment may vary during rotation. For pinhole and multipinhole SPECT imaging on-board radiation therapy machines, the method could provide alignment of SPECT coordinates with those of CBCT and the LINAC.
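
    The Radon-transform step, detecting the angle (α) and offset (ρ) of each line-source projection, can be illustrated with scikit-image. A toy sketch; the synthetic image and the simple peak-picking are deliberately simplified relative to the paper's procedure:

      import numpy as np
      from skimage.transform import radon

      # Synthetic pinhole projection containing one line-source trace.
      img = np.zeros((128, 128))
      rows = np.arange(128)
      cols = np.clip((0.5 * rows + 20).astype(int), 0, 127)  # known slope/offset
      img[rows, cols] = 1.0

      theta = np.linspace(0.0, 180.0, 181)
      sino = radon(img, theta=theta, circle=False)

      # The line concentrates into the brightest sinogram bin; its coordinates
      # give the projection angle (alpha) and offset (rho) fed to the alignment
      # model (angle convention follows skimage's radon).
      idx = np.unravel_index(np.argmax(sino), sino.shape)
      rho = idx[0] - sino.shape[0] // 2
      alpha = theta[idx[1]]
      print(f"alpha = {alpha:.1f} deg, rho = {rho} px")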

  1. A line-source method for aligning on-board and other pinhole SPECT systems

    PubMed Central

    Yan, Susu; Bowsher, James; Yin, Fang-Fang

    2013-01-01

    Purpose: In order to achieve functional and molecular imaging as patients are in position for radiation therapy, a robotic multipinhole SPECT system is being developed. Alignment of the SPECT system—to the linear accelerator (LINAC) coordinate frame and to the coordinate frames of other on-board imaging systems such as cone-beam CT (CBCT)—is essential for target localization and image reconstruction. An alignment method that utilizes line sources and one pinhole projection is proposed and investigated to achieve this goal. Potentially, this method could also be applied to the calibration of the other pinhole SPECT systems. Methods: An alignment model consisting of multiple alignment parameters was developed which maps line sources in three-dimensional (3D) space to their two-dimensional (2D) projections on the SPECT detector. In a computer-simulation study, 3D coordinates of line-sources were defined in a reference room coordinate frame, such as the LINAC coordinate frame. Corresponding 2D line-source projections were generated by computer simulation that included SPECT blurring and noise effects. The Radon transform was utilized to detect angles (α) and offsets (ρ) of the line-source projections. Alignment parameters were then estimated by a nonlinear least squares method, based on the α and ρ values and the alignment model. Alignment performance was evaluated as a function of number of line sources, Radon transform accuracy, finite line-source width, intrinsic camera resolution, Poisson noise, and acquisition geometry. Experimental evaluations were performed using a physical line-source phantom and a pinhole-collimated gamma camera attached to a robot. Results: In computer-simulation studies, when there was no error in determining angles (α) and offsets (ρ) of the measured projections, six alignment parameters (three translational and three rotational) were estimated perfectly using three line sources. When angles (α) and offsets (ρ) were provided by the Radon transform, estimation accuracy was reduced. The estimation error was associated with rounding errors of Radon transform, finite line-source width, Poisson noise, number of line sources, intrinsic camera resolution, and detector acquisition geometry. Statistically, the estimation accuracy was significantly improved by using four line sources rather than three and by thinner line-source projections (obtained by better intrinsic detector resolution). With five line sources, median errors were 0.2 mm for the detector translations, 0.7 mm for the detector radius of rotation, and less than 0.5° for detector rotation, tilt, and twist. In experimental evaluations, average errors relative to a different, independent registration technique were about 1.8 mm for detector translations, 1.1 mm for the detector radius of rotation (ROR), 0.5° and 0.4° for detector rotation and tilt, respectively, and 1.2° for detector twist. Conclusions: Alignment parameters can be estimated using one pinhole projection of line sources. Alignment errors are largely associated with limited accuracy of the Radon transform in determining angles (α) and offsets (ρ) of the line-source projections. This alignment method may be important for multipinhole SPECT, where relative pinhole alignment may vary during rotation. For pinhole and multipinhole SPECT imaging on-board radiation therapy machines, the method could provide alignment of SPECT coordinates with those of CBCT and the LINAC. PMID:24320537

  2. A line-source method for aligning on-board and other pinhole SPECT systems.

    PubMed

    Yan, Susu; Bowsher, James; Yin, Fang-Fang

    2013-12-01

    In order to achieve functional and molecular imaging as patients are in position for radiation therapy, a robotic multipinhole SPECT system is being developed. Alignment of the SPECT system, both to the linear accelerator (LINAC) coordinate frame and to the coordinate frames of other on-board imaging systems such as cone-beam CT (CBCT), is essential for target localization and image reconstruction. An alignment method that utilizes line sources and one pinhole projection is proposed and investigated to achieve this goal. Potentially, this method could also be applied to the calibration of other pinhole SPECT systems. An alignment model consisting of multiple alignment parameters was developed which maps line sources in three-dimensional (3D) space to their two-dimensional (2D) projections on the SPECT detector. In a computer-simulation study, 3D coordinates of line sources were defined in a reference room coordinate frame, such as the LINAC coordinate frame. Corresponding 2D line-source projections were generated by computer simulation that included SPECT blurring and noise effects. The Radon transform was utilized to detect angles (α) and offsets (ρ) of the line-source projections. Alignment parameters were then estimated by a nonlinear least squares method, based on the α and ρ values and the alignment model. Alignment performance was evaluated as a function of number of line sources, Radon transform accuracy, finite line-source width, intrinsic camera resolution, Poisson noise, and acquisition geometry. Experimental evaluations were performed using a physical line-source phantom and a pinhole-collimated gamma camera attached to a robot. In computer-simulation studies, when there was no error in determining angles (α) and offsets (ρ) of the measured projections, six alignment parameters (three translational and three rotational) were estimated perfectly using three line sources. When angles (α) and offsets (ρ) were provided by the Radon transform, estimation accuracy was reduced. The estimation error was associated with rounding errors of the Radon transform, finite line-source width, Poisson noise, number of line sources, intrinsic camera resolution, and detector acquisition geometry. Statistically, the estimation accuracy was significantly improved by using four line sources rather than three and by thinner line-source projections (obtained by better intrinsic detector resolution). With five line sources, median errors were 0.2 mm for the detector translations, 0.7 mm for the detector radius of rotation, and less than 0.5° for detector rotation, tilt, and twist. In experimental evaluations, average errors relative to a different, independent registration technique were about 1.8 mm for detector translations, 1.1 mm for the detector radius of rotation (ROR), 0.5° and 0.4° for detector rotation and tilt, respectively, and 1.2° for detector twist. Alignment parameters can be estimated using one pinhole projection of line sources. Alignment errors are largely associated with limited accuracy of the Radon transform in determining angles (α) and offsets (ρ) of the line-source projections. This alignment method may be important for multipinhole SPECT, where relative pinhole alignment may vary during rotation. For pinhole and multipinhole SPECT imaging on-board radiation therapy machines, the method could provide alignment of SPECT coordinates with those of CBCT and the LINAC.
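
    As a concrete illustration of the estimation step, the sketch below fits the six pose parameters (three rotations, three translations) to measured (α, ρ) line parameters by nonlinear least squares. The projection model is a simplification we assume for illustration (pinhole at the detector-frame origin, detector plane at z = -f, a consistent sign convention for line normals); none of the names come from the paper, and in practice the (α, ρ) pairs would first be extracted from the peak of a Radon transform of each projection image.

        import numpy as np
        from scipy.optimize import least_squares
        from scipy.spatial.transform import Rotation

        def project_line(p, d, rot, trans, f):
            """Project a 3D line (point p, direction d, world frame) through a
            pinhole at the detector-frame origin onto the plane z = -f, and
            return the (alpha, rho) parameters of the projected 2D line."""
            a = rot.apply(p - trans)          # two points on the line,
            b = rot.apply(p + d - trans)      # mapped into the detector frame
            ua = -f * a[:2] / a[2]            # pinhole projection of each point
            ub = -f * b[:2] / b[2]
            v = ub - ua
            n = np.array([-v[1], v[0]]) / np.linalg.norm(v)   # line normal
            return np.arctan2(n[1], n[0]), n @ ua             # (alpha, rho)

        def residuals(x, lines, meas, f):
            rot = Rotation.from_euler("xyz", x[:3])
            res = []
            for (p, d), (alpha_m, rho_m) in zip(lines, meas):
                alpha, rho = project_line(p, d, rot, x[3:], f)
                # wrap the angle residual so -pi and +pi compare as equal
                res += [np.angle(np.exp(1j * (alpha - alpha_m))), rho - rho_m]
            return res

        def align(lines, meas, f, x0=np.zeros(6)):
            """Estimate 3 rotations + 3 translations from >= 3 line sources."""
            return least_squares(residuals, x0, args=(lines, meas, f)).x

    Each line source contributes two residuals (one angle, one offset), which is why three lines suffice for the six unknowns in the noise-free case, matching the simulation result quoted above.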

  3. THE IMPACT OF POINT-SOURCE SUBTRACTION RESIDUALS ON 21 cm EPOCH OF REIONIZATION ESTIMATION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Trott, Cathryn M.; Wayth, Randall B.; Tingay, Steven J., E-mail: cathryn.trott@curtin.edu.au

    Precise subtraction of foreground sources is crucial for detecting and estimating 21 cm H I signals from the Epoch of Reionization (EoR). We quantify how imperfect point-source subtraction due to limitations of the measurement data set yields structured residual signal in the data set. We use the Cramér-Rao lower bound, as a metric for quantifying the precision with which a parameter may be measured, to estimate the residual signal in a visibility data set due to imperfect point-source subtraction. We then propagate these residuals into two metrics of interest for 21 cm EoR experiments, the angular power spectrum and two-dimensional power spectrum, using a combination of full analytic covariant derivation, analytic variant derivation, and covariant Monte Carlo simulations. This methodology differs from previous work in two ways: (1) it uses information theory to set the point-source position error, rather than assuming a global rms error, and (2) it describes a method for propagating the errors analytically, thereby obtaining the full correlation structure of the power spectra. The methods are applied to two upcoming low-frequency instruments that are proposing to perform statistical EoR experiments: the Murchison Widefield Array and the Precision Array for Probing the Epoch of Reionization. In addition to the actual antenna configurations, we apply the methods to minimally redundant and maximally redundant configurations. We find that for peeling sources above 1 Jy, the amplitude of the residual signal, and its variance, will be smaller than the contribution from thermal noise for the observing parameters proposed for upcoming EoR experiments, and that optimal subtraction of bright point sources will not be a limiting factor for EoR parameter estimation. We then use the formalism to provide an ab initio analytic derivation motivating the 'wedge' feature in the two-dimensional power spectrum, complementing previous discussion in the literature.

  4. Turboprop and rotary-wing aircraft flight parameter estimation using both narrow-band and broadband passive acoustic signal-processing methods.

    PubMed

    Ferguson, B G; Lo, K W

    2000-10-01

    Flight parameter estimation methods for an airborne acoustic source can be divided into two categories, depending on whether the narrow-band lines or the broadband component of the received signal spectrum is processed to estimate the flight parameters. This paper provides a common framework for the formulation and testing of two flight parameter estimation methods: one narrow band, the other broadband. The performances of the two methods are evaluated by applying them to the same acoustic data set, which is recorded by a planar array of passive acoustic sensors during multiple transits of a turboprop fixed-wing aircraft and two types of rotary-wing aircraft. The narrow-band method, which is based on a kinematic model that assumes the source travels in a straight line at constant speed and altitude, requires time-frequency analysis of the acoustic signal received by a single sensor during each aircraft transit. The broadband method is based on the same kinematic model, but requires observing the temporal variation of the differential time of arrival of the acoustic signal at each pair of sensors that comprise the planar array. Generalized cross correlation of each pair of sensor outputs using a cross-spectral phase transform prefilter provides instantaneous estimates of the differential times of arrival of the signal as the acoustic wavefront traverses the array.
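
    The last step of the broadband method, generalized cross correlation with a cross-spectral phase transform (PHAT) prefilter, is compact enough to sketch directly. The function below is our own minimal version (names and padding choices assumed, not taken from the paper):

        import numpy as np

        def gcc_phat(x, y, fs):
            """Differential time of arrival of y relative to x via
            generalized cross correlation with a PHAT prefilter."""
            n = len(x) + len(y)                 # zero-pad to avoid circular wrap
            X = np.fft.rfft(x, n=n)
            Y = np.fft.rfft(y, n=n)
            R = X * np.conj(Y)
            R /= np.abs(R) + 1e-12              # PHAT: whiten, keep phase only
            cc = np.fft.irfft(R, n=n)
            max_shift = n // 2
            cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
            return (np.argmax(np.abs(cc)) - max_shift) / fs   # delay in seconds

    Applying this to every sensor pair of the planar array yields the time series of differential arrival times from which the kinematic flight parameters are inverted.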

  5. Estimation and correction of different flavors of surface observation biases in ensemble Kalman filter

    NASA Astrophysics Data System (ADS)

    Lorente-Plazas, Raquel; Hacker, Josua P.; Collins, Nancy; Lee, Jared A.

    2017-04-01

    The benefit of assimilating surface observations for improving weather prediction inside the boundary layer, as well as the flow aloft, has been shown in several publications. However, the assimilation of surface observations is often far from optimal due to the presence of both model and observation biases. The sources of these biases can be diverse: an instrumental offset, errors associated with comparing point-based observations to grid-cell averages, etc. To overcome this challenge, a method was developed using the ensemble Kalman filter. The approach consists of representing each observation bias as a parameter. These bias parameters are added to the forward operator and they extend the state vector. As opposed to the observation bias estimation approaches most common in operational systems (e.g. for satellite radiances), the state vector and parameters are simultaneously updated by applying the Kalman filter equations to the augmented state. The method to estimate and correct the observation bias is evaluated using observing system simulation experiments (OSSEs) with the Weather Research and Forecasting (WRF) model. OSSEs are constructed for the conventional observation network including radiosondes, aircraft observations, atmospheric motion vectors, and surface observations. Three different kinds of biases are added to 2-meter temperature for synthetic METARs. From the simplest to the most sophisticated, the imposed biases are: (1) a spatially invariant bias, (2) a spatially varying bias proportional to topographic height differences between the model and the observations, and (3) a bias that is proportional to the temperature. The target region, characterized by complex terrain, is the western U.S. on a domain with 30-km grid spacing. Observations are assimilated every 3 hours using an 80-member ensemble during September 2012. Results demonstrate that the approach is able to estimate and correct the bias when it is spatially invariant (experiment 1). The more complex bias structures in experiments (2) and (3) are more difficult to estimate, but still possible. Estimating the parameter in experiments with unbiased observations results in spatial and temporal parameter variability about zero and establishes a threshold on the accuracy of the parameter in further experiments. When the observations are biased, the mean parameter value is close to the true bias, but temporal and spatial variability in the parameter estimates is similar to that obtained when estimating a zero bias in the observations. The distributions are related to other errors in the forecasts, indicating that the parameters are absorbing some of the forecast error from other sources. In this presentation we elucidate the reasons for the resulting parameter estimates and their variability.
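
    The state-augmentation idea is mechanical enough to sketch: append the bias parameter to the state vector, include it in the forward operator, and let an ordinary stochastic EnKF update both together. The sketch assumes a linear observation operator and a single scalar bias shared by all observations (the study estimates richer, spatially varying parameters); all names are ours:

        import numpy as np

        rng = np.random.default_rng(0)

        def enkf_update_augmented(X, beta, y_obs, obs_var, H):
            """One stochastic EnKF analysis step on the augmented state.
            X: (n_state, n_ens) forecast ensemble; beta: (n_ens,) bias ensemble;
            H: (n_obs, n_state) observation operator. The forward operator is
            H x + beta, so the filter can attribute part of the innovation to
            observation bias rather than to the state."""
            n_ens = X.shape[1]
            Z = np.vstack([X, beta])                   # augmented state
            Yhat = H @ X + beta                        # biased predicted obs
            Za = Z - Z.mean(axis=1, keepdims=True)
            Ya = Yhat - Yhat.mean(axis=1, keepdims=True)
            Pzy = Za @ Ya.T / (n_ens - 1)
            Pyy = Ya @ Ya.T / (n_ens - 1) + obs_var * np.eye(len(y_obs))
            K = Pzy @ np.linalg.inv(Pyy)               # Kalman gain
            y_pert = y_obs[:, None] + rng.normal(
                0.0, np.sqrt(obs_var), (len(y_obs), n_ens))
            Z = Z + K @ (y_pert - Yhat)
            return Z[:-1], Z[-1]                       # updated state and bias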

  6. A study on the seismic source parameters for earthquakes occurring in the southern Korean Peninsula

    NASA Astrophysics Data System (ADS)

    Rhee, H. M.; Sheen, D. H.

    2015-12-01

    We investigated the characteristics of the seismic source parameters of the southern part of the Korean Peninsula for the 599 events with ML≥1.7 from 2001 to 2014. The data were carefully selected by visual inspection in the time and frequency domains. The data set consists of 5,093 S-wave trains on three-component seismograms recorded at broadband seismograph stations which have been operated by the Korea Meteorological Administration and the Korea Institute of Geoscience and Mineral Resources. The corner frequency, stress drop, and moment magnitude of each event were measured by using the modified method of Jo and Baag (2001), based on the methods of Snoke (1987) and Andrews (1986). We found that this method could improve the stability of the estimation of source parameters from the S-wave displacement spectrum by an iterative process. Then, we compared the source parameters with those obtained from previous studies and investigated the source scaling relationship and the regional variations of source parameters in the southern Korean Peninsula.
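
    The modified Jo and Baag (2001) procedure is iterative; as a simpler, generic illustration of how the three source parameters follow from an S-wave displacement spectrum, the sketch below fits an omega-squared (Brune-type) model and applies standard relations. The material constants and the attenuation-free spectral model are illustrative assumptions, not values from the study:

        import numpy as np
        from scipy.optimize import curve_fit

        def brune_spectrum(f, omega0, fc):
            # omega-squared source model: flat below fc, f^-2 falloff above
            return omega0 / (1.0 + (f / fc) ** 2)

        def source_parameters(f, spec, rho=2700.0, beta=3500.0, R=50e3,
                              radiation=0.55, free_surface=2.0):
            (omega0, fc), _ = curve_fit(brune_spectrum, f, spec,
                                        p0=[spec[:5].mean(), 5.0])
            # seismic moment from the low-frequency plateau (Brune model)
            m0 = 4 * np.pi * rho * beta ** 3 * R * omega0 / (radiation * free_surface)
            r = 2.34 * beta / (2 * np.pi * fc)         # source radius
            stress_drop = 7.0 / 16.0 * m0 / r ** 3     # static stress drop
            mw = 2.0 / 3.0 * (np.log10(m0) - 9.1)      # M0 in N·m
            return m0, fc, stress_drop, mw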

  7. Improved Analysis of GW150914 Using a Fully Spin-Precessing Waveform Model

    NASA Technical Reports Server (NTRS)

    Abbott, B. P.; Abbott, R.; Abbott, T. D.; Abernathy, M. R.; Acernese, F.; Ackley, K.; Adams, C.; Adams, T.; Addesso, P.; Camp, J. B.; et al.

    2016-01-01

    This paper presents updated estimates of source parameters for GW150914, a binary black-hole coalescence event detected by the Laser Interferometer Gravitational-wave Observatory (LIGO) in 2015 [Abbott et al., Phys. Rev. Lett. 116, 061102 (2016)]. Abbott et al. [Phys. Rev. Lett. 116, 241102 (2016)] presented parameter estimation of the source using a 13-dimensional, phenomenological precessing-spin model (precessing IMRPhenom) and an 11-dimensional nonprecessing effective-one-body (EOB) model calibrated to numerical-relativity simulations, which forces spin alignment (nonprecessing EOBNR). Here, we present new results that include a 15-dimensional precessing-spin waveform model (precessing EOBNR) developed within the EOB formalism. We find good agreement with the parameters estimated previously [Abbott et al., Phys. Rev. Lett. 116, 241102 (2016)], and we quote updated component masses of 35(+5)(-3) and 30(+3)(-4) solar masses (where errors correspond to 90% symmetric credible intervals). We also present slightly tighter constraints on the dimensionless spin magnitudes of the two black holes, with the primary spin estimated to be less than 0.65 and the secondary spin less than 0.75 at 90% probability. Abbott et al. [Phys. Rev. Lett. 116, 241102 (2016)] estimated the systematic parameter-extraction errors due to waveform-model uncertainty by combining the posterior probability densities of precessing IMRPhenom and nonprecessing EOBNR. Here, we find that the two precessing-spin models are in closer agreement, suggesting that these systematic errors are smaller than previously quoted.

  8. Rapid earthquake hazard and loss assessment for Euro-Mediterranean region

    NASA Astrophysics Data System (ADS)

    Erdik, Mustafa; Sesetyan, Karin; Demircioglu, Mine; Hancilar, Ufuk; Zulfikar, Can; Cakti, Eser; Kamer, Yaver; Yenidogan, Cem; Tuzun, Cuneyt; Cagnan, Zehra; Harmandar, Ebru

    2010-10-01

    The almost-real-time estimation of ground shaking and losses after a major earthquake in the Euro-Mediterranean region was performed in the framework of the Joint Research Activity 3 (JRA-3) component of the EU FP6 Project entitled "Network of Research Infrastructures for European Seismology, NERIES". This project consists of finding the most likely location of the earthquake source by estimating the fault rupture parameters on the basis of rapid inversion of data from on-line regional broadband stations. It also includes an estimation of the spatial distribution of selected site-specific ground motion parameters at engineering bedrock through region-specific ground motion prediction equations (GMPEs) or physical simulation of ground motion. By using the Earthquake Loss Estimation Routine (ELER) software, the multi-level methodology developed for real-time estimation of losses is capable of incorporating regional variability and sources of uncertainty stemming from GMPEs, fault finiteness, site modifications, inventory of physical and social elements subjected to earthquake hazard and the associated vulnerability relationships.

  9. Noise source and reactor stability estimation in a boiling water reactor using a multivariate autoregressive model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kanemoto, S.; Andoh, Y.; Sandoz, S.A.

    1984-10-01

    A method for evaluating reactor stability in boiling water reactors has been developed. The method is based on multivariate autoregressive (M-AR) modeling of steady-state neutron and process noise signals. In this method, two kinds of power spectral densities (PSDs) for the measured neutron signal and the corresponding noise source signal are separately identified by the M-AR modeling. The closed- and open-loop stability parameters are evaluated from these PSDs. The method is applied to actual plant noise data that were measured together with artificial perturbation test data. Stability parameters identified from noise data are compared to those from perturbation test data, and it is shown that both results are in good agreement. In addition to these stability estimations, driving noise sources for the neutron signal are evaluated by the M-AR modeling. Contributions from void, core flow, and pressure noise sources are quantitatively evaluated, and the void noise source is shown to be the most dominant.
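
    A univariate sketch of the idea (the paper's method is multivariate and additionally separates the noise-source PSDs): fit AR coefficients by least squares, then read a stability measure, the decay ratio, off the dominant pole of the characteristic polynomial. The names and the single-channel simplification are ours:

        import numpy as np

        def fit_ar(x, order):
            """Least-squares fit of a univariate AR model x_t = sum a_k x_{t-k}."""
            X = np.column_stack([x[order - k - 1:len(x) - k - 1]
                                 for k in range(order)])
            a, *_ = np.linalg.lstsq(X, x[order:], rcond=None)
            return a

        def decay_ratio(a, dt):
            """Decay ratio from the dominant AR pole; assumes that pole is
            oscillatory (complex), as for a resonant instability mode."""
            poles = np.roots(np.r_[1.0, -a])
            p = poles[np.argmax(np.abs(poles))]
            s = np.log(p) / dt                  # map to continuous time
            return np.exp(2 * np.pi * s.real / abs(s.imag))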

  10. Comparison of the Cut-and-Paste and Full Moment Tensor Methods for Estimating Earthquake Source Parameters

    NASA Astrophysics Data System (ADS)

    Templeton, D.; Rodgers, A.; Helmberger, D.; Dreger, D.

    2008-12-01

    Earthquake source parameters (seismic moment, focal mechanism and depth) are now routinely reported by various institutions and network operators. These parameters are important for seismotectonic and earthquake ground motion studies as well as calibration of moment magnitude scales and model-based earthquake-explosion discrimination. Source parameters are often estimated from long-period three-component waveforms at regional distances using waveform modeling techniques with Green's functions computed for an average plane-layered model. One widely used method is waveform inversion for the full moment tensor (Dreger and Helmberger, 1993). This method (TDMT) solves for the moment tensor elements by performing a linearized inversion in the time-domain that minimizes the difference between the observed and synthetic waveforms. Errors in the seismic velocity structure inevitably arise due to either differences in the true average plane-layered structure or laterally varying structure. The TDMT method can account for errors in the velocity model by applying a single time shift at each station to the observed waveforms to best match the synthetics. Another method for estimating source parameters is the Cut-and-Paste (CAP) method. This method breaks the three-component regional waveforms into five windows: vertical and radial component Pnl; vertical and radial component Rayleigh wave; and transverse component Love waves. The CAP method performs a grid search over double-couple mechanisms and allows the synthetic waveforms for each phase (Pnl, Rayleigh and Love) to shift in time to account for errors in the Green's functions. Different filtering and weighting of the Pnl segment relative to surface wave segments enhances sensitivity to source parameters; however, some bias may be introduced. This study will compare the TDMT and CAP methods in two different regions in order to better understand the advantages and limitations of each method. Firstly, we will consider the northeastern China/Korean Peninsula region where average plane-layered structure is well known and relatively laterally homogenous. Secondly, we will consider the Middle East where crustal and upper mantle structure is laterally heterogeneous due to recent and ongoing tectonism. If time allows we will investigate the efficacy of each method for retrieving source parameters from synthetic data generated using a three-dimensional model of seismic structure of the Middle East, where phase delays are known to arise from path-dependent structure.
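
    At its core, the TDMT step is a linear least-squares problem: the observed waveforms are modeled as a linear combination of elementary Green's function responses, one per independent moment-tensor element. A minimal sketch (time shifts, filtering, and weighting omitted):

        import numpy as np

        def invert_moment_tensor(d, G):
            """d: concatenated observed waveforms, shape (n_samples,).
            G: shape (n_samples, 6), columns = synthetics for each elementary
            moment-tensor element. Returns the tensor elements and the
            variance reduction of the fit."""
            m, *_ = np.linalg.lstsq(G, d, rcond=None)
            vr = 1.0 - np.sum((d - G @ m) ** 2) / np.sum(d ** 2)
            return m, vr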

  11. Transmission parameters estimated for Salmonella typhimurium in swine using susceptible-infectious-resistant models and a Bayesian approach

    PubMed Central

    2014-01-01

    Background Transmission models can aid understanding of disease dynamics and are useful in testing the efficiency of control measures. The aim of this study was to formulate an appropriate stochastic Susceptible-Infectious-Resistant/Carrier (SIR) model for Salmonella Typhimurium in pigs and thus estimate the transmission parameters between states. Results The transmission parameters were estimated using data from a longitudinal study of three Danish farrow-to-finish pig herds known to be infected. A Bayesian model framework was proposed, which comprised Binomial components for the transition from susceptible to infectious and from infectious to carrier; and a Poisson component for carrier to infectious. Cohort random effects were incorporated into these models to allow for unobserved cohort-specific variables as well as unobserved sources of transmission, thus enabling a more realistic estimation of the transmission parameters. In the case of the transition from susceptible to infectious, the cohort random effects were also time varying. The number of infectious pigs not detected by the parallel testing was treated as unknown, and the probability of non-detection was estimated using information about the sensitivity and specificity of the bacteriological and serological tests. The estimate of the transmission rate from susceptible to infectious was 0.33 [0.06, 1.52], from infectious to carrier was 0.18 [0.14, 0.23] and from carrier to infectious was 0.01 [0.0001, 0.04]. The estimate of the basic reproduction ratio (R0) was 1.91 [0.78, 5.24]. The probability of non-detection was estimated to be 0.18 [0.12, 0.25]. Conclusions The proposed framework for stochastic SIR models was successfully implemented to estimate transmission rate parameters for Salmonella Typhimurium in swine field data. R0 was 1.91, implying that there was dissemination of the infection within pigs of the same cohort. There was significant temporal-cohort variability, especially at the susceptible to infectious stage. The model adequately fitted the data, allowing for both observed and unobserved sources of uncertainty (cohort effects, diagnostic test sensitivity), so leading to more reliable estimates of transmission parameters. PMID:24774444
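
    The Binomial component for the susceptible-to-infectious transition can be sketched with a minimal Metropolis sampler for the transmission rate. The paper's full model adds an infectious-to-carrier Binomial, a carrier-to-infectious Poisson, cohort random effects, and imperfect detection, all omitted here; names, priors, and tuning values are ours:

        import numpy as np
        from scipy.stats import binom

        rng = np.random.default_rng(1)

        def log_lik(beta, S, I, new_inf, N, dt=1.0):
            """Chain-binomial likelihood: each susceptible escapes infection
            over dt with probability exp(-beta * I * dt / N)."""
            p = 1.0 - np.exp(-beta * I * dt / N)
            return binom.logpmf(new_inf, S, p).sum()

        def metropolis(S, I, new_inf, N, n_iter=5000, step=0.05):
            """S, I, new_inf: arrays of counts per observation interval."""
            beta, samples = 0.5, []
            ll = log_lik(beta, S, I, new_inf, N)
            for _ in range(n_iter):
                prop = beta + rng.normal(0.0, step)
                if prop > 0:                      # flat positive prior
                    ll_prop = log_lik(prop, S, I, new_inf, N)
                    if np.log(rng.uniform()) < ll_prop - ll:
                        beta, ll = prop, ll_prop
                samples.append(beta)
            return np.array(samples)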

  12. Relating stick-slip friction experiments to earthquake source parameters

    USGS Publications Warehouse

    McGarr, Arthur F.

    2012-01-01

    Analytical results for parameters, such as static stress drop, for stick-slip friction experiments, with arbitrary input parameters, can be determined by solving an energy-balance equation. These results can then be related to a given earthquake based on its seismic moment and the maximum slip within its rupture zone, assuming that the rupture process entails the same physics as stick-slip friction. This analysis yields overshoots and ratios of apparent stress to static stress drop of about 0.25. The inferred earthquake source parameters static stress drop, apparent stress, slip rate, and radiated energy are robust inasmuch as they are largely independent of the experimental parameters used in their estimation. Instead, these earthquake parameters depend on C, the ratio of maximum slip to the cube root of the seismic moment. C is controlled by the normal stress applied to the rupture plane and the difference between the static and dynamic coefficients of friction. Estimating yield stress and seismic efficiency using the same procedure is only possible when the actual static and dynamic coefficients of friction are known within the earthquake rupture zone.

  13. Source encoding in multi-parameter full waveform inversion

    NASA Astrophysics Data System (ADS)

    Matharu, Gian; Sacchi, Mauricio D.

    2018-04-01

    Source encoding techniques alleviate the computational burden of sequential-source full waveform inversion (FWI) by considering multiple sources simultaneously rather than independently. The reduced data volume requires fewer forward/adjoint simulations per non-linear iteration. Applications of source-encoded full waveform inversion (SEFWI) have thus far focused on monoparameter acoustic inversion. We extend SEFWI to the multi-parameter case with applications presented for elastic isotropic inversion. Estimating multiple parameters can be challenging as perturbations in different parameters can prompt similar responses in the data. We investigate the relationship between source encoding and parameter trade-off by examining the multi-parameter source-encoded Hessian. Probing of the Hessian demonstrates the convergence of the expected source-encoded Hessian to that of conventional FWI. The convergence implies that the parameter trade-off in SEFWI is comparable to that observed in FWI. A series of synthetic inversions are conducted to establish the feasibility of source-encoded multi-parameter FWI. We demonstrate that SEFWI requires fewer overall simulations than FWI to achieve a target model error for a range of first-order optimization methods. An inversion for spatially inconsistent P-wave (α) and S-wave (β) velocity models corroborates the expectation of comparable parameter trade-off in SEFWI and FWI. The final example demonstrates a shortcoming of SEFWI when confronted with time-windowing in data-driven inversion schemes. The limitation is a consequence of the implicit fixed-spread acquisition assumption in SEFWI. Alternative objective functions, namely the normalized cross-correlation and L1 waveform misfit, do not enable SEFWI to overcome this limitation.

  14. Contaminant point source localization error estimates as functions of data quantity and model quality

    NASA Astrophysics Data System (ADS)

    Hansen, Scott K.; Vesselinov, Velimir V.

    2016-10-01

    We develop empirically-grounded error envelopes for localization of a point contamination release event in the saturated zone of a previously uncharacterized heterogeneous aquifer into which a number of plume-intercepting wells have been drilled. We assume that flow direction in the aquifer is known exactly and velocity is known to within a factor of two of our best guess from well observations prior to source identification. Other aquifer and source parameters must be estimated by interpretation of well breakthrough data via the advection-dispersion equation. We employ high performance computing to generate numerous random realizations of aquifer parameters and well locations, simulate well breakthrough data, and then employ unsupervised machine optimization techniques to estimate the most likely spatial (or space-time) location of the source. Tabulating the accuracy of these estimates from the multiple realizations, we relate the size of 90% and 95% confidence envelopes to the data quantity (number of wells) and model quality (fidelity of ADE interpretation model to actual concentrations in a heterogeneous aquifer with channelized flow). We find that for purely spatial localization of the contaminant source, increased data quantities can make up for reduced model quality. For space-time localization, we find similar qualitative behavior, but significantly degraded spatial localization reliability and less improvement from extra data collection. Since the space-time source localization problem is much more challenging, we also tried a multiple-initial-guess optimization strategy. This greatly enhanced performance, but gains from additional data collection remained limited.
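
    The interpretation model at the heart of this workflow is the advection-dispersion equation (ADE); a minimal version of the optimization step fits the release location, time, and mass of an instantaneous 2D point source to breakthrough data by least squares. The study itself uses heterogeneous-aquifer simulations and machine optimization; this uniform-flow sketch and all names are ours:

        import numpy as np
        from scipy.optimize import least_squares

        def ade_conc(t, x, y, x0, y0, t0, mass, v, Dx, Dy):
            """Instantaneous point release at (x0, y0, t0) in uniform flow v
            along x: 2D advection-dispersion solution."""
            tau = np.maximum(t - t0, 1e-9)
            return (mass / (4 * np.pi * tau * np.sqrt(Dx * Dy)) *
                    np.exp(-(x - x0 - v * tau) ** 2 / (4 * Dx * tau)
                           - (y - y0) ** 2 / (4 * Dy * tau)))

        def locate_source(t_obs, xy_obs, c_obs, v, Dx, Dy, guess):
            """t_obs, c_obs: flattened samples; xy_obs: matching well
            coordinates, shape (n, 2). guess = (x0, y0, t0, mass)."""
            def resid(p):
                return ade_conc(t_obs, xy_obs[:, 0], xy_obs[:, 1],
                                *p, v, Dx, Dy) - c_obs
            return least_squares(resid, guess).x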

  15. REVERBERATION AND PHOTOIONIZATION ESTIMATES OF THE BROAD-LINE REGION RADIUS IN LOW-z QUASARS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Negrete, C. Alenka; Dultzin, Deborah; Marziani, Paola

    2013-07-01

    Black hole mass estimation in quasars, especially at high redshift, involves the use of single-epoch spectra with signal-to-noise ratio and resolution that permit accurate measurement of the width of a broad line assumed to be a reliable virial estimator. Coupled with an estimate of the radius of the broad-line region (BLR) this yields the black hole mass M_BH. The radius of the BLR may be inferred from an extrapolation of the correlation between source luminosity and reverberation-derived r_BLR measures (the so-called Kaspi relation involving about 60 low-z sources). We are exploring a different method for estimating r_BLR directly from inferred physical conditions in the BLR of each source. We report here on a comparison of r_BLR estimates that come from our method and from reverberation mapping. Our "photoionization" method employs diagnostic line intensity ratios in the rest-frame range 1400-2000 Å (Al III λ1860/Si III] λ1892, C IV λ1549/Al III λ1860) that enable derivation of the product of density and ionization parameter, with the BLR distance derived from the definition of the ionization parameter. We find good agreement between our estimates of the density, ionization parameter, and r_BLR and those from reverberation mapping. We suggest empirical corrections to improve the agreement between individual photoionization-derived r_BLR values and those obtained from reverberation mapping. The results in this paper can be exploited to estimate M_BH for large samples of high-z quasars using an appropriate virial broadening estimator. We show that the widths of the UV intermediate emission lines are consistent with the width of Hβ, thereby providing a reliable virial broadening estimator that can be measured in large samples of high-z quasars.
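
    The final step of the photoionization method, recovering r_BLR from the definition of the ionization parameter U = Q(H)/(4π r² c n_H), is a one-line inversion once the density n_H and U have been derived from the line ratios. The example numbers below are generic illustrations, not values from the paper:

        import numpy as np

        C_CM_S = 2.99792458e10   # speed of light [cm/s]

        def r_blr_cm(q_ion, n_h, u_ion):
            """r = sqrt(Q(H) / (4 pi c n_H U)); q_ion in photons/s,
            n_h in cm^-3, result in cm."""
            return np.sqrt(q_ion / (4 * np.pi * C_CM_S * n_h * u_ion))

        # e.g. Q(H) = 1e55 s^-1, n_H = 1e12 cm^-3, U = 0.01
        r = r_blr_cm(1e55, 1e12, 1e-2)
        print(r / (C_CM_S * 86400))   # ≈ 20 light-days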

  16. Bayesian multiple-source localization in an uncertain ocean environment.

    PubMed

    Dosso, Stan E; Wilmut, Michael J

    2011-06-01

    This paper considers simultaneous localization of multiple acoustic sources when properties of the ocean environment (water column and seabed) are poorly known. A Bayesian formulation is developed in which the environmental parameters, noise statistics, and locations and complex strengths (amplitudes and phases) of multiple sources are considered to be unknown random variables constrained by acoustic data and prior information. Two approaches are considered for estimating source parameters. Focalization maximizes the posterior probability density (PPD) over all parameters using adaptive hybrid optimization. Marginalization integrates the PPD using efficient Markov-chain Monte Carlo methods to produce joint marginal probability distributions for source ranges and depths, from which source locations are obtained. This approach also provides quantitative uncertainty analysis for all parameters, which can aid in understanding of the inverse problem and may be of practical interest (e.g., source-strength probability distributions). In both approaches, closed-form maximum-likelihood expressions for source strengths and noise variance at each frequency allow these parameters to be sampled implicitly, substantially reducing the dimensionality and difficulty of the inversion. Examples are presented of both approaches applied to single- and multi-frequency localization of multiple sources in an uncertain shallow-water environment, and a Monte Carlo performance evaluation study is carried out. © 2011 Acoustical Society of America

  17. An approach to software cost estimation

    NASA Technical Reports Server (NTRS)

    Mcgarry, F.; Page, J.; Card, D.; Rohleder, M.; Church, V.

    1984-01-01

    A general procedure for software cost estimation in any environment is outlined. The basic concepts of work and effort estimation are explained, some popular resource estimation models are reviewed, and the accuracy of resource estimates is discussed. A software cost prediction procedure based on the experiences of the Software Engineering Laboratory in the flight dynamics area and incorporating management expertise, cost models, and historical data is described. The sources of information and relevant parameters available during each phase of the software life cycle are identified. The methodology suggested incorporates these elements into a customized management tool for software cost prediction. Detailed guidelines for estimation in the flight dynamics environment developed using this methodology are presented.

  18. Burden Calculator: a simple and open analytical tool for estimating the population burden of injuries.

    PubMed

    Bhalla, Kavi; Harrison, James E

    2016-04-01

    Burden of disease and injury methods can be used to summarise and compare the effects of conditions in terms of disability-adjusted life years (DALYs). Burden estimation methods are not inherently complex. However, as commonly implemented, the methods include complex modelling and estimation. To provide a simple and open-source software tool that allows estimation of incidence-DALYs due to injury, given data on incidence of deaths and non-fatal injuries. The tool includes a default set of estimation parameters, which can be replaced by users. The tool was written in Microsoft Excel. All calculations and values can be seen and altered by users. The parameter sets currently used in the tool are based on published sources. The tool is available without charge online at http://calculator.globalburdenofinjuries.org. To use the tool with the supplied parameter sets, users need to only paste a table of population and injury case data organised by age, sex and external cause of injury into a specified location in the tool. Estimated DALYs can be read or copied from tables and figures in another part of the tool. In some contexts, a simple and user-modifiable burden calculator may be preferable to undertaking a more complex study to estimate the burden of disease. The tool and the parameter sets required for its use can be improved by user innovation, by studies comparing DALYs estimates calculated in this way and in other ways, and by shared experience of its use. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/
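
    The arithmetic such a calculator performs for each age/sex/cause cell is simply DALY = YLL + YLD. A minimal version without discounting or age weighting (the numbers in the example are made up for illustration):

        def dalys(deaths, remaining_life_expectancy, cases,
                  disability_weight, duration_years):
            """Incidence-based DALYs = years of life lost (YLL)
            + years lived with disability (YLD)."""
            yll = deaths * remaining_life_expectancy
            yld = cases * disability_weight * duration_years
            return yll + yld

        # e.g. 10 deaths at 40 remaining years each, plus 200 non-fatal
        # cases with disability weight 0.2 lasting 0.5 years on average
        print(dalys(10, 40.0, 200, 0.2, 0.5))   # 400 + 20 = 420 DALYs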

  19. Estimation of Dynamical Parameters in Atmospheric Data Sets

    NASA Technical Reports Server (NTRS)

    Wenig, Mark O.

    2004-01-01

    In this study a new technique is used to derive dynamical parameters out of atmospheric data sets. This technique, called the structure tensor technique, can be used to estimate dynamical parameters such as motion, source strengths, diffusion constants or exponential decay rates. A general mathematical framework was developed for the direct estimation of the physical parameters that govern the underlying processes from image sequences. This estimation technique can be adapted to the specific physical problem under investigation, so it can be used in a variety of applications in trace gas, aerosol, and cloud remote sensing. The fundamental algorithm will be extended to the analysis of multi-channel (e.g. multi trace gas) image sequences and to provide solutions to the extended aperture problem. In this study sensitivity studies have been performed to determine the usability of this technique for data sets with different resolution in time and space and different dimensions.
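
    A minimal sketch of the motion-estimation case: build the 3D (x, y, t) structure tensor from smoothed gradient products and take the eigenvector of its smallest eigenvalue as the total-least-squares solution of the brightness-constancy constraint. This generic version is ours; the study's framework generalizes the same machinery to source strengths, diffusion, and decay terms:

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def structure_tensor_motion(frames, sigma=3.0):
            """frames: (t, y, x) image sequence; returns per-pixel velocities."""
            gt = np.gradient(frames, axis=0)
            gy = np.gradient(frames, axis=1)
            gx = np.gradient(frames, axis=2)
            grads = (gx, gy, gt)
            J = np.empty(frames.shape + (3, 3))
            for i in range(3):
                for j in range(3):    # smoothed outer products of gradients
                    J[..., i, j] = gaussian_filter(grads[i] * grads[j], sigma)
            w, v = np.linalg.eigh(J)  # eigenvalues ascending
            e = v[..., :, 0]          # eigenvector of smallest eigenvalue
            # brightness constancy: (u, v, 1) is parallel to e
            with np.errstate(divide="ignore", invalid="ignore"):
                return e[..., 0] / e[..., 2], e[..., 1] / e[..., 2]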

  20. Post-stratified estimation of forest area and growing stock volume using lidar-based stratifications

    Treesearch

    Ronald E. McRoberts; Terje Gobakken; Erik Næsset

    2012-01-01

    National forest inventories report estimates of parameters related to forest area and growing stock volume for geographic areas ranging in size from municipalities to entire countries. Landsat imagery has been shown to be a source of auxiliary information that can be used with stratified estimation to increase the precision of estimates, although the increase is...
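
    A minimal sketch of the estimator itself, using Cochran's standard post-stratification formulas; the stratum weights would come from the lidar-based (or Landsat-based) stratification:

        import numpy as np

        def post_stratified_mean(y, strata, weights):
            """y: plot observations; strata: stratum label per plot;
            weights: dict stratum -> areal weight W_h (summing to 1).
            Returns the post-stratified mean and its approximate variance."""
            n = len(y)
            est, var = 0.0, 0.0
            for h, w in weights.items():
                yh = y[strata == h]
                s2 = yh.var(ddof=1)
                est += w * yh.mean()
                var += w * s2 / n + (1.0 - w) * s2 / n ** 2
            return est, var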

  1. An approach to source characterization of tremor signals associated with eruptions and lahars

    NASA Astrophysics Data System (ADS)

    Kumagai, Hiroyuki; Mothes, Patricia; Ruiz, Mario; Maeda, Yuta

    2015-11-01

    Tremor signals are observed in association with eruption activity and lahar descents. Reduced displacement (D_R) derived from tremor signals has been used to quantify tremor sources. However, tremor duration is not considered in D_R, which makes it difficult to compare D_R values estimated for different tremor episodes. We propose application of the amplitude source location (ASL) method to characterize the sources of tremor signals. We used this method to estimate the tremor source location and source amplitude from high-frequency (5-10 Hz) seismic amplitudes under the assumption of isotropic S-wave radiation. We considered the source amplitude to be the maximum value during tremor. We estimated the cumulative source amplitude (I_s) as the offset value of the time-integrated envelope of the vertical seismogram of tremor corrected for geometrical spreading and medium attenuation in the 5-10-Hz band. For eruption tremor signals, we also estimated the cumulative source pressure (I_p) from an infrasonic envelope waveform corrected for geometrical spreading. We studied these parameters of tremor signals associated with eruptions and lahars and explosion events at Tungurahua volcano, Ecuador. We identified two types of eruption tremor at Tungurahua: noise-like inharmonic waveforms and harmonic oscillatory signals. We found that I_s increased linearly with increasing source amplitude for lahar tremor signals and explosion events, but I_s increased exponentially with increasing source amplitude for inharmonic eruption tremor signals. The source characteristics of harmonic eruption tremor signals differed from those of inharmonic tremor signals. We found a linear relation between I_s and I_p for both explosion events and eruption tremor. Because I_p may be proportional to the total mass involved during an eruption episode, this linear relation suggests that I_s may be useful to quantify eruption size. The I_s values we estimated for inharmonic eruption tremor were consistent with previous estimates of volumes of tephra fallout. The scaling relations among source parameters that we identified will contribute to our understanding of the dynamic processes associated with eruptions and lahars. This new approach is applicable in analyzing tremor sources in real time and may contribute to early assessment of the size of eruptions and lahars.
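
    The ASL step can be sketched as a grid search: correct each station amplitude for 1/r body-wave geometrical spreading and exponential attenuation, and pick the source position where the implied source amplitudes agree best across stations. The parameter values below (S-wave speed, Q, band-center frequency) are generic placeholders, not the study's values:

        import numpy as np

        def asl_locate(amps, stations, grid, beta=2500.0, f=7.5, Q=60.0):
            """amps: observed 5-10 Hz amplitudes, one per station;
            stations: (n, 3) coordinates; grid: candidate source positions.
            Assumes isotropic S-wave radiation."""
            best, best_err = None, np.inf
            for xs in grid:
                r = np.linalg.norm(stations - xs, axis=1)
                # source amplitude implied by each station observation
                a_src = amps * r * np.exp(np.pi * f * r / (Q * beta))
                err = np.std(np.log(a_src))   # inter-station consistency
                if err < best_err:
                    best, best_err = xs, err
            return best, best_err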

  2. A Parameter Identification Method for Helicopter Noise Source Identification and Physics-Based Semi-Empirical Modeling

    NASA Technical Reports Server (NTRS)

    Greenwood, Eric, II; Schmitz, Fredric H.

    2010-01-01

    A new physics-based parameter identification method for rotor harmonic noise sources is developed using an acoustic inverse simulation technique. This new method allows for the identification of individual rotor harmonic noise sources and allows them to be characterized in terms of their individual non-dimensional governing parameters. This new method is applied to both wind tunnel measurements and ground noise measurements of two-bladed rotors. The method is shown to match the parametric trends of main rotor Blade-Vortex Interaction (BVI) noise, allowing accurate estimates of BVI noise to be made for operating conditions based on a small number of measurements taken at different operating conditions.

  3. Reducing bias in survival under non-random temporary emigration

    USGS Publications Warehouse

    Peñaloza, Claudia L.; Kendall, William L.; Langtimm, Catherine Ann

    2014-01-01

    Despite intensive monitoring, temporary emigration from the sampling area can induce bias severe enough for managers to discard life-history parameter estimates toward the terminus of the time series (terminal bias). Under random temporary emigration, unbiased parameters can be estimated with CJS models. However, unmodeled Markovian temporary emigration causes bias in parameter estimates, and an unobservable state is required to model this type of emigration. The robust design is most flexible when modeling temporary emigration, and partial solutions to mitigate bias have been identified; nonetheless, there are conditions where terminal bias prevails. Long-lived species with high adult survival and highly variable non-random temporary emigration present terminal bias in survival estimates, despite being modeled with the robust design and suggested constraints. Because this bias is due to uncertainty about the fate of individuals that are undetected toward the end of the time series, solutions should involve using additional information on the survival status or location of these individuals at that time. Using simulation, we evaluated the performance of models that jointly analyze robust design data and an additional source of ancillary data (predictive covariate on temporary emigration, telemetry, dead recovery, or auxiliary resightings) in reducing terminal bias in survival estimates. The auxiliary resighting and predictive covariate models reduced terminal bias the most. Additional telemetry data were effective at reducing terminal bias only when individuals were tracked for a minimum of two years. The high adult survival of long-lived species made the joint model with recovery data ineffective at reducing terminal bias because of small-sample bias. The naïve constraint model (last and penultimate temporary emigration parameters made equal) was the least efficient, though still able to reduce terminal bias when compared to an unconstrained model. Joint analysis of several sources of data improved parameter estimates and reduced terminal bias. Efforts to incorporate or acquire such data should be considered by researchers and wildlife managers, especially in the years leading up to status assessments of species of interest. Simulation modeling is a very cost-effective method to explore the potential impacts of using different sources of data to produce high quality demographic data to inform management.

  4. Accounting for uncertain fault geometry in earthquake source inversions - I: theory and simplified application

    NASA Astrophysics Data System (ADS)

    Ragon, Théa; Sladen, Anthony; Simons, Mark

    2018-05-01

    The ill-posed nature of earthquake source estimation derives from several factors including the quality and quantity of available observations and the fidelity of our forward theory. Observational errors are usually accounted for in the inversion process. Epistemic errors, which stem from our simplified description of the forward problem, are rarely dealt with despite their potential to bias the estimate of a source model. In this study, we explore the impact of uncertainties related to the choice of a fault geometry in source inversion problems. The geometry of a fault structure is generally reduced to a set of parameters, such as position, strike and dip, for one or a few planar fault segments. While some of these parameters can be solved for, more often they are fixed to an uncertain value. We propose a practical framework to address this limitation by following a previously implemented method exploring the impact of uncertainties in the elastic properties of our models. We develop a sensitivity analysis to small perturbations of fault dip and position. The uncertainties in fault geometry are included in the inverse problem under the formulation of the misfit covariance matrix that combines both prediction and observation uncertainties. We validate this approach with the simplified case of a fault that extends infinitely along strike, using both Bayesian and optimization formulations of a static inversion. If epistemic errors are ignored, predictions are overconfident in the data and source parameters are not reliably estimated. In contrast, inclusion of uncertainties in fault geometry allows us to infer a robust posterior source model. Epistemic uncertainties can be many orders of magnitude larger than observational errors for great earthquakes (Mw > 8). Not accounting for uncertainties in fault geometry may partly explain observed shallow slip deficits for continental earthquakes. Similarly, ignoring the impact of epistemic errors can also bias estimates of near-surface slip and predictions of tsunamis induced by megathrust earthquakes.
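
    The key formal ingredient is the misfit covariance: the data covariance is augmented by a prediction-error term built from the sensitivity of the predictions to the uncertain geometry parameters. A schematic version with first-order sensitivity kernels (e.g. computed by finite differences around the assumed geometry); names are ours:

        import numpy as np

        def misfit_covariance(C_d, kernels, sigma_psi):
            """C_chi = C_d + C_p, with C_p = sum_i sigma_i^2 K_i K_i^T.
            kernels: list of K_i = dG/dpsi_i (shape (n_data,)), sensitivity of
            predicted data to geometry parameter psi_i (dip, position, ...);
            sigma_psi: prior standard deviations of those parameters."""
            C_p = sum(s ** 2 * np.outer(K, K)
                      for K, s in zip(kernels, sigma_psi))
            return C_d + C_p

    The inversion then weights residuals by inv(C_chi), de-emphasizing data that a plausible geometry perturbation could explain, which is what keeps the posterior from becoming overconfident.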

  5. Chloramine demand estimation using surrogate chemical and microbiological parameters.

    PubMed

    Moradi, Sina; Liu, Sanly; Chow, Christopher W K; van Leeuwen, John; Cook, David; Drikas, Mary; Amal, Rose

    2017-07-01

    A model is developed to enable estimation of chloramine demand in full scale drinking water supplies, based on chemical and microbiological factors that affect the chloramine decay rate, via a nonlinear regression method. The model is based on the organic character (specific ultraviolet absorbance (SUVA)) of the water samples and a laboratory measure of the microbiological (F_m) decay of chloramine. The applicability of the model for estimation of chloramine residual (and hence chloramine demand) was tested on several waters from different water treatment plants in Australia through statistical comparison of the experimental and predicted data. Results showed that the model was able to simulate and estimate chloramine demand at various times in real drinking water systems. To elucidate the loss of chloramine over the wide variation of water quality used in this study, the model incorporates both the fast and slow chloramine decay pathways. The significance of the estimated fast and slow decay rate constants as the kinetic parameters of the model for three water sources in Australia is discussed. It was found that with the same water source, the kinetic parameters remain the same. This modelling approach has the potential to be used by water treatment operators as a decision support tool in order to manage chloramine disinfection. Copyright © 2017. Published by Elsevier B.V.
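
    The two-pathway structure lends itself to a compact illustration: residual chloramine as the sum of a fast and a slow first-order decay, fitted by nonlinear regression. The data below are synthetic, generated from the model itself, and the rate constants are placeholders, not the paper's estimates:

        import numpy as np
        from scipy.optimize import curve_fit

        rng = np.random.default_rng(0)

        def chloramine_decay(t, c_fast, k_fast, c_slow, k_slow):
            """Residual = fast pathway + slow pathway, both first order."""
            return c_fast * np.exp(-k_fast * t) + c_slow * np.exp(-k_slow * t)

        t = np.linspace(0.0, 72.0, 10)                  # hours
        c = chloramine_decay(t, 0.6, 0.25, 1.4, 0.01)   # mg/L, synthetic truth
        c += rng.normal(0.0, 0.02, t.size)              # measurement noise
        popt, _ = curve_fit(chloramine_decay, t, c, p0=[0.5, 0.3, 1.5, 0.02])
        print(popt)   # recovered (c_fast, k_fast, c_slow, k_slow)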

  6. A Bayesian methodological framework for accommodating interannual variability of nutrient loading with the SPARROW model

    NASA Astrophysics Data System (ADS)

    Wellen, Christopher; Arhonditsis, George B.; Labencki, Tanya; Boyd, Duncan

    2012-10-01

    Regression-type, hybrid empirical/process-based models (e.g., SPARROW, PolFlow) have assumed a prominent role in efforts to estimate the sources and transport of nutrient pollution at river basin scales. However, almost no attempts have been made to explicitly accommodate interannual nutrient loading variability in their structure, despite empirical and theoretical evidence indicating that the associated source/sink processes are quite variable at annual timescales. In this study, we present two methodological approaches to accommodate interannual variability with the Spatially Referenced Regressions on Watershed attributes (SPARROW) nonlinear regression model. The first strategy uses the SPARROW model to estimate a static baseline load and climatic variables (e.g., precipitation) to drive the interannual variability. The second approach allows the source/sink processes within the SPARROW model to vary at annual timescales using dynamic parameter estimation techniques akin to those used in dynamic linear models. Model parameterization is founded upon Bayesian inference techniques that explicitly consider calibration data and model uncertainty. Our case study is the Hamilton Harbor watershed, a mixed agricultural and urban residential area located at the western end of Lake Ontario, Canada. Our analysis suggests that dynamic parameter estimation is the more parsimonious of the two strategies tested and can offer insights into the temporal structural changes associated with watershed functioning. Consistent with empirical and theoretical work, model estimated annual in-stream attenuation rates varied inversely with annual discharge. Estimated phosphorus source areas were concentrated near the receiving water body during years of high in-stream attenuation and dispersed along the main stems of the streams during years of low attenuation, suggesting that nutrient source areas are subject to interannual variability.

  7. Seismicity and stress transfer studies in eastern California and Nevada: Implications for earthquake sources and tectonics

    NASA Astrophysics Data System (ADS)

    Ichinose, Gene Aaron

    The source parameters for eastern California and western Nevada earthquakes are estimated from regionally recorded seismograms using a moment tensor inversion. We use the point source approximation and fit the seismograms at long periods. We generated a moment tensor catalog for Mw > 4.0 since 1997 and Mw > 5.0 since 1990. The catalog includes centroid depths, seismic moments, and focal mechanisms. The regions with the most moderate sized earthquakes in the last decade were in aftershock zones located in Eureka Valley, Double Spring Flat, Coso, Ridgecrest, Fish Lake Valley, and Scotty's Junction. The remaining moderate size earthquakes were distributed across the region. The 1993 (Mw 6.0) Eureka Valley earthquake occurred in the Eastern California Shear Zone. Careful aftershock relocations were used to resolve structure from aftershock clusters. The mainshock appears to rupture along the western side of the Last Chance Range along a 30° to 60° west-dipping fault plane, consistent with previous geodetic modeling. We estimate the source parameters for aftershocks at source-receiver distances less than 20 km using waveform modeling. The relocated aftershocks and waveform modeling results do not indicate any significant evidence of low-angle faulting (dips < 30°). The results did reveal deformation along vertical faults within the hanging-wall block, consistent with observed surface rupture along the Saline Range above the dipping fault plane. The 1994 (Mw 5.8) Double Spring Flat earthquake occurred along the eastern Sierra Nevada between overlapping normal faults. Aftershock migration and cross-fault triggering occurred in the following two years, producing seventeen Mw > 4 aftershocks. The source parameters for the largest aftershocks were estimated from regionally recorded seismograms using moment tensor inversion. We estimate the source parameters for two moderate sized earthquakes which occurred near Reno, Nevada: the 1995 (Mw 4.4) Border Town and the 1998 (Mw 4.7) Incline Village earthquakes. We test how such stress interactions affected a cluster of six large earthquakes (Mw 6.6 to 7.5) between 1915 and 1954 within the Central Nevada Seismic Belt. We compute the static stress changes for these earthquakes using dislocation models based on the location and amount of surface rupture. (Abstract shortened by UMI.)

  8. Estimating virus occurrence using Bayesian modeling in multiple drinking water systems of the United States

    USGS Publications Warehouse

    Varughese, Eunice A.; Brinkman, Nichole E; Anneken, Emily M; Cashdollar, Jennifer S; Fout, G. Shay; Furlong, Edward T.; Kolpin, Dana W.; Glassmeyer, Susan T.; Keely, Scott P

    2017-01-01

    incorporated into a Bayesian model to more accurately determine viral load in both source and treated water. Results of the Bayesian model indicated that viruses are present in source water and treated water. By using a Bayesian framework that incorporates inhibition, as well as many other parameters that affect viral detection, this study offers an approach for more accurately estimating the occurrence of viral pathogens in environmental waters.

  9. Approximate Bayesian estimation of extinction rate in the Finnish Daphnia magna metapopulation.

    PubMed

    Robinson, John D; Hall, David W; Wares, John P

    2013-05-01

    Approximate Bayesian computation (ABC) is useful for parameterizing complex models in population genetics. In this study, ABC was applied to simultaneously estimate parameter values for a model of metapopulation coalescence and test two alternatives to a strict metapopulation model in the well-studied network of Daphnia magna populations in Finland. The models shared four free parameters: the subpopulation genetic diversity (θS), the rate of gene flow among patches (4Nm), the founding population size (N0) and the metapopulation extinction rate (e) but differed in the distribution of extinction rates across habitat patches in the system. The three models had either a constant extinction rate in all populations (strict metapopulation), one population that was protected from local extinction (i.e. a persistent source), or habitat-specific extinction rates drawn from a distribution with specified mean and variance. Our model selection analysis favoured the model including a persistent source population over the two alternative models. Of the closest 750,000 data sets in Euclidean space, 78% were simulated under the persistent source model (estimated posterior probability = 0.769). This fraction increased to more than 85% when only the closest 150,000 data sets were considered (estimated posterior probability = 0.774). Approximate Bayesian computation was then used to estimate parameter values that might produce the observed set of summary statistics. Our analysis provided posterior distributions for e that included the point estimate obtained from previous data from the Finnish D. magna metapopulation. Our results support the use of ABC and population genetic data for testing the strict metapopulation model and parameterizing complex models of demography. © 2013 Blackwell Publishing Ltd.
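
    The ABC rejection scheme used for this kind of joint model choice and parameter estimation can be sketched generically: simulate under each candidate model, rank simulations by distance to the observed summary statistics, and read posterior model probabilities off the model frequencies among the closest draws. All names are ours, and a real application would normalize the summary statistics before computing distances:

        import numpy as np

        def abc_model_choice(obs_stats, simulators, priors,
                             n_sims=10_000, keep=0.01):
            """simulators[m](theta) -> summary statistics under model m;
            priors[m]() -> one parameter draw for model m."""
            records = []
            for m, (simulate, prior) in enumerate(zip(simulators, priors)):
                for _ in range(n_sims):
                    theta = prior()
                    dist = np.linalg.norm(simulate(theta) - obs_stats)
                    records.append((dist, m, theta))
            records.sort(key=lambda r: r[0])
            accepted = records[:int(len(records) * keep)]
            models = np.array([m for _, m, _ in accepted])
            post = np.bincount(models, minlength=len(simulators)) / len(accepted)
            return post, accepted   # model probabilities; accepted draws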

  10. An Optimal Parameterization Framework for Infrasonic Tomography of the Stratospheric Winds Using Non-Local Sources

    DOE PAGES

    Blom, Philip Stephen; Marcillo, Omar Eduardo

    2016-12-05

    A method is developed to apply acoustic tomography methods to a localized network of infrasound arrays with the intention of monitoring the atmospheric state in the region around the network using non-local sources, without requiring knowledge of the precise source location or the non-local atmospheric state. Closely spaced arrays provide a means to estimate phase velocities of signals that can provide limiting bounds on certain characteristics of the atmosphere. Larger spacing between such clusters provides a means to estimate celerity from propagation times along multiple unique stratospherically or thermospherically ducted propagation paths and to compute more precise estimates of the atmospheric state. In order to avoid the commonly encountered complex, multimodal distributions for parametric atmosphere descriptions and to maximize the computational efficiency of the method, an optimal parametrization framework is constructed. This framework identifies the ideal combination of parameters for tomography studies in specific regions of the atmosphere, and statistical model selection analysis shows that high quality corrections to the middle atmosphere winds can be obtained using as few as three parameters. Lastly, comparison of the resulting estimates for synthetic data sets shows qualitative agreement between the middle atmosphere winds and those estimated from infrasonic traveltime observations.

  11. Evaluation of severe accident risks: Quantification of major input parameters: MAACS (MELCOR Accident Consequence Code System) input

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sprung, J.L.; Jow, H-N; Rollstin, J.A.

    1990-12-01

    Estimation of offsite accident consequences is the customary final step in a probabilistic assessment of the risks of severe nuclear reactor accidents. Recently, the Nuclear Regulatory Commission reassessed the risks of severe accidents at five US power reactors (NUREG-1150). Offsite accident consequences for NUREG-1150 source terms were estimated using the MELCOR Accident Consequence Code System (MACCS). Before these calculations were performed, most MACCS input parameters were reviewed, and for each parameter reviewed, a best-estimate value was recommended. This report presents the results of these reviews. Specifically, recommended values and the basis for their selection are presented for MACCS atmospheric and biospheric transport, emergency response, food pathway, and economic input parameters. Dose conversion factors and health effect parameters are not reviewed in this report. 134 refs., 15 figs., 110 tabs.

  12. Probabilistic parameter estimation of activated sludge processes using Markov Chain Monte Carlo.

    PubMed

    Sharifi, Soroosh; Murthy, Sudhir; Takács, Imre; Massoudieh, Arash

    2014-03-01

    One of the most important challenges in making activated sludge models (ASMs) applicable to design problems is identifying the values of its many stoichiometric and kinetic parameters. When wastewater characteristics data from full-scale biological treatment systems are used for parameter estimation, several sources of uncertainty, including uncertainty in measured data, external forcing (e.g. influent characteristics), and model structural errors influence the value of the estimated parameters. This paper presents a Bayesian hierarchical modeling framework for the probabilistic estimation of activated sludge process parameters. The method provides the joint probability density functions (JPDFs) of stoichiometric and kinetic parameters by updating prior information regarding the parameters obtained from expert knowledge and literature. The method also provides the posterior correlations between the parameters, as well as a measure of sensitivity of the different constituents with respect to the parameters. This information can be used to design experiments to provide higher information content regarding certain parameters. The method is illustrated using the ASM1 model to describe synthetically generated data from a hypothetical biological treatment system. The results indicate that data from full-scale systems can narrow down the ranges of some parameters substantially whereas the amount of information they provide regarding other parameters is small, due to either large correlations between some of the parameters or a lack of sensitivity with respect to the parameters. Copyright © 2013 Elsevier Ltd. All rights reserved.

  13. Sensitivity of predicted bioaerosol exposure from open windrow composting facilities to ADMS dispersion model parameters.

    PubMed

    Douglas, P; Tyrrel, S F; Kinnersley, R P; Whelan, M; Longhurst, P J; Walsh, K; Pollard, S J T; Drew, G H

    2016-12-15

    Bioaerosols are released in elevated quantities from composting facilities and are associated with negative health effects, although dose-response relationships are not well understood and require improved exposure classification. Dispersion modelling has great potential to improve exposure classification, but has not yet been extensively used or validated in this context. We present a sensitivity analysis of the ADMS dispersion model specific to input parameter ranges relevant to bioaerosol emissions from open windrow composting. This analysis provides an aid for model calibration by prioritising parameter adjustment and targeting independent parameter estimation. Results showed that predicted exposure was most sensitive to the wet and dry deposition modules and to the majority of parameters relating to emission source characteristics, including pollutant emission velocity, source geometry and source height. This research improves understanding of the accuracy of model input data required to provide more reliable exposure predictions. Copyright © 2016. Published by Elsevier Ltd.

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dana L. Kelly

    Typical engineering systems in applications with high failure consequences such as nuclear reactor plants often employ redundancy and diversity of equipment in an effort to lower the probability of failure and therefore risk. However, it has long been recognized that dependencies exist in these redundant and diverse systems. Some dependencies, such as common sources of electrical power, are typically captured in the logic structure of the risk model. Others, usually referred to as intercomponent dependencies, are treated implicitly by introducing one or more statistical parameters into the model. Such common-cause failure models have limitations in a simulation environment. In addition, substantial subjectivity is associated with parameter estimation for these models. This paper describes an approach in which system performance is simulated by drawing samples from the joint distributions of dependent variables. The approach relies on the notion of a copula distribution, a notion which has been employed by the actuarial community for ten years or more, but which has seen only limited application in technological risk assessment. The paper also illustrates how equipment failure data can be used in a Bayesian framework to estimate the parameter values in the copula model. This approach avoids much of the subjectivity required to estimate parameters in traditional common-cause failure models. Simulation examples are presented for failures in time. The open-source software package R is used to perform the simulations. The open-source software package WinBUGS is used to perform the Bayesian inference via Markov chain Monte Carlo sampling.
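
    A compact sketch of the copula idea (with an assumed Gaussian copula, Weibull failure-time margins, and invented parameter values; the Bayesian estimation step of the paper is not reproduced here):

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(1)
        rho = 0.7                          # dependence between two redundant components
        cov = [[1.0, rho], [rho, 1.0]]

        z = rng.multivariate_normal([0.0, 0.0], cov, size=100000)
        u = stats.norm.cdf(z)                                # correlated uniforms (the copula)
        ttf = stats.weibull_min.ppf(u, c=1.5, scale=8760.0)  # failure times, hours

        # joint probability that both components fail within one year
        print(np.mean((ttf[:, 0] < 8760.0) & (ttf[:, 1] < 8760.0)))

    Under independence this joint probability would simply be the product of the marginal failure probabilities; the copula raises it, which is exactly the effect traditional common-cause parameters try to capture.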

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vecchio, Alberto; Wickham, Elizabeth D.L.

    The Laser Interferometer Space Antenna (LISA) is expected to provide the largest observational sample of binary systems of faint subsolar mass compact objects, in particular white dwarfs, whose radiation is monochromatic over most of the LISA observational window. Current astrophysical estimates suggest that the instrument will be able to resolve ~10^4 such systems, with a large fraction of them at frequencies ≳ 3 mHz, where the wavelength of gravitational waves becomes comparable to or shorter than the LISA armlength. This affects the structure of the so-called LISA transfer function, which cannot be treated as constant in this frequency range: it introduces characteristic phase and amplitude modulations that depend on the source location in the sky and the emission frequency. Here we investigate the effect of the LISA transfer function on detection and parameter estimation for monochromatic sources. For signal detection we show that filters constructed by approximating the transfer function as a constant (long-wavelength approximation) introduce a negligible loss of signal-to-noise ratio (the fitting factor always exceeds 0.97) for f ≤ 10 mHz, and therefore in a frequency range where one would actually expect the approximation to fail. For parameter estimation, we conclude that in the range 3 mHz ≲ f ≲ 30 mHz the errors associated with parameter measurements differ by between ≈5% and a factor of ~10 (depending on the actual source parameters and emission frequency) with respect to those computed using the long-wavelength approximation.

  16. Source levels and call parameters of harbor seal breeding vocalizations near a terrestrial haulout site in Glacier Bay National Park and Preserve.

    PubMed

    Matthews, Leanna P; Parks, Susan E; Fournet, Michelle E H; Gabriele, Christine M; Womble, Jamie N; Klinck, Holger

    2017-03-01

    Source levels of harbor seal breeding vocalizations were estimated using a three-element planar hydrophone array near the Beardslee Islands in Glacier Bay National Park and Preserve, Alaska. The average source level for these calls was 144 dB RMS re 1 μPa at 1 m in the 40-500 Hz frequency band. Source level estimates ranged from 129 to 149 dB RMS re 1 μPa. Four call parameters, including minimum frequency, peak frequency, total duration, and pulse duration, were also measured. These measurements indicated that breeding vocalizations of harbor seals near the Beardslee Islands of Glacier Bay National Park are similar in duration (average total duration: 4.8 s, average pulse duration: 3.0 s) to previously reported values from other populations, but are 170-220 Hz lower in average minimum frequency (78 Hz).
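
    For orientation, under a spherical-spreading assumption the back-calculation underlying such estimates reduces to SL = RL + 20 log10(r); the received level and range in this sketch are hypothetical, not values from the study:

        import math

        def source_level(rl_db, range_m):
            # transmission loss modeled as 20*log10(r) (spherical spreading)
            return rl_db + 20.0 * math.log10(range_m)

        print(source_level(rl_db=116.0, range_m=25.0))  # ~144 dB re 1 uPa at 1 m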

  17. Effects of Source RDP Models and Near-source Propagation: Implication for Seismic Yield Estimation

    NASA Astrophysics Data System (ADS)

    Saikia, C. K.; Helmberger, D. V.; Stead, R. J.; Woods, B. B.

    It has proven difficult to uniquely untangle the source and propagation effects on the observed seismic data from underground nuclear explosions, even when large quantities of near-source, broadband data are available for analysis. This leads to uncertainties in our ability to quantify the nuclear seismic source function and, consequently, the accuracy of seismic yield estimates for underground explosions. Extensive deterministic modeling analyses of the seismic data recorded from underground explosions at a variety of test sites have been conducted over the years, and the results of these studies suggest that variations in the seismic source characteristics between test sites may be contributing to the observed differences in the magnitude/yield relations applicable at those sites. This contributes to our uncertainty in the determination of seismic yield estimates for explosions at previously uncalibrated test sites. In this paper we review issues involving the relationship of Nevada Test Site (NTS) source scaling laws to those at other sites. The Joint Verification Experiment (JVE) indicates that a magnitude (mb) bias (δmb) exists between the Semipalatinsk test site (STS) in the former Soviet Union (FSU) and the Nevada Test Site (NTS) in the United States. Generally, this δmb is attributed to differential attenuation in the upper mantle beneath the two test sites. This assumption results in rather large estimates of yield for large-mb tunnel shots at Novaya Zemlya. A re-examination of the US testing experiments suggests that this δmb bias can partly be explained by anomalous NTS (Pahute) source characteristics. This interpretation is based on the modeling of US events at a number of test sites. Using a modified Haskell source description, we investigated the influence of the source Reduced Displacement Potential (RDP) parameters ψ∞, K and B by fitting short- and long-period data simultaneously, including the near-field body and surface waves. In general, estimates of B and K are based on the initial P-wave pulse, which various numerical analyses show to be least affected by variations in near-source path effects. The corner-frequency parameter K is 20% lower at NTS (Pahute) than at other sites, implying larger effective source radii. The overshoot parameter B appears to be low at NTS (although variable) relative to other sites, which is probably due to variations in source conditions. For a low B, the near-field data require a higher value of ψ∞ to match the long-period MS and short-period mb observations. This flexibility in modeling proves useful in comparing released FSU yields against predictions based on mb and MS.

  18. Contaminant point source localization error estimates as functions of data quantity and model quality

    DOE PAGES

    Hansen, Scott K.; Vesselinov, Velimir Valentinov

    2016-10-01

    We develop empirically-grounded error envelopes for localization of a point contamination release event in the saturated zone of a previously uncharacterized heterogeneous aquifer into which a number of plume-intercepting wells have been drilled. We assume that flow direction in the aquifer is known exactly and velocity is known to within a factor of two of our best guess from well observations prior to source identification. Other aquifer and source parameters must be estimated by interpretation of well breakthrough data via the advection-dispersion equation. We employ high performance computing to generate numerous random realizations of aquifer parameters and well locations, simulate well breakthrough data, and then employ unsupervised machine optimization techniques to estimate the most likely spatial (or space-time) location of the source. Tabulating the accuracy of these estimates from the multiple realizations, we relate the size of 90% and 95% confidence envelopes to the data quantity (number of wells) and model quality (fidelity of the ADE interpretation model to actual concentrations in a heterogeneous aquifer with channelized flow). We find that for purely spatial localization of the contaminant source, increased data quantities can make up for reduced model quality. For space-time localization, we find similar qualitative behavior, but significantly degraded spatial localization reliability and less improvement from extra data collection. Since the space-time source localization problem is much more challenging, we also tried a multiple-initial-guess optimization strategy. This greatly enhanced performance, but gains from additional data collection remained limited.

  19. Regularized Semiparametric Estimation for Ordinary Differential Equations

    PubMed Central

    Li, Yun; Zhu, Ji; Wang, Naisyin

    2015-01-01

    Ordinary differential equations (ODEs) are widely used in modeling dynamic systems and have ample applications in the fields of physics, engineering, economics and biological sciences. The ODE parameters often possess physiological meanings and can help scientists gain better understanding of the system. One key interest is thus to well estimate these parameters. Ideally, constant parameters are preferred due to their easy interpretation. In reality, however, constant parameters can be too restrictive such that even after incorporating error terms, there could still be unknown sources of disturbance that lead to poor agreement between observed data and the estimated ODE system. In this paper, we address this issue and accommodate short-term interferences by allowing parameters to vary with time. We propose a new regularized estimation procedure on the time-varying parameters of an ODE system so that these parameters could change with time during transitions but remain constants within stable stages. We found, through simulation studies, that the proposed method performs well and tends to have less variation in comparison to the non-regularized approach. On the theoretical front, we derive finite-sample estimation error bounds for the proposed method. Applications of the proposed method to modeling the hare-lynx relationship and the measles incidence dynamic in Ontario, Canada lead to satisfactory and meaningful results. PMID:26392639

  20. Astrophysics to z ≈ 10 with Gravitational Waves

    NASA Technical Reports Server (NTRS)

    Stebbins, Robin; Hughes, Scott; Lang, Ryan

    2007-01-01

    The most useful characterization of a gravitational wave detector's performance is the accuracy with which astrophysical parameters of potential gravitational wave sources can be estimated. One of the most important source types for the Laser Interferometer Space Antenna (LISA) is inspiraling binaries of black holes. LISA can measure mass and spin to better than 1% for a wide range of masses, even out to high redshifts. The most difficult parameter to estimate accurately is almost always luminosity distance. Nonetheless, LISA can measure the luminosity distance of intermediate-mass black hole binary systems (total mass ~10^4 solar masses) out to z ≈ 10 with distance accuracies approaching 25% in many cases. With this performance, LISA will be able to follow the merger history of black holes from the earliest mergers of proto-galaxies to the present. LISA's performance as a function of mass from 1 to 10^7 solar masses and of redshift out to z ≈ 30 will be described. The re-formulation of LISA's science requirements based on an instrument sensitivity model and parameter estimation will be described.

  1. Estimating source parameters from deformation data, with an application to the March 1997 earthquake swarm off the Izu Peninsula, Japan

    NASA Astrophysics Data System (ADS)

    Cervelli, P.; Murray, M. H.; Segall, P.; Aoki, Y.; Kato, T.

    2001-06-01

    We have applied two Monte Carlo optimization techniques, simulated annealing and random cost, to the inversion of deformation data for fault and magma chamber geometry. These techniques involve an element of randomness that permits them to escape local minima and ultimately converge to the global minimum of misfit space. We have tested the Monte Carlo algorithms on two synthetic data sets. We have also compared them to one another in terms of their efficiency and reliability. We have applied the bootstrap method to estimate confidence intervals for the source parameters, including the correlations inherent in the data. Additionally, we present methods that use the information from the bootstrapping procedure to visualize the correlations between the different model parameters. We have applied these techniques to GPS, tilt, and leveling data from the March 1997 earthquake swarm off the Izu Peninsula, Japan. Using the two Monte Carlo algorithms, we have inferred two sources, a dike and a fault, that fit the deformation data and the patterns of seismicity and that are consistent with the regional stress field.
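
    A sketch of the simulated-annealing half of such an inversion, using a Mogi point source for the vertical displacement field (station geometry, noise level, and all source values are invented; the published study also inverts for a fault source and uses random cost and bootstrapping, omitted here):

        import numpy as np

        rng = np.random.default_rng(2)

        def mogi_uz(x, y, x0, y0, depth, dV, nu=0.25):
            # vertical surface displacement of a Mogi (point pressure) source
            r2 = (x - x0) ** 2 + (y - y0) ** 2
            return (1.0 - nu) * dV / np.pi * depth / (r2 + depth ** 2) ** 1.5

        x = rng.uniform(-10e3, 10e3, 50)
        y = rng.uniform(-10e3, 10e3, 50)
        truth = (1e3, -2e3, 4e3, 1e6)                       # x0, y0, depth, dV
        d_obs = mogi_uz(x, y, *truth) + rng.normal(0, 1e-4, 50)

        def misfit(m):
            return np.sum(((d_obs - mogi_uz(x, y, *m)) / 1e-4) ** 2)

        m = np.array([0.0, 0.0, 5e3, 5e5])                  # starting model
        f, T = misfit(m), 1e3
        steps = np.array([500.0, 500.0, 500.0, 5e4])
        for _ in range(20000):
            trial = m + rng.normal(0, 1, 4) * steps
            if trial[2] <= 0:                               # keep depth positive
                continue
            ft = misfit(trial)
            if ft < f or rng.uniform() < np.exp((f - ft) / T):  # annealing accept
                m, f = trial, ft
            T = max(T * 0.9995, 1.0)                        # geometric cooling
        print(m)

    The random element in the acceptance rule is what lets the search climb out of local minima before the temperature drops.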

  2. Earthquake Source Parameter Estimates for the Charlevoix and Western Quebec Seismic Zones in Eastern Canada

    NASA Astrophysics Data System (ADS)

    Onwuemeka, J.; Liu, Y.; Harrington, R. M.; Peña-Castro, A. F.; Rodriguez Padilla, A. M.; Darbyshire, F. A.

    2017-12-01

    The Charlevoix Seismic Zone (CSZ), located in eastern Canada, experiences a high rate of intraplate earthquakes, hosting more than six M > 6 events since the 17th century. The seismicity rate is similarly high in the Western Quebec seismic zone (WQSZ), where an MN 5.2 event was reported on May 17, 2013. A good understanding of seismicity and its relation to the St-Lawrence paleorift system requires information about event source properties, such as static stress drop and fault orientation (via focal mechanism solutions). In this study, we conduct a systematic estimate of event source parameters using 1) hypoDD to relocate event hypocenters, 2) spectral analysis to derive corner frequency, magnitude, and hence static stress drops, and 3) first arrival polarities to derive focal mechanism solutions of selected events. We use a combined dataset for 817 earthquakes cataloged between June 2012 and May 2017 from the Canadian National Seismograph Network (CNSN), and temporary deployments from the QM-III Earthscope FlexArray and McGill seismic networks. We first relocate 450 events using P and S-wave differential travel-times refined with waveform cross-correlation, and compute focal mechanism solutions for all events with impulsive P-wave arrivals at a minimum of 8 stations using the hybridMT moment tensor inversion algorithm. We then determine corner frequency and seismic moment values by fitting S-wave spectra on transverse components at all stations for all events. We choose the final corner frequency and moment values for each event using the median estimate at all stations. We use the corner frequency and moment estimates to calculate moment magnitudes, static stress-drop values and rupture radii, assuming a circular rupture model. We also investigate scaling relationships between parameters, directivity, and compute apparent source dimensions and source time functions of 15 M 2.4+ events from second-degree moment estimates. To first order, source dimension estimates from both methods generally agree. We observe higher corner frequencies and higher stress drops (ranging from 20 to 70 MPa) typical of intraplate seismicity in comparison with interplate seismicity. We follow a similar approach to study 25 MN 3+ events reported in the WQSZ using data recorded by the CNSN and USArray Transportable Array.
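
    The moment-to-stress-drop step described above is short enough to show explicitly (a worked example with hypothetical values, assuming a Brune circular rupture):

        def brune_stress_drop(m0, fc, beta=3500.0, k=0.37):
            # source radius r = k*beta/fc; static stress drop 7*M0/(16*r^3)
            r = k * beta / fc                    # radius in m, for beta in m/s
            return 7.0 * m0 / (16.0 * r ** 3)    # stress drop in Pa

        m0 = 10 ** (1.5 * 3.0 + 9.1)             # seismic moment of an Mw 3.0 event, N*m
        print(brune_stress_drop(m0, fc=10.0) / 1e6, "MPa")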

  3. Investigating the error sources of the online state of charge estimation methods for lithium-ion batteries in electric vehicles

    NASA Astrophysics Data System (ADS)

    Zheng, Yuejiu; Ouyang, Minggao; Han, Xuebing; Lu, Languang; Li, Jianqiu

    2018-02-01

    State of charge (SOC) estimation is generally acknowledged as one of the most important functions in the battery management system for lithium-ion batteries in new energy vehicles. Though every effort is made for the various online SOC estimation methods to increase estimation accuracy as much as possible within the limited on-chip resources, little of the literature discusses the error sources of those SOC estimation methods. This paper first reviews the commonly studied SOC estimation methods using a conventional classification. A novel perspective focusing on the error analysis of the SOC estimation methods is then proposed. SOC estimation methods are analyzed from the viewpoints of the measured values, models, algorithms and state parameters. Subsequently, error flow charts are proposed to trace the error sources from the signal measurement to the models and algorithms for the widely used online SOC estimation methods in new energy vehicles. Finally, with consideration of the working conditions, the choice of more reliable and applicable SOC estimation methods is discussed, and future development of promising online SOC estimation methods is suggested.

  4. Further comments on sensitivities, parameter estimation, and sampling design in one-dimensional analysis of solute transport in porous media

    USGS Publications Warehouse

    Knopman, Debra S.; Voss, Clifford I.

    1988-01-01

    Sensitivities of solute concentration to parameters associated with first-order chemical decay, boundary conditions, initial conditions, and multilayer transport are examined in one-dimensional analytical models of transient solute transport in porous media. A sensitivity is the change in solute concentration resulting from a change in a model parameter. Sensitivity analysis is important because the minimum information required for the estimation of model parameters by regression on chemical data is expressed in terms of sensitivities. Nonlinear regression models of solute transport were tested on sets of noiseless observations from known models that exceeded the minimum sensitivity information requirements. Results demonstrate that the regression models consistently converged to the correct parameters even when the initial sets of parameter values deviated substantially from the correct parameters. On the basis of the sensitivity analysis, several statements may be made about the design of sampling for parameter estimation for the models examined: (1) estimation of parameters associated with solute transport in the individual layers of a multilayer system is possible even when solute concentrations in the individual layers are mixed in an observation well; (2) when estimating parameters in a decaying upstream boundary condition, observations are best made late in the passage of the front, near a time chosen by adding the inverse of an hypothesized value of the source decay parameter to the estimated mean travel time at a given downstream location; (3) estimation of a first-order chemical decay parameter requires observations to be made late in the passage of the front, preferably near a location corresponding to a travel time of √2 times the half-life of the solute; and (4) estimation of a parameter relating to spatial variability in an initial condition requires observations to be made early in time relative to the passage of the solute front.
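
    The kind of sensitivity examined here can be reproduced with a finite-difference derivative of a one-dimensional advection-dispersion solution with first-order decay (a van Genuchten-type continuous-source form; the parameter values below are illustrative, not those of the cited models):

        import numpy as np
        from scipy.special import erfc

        def conc(t, x, v, D, lam, c0=1.0):
            # 1-D ADE with first-order decay, continuous source at x = 0
            u = np.sqrt(v * v + 4.0 * lam * D)
            a = np.exp((v - u) * x / (2 * D)) * erfc((x - u * t) / (2 * np.sqrt(D * t)))
            b = np.exp((v + u) * x / (2 * D)) * erfc((x + u * t) / (2 * np.sqrt(D * t)))
            return 0.5 * c0 * (a + b)

        t = np.linspace(1.0, 400.0, 400)
        x, v, D, lam = 100.0, 1.0, 10.0, 0.01

        eps = 1e-6   # central-difference sensitivity dC/d(lambda)
        s_lam = (conc(t, x, v, D, lam + eps) - conc(t, x, v, D, lam - eps)) / (2 * eps)
        print(t[np.argmax(np.abs(s_lam))])   # time of maximum sensitivity to decay

    In this configuration the sensitivity to the decay parameter grows as the front passes and is largest late in the breakthrough curve, consistent with point (3) above.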

  5. Computing Fault Displacements from Surface Deformations

    NASA Technical Reports Server (NTRS)

    Lyzenga, Gregory; Parker, Jay; Donnellan, Andrea; Panero, Wendy

    2006-01-01

    Simplex is a computer program that calculates the locations and displacements of subterranean faults from data on Earth-surface deformations. The calculation involves inversion of a forward model (given a point source representing a fault, the forward model calculates the surface deformations, displacements, and strains caused by a fault located in an isotropic, elastic half-space). The inversion involves the use of nonlinear, multiparameter estimation techniques. The input surface-deformation data can be in multiple formats, with absolute or differential positioning. The input data can be derived from multiple sources, including interferometric synthetic-aperture radar, the Global Positioning System, and strain meters. Parameters can be constrained or free. Estimates can be calculated for single or multiple faults. Estimates of parameters are accompanied by reports of their covariances and uncertainties. Simplex has been tested extensively against forward models and against other means of inverting geodetic data and seismic observations.

  6. Source positions from VLBI combined solution

    NASA Astrophysics Data System (ADS)

    Bachmann, S.; Thaller, D.; Engelhardt, G.

    2014-12-01

    The IVS Combination Center at BKG is primarily responsible for combined Earth Orientation Parameter (EOP) products and the generation of a terrestrial reference frame based on VLBI observations (VTRF). The procedure is based on the combination of normal equations provided by six IVS Analysis Centers (ACs). Since more and more ACs also provide source positions in the normal equations, besides EOPs and station coordinates, an estimation of these parameters is possible and should be investigated. In the past, the International Celestial Reference Frame (ICRF) was not generated as a combined solution from several individual solutions, but was based on a single solution provided by one AC. The presentation will give an overview of the combination strategy and the possibilities for combined source position determination. This includes comparisons with existing catalogs, quality estimation, and possibilities for rigorous combination of EOP, TRF and CRF in one combination process.

  7. Empirical Green's function analysis: Taking the next step

    USGS Publications Warehouse

    Hough, S.E.

    1997-01-01

    An extension of the empirical Green's function (EGF) method is presented that involves determination of source parameters using standard EGF deconvolution, followed by inversion for a common attenuation parameter for a set of colocated events. Recordings of three or more colocated events can thus be used to constrain a single path attenuation estimate. I apply this method to recordings from the 1995-1996 Ridgecrest, California, earthquake sequence; I analyze four clusters consisting of 13 total events with magnitudes between 2.6 and 4.9. I first obtain corner frequencies, which are used to infer Brune stress drop estimates. I obtain stress drop values of 0.3-53 MPa (with all but one between 0.3 and 11 MPa), with no resolved increase of stress drop with moment. With the corner frequencies constrained, the inferred attenuation parameters are very consistent; they imply an average shear wave quality factor of approximately 20-25 for alluvial sediments within the Indian Wells Valley. Although the resultant spectral fitting (using corner frequency and κ) is good, the residuals are consistent among the clusters analyzed. Their spectral shape is similar to the theoretical one-dimensional response of a layered low-velocity structure in the valley (an absolute site response cannot be determined by this method, because of an ambiguity between absolute response and source spectral amplitudes). I show that even this subtle site response can significantly bias estimates of corner frequency and κ, if it is ignored in an inversion for only source and path effects. The multiple-EGF method presented in this paper is analogous to a joint inversion for source, path, and site effects; the use of colocated sets of earthquakes appears to offer significant advantages in improving resolution of all three estimates, especially if data are from a single site or sites with similar site response.

  8. AGN neutrino flux estimates for a realistic hybrid model

    NASA Astrophysics Data System (ADS)

    Richter, S.; Spanier, F.

    2018-07-01

    Recent reports of possible correlations between high energy neutrinos observed by IceCube and Active Galactic Nuclei (AGN) activity sparked a burst of publications that attempt to predict the neutrino flux of these sources. However, often rather crude estimates are used to derive the neutrino rate from the observed photon spectra. In this work, neutrino fluxes were computed over a wide parameter space. The starting point of the model was a representation of the full spectral energy distribution (SED) of 3C 279. The time-dependent hybrid model used for this study takes into account the full pγ reaction chain as well as proton synchrotron radiation, electron-positron pair cascades and the full SSC scheme. We compare our results to estimates frequently used in the literature. This allows us to identify regions in the parameter space for which such estimates are still valid and those in which they can produce significant errors. Furthermore, if estimates for the Doppler factor, magnetic field, and proton and electron densities of a source exist, the expected IceCube detection rate is readily available.

  9. Investigating the effects of the fixed and varying dispersion parameters of Poisson-gamma models on empirical Bayes estimates.

    PubMed

    Lord, Dominique; Park, Peter Young-Jin

    2008-07-01

    Traditionally, transportation safety analysts have used the empirical Bayes (EB) method to improve the estimate of the long-term mean of individual sites; to correct for the regression-to-the-mean (RTM) bias in before-after studies; and to identify hotspots or high risk locations. The EB method combines two different sources of information: (1) the expected number of crashes estimated via crash prediction models, and (2) the observed number of crashes at individual sites. Crash prediction models have traditionally been estimated using a negative binomial (NB) (or Poisson-gamma) modeling framework due to the over-dispersion commonly found in crash data. A weight factor is used to assign the relative influence of each source of information on the EB estimate. This factor is estimated using the mean and variance functions of the NB model. Given recent findings that the dispersion parameter can depend upon the covariates of NB models, especially for traffic flow-only models, and can vary as a function of different time periods, there is a need to determine how these models may affect EB estimates. The objectives of this study are to examine how commonly used functional forms as well as fixed and time-varying dispersion parameters affect the EB estimates. To accomplish the study objectives, several traffic flow-only crash prediction models were estimated using a sample of rural three-legged intersections located in California. Two types of aggregated and time-specific models were produced: (1) the traditional NB model with a fixed dispersion parameter and (2) the generalized NB model (GNB) with a time-varying dispersion parameter, which is also dependent upon the covariates of the model. Several statistical methods were used to compare the fitting performance of the various functional forms. The results of the study show that the selection of the functional form of NB models has an important effect on EB estimates in terms of estimated values, weight factors, and dispersion parameters. Time-specific models with a varying dispersion parameter provide better statistical performance in terms of goodness-of-fit (GOF) than aggregated multi-year models. Furthermore, the identification of hazardous sites, using the EB method, can be significantly affected when a GNB model with a time-varying dispersion parameter is used. Thus, erroneously selecting a functional form may lead to selecting the wrong sites for treatment. The study concludes that transportation safety analysts should not automatically use an existing functional form for modeling motor vehicle crashes without conducting rigorous analyses to estimate the most appropriate functional form linking crashes with traffic flow.
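
    The EB combination itself is compact: for a Poisson-gamma model with variance mu + alpha*mu^2, the weight given to the model prediction is w = 1/(1 + alpha*mu). A toy illustration (all numbers invented) of how the dispersion parameter alpha shifts the estimate:

        def eb_estimate(mu_model, n_observed, alpha):
            # w = 1/(1 + alpha*mu): weight given to the crash prediction model
            w = 1.0 / (1.0 + alpha * mu_model)
            return w * mu_model + (1.0 - w) * n_observed

        print(eb_estimate(mu_model=2.0, n_observed=6, alpha=0.5))  # w = 0.50 -> 4.0
        print(eb_estimate(mu_model=2.0, n_observed=6, alpha=1.5))  # w = 0.25 -> 5.0

    A time-varying alpha therefore changes not only goodness-of-fit but also the weight, and hence which sites are flagged as hazardous.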

  10. Applicability of Broad-Band Photometry for Determining the Properties of Stars and Interstellar Extinction

    NASA Astrophysics Data System (ADS)

    Sichevskij, S. G.

    2018-01-01

    The feasibility of determining the physical conditions in a star's atmosphere and the parameters of interstellar extinction from broad-band photometric observations in the 300-3000 nm wavelength interval is studied using SDSS and 2MASS data. The photometric accuracy of these surveys is shown to be insufficient for achieving in practice the theoretical possibility of estimating the atmospheric parameters of stars from ugriz and JHKs photometry exclusively, because such determinations result in correlations between the temperature and extinction estimates. The uncertainty of interstellar extinction estimates can be reduced if prior data about the temperature are available. The surveys considered can nevertheless be potentially valuable sources of information about both stellar atmospheric parameters and the interstellar medium.

  11. Multi-response calibration of a conceptual hydrological model in the semiarid catchment of Wadi al Arab, Jordan

    NASA Astrophysics Data System (ADS)

    Rödiger, T.; Geyer, S.; Mallast, U.; Merz, R.; Krause, P.; Fischer, C.; Siebert, C.

    2014-02-01

    A key factor for sustainable management of groundwater systems is the accurate estimation of groundwater recharge. Hydrological models are common and widely used tools for such estimations. As such models need to be calibrated against measured values, the absence of adequate data can be problematic. We present a nested multi-response calibration approach for a semi-distributed hydrological model in the semi-arid catchment of Wadi al Arab in Jordan, with sparsely available runoff data. The basic idea of the calibration approach is to use diverse observations in a nested strategy, in which sub-parts of the model are calibrated to various observation data types in a consecutive manner. First, the available data sources have to be screened for their information content on processes, e.g. whether a data source contains information on mean values, or on spatial or temporal variability, for the entire catchment or only for sub-catchments. In a second step, the information content has to be mapped to the relevant model components that represent these processes. Then the data source is used to calibrate the respective subset of model parameters, while the remaining model parameters remain unchanged. This mapping is repeated for the other available data sources. In this study, the gauged spring discharge (GSD) method, flash-flood observations and data from the chloride mass balance (CMB) are used to derive plausible parameter ranges for the conceptual hydrological model J2000g. The water table fluctuation (WTF) method is used to validate the model. Results are compared against a benchmark model run with a priori parameter values from the literature. The estimated recharge rates of the calibrated model deviate by less than ±10% from the estimates derived from the WTF method. Larger differences are visible in years with high uncertainties in the rainfall input data. The calibrated model performs better during validation than the model with only a priori parameter values, which tends to overestimate recharge rates by up to 30%, particularly in the wet winter of 1991/1992. An overestimation of groundwater recharge, and hence of available water resources, clearly endangers reliable water resource management in a water-scarce region. The proposed nested multi-response approach may help to better predict water resources despite data scarcity.

  12. Statistical models for incorporating data from routine HIV testing of pregnant women at antenatal clinics into HIV/AIDS epidemic estimates.

    PubMed

    Sheng, Ben; Marsh, Kimberly; Slavkovic, Aleksandra B; Gregson, Simon; Eaton, Jeffrey W; Bao, Le

    2017-04-01

    HIV prevalence data collected from routine HIV testing of pregnant women at antenatal clinics (ANC-RT) are potentially available from all facilities that offer testing services to pregnant women and can be used to improve estimates of national and subnational HIV prevalence trends. We develop methods to incorporate this new data source into the UNAIDS Estimation and Projection Package in Spectrum 2017. We develop a new statistical model for incorporating ANC-RT HIV prevalence data, aggregated either to the health facility level (site-level) or regionally (census-level), to estimate HIV prevalence alongside existing sources of HIV prevalence data from ANC unlinked anonymous testing (ANC-UAT) and household-based national population surveys. Synthetic data are generated to understand how the availability of ANC-RT data affects the accuracy of various parameter estimates. We estimate HIV prevalence and additional parameters using both ANC-RT and other existing data. Fitting HIV prevalence using synthetic data generally gives precise estimates of the underlying trend and other parameters. More years of ANC-RT data should improve prevalence estimates. More ANC-RT sites and continuation with existing ANC-UAT sites may improve the estimate of calibration between ANC-UAT and ANC-RT sites. We have proposed methods to incorporate ANC-RT data into Spectrum to obtain more precise estimates of prevalence and other measures of the epidemic. Many assumptions about the accuracy, consistency, and representativeness of ANC-RT prevalence underlie the use of these data for monitoring HIV epidemic trends and should be tested as more data become available from national ANC-RT programs.

  13. Statistical Models for Incorporating Data from Routine HIV Testing of Pregnant Women at Antenatal Clinics into HIV/AIDS Epidemic Estimates

    PubMed Central

    Sheng, Ben; Marsh, Kimberly; Slavkovic, Aleksandra B.; Gregson, Simon; Eaton, Jeffrey W.; Bao, Le

    2017-01-01

    Objective HIV prevalence data collected from routine HIV testing of pregnant women at antenatal clinics (ANC-RT) are potentially available from all facilities that offer testing services to pregnant women, and can be used to improve estimates of national and sub-national HIV prevalence trends. We develop methods to incorporate this new data source into the UNAIDS Estimation and Projection Package (EPP) in Spectrum 2017. Methods We develop a new statistical model for incorporating ANC-RT HIV prevalence data, aggregated either to the health facility level (‘site-level’) or regionally (‘census-level’), to estimate HIV prevalence alongside existing sources of HIV prevalence data from ANC unlinked anonymous testing (ANC-UAT) and household-based national population surveys. Synthetic data are generated to understand how the availability of ANC-RT data affects the accuracy of various parameter estimates. Results We estimate HIV prevalence and additional parameters using both ANC-RT and other existing data. Fitting HIV prevalence using synthetic data generally gives precise estimates of the underlying trend and other parameters. More years of ANC-RT data should improve prevalence estimates. More ANC-RT sites and continuation with existing ANC-UAT sites may improve the estimate of calibration between ANC-UAT and ANC-RT sites. Conclusion We have proposed methods to incorporate ANC-RT data into Spectrum to obtain more precise estimates of prevalence and other measures of the epidemic. Many assumptions about the accuracy, consistency, and representativeness of ANC-RT prevalence underlie the use of these data for monitoring HIV epidemic trends, and should be tested as more data become available from national ANC-RT programs. PMID:28296804

  14. Extending the Lincoln-Petersen estimator for multiple identifications in one source.

    PubMed

    Köse, T; Orman, M; Ikiz, F; Baksh, M F; Gallagher, J; Böhning, D

    2014-10-30

    The Lincoln-Petersen estimator is one of the most popular estimators used in capture-recapture studies. It was developed for a sampling situation in which two sources independently identify members of a target population. For each of the two sources, it is determined if a unit of the target population is identified or not. This leads to a 2 × 2 table with frequencies f11, f10, f01, f00 indicating the number of units identified by both sources, by the first but not the second source, by the second but not the first source, and not identified by either of the two sources, respectively. However, f00 is unobserved, so that the 2 × 2 table is incomplete and the Lincoln-Petersen estimator provides an estimate for f00. In this paper, we consider a generalization of this situation in which one source provides not only a binary identification outcome but also a count outcome of how many times a unit has been identified. Using a truncated Poisson count model, truncating multiple identifications larger than two, we propose a maximum likelihood estimator of the Poisson parameter and, ultimately, of the population size. This estimator shows benefits, in comparison with Lincoln-Petersen's, in terms of bias and efficiency. It is possible to test the homogeneity assumption that is not testable in the Lincoln-Petersen framework. The approach is applied to surveillance data on syphilis from Izmir, Turkey. Copyright © 2014 John Wiley & Sons, Ltd.
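
    A numerical sketch of the two estimators (all counts invented; the paper's estimator is a maximum-likelihood version of the count-based idea shown here in simple moment form):

        import numpy as np

        # classical Lincoln-Petersen from the 2x2 table with f00 unobserved
        f11, f10, f01 = 60, 40, 80
        n1, n2, m = f11 + f10, f11 + f01, f11
        print("Lincoln-Petersen N:", n1 * n2 / m)

        # one source also records HOW OFTEN units were seen: f1 seen once, f2 twice.
        # Under a Poisson model truncated above two, E[f2]/E[f1] = lambda/2, so
        # lambda ~ 2*f2/f1, and the never-seen fraction is exp(-lambda).
        f1, f2 = 90, 27
        lam = 2.0 * f2 / f1
        n_seen = f1 + f2
        print("count-based N:", n_seen / (1.0 - np.exp(-lam)))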

  15. Surface Craft Motion Parameter Estimation Using Multipath Delay Measurements from Hydrophones

    DTIC Science & Technology

    2011-12-01

    ... the sensor is d_c. The slant range of the source from the sensor at time t is given by R(t) = [v^2 (t - τ_c)^2 + R_c^2]^(1/2) (1), where R_c = [(h_t - h_r)^2 + d_c^2]^(1/2) ... Lo, Kam W.; Ferguson, Brian G. (Maritime Operations Division, DSTO, Eveleigh, NSW 2015, Australia; kam.lo@dsto.defence.gov.au; brian.ferguson@dsto.defence.gov.au). Abstract: An equation-error (EE) method is ...

  16. HDDM: Hierarchical Bayesian estimation of the Drift-Diffusion Model in Python.

    PubMed

    Wiecki, Thomas V; Sofer, Imri; Frank, Michael J

    2013-01-01

    The diffusion model is a commonly used tool to infer latent psychological processes underlying decision-making, and to link them to neural mechanisms based on response times. Although efficient open source software has been made available to quantitatively fit the model to data, current estimation methods require an abundance of response time measurements to recover meaningful parameters, and only provide point estimates of each parameter. In contrast, hierarchical Bayesian parameter estimation methods are useful for enhancing statistical power, allowing for simultaneous estimation of individual subject parameters and the group distribution that they are drawn from, while also providing measures of uncertainty in these parameters in the posterior distribution. Here, we present a novel Python-based toolbox called HDDM (hierarchical drift diffusion model), which allows fast and flexible estimation of the drift-diffusion model and the related linear ballistic accumulator model. HDDM requires fewer data per subject/condition than non-hierarchical methods, allows for full Bayesian data analysis, and can handle outliers in the data. Finally, HDDM supports the estimation of how trial-by-trial measurements (e.g., fMRI) influence decision-making parameters. This paper will first describe the theoretical background of the drift diffusion model and Bayesian inference. We then illustrate usage of the toolbox on a real-world data set from our lab. Finally, parameter recovery studies show that HDDM beats alternative fitting methods like the χ²-quantile method as well as maximum likelihood estimation. The software and documentation can be downloaded at: http://ski.clps.brown.edu/hddm_docs/
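
    Typical usage, following the interface described by the authors (the CSV file name and the 'stim' condition column are placeholders):

        import hddm

        # data are expected to contain columns such as rt, response, subj_idx
        data = hddm.load_csv('mydata.csv')

        m = hddm.HDDM(data, depends_on={'v': 'stim'})  # drift rate varies by stimulus
        m.find_starting_values()                       # optimize a starting point
        m.sample(2000, burn=200)                       # MCMC posterior sampling
        m.print_stats()                                # per-parameter posterior summaries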

  17. Directly comparing gravitational wave data to numerical relativity simulations: systematics

    NASA Astrophysics Data System (ADS)

    Lange, Jacob; O'Shaughnessy, Richard; Healy, James; Lousto, Carlos; Zlochower, Yosef; Shoemaker, Deirdre; Lovelace, Geoffrey; Pankow, Christopher; Brady, Patrick; Scheel, Mark; Pfeiffer, Harald; Ossokine, Serguei

    2017-01-01

    We compare synthetic data directly to complete numerical relativity simulations of binary black holes. In doing so, we circumvent ad hoc approximations introduced in semi-analytical models previously used in gravitational wave parameter estimation and compare the data against the most accurate waveforms including higher modes. In this talk, we focus on the synthetic studies that test potential sources of systematic errors. We also run "end-to-end" studies of intrinsically different synthetic sources to show we can recover parameters for different systems.

  18. Pairing field methods to improve inference in wildlife surveys while accommodating detection covariance

    USGS Publications Warehouse

    Clare, John; McKinney, Shawn T.; DePue, John E.; Loftin, Cynthia S.

    2017-01-01

    It is common to use multiple field sampling methods when implementing wildlife surveys to compare method efficacy or cost efficiency, integrate distinct pieces of information provided by separate methods, or evaluate method-specific biases and misclassification error. Existing models that combine information from multiple field methods or sampling devices permit rigorous comparison of method-specific detection parameters, enable estimation of additional parameters such as false-positive detection probability, and improve occurrence or abundance estimates, but with the assumption that the separate sampling methods produce detections independently of one another. This assumption is tenuous if methods are paired or deployed in close proximity simultaneously, a common practice that reduces the additional effort required to implement multiple methods and reduces the risk that differences between method-specific detection parameters are confounded by other environmental factors. We develop occupancy and spatial capture–recapture models that permit covariance between the detections produced by different methods, use simulation to compare estimator performance of the new models to models assuming independence, and provide an empirical application based on American marten (Martes americana) surveys using paired remote cameras, hair catches, and snow tracking. Simulation results indicate existing models that assume that methods independently detect organisms produce biased parameter estimates and substantially understate estimate uncertainty when this assumption is violated, while our reformulated models are robust to either methodological independence or covariance. Empirical results suggested that remote cameras and snow tracking had comparable probability of detecting present martens, but that snow tracking also produced false-positive marten detections that could potentially substantially bias distribution estimates if not corrected for. Remote cameras detected marten individuals more readily than passive hair catches. Inability to photographically distinguish individual sex did not appear to induce negative bias in camera density estimates; instead, hair catches appeared to produce detection competition between individuals that may have been a source of negative bias. Our model reformulations broaden the range of circumstances in which analyses incorporating multiple sources of information can be robustly used, and our empirical results demonstrate that using multiple field-methods can enhance inferences regarding ecological parameters of interest and improve understanding of how reliably survey methods sample these parameters.
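
    A simulation sketch of the covariance problem the reformulated models address (not the authors' estimator; occupancy and detection probabilities are invented): two paired methods share a latent encounter event, so their detections covary at occupied sites.

        import numpy as np

        rng = np.random.default_rng(4)
        n_sites, psi = 5000, 0.6
        occupied = rng.uniform(size=n_sites) < psi

        encounter = occupied & (rng.uniform(size=n_sites) < 0.7)   # animal visits station
        det_cam = encounter & (rng.uniform(size=n_sites) < 0.8)    # camera detects
        det_track = encounter & (rng.uniform(size=n_sites) < 0.6)  # tracks detected

        both = np.mean(det_cam & det_track)
        independent = np.mean(det_cam) * np.mean(det_track)
        print(both, independent)  # joint detection exceeds the independence prediction

    A model that assumes independent methods would interpret the excess joint detections as higher detection probability, which is the bias the simulations above quantify.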

  19. libSRES: a C library for stochastic ranking evolution strategy for parameter estimation.

    PubMed

    Ji, Xinglai; Xu, Ying

    2006-01-01

    Estimation of kinetic parameters in a biochemical pathway or network represents a common problem in systems studies of biological processes. We have implemented a C library, named libSRES, to facilitate fast implementation of computer software for the study of non-linear biochemical pathways. This library implements a (mu, lambda)-ES evolutionary optimization algorithm that uses stochastic ranking as the constraint handling technique. Considering the amount of computing time it might require to solve a parameter-estimation problem, an MPI version of libSRES is provided for parallel implementation, as well as a simple user interface. libSRES is freely available and can be used directly in any C program as a library function. We have extensively tested libSRES on various pathway parameter-estimation problems and found its performance to be satisfactory. The source code (in C) is free for academic users at http://csbl.bmb.uga.edu/~jix/science/libSRES/
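
    libSRES itself is C; as a language-neutral illustration of the underlying (mu, lambda)-ES with stochastic ranking (Runarsson-Yao style), here is a compact Python sketch with a toy objective and one constraint (all settings illustrative; the library also self-adapts mutation step sizes, which is omitted here):

        import numpy as np

        rng = np.random.default_rng(3)
        mu, lam, dim, pf = 10, 60, 5, 0.45

        def f(x):  # objective to minimize
            return np.sum((x - 1.0) ** 2)

        def g(x):  # constraint violation: require sum(x) <= 3
            return max(0.0, np.sum(x) - 3.0)

        def stochastic_rank(pop):
            idx = list(range(len(pop)))
            for _ in range(len(pop)):          # bubble-sort-like sweeps
                swapped = False
                for i in range(len(pop) - 1):
                    a, b = pop[idx[i]], pop[idx[i + 1]]
                    if (g(a) == 0.0 and g(b) == 0.0) or rng.uniform() < pf:
                        do_swap = f(a) > f(b)  # compare by objective
                    else:
                        do_swap = g(a) > g(b)  # compare by violation
                    if do_swap:
                        idx[i], idx[i + 1] = idx[i + 1], idx[i]
                        swapped = True
                if not swapped:
                    break
            return idx

        pop = rng.uniform(-5, 5, (lam, dim))
        for _ in range(150):
            parents = pop[stochastic_rank(pop)[:mu]]                 # comma selection
            pop = parents[rng.integers(0, mu, lam)] + rng.normal(0, 0.1, (lam, dim))
        best = pop[stochastic_rank(pop)[0]]
        print(best, f(best), g(best))

    The probability pf of ranking by objective even when a solution is infeasible is what lets the search approach the optimum from the infeasible side instead of being trapped at the constraint boundary.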

  20. Application of maximum entropy to statistical inference for inversion of data from a single track segment.

    PubMed

    Stotts, Steven A; Koch, Robert A

    2017-08-01

    In this paper an approach is presented to estimate the constraint required to apply maximum entropy (ME) for statistical inference with underwater acoustic data from a single track segment. Previous algorithms for estimating the ME constraint require multiple source track segments to determine the constraint. The approach is relevant for addressing model mismatch effects, i.e., inaccuracies in parameter values determined from inversions because the propagation model does not account for all acoustic processes that contribute to the measured data. One effect of model mismatch is that the lowest cost inversion solution may be well outside a relatively well-known parameter value's uncertainty interval (prior), e.g., source speed from track reconstruction or towed source levels. The approach requires, for some particular parameter value, the ME constraint to produce an inferred uncertainty interval that encompasses the prior. Motivating this approach is the hypothesis that the proposed constraint determination procedure would produce a posterior probability density that accounts for the effect of model mismatch on inferred values of other inversion parameters for which the priors might be quite broad. Applications to both measured and simulated data are presented for model mismatch that produces minimum cost solutions either inside or outside some priors.

  1. GBIS (Geodetic Bayesian Inversion Software): Rapid Inversion of InSAR and GNSS Data to Estimate Surface Deformation Source Parameters and Uncertainties

    NASA Astrophysics Data System (ADS)

    Bagnardi, M.; Hooper, A. J.

    2017-12-01

    Inversions of geodetic observational data, such as Interferometric Synthetic Aperture Radar (InSAR) and Global Navigation Satellite System (GNSS) measurements, are often performed to obtain information about the source of surface displacements. Inverse problem theory has been applied to study magmatic processes, the earthquake cycle, and other phenomena that cause deformation of the Earth's interior and of its surface. Together with increasing improvements in data resolution, both spatial and temporal, new satellite missions (e.g., European Commission's Sentinel-1 satellites) are providing the unprecedented opportunity to access space-geodetic data within hours from their acquisition. To truly take advantage of these opportunities we must become able to interpret geodetic data in a rapid and robust manner. Here we present the open-source Geodetic Bayesian Inversion Software (GBIS; available for download at http://comet.nerc.ac.uk/gbis). GBIS is written in Matlab and offers a series of user-friendly and interactive pre- and post-processing tools. For example, an interactive function has been developed to estimate the characteristics of noise in InSAR data by calculating the experimental semi-variogram. The inversion software uses a Markov-chain Monte Carlo algorithm, incorporating the Metropolis-Hastings algorithm with adaptive step size, to efficiently sample the posterior probability distribution of the different source parameters. The probabilistic Bayesian approach allows the user to retrieve estimates of the optimal (best-fitting) deformation source parameters together with the associated uncertainties produced by errors in the data (and, by scaling, errors in the model). The current version of GBIS (V1.0) includes fast analytical forward models for magmatic sources of different geometry (e.g., point source, finite spherical source, prolate spheroid source, penny-shaped sill-like source, and dipping-dike with uniform opening) and for dipping faults with uniform slip, embedded in an isotropic elastic half-space. However, the software architecture allows the user to easily add any other analytical or numerical forward models to calculate displacements at the surface. GBIS is delivered with a detailed user manual and three synthetic datasets for testing and practical training.

  2. Estimating extinction using unsupervised machine learning

    NASA Astrophysics Data System (ADS)

    Meingast, Stefan; Lombardi, Marco; Alves, João

    2017-05-01

    Dust extinction is the most robust tracer of the gas distribution in the interstellar medium, but measuring extinction is limited by the systematic uncertainties involved in estimating the intrinsic colors to background stars. In this paper we present a new technique, Pnicer, that estimates intrinsic colors and extinction for individual stars using unsupervised machine learning algorithms. This new method aims to be free from any priors with respect to the column density and intrinsic color distribution. It is applicable to any combination of parameters and works in arbitrary numbers of dimensions. Furthermore, it is not restricted to color space. Extinction toward single sources is determined by fitting Gaussian mixture models along the extinction vector to (extinction-free) control field observations. In this way it becomes possible to describe the extinction for observed sources with probability densities, rather than a single value. Pnicer effectively eliminates known biases found in similar methods and outperforms them in cases of deep observational data where the number of background galaxies is significant, or when a large number of parameters is used to break degeneracies in the intrinsic color distributions. This new method remains computationally competitive, making it possible to correctly de-redden millions of sources within a matter of seconds. With the ever-increasing number of large-scale high-sensitivity imaging surveys, Pnicer offers a fast and reliable way to efficiently calculate extinction for arbitrary parameter combinations without prior information on source characteristics. The Pnicer software package also offers access to the well-established Nicer technique in a simple unified interface and is capable of building extinction maps including the Nicest correction for cloud substructure. Pnicer is offered to the community as an open-source software solution and is entirely written in Python.

  3. Parameter estimation in a structural acoustic system with fully nonlinear coupling conditions

    NASA Technical Reports Server (NTRS)

    Banks, H. T.; Smith, Ralph C.

    1994-01-01

    A methodology for estimating physical parameters in a class of structural acoustic systems is presented. The general model under consideration consists of an interior cavity which is separated from an exterior noise source by an enclosing elastic structure. Piezoceramic patches are bonded to or embedded in the structure; these can be used both as actuators and sensors in applications ranging from the control of interior noise levels to the determination of structural flaws through nondestructive evaluation techniques. The presence and excitation of patches, however, changes the geometry and material properties of the structure and involves unknown patch parameters, thus necessitating the development of parameter estimation techniques which are applicable in this coupled setting. In developing a framework for approximation, parameter estimation and implementation, strong consideration is given to the fact that the input operator is unbounded due to the discrete nature of the patches. Moreover, the model is weakly nonlinear as a result of the coupling mechanism between the structural vibrations and the interior acoustic dynamics. Within this context, an illustrative model is given, well-posedness and approximation results are discussed, and an applicable parameter estimation methodology is presented. The scheme is then illustrated through several numerical examples with simulations modeling a variety of commonly used structural acoustic techniques for system excitation and data collection.

  4. Quantifying model-structure- and parameter-driven uncertainties in spring wheat phenology prediction with Bayesian analysis

    DOE PAGES

    Alderman, Phillip D.; Stanfill, Bryan

    2016-10-06

    Recent international efforts have brought renewed emphasis on the comparison of different agricultural systems models. Thus far, analysis of model-ensemble simulated results has not clearly differentiated between ensemble prediction uncertainties due to model structural differences per se and those due to parameter value uncertainties. Additionally, despite increasing use of Bayesian parameter estimation approaches with field-scale crop models, inadequate attention has been given to the full posterior distributions of estimated parameters. The objectives of this study were to quantify the impact of parameter value uncertainty on prediction uncertainty for modeling spring wheat phenology using Bayesian analysis and to assess the relative contributions of model-structure-driven and parameter-value-driven uncertainty to overall prediction uncertainty. This study used a random walk Metropolis algorithm to estimate parameters for 30 spring wheat genotypes using nine phenology models based on multi-location trial data for days to heading and days to maturity. Across all cases, parameter-driven uncertainty accounted for between 19 and 52% of predictive uncertainty, while model-structure-driven uncertainty accounted for between 12 and 64%. This study demonstrated the importance of quantifying both model-structure- and parameter-value-driven uncertainty when assessing overall prediction uncertainty in modeling spring wheat phenology. More generally, Bayesian parameter estimation provided a useful framework for quantifying and analyzing sources of prediction uncertainty.

  5. Instantaneous and time-averaged dispersion and measurement models for estimation theory applications with elevated point source plumes

    NASA Technical Reports Server (NTRS)

    Diamante, J. M.; Englar, T. S., Jr.; Jazwinski, A. H.

    1977-01-01

    Estimation theory, which originated in guidance and control research, is applied to the analysis of air quality measurements and atmospheric dispersion models to provide reliable area-wide air quality estimates. A method for low dimensional modeling (in terms of the estimation state vector) of the instantaneous and time-average pollutant distributions is discussed. In particular, the fluctuating plume model of Gifford (1959) is extended to provide an expression for the instantaneous concentration due to an elevated point source. Individual models are also developed for all parameters in the instantaneous and the time-average plume equations, including the stochastic properties of the instantaneous fluctuating plume.
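
    The fluctuating-plume extension is not reproduced in the record; for orientation, here is a sketch of the classical time-averaged Gaussian plume for an elevated point source that such models generalize, using textbook notation rather than the paper's.

        import numpy as np

        def gaussian_plume(q, u, h, y, z, sigma_y, sigma_z):
            """Time-averaged concentration downwind of an elevated point source.

            q: emission rate, u: mean wind speed, h: effective source height,
            sigma_y/sigma_z: crosswind/vertical dispersion lengths at the
            downwind distance of interest. Ground reflection is included
            via the image-source term.
            """
            lateral = np.exp(-y**2 / (2.0 * sigma_y**2))
            vertical = (np.exp(-(z - h)**2 / (2.0 * sigma_z**2))
                        + np.exp(-(z + h)**2 / (2.0 * sigma_z**2)))
            return q * lateral * vertical / (2.0 * np.pi * u * sigma_y * sigma_z)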

  6. A Model for Remote Depth Estimation of Buried Radioactive Wastes Using CdZnTe Detector.

    PubMed

    Ukaegbu, Ikechukwu Kevin; Gamage, Kelum A A

    2018-05-18

    This paper presents the results of an attenuation model for remote depth estimation of buried radioactive wastes using a Cadmium Zinc Telluride (CZT) detector. Previous research using an organic liquid scintillator detector system showed that the model is able to estimate the depth of a 329-kBq Cs-137 radioactive source buried up to 12 cm in sand with an average count rate of 100 cps. The results presented in this paper showed that the use of the CZT detector extended the maximum detectable depth of the same radioactive source to 18 cm in sand with a significantly lower average count rate of 14 cps. Furthermore, the model also successfully estimated the depth of a 9-kBq Co-60 source buried up to 3 cm in sand. This confirms that this remote depth estimation method can be used with other radionuclides and wastes with very low activity. Finally, the paper proposes a performance parameter for evaluating radiation detection systems that implement this remote depth estimation method.
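
    The record does not give the calibration details, but the underlying idea is simple Beer-Lambert attenuation of the gamma count rate with burial depth. A minimal sketch, with purely hypothetical numbers (the attenuation coefficient and zero-depth count rate below are not taken from the paper):

        import numpy as np

        def depth_from_count_rate(c, c0, mu):
            """Invert Beer-Lambert attenuation, c = c0 * exp(-mu * d), for depth d.

            c  : measured count rate (cps)
            c0 : count rate at zero burial depth (cps)
            mu : effective linear attenuation coefficient of the medium (1/cm)
            """
            return np.log(c0 / c) / mu

        # Illustrative only: 20 cps against an assumed unattenuated 200 cps
        # with mu = 0.12 cm^-1 gives a depth of roughly 19 cm.
        print(depth_from_count_rate(20.0, 200.0, 0.12))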

  7. Problems encountered with the use of simulation in an attempt to enhance interpretation of a secondary data source in epidemiologic mental health research

    PubMed Central

    2010-01-01

    Background: The longitudinal epidemiology of major depressive episodes (MDE) is poorly characterized in most countries. Some potentially relevant data sources may be underutilized because they are not conducive to estimating the most salient epidemiologic parameters. An available data source in Canada provides estimates that are potentially valuable, but that are difficult to apply in clinical or public health practice. For example, weeks depressed in the past year is assessed in this data source whereas episode duration would be of more interest. The goal of this project was to derive, using simulation, more readily interpretable parameter values from the available data. Findings: The data source was a Canadian longitudinal study called the National Population Health Survey (NPHS). A simulation model representing the course of depressive episodes was used to reshape estimates deriving from binary and ordinal logistic models (fit to the NPHS data) into equations more capable of informing clinical and public health decisions. Discrete event simulation was used for this purpose. Whereas the intention was to clarify a complex epidemiology, the models themselves needed to become excessively complex in order to provide an accurate description of the data. Conclusions: Simulation methods are useful in circumstances where a representation of a real-world system has practical value. In this particular scenario, the usefulness of simulation was limited both by problems with the data source and by inherent complexity of the underlying epidemiology. PMID:20796271

  8. Cosmological evolution of supermassive black holes in the centres of galaxies

    NASA Astrophysics Data System (ADS)

    Kapinska, Anna D.

    2012-06-01

    Radio galaxies and quasars are among the largest and most powerful single objects known and are believed to have had a significant impact on the evolving Universe and its large scale structure. Their jets inject a significant amount of energy into the surrounding medium, hence they can provide useful information in the study of the density and evolution of the intergalactic and intracluster medium. The jet activity is also believed to regulate the growth of massive galaxies via AGN feedback. In this thesis I explore the intrinsic and extrinsic physical properties of the population of Fanaroff-Riley II (FR II) objects, i.e. their kinetic luminosities, lifetimes, and the central densities of their environments. In particular, the radio and kinetic luminosity functions of these powerful radio sources are investigated using the complete, flux-limited radio catalogues of 3CRR and BRL. I construct multidimensional Monte Carlo simulations using semi-analytical models of FR II source time evolution to create artificial samples of radio galaxies. Unlike previous studies, I compare radio luminosity functions found with both the observed and simulated data to explore the best-fitting fundamental source parameters. The Monte Carlo method presented here allows one to: (i) set better limits on the predicted fundamental parameters, for which confidence intervals estimated over broad ranges are presented, and (ii) generate the most plausible underlying parent populations of these radio sources. Moreover, I allow the source physical properties to co-evolve with redshift, and I find that all the investigated parameters most likely undergo cosmological evolution; however, these parameters are strongly degenerate, and independent constraints are necessary to draw more precise conclusions. Furthermore, since it has been suggested that low luminosity FR IIs may be distinct from their powerful equivalents, I investigate the fundamental properties of a sample of low redshift, low radio luminosity density radio galaxies. Based on the SDSS-FIRST-NVSS radio sample, I construct a low frequency (325 MHz) sample of radio galaxies and explore the fundamental properties of these low luminosity radio sources. The results are discussed through comparison with the results from the powerful radio sources of the 3CRR and BRL samples. Finally, I investigate the total power injected by populations of these powerful radio sources at various cosmological epochs and discuss the significance of the impact of these sources on the evolving Universe. Remarkably, the two degenerate fundamental parameters, the kinetic luminosity and the maximum lifetime of radio sources, provide, despite their degeneracy, particularly robust estimates of the total power produced by FR IIs during their lifetimes. This can also be used for robust estimation of the quenching of cooling flows in clusters of galaxies.

  9. Estimation of Nutation Time Constant Model Parameters for On-Axis Spinning Spacecraft

    NASA Technical Reports Server (NTRS)

    Schlee, Keith; Sudermann, James

    2008-01-01

    Calculating an accurate nutation time constant for a spinning spacecraft is an important step for ensuring mission success. Spacecraft nutation is caused by energy dissipation about the spin axis. Propellant slosh in the spacecraft fuel tanks is the primary source for this dissipation and can be simulated using a forced motion spin table. Mechanical analogs, such as pendulums and rotors, are typically used to simulate propellant slosh. A strong desire exists for an automated method to determine these analog parameters. The method presented accomplishes this task by using a MATLAB Simulink/SimMechanics based simulation that utilizes the Parameter Estimation Tool.

  10. An uncertainty analysis of the hydrogen source term for a station blackout accident in Sequoyah using MELCOR 1.8.5

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gauntt, Randall O.; Bixler, Nathan E.; Wagner, Kenneth Charles

    2014-03-01

    A methodology for using the MELCOR code with the Latin Hypercube Sampling method was developed to estimate uncertainty in various predicted quantities, such as hydrogen generation or release of fission products under severe accident conditions. In this case, the emphasis was on estimating the range of hydrogen sources in station blackout conditions in the Sequoyah Ice Condenser plant, taking into account uncertainties in the modeled physics known to affect hydrogen generation. The method uses user-specified likelihood distributions for uncertain model parameters, which may include uncertainties of a stochastic nature, to produce a collection of code calculations, or realizations, characterizing the range of possible outcomes. Forty MELCOR code realizations of Sequoyah were conducted that included 10 uncertain parameters, producing a range of in-vessel hydrogen quantities. The range of total hydrogen produced was approximately 583 kg ± 131 kg. Sensitivity analyses revealed expected trends with respect to the parameters of greatest importance; however, considerable scatter was observed in the results when plotted against any of the uncertain parameters, with no parameter manifesting dominant effects on hydrogen generation. It is concluded that, with respect to the physics parameters investigated, in order to further reduce predicted hydrogen uncertainty, it would be necessary to reduce all physics parameter uncertainties similarly, bearing in mind that some parameters are inherently uncertain within a range. It is suspected that some residual uncertainty associated with modeling complex, coupled and synergistic phenomena is an inherent aspect of complex systems and cannot be reduced to point value estimates. Probabilistic analyses such as the one demonstrated in this work are important to properly characterize the response of complex systems such as severe accident progression in nuclear power plants.
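
    As an illustration of the sampling step, a minimal Latin Hypercube design in Python; the bounds and distributions are placeholders (the study used expert-specified likelihood distributions for each MELCOR parameter, not necessarily uniform ones).

        import numpy as np

        def latin_hypercube(n_samples, bounds, seed=0):
            """Latin Hypercube Sampling: one draw per equal-probability stratum
            in each dimension, with stratum order shuffled per dimension."""
            rng = np.random.default_rng(seed)
            samples = np.empty((n_samples, len(bounds)))
            for j, (lo, hi) in enumerate(bounds):
                # Stratify [0, 1) into n_samples bins and jitter within each bin
                u = (rng.permutation(n_samples) + rng.uniform(size=n_samples)) / n_samples
                samples[:, j] = lo + u * (hi - lo)
            return samples

        # e.g. 40 realizations over 10 uncertain parameters, as in the study
        design = latin_hypercube(40, [(0.0, 1.0)] * 10)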

  11. Mixed Effects Modeling Using Stochastic Differential Equations: Illustrated by Pharmacokinetic Data of Nicotinic Acid in Obese Zucker Rats.

    PubMed

    Leander, Jacob; Almquist, Joachim; Ahlström, Christine; Gabrielsson, Johan; Jirstrand, Mats

    2015-05-01

    Inclusion of stochastic differential equations in mixed effects models provides means to quantify and distinguish three sources of variability in data. In addition to the two commonly encountered sources, measurement error and interindividual variability, we also consider uncertainty in the dynamical model itself. To this end, we extend the ordinary differential equation setting used in nonlinear mixed effects models to include stochastic differential equations. The approximate population likelihood is derived using the first-order conditional estimation with interaction method and extended Kalman filtering. To illustrate the application of the stochastic differential mixed effects model, two pharmacokinetic models are considered. First, we use a stochastic one-compartment model with first-order input and nonlinear elimination to generate synthetic data in a simulated study. We show that by using the proposed method, the three sources of variability can be successfully separated. If the stochastic part is neglected, the parameter estimates become biased, and the measurement error variance is significantly overestimated. Second, we consider an extension to a stochastic pharmacokinetic model in a preclinical study of nicotinic acid kinetics in obese Zucker rats. The parameter estimates are compared between a deterministic and a stochastic NiAc disposition model. Discrepancies between model predictions and observations, previously described as measurement noise only, are now separated into a comparatively lower level of measurement noise and a significant uncertainty in model dynamics. These examples demonstrate that stochastic differential mixed effects models are useful tools for identifying incomplete or inaccurate model dynamics and for reducing potential bias in parameter estimates due to such model deficiencies.

  12. Development and validation of a new population-based simulation model of osteoarthritis in New Zealand.

    PubMed

    Wilson, R; Abbott, J H

    2018-04-01

    To describe the construction and preliminary validation of a new population-based microsimulation model developed to analyse the health and economic burden and cost-effectiveness of treatments for knee osteoarthritis (OA) in New Zealand (NZ). We developed the New Zealand Management of Osteoarthritis (NZ-MOA) model, a discrete-time state-transition microsimulation model of the natural history of radiographic knee OA. In this article, we report on the model structure, derivation of input data, validation of baseline model parameters against external data sources, and validation of model outputs by comparison of the predicted population health loss with previous estimates. The NZ-MOA model simulates both the structural progression of radiographic knee OA and the stochastic development of multiple disease symptoms. Input parameters were sourced from NZ population-based data where possible, and from international sources where NZ-specific data were not available. The predicted distributions of structural OA severity and health utility detriments associated with OA were externally validated against other sources of evidence, and uncertainty resulting from key input parameters was quantified. The resulting lifetime and current population health-loss burden was consistent with estimates of previous studies. The new NZ-MOA model provides reliable estimates of the health loss associated with knee OA in the NZ population. The model structure is suitable for analysis of the effects of a range of potential treatments, and will be used in future work to evaluate the cost-effectiveness of recommended interventions within the NZ healthcare system.

  13. Combined Uncertainty and A-Posteriori Error Bound Estimates for CFD Calculations: Theory and Implementation

    NASA Technical Reports Server (NTRS)

    Barth, Timothy J.

    2014-01-01

    Simulation codes often utilize finite-dimensional approximations, resulting in numerical error. Examples include numerical methods utilizing grids and finite-dimensional basis functions, and particle methods using a finite number of particles. These same simulation codes also often contain sources of uncertainty, for example, uncertain parameters and fields associated with the imposition of initial and boundary data, uncertain physical model parameters such as chemical reaction rates, mixture model parameters, material property parameters, etc.

  14. Source Parameter Estimation using the Second-order Closure Integrated Puff Model

    DTIC Science & Technology

    The sensor measurements are categorized as triggered and non-triggered based on the recorded concentration measurements and a threshold concentration value. Using each measured value, sources of adjoint material are created from the triggered and non-triggered sensors, and the adjoint transport equations are solved to predict the adjoint concentration fields. The adjoint source strength is inversely proportional to the concentration measurement.

  15. Quantitative evaluation of water quality in the coastal zone by remote sensing

    NASA Technical Reports Server (NTRS)

    James, W. P.

    1971-01-01

    Remote sensing as a tool in a waste management program is discussed. By monitoring both the pollution sources and the environmental quality, the interaction between the components of the estuarine system was observed. The need for in situ sampling is reduced with the development of improved calibrated, multichannel sensors. Remote sensing is used for: (1) pollution source determination, (2) mapping the influence zone of the waste source on water quality parameters, and (3) estimating the magnitude of the water quality parameters. Diffusion coefficients and circulation patterns can also be determined by remote sensing, along with subtle changes in vegetative patterns and density.

  16. An Improved Method for Seismic Event Depth and Moment Tensor Determination: CTBT Related Application

    NASA Astrophysics Data System (ADS)

    Stachnik, J.; Rozhkov, M.; Baker, B.

    2016-12-01

    According to the Protocol to the CTBT, the International Data Center is required to conduct expert technical analysis and special studies to improve event parameters and assist States Parties in identifying the source of a specific event. Determination of a seismic event's source mechanism and depth is part of these tasks. It is typically done through a strategic linearized inversion of the waveforms for a complete or subset of source parameters, or through a similarly defined grid search over precomputed Green's functions created for particular source models. We show preliminary results using the latter approach from an improved software design, applied on a moderately powered computer. In this development we tried to be compliant with different modes of the CTBT monitoring regime and to cover a wide range of source-receiver distances (regional to teleseismic), resolve shallow source depths, provide full moment tensor solutions based on body and surface wave recordings, be fast enough to satisfy both on-demand studies and automatic processing, and properly incorporate observed waveforms and any a priori uncertainties as well as accurately estimate a posteriori uncertainties. The implemented HDF5-based Green's function pre-packaging allows much greater flexibility in utilizing different software packages and methods for computation. Further additions will include the rapid use of Instaseis/AXISEM full waveform synthetics added to a pre-computed Green's function archive. Along with traditional post-processing analysis of waveform misfits through several objective functions and variance reduction, we follow a probabilistic approach to assess the robustness of the moment tensor solution. In the course of this project, full moment tensor and depth estimates are determined for the DPRK 2009, 2013 and 2016 events and for shallow earthquakes using a new implementation of waveform fitting of teleseismic P waves. A full grid search over the entire moment tensor space is used to appropriately sample all possible solutions. A recent method by Tape & Tape (2012) to discretize the complete moment tensor space from a geometric perspective is used. Moment tensors for the DPRK events show isotropic percentages greater than 50%. Depth estimates for the DPRK events range from 1.0-1.4 km. Probabilistic uncertainty estimates on the moment tensor parameters provide robustness to the solution.
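
    The core of the grid-search approach is that synthetics are linear in the moment tensor, so each candidate solution costs only a matrix-vector product against precomputed Green's functions. A schematic sketch (the data layout and misfit choice are illustrative, not the IDC implementation):

        import numpy as np

        def grid_search_mt(data, greens, candidates):
            """Exhaustive moment tensor grid search (schematic).

            data       : (n,) observed waveforms, all traces concatenated
            greens     : (6, n) responses to the six elementary moment tensors
                         for one trial depth; repeat the search per depth
            candidates : (m, 6) moment tensors, e.g. a uniform discretization
                         of the source-type space (Tape & Tape, 2012)
            """
            best = None
            for mt in candidates:
                synthetic = mt @ greens            # linearity in the moment tensor
                misfit = np.sum((data - synthetic) ** 2)
                if best is None or misfit < best[0]:
                    best = (misfit, mt)
            return best

    Mapping the full misfit grid to probabilities, rather than keeping only the minimum, yields the posterior uncertainty estimates described above.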

  17. MODEST - JPL GEODETIC AND ASTROMETRIC VLBI MODELING AND PARAMETER ESTIMATION PROGRAM

    NASA Technical Reports Server (NTRS)

    Sovers, O. J.

    1994-01-01

    Observations of extragalactic radio sources in the gigahertz region of the radio frequency spectrum by two or more antennas, separated by a baseline as long as the diameter of the Earth, can be reduced, by radio interferometry techniques, to yield time delays and their rates of change. The Very Long Baseline Interferometric (VLBI) observables can be processed by the MODEST software to yield geodetic and astrometric parameters of interest in areas such as geophysical satellite and spacecraft tracking applications and geodynamics. As the accuracy of radio interferometry has improved, increasingly complete models of the delay and delay rate observables have been developed. MODEST is a delay model (MOD) and parameter estimation (EST) program that takes into account delay effects such as geometry, clock, troposphere, and the ionosphere. MODEST includes all known effects at the centimeter level in modeling. As the field evolves and new effects are discovered, these can be included in the model. In general, the model includes contributions to the observables from Earth orientation, antenna motion, clock behavior, atmospheric effects, and radio source structure. Within each of these categories, a number of unknown parameters may be estimated from the observations. Since all parts of the time delay model contain nearly linear parameter terms, a square-root-information filter (SRIF) linear least-squares algorithm is employed in parameter estimation. Flexibility (via dynamic memory allocation) in the MODEST code ensures that the same executable can process a wide array of problems. These range from a few hundred observations on a single baseline, yielding estimates of tens of parameters, to global solutions estimating tens of thousands of parameters from hundreds of thousands of observations at antennas widely distributed over the Earth's surface. Depending on memory and disk storage availability, large problems may be subdivided into more tractable pieces that are processed sequentially. MODEST is written in FORTRAN 77, C-language, and VAX ASSEMBLER for DEC VAX series computers running VMS. It requires 6Mb of RAM for execution. The standard distribution medium for this package is a 1600 BPI 9-track magnetic tape in DEC VAX BACKUP format. It is also available on a TK50 tape cartridge in DEC VAX BACKUP format. Instructions for use and sample input and output data are available on the distribution media. This program was released in 1993 and is a copyrighted work with all copyright vested in NASA.
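
    The SRIF update named here can be sketched compactly: prior information and new (whitened) observations are stacked and re-triangularized with an orthogonal factorization, which preserves the least-squares solution. A minimal sketch, assuming the prior information matrix is already upper triangular:

        import numpy as np

        def srif_update(R, z, A, b):
            """One square-root information filter measurement update (sketch).

            Prior information:  R @ theta ~ z   (R upper triangular)
            New measurements:   A @ theta ~ b   (unit-variance, whitened noise)
            Returns the updated (R, z); the estimate solves R @ theta = z
            by back-substitution.
            """
            n = R.shape[1]
            stacked = np.vstack([np.hstack([R, z[:, None]]),
                                 np.hstack([A, b[:, None]])])
            # QR factorization re-triangularizes without changing the
            # least-squares problem
            _, r = np.linalg.qr(stacked)
            return r[:n, :n], r[:n, n]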

  18. Misspecification in Latent Change Score Models: Consequences for Parameter Estimation, Model Evaluation, and Predicting Change.

    PubMed

    Clark, D Angus; Nuttall, Amy K; Bowles, Ryan P

    2018-01-01

    Latent change score models (LCS) are conceptually powerful tools for analyzing longitudinal data (McArdle & Hamagami, 2001). However, applications of these models typically include constraints on key parameters over time. Although practically useful, strict invariance over time in these parameters is unlikely in real data. This study investigates the robustness of LCS models when invariance over time is incorrectly imposed on key change-related parameters. Monte Carlo simulation methods were used to explore the impact of misspecification on parameter estimation, predicted trajectories of change, and model fit in the dual change score model, the foundational LCS. When constraints were incorrectly applied, several parameters, most notably the slope (i.e., constant change) factor mean and autoproportion coefficient, were severely and consistently biased, as were regression paths to the slope factor when external predictors of change were included. Standard fit indices indicated that the misspecified models fit well, partly because mean level trajectories over time were accurately captured. Loosening constraints improved the accuracy of parameter estimates, but estimates were more unstable, and models frequently failed to converge. Results suggest that potentially common sources of misspecification in LCS can produce distorted impressions of developmental processes, and that identifying and rectifying the situation is a challenge.

  19. Uncertainty Estimation in Elastic Full Waveform Inversion by Utilising the Hessian Matrix

    NASA Astrophysics Data System (ADS)

    Hagen, V. S.; Arntsen, B.; Raknes, E. B.

    2017-12-01

    Elastic Full Waveform Inversion (EFWI) is a computationally intensive iterative method for estimating elastic model parameters. A key element of EFWI is the numerical solution of the elastic wave equation, which forms the basis for quantifying the mismatch between synthetic (modelled) and measured seismic data. The misfit between the modelled and true receiver data is used to update the parameter model to yield a better fit between the modelled and true receiver signals. A common approach to the EFWI model update problem is to use a conjugate gradient search method. In this approach the resolution and cross-coupling of the estimated parameter update can be found by computing the full Hessian matrix. The resolution of the estimated model parameters depends on the chosen parametrisation, acquisition geometry, and temporal frequency range. Although some understanding has been gained, it is still not clear which elastic parameters can be reliably estimated under which conditions. With few exceptions, previous analyses have been based on arguments using radiation pattern analysis. We use the well-known adjoint-state technique, extended to compute the Hessian acting on a model perturbation, to conduct our study. The Hessian is used to infer parameter resolution and cross-coupling for different selections of models, acquisition geometries, and data types, including streamer and ocean bottom seismic recordings. Information about the model uncertainty is obtained from the exact Hessian, and is essential when evaluating the quality of estimated parameters due to the strong influence of source-receiver geometry and frequency content. Investigation is done on both a homogeneous model and the Gullfaks model, where we illustrate the influence of offset on parameter resolution and cross-coupling as a way of estimating uncertainty.
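
    Where the exact Hessian is too costly to form, its action on a model perturbation can be approximated from two gradient evaluations. This finite-difference stand-in is not the adjoint-state expansion used in the study, but it computes the same object and is a common cross-check:

        import numpy as np

        def hessian_vector_product(grad, m, v, eps=1e-6):
            """Central-difference approximation of the Hessian acting on v.

            grad : callable returning the misfit gradient at model m
                   (in FWI this is itself an adjoint-state computation)
            """
            return (grad(m + eps * v) - grad(m - eps * v)) / (2.0 * eps)

    Probing with unit perturbations in each parameter class (e.g. P velocity, S velocity, density) exposes the resolution and cross-coupling structure discussed above.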

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dreger, Douglas S.; Ford, Sean R.; Walter, William R.

    Research was carried out investigating the feasibility of using a regional distance seismic waveform moment tensor inverse procedure to estimate source parameters of nuclear explosions and to use the source inversion results to develop a source-type discrimination capability. The results of the research indicate that it is possible to robustly determine the seismic moment tensor of nuclear explosions, and when compared to natural seismicity in the context of a Hudson et al. (1989) source-type diagram they are found to separate from populations of earthquakes and underground cavity collapse seismic sources.

  1. Improving the realism of hydrologic model through multivariate parameter estimation

    NASA Astrophysics Data System (ADS)

    Rakovec, Oldrich; Kumar, Rohini; Attinger, Sabine; Samaniego, Luis

    2017-04-01

    Increased availability and quality of near real-time observations should improve understanding of the predictive skill of hydrological models. Recent studies have shown the limited capability of river discharge data alone to adequately constrain different components of distributed model parameterizations. In this study, the GRACE satellite-based total water storage (TWS) anomaly is used to complement the discharge data with the aim of improving the fidelity of the mesoscale hydrologic model (mHM) through multivariate parameter estimation. The study is conducted in 83 European basins covering a wide range of hydro-climatic regimes. The model parameterization complemented with the TWS anomalies leads to statistically significant improvements in (1) discharge simulations during low-flow periods, and (2) evapotranspiration estimates, which are evaluated against independent (FLUXNET) data. Overall, there is no significant deterioration in model performance for the discharge simulations when complemented by information from the TWS anomalies. However, considerable changes in the partitioning of precipitation into runoff components are noticed depending on the in- or exclusion of TWS during the parameter estimation. A cross-validation test carried out to assess the transferability and robustness of the calibrated parameters to other locations further confirms the benefit of the complementary TWS data. In particular, the evapotranspiration estimates show more robust performance when TWS data are incorporated during the parameter estimation, in comparison with the benchmark model constrained against discharge only. This study highlights the value of incorporating multiple data sources during parameter estimation to improve the overall realism of a hydrologic model and its applications over large domains. Rakovec, O., Kumar, R., Attinger, S. and Samaniego, L. (2016): Improving the realism of hydrologic model functioning through multivariate parameter estimation. Water Resour. Res., 52, http://dx.doi.org/10.1002/2016WR019430
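
    A minimal sketch of the multivariate calibration idea: combine a discharge metric and a TWS-anomaly metric into one objective. The equal weighting and the use of 1 - NSE for both terms are assumptions for illustration, not the mHM implementation.

        import numpy as np

        def multivariate_objective(q_obs, q_sim, tws_obs, tws_sim, w=0.5):
            """Weighted objective over discharge and GRACE TWS anomalies
            (to be minimized by the parameter-estimation algorithm)."""
            def one_minus_nse(obs, sim):
                return np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)
            return (w * one_minus_nse(q_obs, q_sim)
                    + (1.0 - w) * one_minus_nse(tws_obs, tws_sim))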

  2. Evaluation of the site effect with Heuristic Methods

    NASA Astrophysics Data System (ADS)

    Torres, N. N.; Ortiz-Aleman, C.

    2017-12-01

    The seismic site response in an area depends mainly on the local geological and topographical conditions. Estimation of variations in ground motion can contribute significantly to seismic hazard assessment, helping to reduce human and economic losses. Site response estimation can be posed as a parameterized inversion problem which allows separating source and path effects. Generalized inversion (Field and Jacob, 1995) is one of the alternative methods to estimate the local seismic response, and involves solving a strongly non-linear multiparametric problem. In this work, local seismic response was estimated using global optimization methods (Genetic Algorithms and Simulated Annealing), which allowed us to increase the range of explored solutions in a nonlinear search as compared to other conventional linear methods. Using VEOX Network velocity records collected from August 2007 to March 2009, we estimate the source, path and site parameters corresponding to the S-wave amplitude spectra of the velocity seismograms. The inverted parameters resulting from this simultaneous inversion approach show excellent agreement, not only in terms of the fit between observed and calculated spectra, but also when compared to previous work by several authors.
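
    A generic simulated annealing loop of the kind named here is sketched below; in the site-response application, energy(x) would be the misfit between observed and modelled S-wave amplitude spectra for the source, path and site parameters in x. Step size and cooling schedule are illustrative.

        import numpy as np

        def simulated_annealing(energy, x0, step=0.1, t0=1.0, cooling=0.995,
                                n_iter=10000, seed=0):
            """Generic simulated annealing minimizer (sketch)."""
            rng = np.random.default_rng(seed)
            x = np.asarray(x0, dtype=float)
            e, t = energy(x), t0
            best_x, best_e = x.copy(), e
            for _ in range(n_iter):
                cand = x + step * rng.standard_normal(x.size)
                e_cand = energy(cand)
                # Always accept improvements; accept uphill moves with
                # probability exp(-(e_cand - e) / t) to escape local minima
                if e_cand < e or rng.uniform() < np.exp(-(e_cand - e) / t):
                    x, e = cand, e_cand
                    if e < best_e:
                        best_x, best_e = x.copy(), e
                t *= cooling  # geometric cooling schedule
            return best_x, best_e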

  3. Underwater passive acoustic localization of Pacific walruses in the northeastern Chukchi Sea.

    PubMed

    Rideout, Brendan P; Dosso, Stan E; Hannay, David E

    2013-09-01

    This paper develops and applies a linearized Bayesian localization algorithm based on acoustic arrival times of marine mammal vocalizations at spatially-separated receivers which provides three-dimensional (3D) location estimates with rigorous uncertainty analysis. To properly account for uncertainty in receiver parameters (3D hydrophone locations and synchronization times) and environmental parameters (water depth and sound-speed correction), these quantities are treated as unknowns constrained by prior estimates and prior uncertainties. Unknown scaling factors on both the prior and arrival-time uncertainties are estimated by minimizing Akaike's Bayesian information criterion (a maximum entropy condition). Maximum a posteriori estimates for sound source locations and times, receiver parameters, and environmental parameters are calculated simultaneously using measurements of arrival times for direct and interface-reflected acoustic paths. Posterior uncertainties for all unknowns incorporate both arrival time and prior uncertainties. Monte Carlo simulation results demonstrate that, for the cases considered here, linearization errors are small and the lack of an accurate sound-speed profile does not cause significant biases in the estimated locations. A sequence of Pacific walrus vocalizations, recorded in the Chukchi Sea northwest of Alaska, is localized using this technique, yielding a track estimate and uncertainties with an estimated speed comparable to normal walrus swim speeds.
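
    Stripped of the Bayesian machinery (priors on receiver and environmental parameters, uncertainty scaling), the core of such a localization is a linearized least-squares fit of source position and emission time to direct-path arrival times. A minimal Gauss-Newton sketch under an assumed constant sound speed:

        import numpy as np

        def locate_source(receivers, t_obs, c=1480.0, n_iter=20):
            """Localize an acoustic source from direct-path arrival times (sketch).

            receivers : (n, 3) hydrophone positions (m)
            t_obs     : (n,) measured arrival times (s)
            Solves for source position x and emission time t0; the full
            Bayesian treatment would also estimate receiver/environment
            parameters and propagate posterior uncertainties.
            """
            x = receivers.mean(axis=0)
            t0 = t_obs.min()
            for _ in range(n_iter):
                d = np.linalg.norm(receivers - x, axis=1)
                resid = t_obs - (t0 + d / c)
                # Jacobian of predicted arrival times w.r.t. (x, y, z, t0)
                J = np.hstack([-(receivers - x) / (c * d[:, None]),
                               np.ones((len(t_obs), 1))])
                update, *_ = np.linalg.lstsq(J, resid, rcond=None)
                x, t0 = x + update[:3], t0 + update[3]
            return x, t0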

  4. Estimating spatially distributed turbulent heat fluxes from high-resolution thermal imagery acquired with a UAV system

    PubMed Central

    Brenner, Claire; Thiem, Christina Elisabeth; Wizemann, Hans-Dieter; Bernhardt, Matthias; Schulz, Karsten

    2017-01-01

    In this study, high-resolution thermal imagery acquired with a small unmanned aerial vehicle (UAV) is used to map evapotranspiration (ET) at a grassland site in Luxembourg. The land surface temperature (LST) information from the thermal imagery is the key input to a one-source and two-source energy balance model. While the one-source model treats the surface as a single uniform layer, the two-source model partitions the surface temperature and fluxes into soil and vegetation components. It thus explicitly accounts for the different contributions of both components to surface temperature as well as turbulent flux exchange with the atmosphere. Contrary to the two-source model, the one-source model requires an empirical adjustment parameter in order to account for the effect of the two components. Turbulent heat flux estimates of both modelling approaches are compared to eddy covariance (EC) measurements using the high-resolution input imagery UAVs provide. In this comparison, the effect of different methods for energy balance closure of the EC data on the agreement between modelled and measured fluxes is also analysed. Additionally, the sensitivity of the one-source model to the derivation of the empirical adjustment parameter is tested. Due to the very dry and hot conditions during the experiment, pronounced thermal patterns developed over the grassland site. These patterns result in spatially variable turbulent heat fluxes. The model comparison indicates that both models are able to derive ET estimates that compare well with EC measurements under these conditions. However, the two-source model, with a more complex treatment of the energy and surface temperature partitioning between the soil and vegetation, outperformed the simpler one-source model in estimating sensible and latent heat fluxes. This is consistent with findings from prior studies. For the one-source model, a time-variant expression of the adjustment parameter (to account for the difference between aerodynamic and radiometric temperature) that depends on the surface-to-air temperature gradient yielded the best agreement with EC measurements. This study showed that the applied UAV system equipped with a dual-camera set-up allows for the acquisition of thermal imagery with high spatial and temporal resolution that illustrates the small-scale heterogeneity of thermal surface properties. The UAV-based thermal imagery therefore provides the means for analysing patterns of LST and other surface properties with a high level of detail that cannot be obtained by traditional remote sensing methods. PMID:28515537

  5. Estimating spatially distributed turbulent heat fluxes from high-resolution thermal imagery acquired with a UAV system.

    PubMed

    Brenner, Claire; Thiem, Christina Elisabeth; Wizemann, Hans-Dieter; Bernhardt, Matthias; Schulz, Karsten

    2017-05-19

    In this study, high-resolution thermal imagery acquired with a small unmanned aerial vehicle (UAV) is used to map evapotranspiration (ET) at a grassland site in Luxembourg. The land surface temperature (LST) information from the thermal imagery is the key input to a one-source and two-source energy balance model. While the one-source model treats the surface as a single uniform layer, the two-source model partitions the surface temperature and fluxes into soil and vegetation components. It thus explicitly accounts for the different contributions of both components to surface temperature as well as turbulent flux exchange with the atmosphere. Contrary to the two-source model, the one-source model requires an empirical adjustment parameter in order to account for the effect of the two components. Turbulent heat flux estimates of both modelling approaches are compared to eddy covariance (EC) measurements using the high-resolution input imagery UAVs provide. In this comparison, the effect of different methods for energy balance closure of the EC data on the agreement between modelled and measured fluxes is also analysed. Additionally, the sensitivity of the one-source model to the derivation of the empirical adjustment parameter is tested. Due to the very dry and hot conditions during the experiment, pronounced thermal patterns developed over the grassland site. These patterns result in spatially variable turbulent heat fluxes. The model comparison indicates that both models are able to derive ET estimates that compare well with EC measurements under these conditions. However, the two-source model, with a more complex treatment of the energy and surface temperature partitioning between the soil and vegetation, outperformed the simpler one-source model in estimating sensible and latent heat fluxes. This is consistent with findings from prior studies. For the one-source model, a time-variant expression of the adjustment parameter (to account for the difference between aerodynamic and radiometric temperature) that depends on the surface-to-air temperature gradient yielded the best agreement with EC measurements. This study showed that the applied UAV system equipped with a dual-camera set-up allows for the acquisition of thermal imagery with high spatial and temporal resolution that illustrates the small-scale heterogeneity of thermal surface properties. The UAV-based thermal imagery therefore provides the means for analysing patterns of LST and other surface properties with a high level of detail that cannot be obtained by traditional remote sensing methods.

  6. Blind source separation and localization using microphone arrays

    NASA Astrophysics Data System (ADS)

    Sun, Longji

    The blind source separation and localization problem for audio signals is studied using microphone arrays. Pure delay mixtures of source signals typically encountered in outdoor environments are considered. Our proposed approach utilizes the subspace methods, including multiple signal classification (MUSIC) and estimation of signal parameters via rotational invariance techniques (ESPRIT) algorithms, to estimate the directions of arrival (DOAs) of the sources from the collected mixtures. Since audio signals are generally considered broadband, the DOA estimates at frequencies with the large sum of squared amplitude values are combined to obtain the final DOA estimates. Using the estimated DOAs, the corresponding mixing and demixing matrices are computed, and the source signals are recovered using the inverse short time Fourier transform. Subspace methods take advantage of the spatial covariance matrix of the collected mixtures to achieve robustness to noise. While the subspace methods have been studied for localizing radio frequency signals, audio signals have their special properties. For instance, they are nonstationary, naturally broadband and analog. All of these make the separation and localization for the audio signals more challenging. Moreover, our algorithm is essentially equivalent to the beamforming technique, which suppresses the signals in unwanted directions and only recovers the signals in the estimated DOAs. Several crucial issues related to our algorithm and their solutions have been discussed, including source number estimation, spatial aliasing, artifact filtering, different ways of mixture generation, and source coordinate estimation using multiple arrays. Additionally, comprehensive simulations and experiments have been conducted to examine various aspects of the algorithm. Unlike the existing blind source separation and localization methods, which are generally time consuming, our algorithm needs signal mixtures of only a short duration and therefore supports real-time implementation.
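
    For reference, a narrowband MUSIC pseudospectrum for a uniform linear array is sketched below; the dissertation's broadband audio case combines such estimates across the frequency bins with the largest energy. The array geometry and angle grid are illustrative.

        import numpy as np

        def music_spectrum(X, n_sources, d_over_lambda=0.5, n_grid=361):
            """Narrowband MUSIC pseudospectrum for a uniform linear array.

            X : (n_mics, n_snapshots) complex snapshots at one frequency bin.
            DOA estimates are the n_sources largest peaks of the spectrum.
            """
            n_mics = X.shape[0]
            R = X @ X.conj().T / X.shape[1]          # spatial covariance matrix
            _, v = np.linalg.eigh(R)                 # eigenvalues ascending
            En = v[:, : n_mics - n_sources]          # noise subspace
            angles = np.linspace(-90.0, 90.0, n_grid)
            p = np.empty(n_grid)
            for i, th in enumerate(angles):
                # Steering vector for a plane wave arriving at angle th
                a = np.exp(-2j * np.pi * d_over_lambda
                           * np.arange(n_mics) * np.sin(np.radians(th)))
                p[i] = 1.0 / np.real(a.conj() @ En @ En.conj().T @ a)
            return angles, p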

  7. Hierarchical Bayesian Model Averaging for Non-Uniqueness and Uncertainty Analysis of Artificial Neural Networks

    NASA Astrophysics Data System (ADS)

    Fijani, E.; Chitsazan, N.; Nadiri, A.; Tsai, F. T.; Asghari Moghaddam, A.

    2012-12-01

    Artificial Neural Networks (ANNs) have been widely used to estimate concentrations of chemicals in groundwater systems. However, estimation uncertainty is rarely discussed in the literature. Uncertainty in ANN output stems from three sources: ANN inputs, ANN parameters (weights and biases), and ANN structures. Uncertainty in ANN inputs may come from input data selection and/or input data error. ANN parameters are naturally uncertain because they are maximum-likelihood estimated. ANN structure is also uncertain because there is no unique ANN model for a given case. Therefore, multiple plausible ANN models generally result for a study. One might ask why good models have to be ignored in favor of the best model in traditional estimation. What is the ANN estimation variance? How do the variances from different ANN models accumulate to the total estimation variance? To answer these questions we propose a Hierarchical Bayesian Model Averaging (HBMA) framework. Instead of choosing one ANN model (the best ANN model) for estimation, HBMA averages the outputs of all plausible ANN models. The model weights are based on the evidence of data. Therefore, HBMA avoids overconfidence in the single best ANN model. In addition, HBMA is able to analyze uncertainty propagation through the aggregation of ANN models in a hierarchical framework. This method is applied for estimation of fluoride concentration in the Poldasht plain and the Bazargan plain in Iran. Unusually high fluoride concentrations in the Poldasht and Bazargan plains have caused negative effects on public health. Management of this anomaly requires estimation of the fluoride concentration distribution in the area. The results show that HBMA provides a knowledge-decision-based framework that facilitates analyzing and quantifying ANN estimation uncertainties from different sources. In addition, HBMA allows comparative evaluation of the realizations for each source of uncertainty by segregating the uncertainty sources in a hierarchical framework. Fluoride concentration estimates using the HBMA method show better agreement with the observation data in the test step because they are not based on a single model when no model carries a dominant weight.
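
    The averaging step at the heart of such a framework can be sketched in a few lines: model weights follow from the evidence, and the predictive variance splits into within-model and between-model parts. This is a one-level illustration only; HBMA propagates the same decomposition through a hierarchy of uncertainty sources.

        import numpy as np

        def bma_combine(means, variances, log_evidence):
            """Combine per-model predictions by posterior model probability.

            means, variances : per-model predictive means and variances
            log_evidence     : per-model log evidence (or an approximation)
            """
            w = np.exp(log_evidence - np.max(log_evidence))
            w /= w.sum()                                  # model weights
            mean = np.sum(w * means)
            within = np.sum(w * variances)                # within-model variance
            between = np.sum(w * (means - mean) ** 2)     # between-model variance
            return mean, within + between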

  8. Bayesian inference on EMRI signals using low frequency approximations

    NASA Astrophysics Data System (ADS)

    Ali, Asad; Christensen, Nelson; Meyer, Renate; Röver, Christian

    2012-07-01

    Extreme mass ratio inspirals (EMRIs) are thought to be one of the most exciting gravitational wave sources to be detected with LISA. Due to their complicated nature and weak amplitudes, the detection and parameter estimation of such sources is a challenging task. In this paper we present a statistical methodology based on Bayesian inference in which the estimation of parameters is carried out by advanced Markov chain Monte Carlo (MCMC) algorithms such as parallel tempering MCMC. We analysed high and medium mass EMRI systems that fall well inside the low frequency range of LISA. In the context of the Mock LISA Data Challenges, our investigation and results are also the first instance in which a fully Markovian algorithm is applied for EMRI searches. Results show that our algorithm worked well in recovering EMRI signals from different (simulated) LISA data sets having single and multiple EMRI sources, and holds great promise for posterior computation under more realistic conditions. The search and estimation methods presented in this paper are general in their nature, and can be applied in any other scenario such as AdLIGO, AdVIRGO and the Einstein Telescope with their respective response functions.
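
    The distinguishing step of parallel tempering is the exchange move between chains running at different temperatures. A schematic sketch (each chain is otherwise advanced by an ordinary MCMC kernel; the temperature ladder is a tuning choice):

        import numpy as np

        def pt_swap(states, log_posts, betas, rng):
            """One sweep of replica-exchange swaps between neighbouring chains.

            states, log_posts : per-chain parameter vectors and (untempered)
                                log posterior values, as Python lists
            betas             : inverse temperatures, with betas[0] = 1.0
                                being the target chain
            """
            for i in range(len(betas) - 1):
                # Metropolis acceptance for exchanging neighbouring replicas
                log_alpha = (betas[i] - betas[i + 1]) * (log_posts[i + 1] - log_posts[i])
                if np.log(rng.uniform()) < log_alpha:
                    states[i], states[i + 1] = states[i + 1], states[i]
                    log_posts[i], log_posts[i + 1] = log_posts[i + 1], log_posts[i]
            return states, log_posts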

  9. Industrial Demand Module - NEMS Documentation

    EIA Publications

    2014-01-01

    Documents the objectives, analytical approach, and development of the National Energy Modeling System (NEMS) Industrial Demand Module. The report catalogues and describes model assumptions, computational methodology, parameter estimation techniques, and model source code.

  10. Uncertainty of inhalation dose coefficients for representative physical and chemical forms of iodine-131

    NASA Astrophysics Data System (ADS)

    Harvey, Richard Paul, III

    Releases of radioactive material have occurred at various Department of Energy (DOE) weapons facilities and facilities associated with the nuclear fuel cycle in the generation of electricity. Many different radionuclides have been released to the environment with resulting exposure of the population to these various sources of radioactivity. Radioiodine has been released from a number of these facilities and is a potential public health concern due to its physical and biological characteristics. Iodine exists as various isotopes, but our focus is on 131I due to its relatively long half-life, its prevalence in atmospheric releases and its contribution to offsite dose. The assumption of physical and chemical form is speculated to have a profound impact on the deposition of radioactive material within the respiratory tract. In the case of iodine, it has been shown that more than one type of physical and chemical form may be released to, or exist in, the environment; iodine can exist as a particle or as a gas. The gaseous species can be further segregated based on chemical form: elemental, inorganic, and organic iodides. Chemical compounds in each class are assumed to behave similarly with respect to biochemistry. Studies at Oak Ridge National Laboratories have demonstrated that 131I is released as a particulate, as well as in elemental, inorganic and organic chemical form. The internal dose estimate from 131I may be very different depending on the effect that chemical form has on fractional deposition, gas uptake, and clearance in the respiratory tract. There are many sources of uncertainty in the estimation of environmental dose including source term, airborne transport of radionuclides, and internal dosimetry. Knowledge of uncertainty in internal dosimetry is essential for estimating dose to members of the public and for determining total uncertainty in dose estimation. An important calculational step in any lung model is the regional estimation of deposition fractions and gas uptake of radionuclides in various regions of the lung. Variability in regional radionuclide deposition within lung compartments may significantly contribute to the overall uncertainty of the lung model. The uncertainty of lung deposition and biological clearance is dependent upon physiological and anatomical parameters of individuals as well as characteristic parameters of the particulate material. These parameters introduce uncertainty into internal dose estimates due to their inherent variability. Anatomical and physiological input parameters are age and gender dependent. This work has determined the uncertainty in internal dose estimates and the sensitive parameters involved in modeling particulate deposition and gas uptake of different physical and chemical forms of 131I with age and gender dependencies.

  11. Parameter Estimation for Compact Binaries with Ground-Based Gravitational-Wave Observations Using the LALInference

    NASA Technical Reports Server (NTRS)

    Veitch, J.; Raymond, V.; Farr, B.; Farr, W.; Graff, P.; Vitale, S.; Aylott, B.; Blackburn, K.; Christensen, N.; Coughlin, M.

    2015-01-01

    The Advanced LIGO and Advanced Virgo gravitational wave (GW) detectors will begin operation in the coming years, with compact binary coalescence events a likely source for the first detections. The gravitational waveforms emitted directly encode information about the sources, including the masses and spins of the compact objects. Recovering the physical parameters of the sources from the GW observations is a key analysis task. This work describes the LALInference software library for Bayesian parameter estimation of compact binary signals, which builds on several previous methods to provide a well-tested toolkit which has already been used for several studies. We show that our implementation is able to correctly recover the parameters of compact binary signals from simulated data from the advanced GW detectors. We demonstrate this with a detailed comparison on three compact binary systems: a binary neutron star (BNS), a neutron star - black hole binary (NSBH) and a binary black hole (BBH), where we show a cross-comparison of results obtained using three independent sampling algorithms. These systems were analysed with non-spinning, aligned spin and generic spin configurations respectively, showing that consistent results can be obtained even with the full 15-dimensional parameter space of the generic spin configurations. We also demonstrate statistically that the Bayesian credible intervals we recover correspond to frequentist confidence intervals under correct prior assumptions by analysing a set of 100 signals drawn from the prior. We discuss the computational cost of these algorithms, and describe the general and problem-specific sampling techniques we have used to improve the efficiency of sampling the compact binary coalescence (CBC) parameter space.

  12. A fast and robust method for moment tensor and depth determination of shallow seismic events in CTBT related studies.

    NASA Astrophysics Data System (ADS)

    Baker, Ben; Stachnik, Joshua; Rozhkov, Mikhail

    2017-04-01

    Under the Protocol to the Comprehensive Nuclear-Test-Ban Treaty, the International Data Center is required to conduct expert technical analysis and special studies to improve event parameters and assist States Parties in identifying the source of a specific event. Determination of a seismic event's source mechanism and depth is closely related to these tasks. It is typically done through a strategic linearized inversion of the waveforms for a complete or subset of source parameters, or through a similarly defined grid search over precomputed Green's functions created for particular source models. In this presentation we demonstrate preliminary results obtained with the latter approach from an improved software design. In this development we tried to be compliant with different modes of the CTBT monitoring regime and to cover a wide range of source-receiver distances (regional to teleseismic), resolve shallow source depths, provide full moment tensor solutions based on body and surface wave recordings, be fast enough to satisfy both on-demand studies and automatic processing, and properly incorporate observed waveforms and any a priori uncertainties as well as accurately estimate a posteriori uncertainties. Posterior distributions of moment tensor parameters show narrow peaks where a significant number of reliable surface wave observations are available. For earthquake examples, fault orientation (strike, dip, and rake) posterior distributions also provide results consistent with published catalogues. Inclusion of observations on horizontal components will provide further constraints. In addition, the calculation of teleseismic P-wave Green's functions is improved through prior analysis to determine an appropriate attenuation parameter for each source-receiver path. The implemented HDF5-based Green's function pre-packaging allows much greater flexibility in utilizing different software packages and methods for computation. Further additions will include the rapid use of Instaseis/AXISEM full waveform synthetics added to a pre-computed Green's function archive. Along with traditional post-processing analysis of waveform misfits through several objective functions and variance reduction, we follow a probabilistic approach to assess the robustness of the moment tensor solution. In the course of this project, full moment tensor and depth estimates are determined for the DPRK events and shallow earthquakes using a new implementation of waveform fitting of teleseismic P waves. A full grid search over the entire moment tensor space is used to appropriately sample all possible solutions. A recent method by Tape & Tape (2012) to discretize the complete moment tensor space from a geometric perspective is used. Probabilistic uncertainty estimates on the moment tensor parameters provide robustness to the solution.

  13. Strategies for fitting nonlinear ecological models in R, AD Model Builder, and BUGS

    USGS Publications Warehouse

    Bolker, Benjamin M.; Gardner, Beth; Maunder, Mark; Berg, Casper W.; Brooks, Mollie; Comita, Liza; Crone, Elizabeth; Cubaynes, Sarah; Davies, Trevor; de Valpine, Perry; Ford, Jessica; Gimenez, Olivier; Kéry, Marc; Kim, Eun Jung; Lennert-Cody, Cleridy; Magunsson, Arni; Martell, Steve; Nash, John; Nielson, Anders; Regentz, Jim; Skaug, Hans; Zipkin, Elise

    2013-01-01

    1. Ecologists often use nonlinear fitting techniques to estimate the parameters of complex ecological models, with attendant frustration. This paper compares three open-source model fitting tools and discusses general strategies for defining and fitting models. 2. R is convenient and (relatively) easy to learn, AD Model Builder is fast and robust but comes with a steep learning curve, while BUGS provides the greatest flexibility at the price of speed. 3. Our model-fitting suggestions range from general cultural advice (where possible, use the tools and models that are most common in your subfield) to specific suggestions about how to change the mathematical description of models to make them more amenable to parameter estimation. 4. A companion web site (https://groups.nceas.ucsb.edu/nonlinear-modeling/projects) presents detailed examples of application of the three tools to a variety of typical ecological estimation problems; each example links both to a detailed project report and to full source code and data.

  14. Comparison of Measured to Predicted Estimations of Nonpoint Source Contaminants Using Conservation Practices in an Agriculturally-Dominated Watershed in Northeast Arkansas, USA.

    PubMed

    Frasher, Sarah K; Woodruff, Tracy M; Bouldin, Jennifer L

    2016-06-01

    In efforts to reduce nonpoint source runoff and improve water quality, Best Management Practices (BMPs) were implemented in the Outlet Larkin Creek Watershed. Farmers need to make scientifically informed decisions concerning BMPs addressing contaminants from agricultural fields. The BMP Tool was developed from previous studies to estimate BMP effectiveness at reducing nonpoint source contaminants. The purpose of this study was to compare the measured percent reductions of dissolved phosphorus (DP) and total suspended solids to the percent reductions reported by the BMP Tool for validation. The BMP Tool predictions were similar to the measured water quality parameters. Construction of a sedimentation pond resulted in a 74%-76% reduction in DP, compared to the 80% predicted by the BMP Tool. However, further research is needed to validate the tool for additional water quality parameters. The BMP Tool is recommended for future BMP implementation as a useful predictor for farmers.

  15. Developing population models with data from marked individuals

    USGS Publications Warehouse

    Hae Yeong Ryu,; Kevin T. Shoemaker,; Eva Kneip,; Anna Pidgeon,; Patricia Heglund,; Brooke Bateman,; Thogmartin, Wayne E.; Reşit Akçakaya,

    2016-01-01

    Population viability analysis (PVA) is a powerful tool for biodiversity assessments, but its use has been limited because of the requirements for fully specified population models such as demographic structure, density-dependence, environmental stochasticity, and specification of uncertainties. Developing a fully specified population model from commonly available data sources – notably, mark–recapture studies – remains complicated due to lack of practical methods for estimating fecundity, true survival (as opposed to apparent survival), natural temporal variability in both survival and fecundity, density-dependence in the demographic parameters, and uncertainty in model parameters. We present a general method that estimates all the key parameters required to specify a stochastic, matrix-based population model, constructed using a long-term mark–recapture dataset. Unlike standard mark–recapture analyses, our approach provides estimates of true survival rates and fecundities, their respective natural temporal variabilities, and density-dependence functions, making it possible to construct a population model for long-term projection of population dynamics. Furthermore, our method includes a formal quantification of parameter uncertainty for global (multivariate) sensitivity analysis. We apply this approach to 9 bird species and demonstrate the feasibility of using data from the Monitoring Avian Productivity and Survivorship (MAPS) program. Bias-correction factors for raw estimates of survival and fecundity derived from mark–recapture data (apparent survival and juvenile:adult ratio, respectively) were non-negligible, and corrected parameters were generally more biologically reasonable than their uncorrected counterparts. Our method allows the development of fully specified stochastic population models using a single, widely available data source, substantially reducing the barriers that have until now limited the widespread application of PVA. This method is expected to greatly enhance our understanding of the processes underlying population dynamics and our ability to analyze viability and project trends for species of conservation concern.
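
    Once survival, fecundity, and their temporal variabilities are estimated, the projection itself is a stochastic matrix iteration. A deliberately simplified two-stage sketch (the stage structure, the Gaussian perturbation of vital rates, and all parameter values are illustrative, not the authors' fitted models):

        import numpy as np

        def project_population(n0, survival, fecundity, cv, n_years, seed=0):
            """Stochastic projection of a juvenile/adult matrix model (sketch).

            n0       : initial [juvenile, adult] abundances
            survival : mean [juvenile, adult] survival rates
            cv       : coefficient of variation for temporal variability
            """
            rng = np.random.default_rng(seed)
            n = np.asarray(n0, dtype=float)
            survival = np.asarray(survival, dtype=float)
            sizes = [n.sum()]
            for _ in range(n_years):
                # Draw this year's vital rates around their means
                s = np.clip(survival * (1 + cv * rng.standard_normal(2)), 0.0, 1.0)
                f = max(fecundity * (1 + cv * rng.standard_normal()), 0.0)
                A = np.array([[0.0,  f],        # recruits produced per adult
                              [s[0], s[1]]])    # maturation and adult survival
                n = A @ n
                sizes.append(n.sum())
            return np.array(sizes)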

  16. Critical bounds on noise and SNR for robust estimation of real-time brain activity from functional near infra-red spectroscopy.

    PubMed

    Aqil, Muhammad; Jeong, Myung Yung

    2018-04-24

The robust characterization of real-time brain activity carries potential for many applications. However, the contamination of measured signals by various instrumental, environmental, and physiological sources of noise introduces a substantial amount of signal variance and, consequently, challenges real-time estimation of contributions from underlying neuronal sources. Functional near infra-red spectroscopy (fNIRS) is an emerging imaging modality whose real-time potential is yet to be fully explored. The objectives of the current study are to (i) validate a time-dependent linear model of hemodynamic responses in fNIRS, and (ii) test the robustness of this approach against measurement noise (instrumental and physiological) and mis-specification of the hemodynamic response basis functions (amplitude, latency, and duration). We propose a linear hemodynamic model with time-varying parameters, which are estimated (adapted and tracked) using a dynamic recursive least squares algorithm. Owing to the linear nature of the activation model, the problem of achieving robust convergence to an accurate estimation of the model parameters is recast as a problem of parameter error stability around the origin. We show that robust convergence of the proposed method is guaranteed in the presence of an acceptable degree of model misspecification, and we derive an upper bound on noise under which reliable parameters can still be inferred, as well as a lower bound on the signal-to-noise ratio above which reliable parameters can still be inferred from a channel/voxel. Whilst here applied to fNIRS, the proposed methodology is applicable to other hemodynamic-based imaging technologies such as functional magnetic resonance imaging. Copyright © 2018 Elsevier Inc. All rights reserved.
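
    The adaptive estimator at the core of such an approach can be sketched as standard recursive least squares with a forgetting factor; the basis functions, noise level, and amplitude drift below are illustrative assumptions, not the authors' hemodynamic model.

    ```python
    import numpy as np

    def rls(X, y, lam=0.98, delta=100.0):
        """Recursive least squares with forgetting factor lam.

        X: (T, p) regressors (e.g., hemodynamic basis functions);
        y: (T,) measured signal. Returns the (T, p) history of the
        time-varying parameter estimates.
        """
        T, p = X.shape
        theta = np.zeros(p)
        P = delta * np.eye(p)               # large initial covariance
        history = np.empty((T, p))
        for t in range(T):
            x = X[t]
            k = P @ x / (lam + x @ P @ x)   # gain vector
            theta = theta + k * (y[t] - x @ theta)
            P = (P - np.outer(k, x @ P)) / lam
            history[t] = theta
        return history

    # Toy usage: track a slowly drifting "activation" amplitude
    T = 500
    t = np.arange(T)
    X = np.column_stack([np.sin(2 * np.pi * t / 100), np.ones(T)])
    amp = 1.0 + 0.002 * t                   # true time-varying amplitude
    y = amp * X[:, 0] + 0.1 * np.random.default_rng(0).standard_normal(T)
    print("final amplitude estimate:", rls(X, y)[-1, 0])   # close to 2.0
    ```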

  17. Regional Earthquake Shaking and Loss Estimation

    NASA Astrophysics Data System (ADS)

    Sesetyan, K.; Demircioglu, M. B.; Zulfikar, C.; Durukal, E.; Erdik, M.

    2009-04-01

This study, conducted under the JRA-3 component of the EU NERIES Project, develops a methodology and software (ELER) for the rapid estimation of earthquake shaking and losses in the Euro-Mediterranean region. This multi-level methodology, developed together with researchers from Imperial College, NORSAR and ETH-Zurich, is capable of incorporating regional variability and the sources of uncertainty stemming from ground motion predictions, fault finiteness, site modifications, the inventory of physical and social elements subjected to earthquake hazard, and the associated vulnerability relationships. GRM Risk Management, Inc. of Istanbul serves as sub-contractor for the coding of the ELER software. The methodology encompasses the following general steps: 1. Finding the most likely location of the source of the earthquake using a regional seismotectonic data base and basic source parameters, and, if and when possible, estimating fault rupture parameters by rapid inversion of data from on-line stations. 2. Estimation of the spatial distribution of selected ground motion parameters through region-specific ground motion attenuation relationships and shear wave velocity distributions (Shake Mapping). 3. Incorporation of strong ground motion and other empirical macroseismic data for the improvement of the Shake Map. 4. Estimation of the losses (damage, casualty and economic) at different levels of sophistication (0, 1 and 2) commensurate with the available inventory of the human-built environment (Loss Mapping). Both the Level 0 (similar to the PAGER system of the USGS) and Level 1 analyses of the ELER routine are based on obtaining intensity distributions analytically and estimating the total number of casualties and their geographic distribution, either using regionally adjusted intensity-casualty or magnitude-casualty correlations (Level 0) or using regional building inventory data bases (Level 1). For given basic source parameters the intensity distributions can be computed using: a) regional intensity attenuation relationships, b) intensity correlations with attenuation-relationship-based PGV, PGA, and spectral amplitudes, and c) intensity correlations with the synthetic Fourier amplitude spectrum. In the Level 1 analysis, EMS98-based building vulnerability relationships are used for regional estimates of building damage and casualty distributions. Results obtained from pilot applications of the Level 0 and Level 1 analysis modes of the ELER software to the 1999 M 7.4 Kocaeli, 1995 M 6.1 Dinar, and 2007 M 5.4 Bingol earthquakes, in terms of ground shaking and losses, are presented and compared with the observed losses. The regional earthquake shaking and loss information is intended for dissemination in a timely manner to related agencies for the planning and coordination of post-earthquake emergency response. However, the same software can also be used for scenario earthquake loss estimation and related Monte-Carlo type simulations.

  18. Earthquake source parameters determined using the SAFOD Pilot Hole vertical seismic array

    NASA Astrophysics Data System (ADS)

    Imanishi, K.; Ellsworth, W. L.; Prejean, S. G.

    2003-12-01

We determined source parameters of microearthquakes occurring at Parkfield, CA, using the SAFOD Pilot Hole vertical seismic array. The array consists of 32 stations with 3-component 15 Hz geophones at 40 meter spacing (856 to 2096 m depth). The site is about 1.8 km southwest of a segment of the San Andreas fault characterized by a combination of aseismic creep and repeating microearthquakes. We analyzed seismograms recorded at sample rates of 1 kHz or 2 kHz. Spectra have high signal-to-noise ratios at frequencies up to 300-400 Hz, showing that these data contain information on the source processes of microearthquakes. By comparing spectra and waveforms at different levels of the array, we observe how attenuation and scattering in the shallow crust affect high-frequency waves. We estimated the spectral level (Ω0), corner frequency (fc) and path-averaged attenuation (Q) at each level of the array by fitting an omega-squared model to displacement spectra. While the spectral level changes smoothly with depth, there is significant scatter in fc and Q due to the strong trade-off between these parameters. Because we expect source parameters to vary systematically with depth, we impose a smoothness constraint on Q, Ω0 and fc as a function of depth. For some of the nearby events, take-off angles to the different levels of the array span a significant part of the focal sphere, so corner frequencies should also change with depth. We smooth measurements using a linear first-difference operator that links Q, Ω0 and fc at one level to the levels above and below, and use Akaike's Bayesian Information Criterion (ABIC) to weight the smoothing operators. We applied this approach to events with high signal-to-noise ratios. For the results with the minimum ABIC, fc does not scatter and Q decreases with decreasing depth. Seismic moments were determined from the spectral level and range from 10^9 to 10^12 N m. Source radii were estimated from the corner frequency using the circular crack model of Sato and Hirasawa (1973). Estimated static stress drops were roughly 1 MPa and do not vary with seismic moment. Q values from all earthquakes were averaged at each level of the array. Average Qp and Qs range from 250 to 350 and from 300 to 400 between the top and bottom of the array, respectively. Increasing Q values as a function of depth explain well the observed decrease in high-frequency content as waves propagate toward the surface. Thus, by jointly analyzing the entire vertical array we can both accurately determine source parameters of microearthquakes and make reliable Q estimates while suppressing the trade-off between fc and Q.
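
    A minimal sketch of the single-level spectral fit (before any depth smoothing is imposed), assuming an omega-squared source spectrum multiplied by a path attenuation operator t* = travel time / Q; the synthetic spectrum and starting values are illustrative.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def omega_squared(f, omega0, fc, t_star):
        """Displacement spectrum: omega-squared source model multiplied by
        a path attenuation operator, with t_star = travel_time / Q."""
        return omega0 * np.exp(-np.pi * f * t_star) / (1.0 + (f / fc) ** 2)

    # Synthetic spectrum standing in for one geophone level of the array
    f = np.linspace(5.0, 400.0, 200)
    true = omega_squared(f, 1.0, 80.0, 0.02)
    rng = np.random.default_rng(0)
    obs = true * np.exp(0.1 * rng.standard_normal(f.size))  # multiplicative noise

    popt, pcov = curve_fit(omega_squared, f, obs, p0=(1.0, 50.0, 0.01))
    print("Omega0, fc, t*:", popt)   # the fc-Q trade-off shows up in pcov
    ```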

  19. An improved method to estimate reflectance parameters for high dynamic range imaging

    NASA Astrophysics Data System (ADS)

    Li, Shiying; Deguchi, Koichiro; Li, Renfa; Manabe, Yoshitsugu; Chihara, Kunihiro

    2008-01-01

Two methods are described to accurately estimate diffuse and specular reflectance parameters for colors, gloss intensity and surface roughness, over the dynamic range of the camera used to capture the input images. Neither method needs to segment color areas on an image, nor to reconstruct a high dynamic range (HDR) image. The second method improves on the first by bypassing the requirement for explicit separation of the diffuse and specular reflection components. In the latter method, diffuse and specular reflectance parameters are estimated separately using the least squares method. Reflection values are initially assumed to be diffuse-only reflection components and are subjected to the least squares method to estimate diffuse reflectance parameters. Specular reflection components, obtained by subtracting the computed diffuse reflection components from the reflection values, are then subjected to a logarithmically transformed equation of the Torrance-Sparrow reflection model, and the specular reflectance parameters for gloss intensity and surface roughness are finally estimated using the least squares method. Experiments were carried out with both methods on simulation data at different saturation levels, generated according to the Lambert and Torrance-Sparrow reflection models, and with the second method on spectral images captured by an imaging spectrograph under a moving light source. Our results show that the second method can estimate the diffuse and specular reflectance parameters for colors, gloss intensity and surface roughness more accurately and faster than the first one, so that colors and gloss can be reproduced more efficiently for HDR imaging.

  20. Filter design for the detection of compact sources based on the Neyman-Pearson detector

    NASA Astrophysics Data System (ADS)

    López-Caniego, M.; Herranz, D.; Barreiro, R. B.; Sanz, J. L.

    2005-05-01

This paper considers the problem of compact source detection on a Gaussian background. We present a one-dimensional treatment (though a generalization to two or more dimensions is possible). Two relevant aspects of this problem are considered: the design of the detector and the filtering of the data. Our detection scheme is based on local maxima and takes into account not only the amplitude but also the curvature of the maxima. A Neyman-Pearson test is used to define the region of acceptance, which is given by a sufficient linear detector that is independent of the amplitude distribution of the sources. We study how detection can be enhanced by means of linear filters with a scaling parameter, and compare some filters that have been proposed in the literature [the Mexican hat wavelet, the matched filter (MF) and the scale-adaptive filter (SAF)]. We also introduce a new filter, which depends on two free parameters (the biparametric scale-adaptive filter, BSAF). Given the a priori probability density function of the amplitudes of the sources, the values of these two parameters can be determined such that the filter optimizes the performance of the detector, in the sense that it gives the maximum number of real detections for a fixed number density of spurious sources. The new filter includes the standard MF and the SAF as particular cases, and by design it outperforms them. The combination of a detection scheme that includes information on the curvature and a flexible filter that incorporates two free parameters (one of them a scaling parameter) significantly improves the number of detections in some interesting cases. In particular, for the case of weak sources embedded in white noise, the improvement with respect to the standard MF is of the order of 40 per cent. Finally, an estimator of the amplitude of the source (most probable value) is introduced, and it is proven that this estimator is unbiased and has maximum efficiency. We perform numerical simulations to test these theoretical ideas in a practical example and conclude that the results of the simulations agree with the analytical results.
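
    For orientation, a minimal one-dimensional matched-filter detection on a white Gaussian background (the baseline the BSAF is designed to outperform); the Gaussian profile, source amplitude, and grid are illustrative assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Compact Gaussian source profile (unit amplitude) on a 1-D pixel grid
    t = np.arange(-32, 33)
    template = np.exp(-t**2 / (2 * 4.0**2))

    # White Gaussian background with one faint source injected
    data = rng.standard_normal(2048)
    pos = 1200
    data[pos - 32:pos + 33] += 2.0 * template

    # Matched filter: correlate with the profile and normalise so the
    # filtered field reads directly as signal-to-noise ratio
    mf = np.correlate(data, template, mode="same")
    mf /= np.sqrt(np.sum(template**2))
    print("peak at pixel", mf.argmax(), "with SNR", round(mf.max(), 2))
    ```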

  1. Predicting dense nonaqueous phase liquid dissolution using a simplified source depletion model parameterized with partitioning tracers

    NASA Astrophysics Data System (ADS)

    Basu, Nandita B.; Fure, Adrian D.; Jawitz, James W.

    2008-07-01

    Simulations of nonpartitioning and partitioning tracer tests were used to parameterize the equilibrium stream tube model (ESM) that predicts the dissolution dynamics of dense nonaqueous phase liquids (DNAPLs) as a function of the Lagrangian properties of DNAPL source zones. Lagrangian, or stream-tube-based, approaches characterize source zones with as few as two trajectory-integrated parameters, in contrast to the potentially thousands of parameters required to describe the point-by-point variability in permeability and DNAPL in traditional Eulerian modeling approaches. The spill and subsequent dissolution of DNAPLs were simulated in two-dimensional domains having different hydrologic characteristics (variance of the log conductivity field = 0.2, 1, and 3) using the multiphase flow and transport simulator UTCHEM. Nonpartitioning and partitioning tracers were used to characterize the Lagrangian properties (travel time and trajectory-integrated DNAPL content statistics) of DNAPL source zones, which were in turn shown to be sufficient for accurate prediction of source dissolution behavior using the ESM throughout the relatively broad range of hydraulic conductivity variances tested here. The results were found to be relatively insensitive to travel time variability, suggesting that dissolution could be accurately predicted even if the travel time variance was only coarsely estimated. Estimation of the ESM parameters was also demonstrated using an approximate technique based on Eulerian data in the absence of tracer data; however, determining the minimum amount of such data required remains for future work. Finally, the stream tube model was shown to be a more unique predictor of dissolution behavior than approaches based on the ganglia-to-pool model for source zone characterization.

  2. Constructing Ebola transmission chains from West Africa and estimating model parameters using internet sources.

    PubMed

    Pettey, W B P; Carter, M E; Toth, D J A; Samore, M H; Gundlapalli, A V

    2017-07-01

    During the recent Ebola crisis in West Africa, individual person-level details of disease onset, transmissions, and outcomes such as survival or death were reported in online news media. We set out to document disease transmission chains for Ebola, with the goal of generating a timely account that could be used for surveillance, mathematical modeling, and public health decision-making. By accessing public web pages only, such as locally produced newspapers and blogs, we created a transmission chain involving two Ebola clusters in West Africa that compared favorably with other published transmission chains, and derived parameters for a mathematical model of Ebola disease transmission that were not statistically different from those derived from published sources. We present a protocol for responsibly gleaning epidemiological facts, transmission model parameters, and useful details from affected communities using mostly indigenously produced sources. After comparing our transmission parameters to published parameters, we discuss additional benefits of our method, such as gaining practical information about the affected community, its infrastructure, politics, and culture. We also briefly compare our method to similar efforts that used mostly non-indigenous online sources to generate epidemiological information.

  3. Estimation of Saxophone Control Parameters by Convex Optimization.

    PubMed

    Wang, Cheng-I; Smyth, Tamara; Lipton, Zachary C

    2014-12-01

In this work, an approach to jointly estimating the tone hole configuration (fingering) and reed model parameters of a saxophone is presented. The problem is not one of merely estimating pitch, as one applied fingering can be used to produce several different pitches by bugling or overblowing. Nor can a fingering be estimated solely from the spectral envelope of the produced sound (as it might be for estimation of vocal tract shape in speech), since one fingering can produce markedly different spectral envelopes depending on the player's embouchure and control of the reed. The problem is therefore addressed by jointly estimating both the reed (source) parameters and the fingering (filter) of a saxophone model using convex optimization and 1) a bank of filter frequency responses derived from measurements of the saxophone configured with all possible fingerings, and 2) sample recordings of notes produced using all possible fingerings, played with different overblowing, dynamics and timbre. The saxophone model couples one of several possible frequency response pairs (corresponding to the applied fingering) with a quasi-static reed model generating input pressure at the mouthpiece, the control parameters being blowing pressure and reed stiffness. Applied fingering and reed parameters are estimated for a given recording by formalizing a minimization problem, where the cost function is the error between the recording and the sound synthesized by the model over incremental parameter values for blowing pressure and reed stiffness. The minimization problem is nonlinear and non-differentiable and is made solvable using convex optimization. The fingering identification achieves better accuracy than previously reported values.

  4. International Natural Gas Model 2011, Model Documentation Report

    EIA Publications

    2013-01-01

    This report documents the objectives, analytical approach and development of the International Natural Gas Model (INGM). It also catalogues and describes critical assumptions, computational methodology, parameter estimation techniques, and model source code.

  5. Estimating the Uncertainty In Diameter Growth Model Predictions and Its Effects On The Uncertainty of Annual Inventory Estimates

    Treesearch

    Ronald E. McRoberts; Veronica C. Lessard

    2001-01-01

    Uncertainty in diameter growth predictions is attributed to three general sources: measurement error or sampling variability in predictor variables, parameter covariances, and residual or unexplained variation around model expectations. Using measurement error and sampling variability distributions obtained from the literature and Monte Carlo simulation methods, the...

  6. New Theory for Tsunami Propagation and Estimation of Tsunami Source Parameters

    NASA Astrophysics Data System (ADS)

    Mindlin, I. M.

    2007-12-01

In numerical studies based on the shallow water equations for tsunami propagation, vertical accelerations and velocities within the sea water are neglected, so a tsunami is usually supposed to be produced by an initial free surface displacement in the initially still sea. In the present work, a new theory for tsunami propagation across the deep sea is discussed that accounts for the vertical accelerations and velocities. The theory is based on the solutions for the water surface displacement obtained in [Mindlin I.M. Integrodifferential equations in dynamics of a heavy layered liquid. Moscow: Nauka*Fizmatlit, 1996 (Russian)]. The solutions are valid when the horizontal dimensions of the initially disturbed area of the sea surface are much larger than the vertical displacement of the surface, which applies to earthquake tsunamis. It is shown that any tsunami is a combination of specific basic waves found analytically (not a superposition: the waves are nonlinear), and consequently the tsunami source (i.e., the initially disturbed body of water) can be described by the countable set of parameters involved in the combination. Thus the problem of theoretical reconstruction of a tsunami source is reduced to the problem of estimating these parameters, and the source can be modelled approximately with a finite number of them. A two-parameter model is discussed thoroughly. A method is developed for estimating the model's parameters using the arrival times of the tsunami at certain locations, the maximum wave heights obtained from tide gauge records at those locations, and the distances between the earthquake's epicentre and each of the locations. To evaluate the practical use of the theory, four tsunamis of different magnitude that occurred in Japan are considered. For each tsunami, the tsunami energy (E below), the duration of the tsunami source formation T, the maximum water elevation in the wave originating area H, the mean radius of the area R, and the average magnitude of the sea surface displacement at the margin of the wave originating area h are estimated using tide gauge records. The results are compared with (and, in the author's opinion, are in line with) the estimates known in the literature. Compared to the methods employed in the literature, there is no need to use bathymetry (and, consequently, refraction diagrams) for the estimations. The present paper follows closely earlier works [Mindlin I.M., 1996; Mindlin I.M. J. Appl. Math. Phys. (ZAMP), 2004, vol. 55, pp. 781-799] and adds to their theoretical results. Example: the Hiuganada earthquake of April 1, 1968, 9h 42m JST. A tsunami of moderate size arrived at the coast of the south-western part of Shikoku and the eastern part of Kyushu, Japan. The tsunami parameters listed above are estimated with the theory under discussion for two models of tsunami generation: (a) by initial free surface displacement (the case for numerical studies): E=1.91·10^12 J, R=22 km, h=17.2 cm; and (b) by a sudden change in the velocity field of the initially still water: E=8.78·10^12 J, R=20.4 km, h=9.2 cm. These values are in line with known estimates [Soloviev S.L., Go Ch.N. Catalogue of tsunami in the West of Pacific Ocean. Moscow, 1974]: E=1.3·10^13 J (attributed to Hatori), E=(1.4-2.2)·10^12 J (attributed to Aida), R=21.2 km, h=20 cm [Hatory T., Bull. Earthq. Res. Inst., Tokyo Univ., 1969, vol. 47, pp. 55-63]. Also, estimates are obtained for values that could not be found from shallow water wave theory: (a) H=3.43 m and (b) H=1.38 m, T=16.4 sec.

  7. Estimation and impact assessment of input and parameter uncertainty in predicting groundwater flow with a fully distributed model

    NASA Astrophysics Data System (ADS)

    Touhidul Mustafa, Syed Md.; Nossent, Jiri; Ghysels, Gert; Huysmans, Marijke

    2017-04-01

    Transient numerical groundwater flow models have been used to understand and forecast groundwater flow systems under anthropogenic and climatic effects, but the reliability of the predictions is strongly influenced by different sources of uncertainty. Hence, researchers in hydrological sciences are developing and applying methods for uncertainty quantification. Nevertheless, spatially distributed flow models pose significant challenges for parameter and spatially distributed input estimation and uncertainty quantification. In this study, we present a general and flexible approach for input and parameter estimation and uncertainty analysis of groundwater models. The proposed approach combines a fully distributed groundwater flow model (MODFLOW) with the DiffeRential Evolution Adaptive Metropolis (DREAM) algorithm. To avoid over-parameterization, the uncertainty of the spatially distributed model input has been represented by multipliers. The posterior distributions of these multipliers and the regular model parameters were estimated using DREAM. The proposed methodology has been applied in an overexploited aquifer in Bangladesh where groundwater pumping and recharge data are highly uncertain. The results confirm that input uncertainty does have a considerable effect on the model predictions and parameter distributions. Additionally, our approach also provides a new way to optimize the spatially distributed recharge and pumping data along with the parameter values under uncertain input conditions. It can be concluded from our approach that considering model input uncertainty along with parameter uncertainty is important for obtaining realistic model predictions and a correct estimation of the uncertainty bounds.
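
    The essence of sampling an uncertain-input multiplier jointly with a model parameter can be sketched with plain random-walk Metropolis in place of DREAM and a toy algebraic forward model in place of MODFLOW; every number below is an illustrative assumption.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy forward model standing in for MODFLOW: drawdown at three wells
    # driven by a conductivity parameter K and a recharge multiplier m that
    # scales an uncertain prior recharge value (a scalar here for brevity).
    recharge_prior = 1.0e-3
    def forward(K, m):
        return (m * recharge_prior / K) * np.array([1.0, 0.8, 0.5])

    obs = forward(2.0e-4, 1.2) + 0.05 * rng.standard_normal(3)

    def log_post(theta):
        logK, m = theta
        if not (-5.0 < logK < -2.0 and 0.2 < m < 3.0):   # uniform priors
            return -np.inf
        resid = obs - forward(10.0**logK, m)
        return -0.5 * np.sum(resid**2) / 0.05**2

    # Plain random-walk Metropolis (DREAM adapts proposals across chains)
    theta = np.array([-3.5, 1.0])
    lp = log_post(theta)
    samples = []
    for _ in range(20000):
        prop = theta + rng.normal(0.0, 0.05, 2)
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:
            theta, lp = prop, lp_prop
        samples.append(theta.copy())
    post = np.array(samples[5000:])
    # Note: only the ratio m/K is well constrained by these data, so the
    # posterior is a ridge -- a small-scale version of the input/parameter
    # trade-off the study addresses.
    print("posterior mean logK, m:", post.mean(axis=0))
    ```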

  8. Automated Land Cover Change Detection and Mapping from Hidden Parameter Estimates of Normalized Difference Vegetation Index (NDVI) Time-Series

    NASA Astrophysics Data System (ADS)

    Chakraborty, S.; Banerjee, A.; Gupta, S. K. S.; Christensen, P. R.; Papandreou-Suppappola, A.

    2017-12-01

Multitemporal observations acquired frequently by satellites with short revisit periods, such as the Moderate Resolution Imaging Spectroradiometer (MODIS), are an important source for modeling land cover. Due to the inherent seasonality of land cover, harmonic modeling reveals hidden state parameters characteristic of it, which are used in classifying different land cover types and in detecting changes due to natural or anthropogenic factors. In this work, we use an eight-day MODIS composite to create a Normalized Difference Vegetation Index (NDVI) time-series of ten years. Improved hidden parameter estimates of the nonlinear harmonic NDVI model are obtained using the Particle Filter (PF), a sequential Monte Carlo estimator. The PF-based nonlinear estimation is shown to improve parameter estimation for different land cover types compared to existing techniques based on the Extended Kalman Filter (EKF), which relies on linearization of the harmonic model. As these parameters are representative of a given land cover, their applicability to near real-time detection of land cover change is also studied by formulating a metric that captures parameter deviation due to change. The detection methodology is evaluated by treating change as a rare class problem, and is shown to detect change with minimum delay. Additionally, the degree of change within the change perimeter is non-uniform. By clustering the deviation in parameters due to change, this spatial variation in change severity is effectively mapped and validated with high spatial resolution change maps of the given regions.
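
    A minimal bootstrap particle filter tracking the coefficients of a slowly drifting harmonic, as a stand-in for the hidden-parameter estimation described above; the state, noise levels, and random-walk dynamics are illustrative assumptions, not the authors' model.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    T = 460                  # ten years of 8-day MODIS composites (46/yr)
    w = 2 * np.pi / 46.0

    # Synthetic NDVI with a slowly drifting seasonal amplitude (hidden state)
    true_amp = 0.3 + 0.1 * np.linspace(0, 1, T)
    y = 0.4 + true_amp * np.cos(w * np.arange(T)) \
        + 0.03 * rng.standard_normal(T)

    # Bootstrap particle filter over (mean, amplitude) of the harmonic model
    N = 2000
    particles = np.column_stack([rng.normal(0.4, 0.1, N),
                                 rng.normal(0.3, 0.1, N)])
    est = np.empty((T, 2))
    for t in range(T):
        particles += rng.normal(0, 0.005, particles.shape)  # random-walk dynamics
        pred = particles[:, 0] + particles[:, 1] * np.cos(w * t)
        logw = -0.5 * ((y[t] - pred) / 0.03) ** 2            # Gaussian likelihood
        wgt = np.exp(logw - logw.max())
        wgt /= wgt.sum()
        est[t] = wgt @ particles                             # posterior mean
        particles = particles[rng.choice(N, N, p=wgt)]       # resample
    print("final amplitude estimate:", est[-1, 1], "truth:", true_amp[-1])
    ```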

  9. Sensitivity of a Bayesian atmospheric-transport inversion model to spatio-temporal sensor resolution applied to the 2006 North Korean nuclear test

    NASA Astrophysics Data System (ADS)

    Lundquist, K. A.; Jensen, D. D.; Lucas, D. D.

    2017-12-01

Atmospheric source reconstruction allows for the probabilistic estimation of the source characteristics of an atmospheric release using observations of the release. Performance of the inversion depends partially on the temporal frequency and spatial scale of the observations. The objective of this study is to quantify the sensitivity of the source reconstruction method to sparse spatial and temporal observations. To this end, simulations of atmospheric transport of noble gases are created for the 2006 nuclear test at the Punggye-ri nuclear test site. Synthetic observations are collected from the simulation and are taken as "ground truth". Data denial techniques are used to progressively coarsen the temporal and spatial resolution of the synthetic observations, while the source reconstruction model seeks to recover the true input parameters from the synthetic observations. The reconstructed parameters considered here are source location, source timing and source quantity. Reconstruction is achieved by running an ensemble of thousands of dispersion model runs that sample from a uniform distribution of the input parameters. Machine learning is used to train a computationally efficient surrogate model from the ensemble simulations. Monte Carlo sampling and Bayesian inversion are then used in conjunction with the surrogate model to quantify the posterior probability density functions of the source input parameters. This research seeks to inform decision makers of the tradeoffs between more expensive, high frequency observations and less expensive, low frequency observations.
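
    The ensemble-surrogate-inversion pipeline can be sketched with a toy one-dimensional "dispersion model"; note the surrogate below exploits the toy model's exactly log-quadratic form, whereas the study trains a machine-learned emulator, and all numbers are illustrative.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    sensors = np.array([4.0, 7.0])

    # Toy "dispersion model" standing in for the expensive transport runs:
    # concentrations at two sensors given source strength q and location s.
    def dispersion(q, s):
        return q * np.exp(-0.5 * (sensors - s) ** 2)

    # 1) Ensemble of model runs drawn from uniform priors on (q, s)
    qs = rng.uniform(0.5, 5.0, 3000)
    ss = rng.uniform(0.0, 10.0, 3000)
    runs = np.array([dispersion(q, s) for q, s in zip(qs, ss)])

    # 2) Cheap surrogate: here log-concentration is exactly quadratic in s
    # and linear in log q, so least squares recovers it exactly
    def feats(q, s):
        q, s = np.atleast_1d(q), np.atleast_1d(s)
        return np.column_stack([np.ones_like(q), np.log(q), s, s**2])
    coef, *_ = np.linalg.lstsq(feats(qs, ss), np.log(runs), rcond=None)

    # 3) Bayesian Monte Carlo inversion using only the surrogate
    c_obs = dispersion(2.0, 6.0) + rng.normal(0.0, 0.02, 2)
    qp = rng.uniform(0.5, 5.0, 100000)
    sp = rng.uniform(0.0, 10.0, 100000)
    resid = c_obs - np.exp(feats(qp, sp) @ coef)
    logL = -0.5 * np.sum(resid**2, axis=1) / 0.02**2
    wgt = np.exp(logL - logL.max())
    wgt /= wgt.sum()
    print("posterior mean q, s:", wgt @ qp, wgt @ sp)
    ```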

  10. Fully probabilistic earthquake source inversion on teleseismic scales

    NASA Astrophysics Data System (ADS)

    Stähler, Simon; Sigloch, Karin

    2017-04-01

    Seismic source inversion is a non-linear problem in seismology where not just the earthquake parameters but also estimates of their uncertainties are of great practical importance. We have developed a method of fully Bayesian inference for source parameters, based on measurements of waveform cross-correlation between broadband, teleseismic body-wave observations and their modelled counterparts. This approach yields not only depth and moment tensor estimates but also source time functions. These unknowns are parameterised efficiently by harnessing as prior knowledge solutions from a large number of non-Bayesian inversions. The source time function is expressed as a weighted sum of a small number of empirical orthogonal functions, which were derived from a catalogue of >1000 source time functions (STFs) by a principal component analysis. We use a likelihood model based on the cross-correlation misfit between observed and predicted waveforms. The resulting ensemble of solutions provides full uncertainty and covariance information for the source parameters, and permits propagating these source uncertainties into travel time estimates used for seismic tomography. The computational effort is such that routine, global estimation of earthquake mechanisms and source time functions from teleseismic broadband waveforms is feasible. A prerequisite for Bayesian inference is the proper characterisation of the noise afflicting the measurements. We show that, for realistic broadband body-wave seismograms, the systematic error due to an incomplete physical model affects waveform misfits more strongly than random, ambient background noise. In this situation, the waveform cross-correlation coefficient CC, or rather its decorrelation D = 1 - CC, performs more robustly as a misfit criterion than ℓp norms, more commonly used as sample-by-sample measures of misfit based on distances between individual time samples. From a set of over 900 user-supervised, deterministic earthquake source solutions treated as a quality-controlled reference, we derive the noise distribution on signal decorrelation D of the broadband seismogram fits between observed and modelled waveforms. The noise on D is found to approximately follow a log-normal distribution, a fortunate fact that readily accommodates the formulation of an empirical likelihood function for D for our multivariate problem. The first and second moments of this multivariate distribution are shown to depend mostly on the signal-to-noise ratio (SNR) of the CC measurements and on the back-azimuthal distances of seismic stations. References: Stähler, S. C. and Sigloch, K.: Fully probabilistic seismic source inversion - Part 1: Efficient parameterisation, Solid Earth, 5, 1055-1069, doi:10.5194/se-5-1055-2014, 2014. Stähler, S. C. and Sigloch, K.: Fully probabilistic seismic source inversion - Part 2: Modelling errors and station covariances, Solid Earth, 7, 1521-1536, doi:10.5194/se-7-1521-2016, 2016.
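
    A minimal sketch of the empirical likelihood on signal decorrelation D = 1 - CC under the log-normal noise model described above; the moment values are illustrative placeholders for the SNR- and azimuth-dependent moments derived in the cited papers.

    ```python
    import numpy as np

    def log_likelihood_decorrelation(cc, mu_logD, sigma_logD):
        """Log-likelihood of observed waveform cross-correlation
        coefficients cc under a log-normal noise model on the
        decorrelation D = 1 - CC; mu_logD and sigma_logD are the
        empirical moments of log(D)."""
        D = np.clip(1.0 - np.asarray(cc), 1e-6, None)
        z = (np.log(D) - mu_logD) / sigma_logD
        return np.sum(-0.5 * z**2
                      - np.log(D * sigma_logD * np.sqrt(2 * np.pi)))

    # Example: two candidate source models scored on the same station set
    cc_model_a = np.array([0.95, 0.92, 0.97, 0.90])
    cc_model_b = np.array([0.80, 0.85, 0.78, 0.82])
    for name, cc in [("A", cc_model_a), ("B", cc_model_b)]:
        print(name, log_likelihood_decorrelation(cc, mu_logD=-3.0,
                                                 sigma_logD=0.8))
    ```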

  11. Noise-induced errors in geophysical parameter estimation from retarding potential analyzers in low Earth orbit

    NASA Astrophysics Data System (ADS)

    Debchoudhury, Shantanab; Earle, Gregory

    2017-04-01

Retarding Potential Analyzers (RPAs) have a rich flight heritage. Standard curve-fitting analysis techniques exist that can infer state variables of the ionospheric plasma environment from RPA data, but the estimation process is prone to errors arising from a number of sources. Previous work has focused on the effects of grid geometry on uncertainties in estimation; however, no prior study has quantified the estimation errors due to additive noise. In this study, we characterize the errors in the estimation of thermal plasma parameters by adding noise to simulated data derived from existing ionospheric models. We concentrate on low-altitude, mid-inclination orbits, since a number of nano-satellite missions are focused on this region of the ionosphere. The errors are quantified and cross-correlated for varying geomagnetic conditions.

  12. Source parameters of the 2013 Lushan, Sichuan, Ms7.0 earthquake and estimation of the near-fault strong ground motion

    NASA Astrophysics Data System (ADS)

    Meng, L.; Zhou, L.; Liu, J.

    2013-12-01

The April 20, 2013 Ms 7.0 earthquake in Lushan city, Sichuan province of China occurred as the result of east-west oriented reverse-type motion on a north-south striking fault. The source location suggests the event occurred on the southern part of the Longmenshan fault at a depth of 13 km. The Lushan earthquake caused great loss of property and 196 deaths; the maximum intensity reached VIII to IX at Boxing and Lushan city, in the meizoseismal area. In this study, we analyzed the dynamic source process, calculated the source spectral parameters, and estimated the near-fault strong ground motion based on Brune's circular source model. A dynamical composite source model (DCSM) was then developed to simulate the near-fault strong ground motion with the associated fault rupture properties at Boxing and Lushan city. The results indicate frictional undershoot behavior in the dynamic source process of the Lushan earthquake, in contrast to the overshoot behavior of the Wenchuan earthquake. Based on the simulated near-fault strong ground motion, we describe the intensity distribution of the Lushan earthquake: the maximum simulated intensity is IX, and the region of intensity VII and above covers almost 16,000 km2, consistent with the observed intensity map published by the China Earthquake Administration (CEA) on April 25. The empirical relationships and numerical modeling developed in this study have useful applications in strong ground motion prediction and intensity estimation for earthquake rescue purposes, and for understanding the earthquake source process.

  13. Source parameters controlling the generation and propagation of potential local tsunamis along the cascadia margin

    USGS Publications Warehouse

    Geist, E.; Yoshioka, S.

    1996-01-01

The largest uncertainty in assessing hazards from local tsunamis along the Cascadia margin is estimating the possible earthquake source parameters. We investigate which source parameters exert the largest influence on tsunami generation and determine how each parameter affects the amplitude of the local tsunami. The following source parameters were analyzed: (1) type of faulting characteristic of the Cascadia subduction zone, (2) amount of slip during rupture, (3) slip orientation, (4) duration of rupture, (5) physical properties of the accretionary wedge, and (6) influence of secondary faulting. The effect of each of these source parameters on the quasi-static displacement of the ocean floor is determined by using elastic three-dimensional, finite-element models. The propagation of the resulting tsunami is modeled both near the coastline, using the two-dimensional (x-t) Peregrine equations, which include the effects of dispersion, and near the source, using the three-dimensional (x-y-t) linear long-wave equations. The source parameters that have the largest influence on local tsunami excitation are the shallowness of rupture and the amount of slip. In addition, the orientation of slip has a large effect on the directivity of the tsunami, especially for shallow dipping faults, which consequently has a direct influence on the length of coastline inundated by the tsunami. Duration of rupture, physical properties of the accretionary wedge, and secondary faulting all affect the excitation of tsunamis, but to a lesser extent than the shallowness of rupture and the amount and orientation of slip. Assessment of the severity of the local tsunami hazard should take into account that relatively large tsunamis can be generated by anomalous 'tsunami earthquakes' that rupture within the accretionary wedge, in comparison to interplate thrust earthquakes of similar magnitude. © 1996 Kluwer Academic Publishers.

  14. Probabilistic estimation of splitting coefficients of normal modes of the Earth, and their uncertainties, using an autoregressive technique

    NASA Astrophysics Data System (ADS)

    Pachhai, S.; Masters, G.; Laske, G.

    2017-12-01

Earth's normal-mode spectra are crucial to studying the long wavelength structure of the Earth. Such observations have been used extensively to estimate "splitting coefficients" which, in turn, can be used to determine the three-dimensional velocity and density structure. Most past studies apply a non-linear iterative inversion to estimate the splitting coefficients, which requires that the earthquake source is known. However, it is challenging to know the source details, particularly for the large events used in normal-mode analyses. Additionally, the final solution of the non-linear inversion can depend on the choice of damping parameter and starting model. To circumvent the need to know the source, a two-step linear inversion has been developed and successfully applied to many mantle- and core-sensitive modes. The first step takes combinations of the data from a single event to produce spectra known as "receiver strips". The autoregressive nature of the receiver strips can then be used to estimate the structure coefficients without the need to know the source. Based on this approach, we recently employed a neighborhood algorithm to measure the splitting coefficients for an isolated inner-core sensitive mode (13S2). This approach explores the parameter space efficiently without any need for regularization and finds the structure coefficients that best fit the observed strips. Here, we implement a Bayesian approach to data collected for earthquakes from early 2000 onward. This approach combines the data (through the likelihood) and prior information to provide rigorous parameter values and their uncertainties for both isolated and coupled modes. The likelihood function is derived from the inferred errors of the receiver strips, which allows us to retrieve proper uncertainties. Finally, we apply model selection criteria that balance the trade-offs between fit (likelihood) and model complexity to investigate the degree and type of structure (elastic and anelastic) required to explain the data.

  15. Source depth dependence of micro-tsunamis recorded with ocean-bottom pressure gauges: The January 28, 2000 Mw 6.8 earthquake off Nemuro Peninsula, Japan

    USGS Publications Warehouse

    Hirata, K.; Takahashi, H.; Geist, E.; Satake, K.; Tanioka, Y.; Sugioka, H.; Mikada, H.

    2003-01-01

Micro-tsunami waves with a maximum amplitude of 4-6 mm were detected with the ocean-bottom pressure gauges on a cabled deep seafloor observatory south of Hokkaido, Japan, following the January 28, 2000 earthquake (Mw 6.8) in the southern Kuril subduction zone. We model the observed micro-tsunami and estimate the focal depth and other source parameters, such as fault length and amount of slip, using grid searching with the least-squares method. The source depth and stress drop for the January 2000 earthquake are estimated to be 50 km and 7 MPa, respectively, with possible ranges of 45-55 km and 4-13 MPa. The focal depth of typical inter-plate earthquakes in this region ranges from 10 to 20 km, and the stress drop of inter-plate earthquakes is generally around 3 MPa. The source depth and stress drop estimates suggest that the earthquake was an intra-slab event in the subducting Pacific plate, rather than an inter-plate event. In addition, for a prescribed fault width of 30 km, the fault length is estimated to be 15 km, with a possible range of 10-20 km, which is the same as the previously determined aftershock distribution. The corresponding estimate of the seismic moment is 2.7×10^19 N m, with a possible range of 2.3×10^19 to 3.2×10^19 N m. Standard tide gauges along the nearby coast did not record any tsunami signal. High-precision ocean-bottom pressure measurements offshore thus make it possible to determine fault parameters of moderate-sized earthquakes in subduction zones using open-ocean tsunami waveforms. Published by Elsevier Science B. V.
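
    Grid searching source parameters by least squares against gauge observations can be sketched as follows; the forward operator below is a toy stand-in for the tsunami Green's functions, and all values are illustrative.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy forward operator standing in for the tsunami Green's functions:
    # predicted bottom-pressure amplitudes (mm of water height) at three
    # gauges, each with a different effective depth sensitivity.
    def predict(depth, slip):
        return slip * 5.0 * np.exp(-depth / np.array([20.0, 30.0, 40.0]))

    obs = predict(50.0, 1.0) + rng.normal(0.0, 0.1, 3)   # mm-scale signals

    # Exhaustive grid search for the least-squares best fit
    depths = np.arange(40.0, 60.5, 0.5)    # km
    slips = np.arange(0.2, 2.01, 0.02)     # m
    misfit, best_d, best_s = min(
        (np.sum((obs - predict(d, s)) ** 2), d, s)
        for d in depths for s in slips)
    print(f"best fit: depth {best_d:.1f} km, slip {best_s:.2f} m "
          f"(misfit {misfit:.3g})")
    ```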

  16. Accuracy and sensitivity analysis on seismic anisotropy parameter estimation

    NASA Astrophysics Data System (ADS)

    Yan, Fuyong; Han, De-Hua

    2018-04-01

There is significant uncertainty in measuring Thomsen's parameter δ in the laboratory, even when the dimensions and orientations of the rock samples are known, and more challenges are expected in estimating seismic anisotropy parameters from field seismic data. Based on Monte Carlo simulation of a vertical transversely isotropic layer-cake model, using a database of laboratory anisotropy measurements from the literature, we apply the commonly used quartic non-hyperbolic reflection moveout equation to estimate the seismic anisotropy parameters and test its accuracy and sensitivity to the source-receiver offset, vertical interval velocity error and time-picking error. The testing results show that the methodology works perfectly for noise-free synthetic data with short spread lengths. However, the method is extremely sensitive to the time-picking error caused by mild random noise, and it requires the spread length to be greater than the depth of the reflection event. The uncertainties increase rapidly for the deeper layers, and the estimated anisotropy parameters can be very unreliable for a layer with more than five overlying layers. It is possible for an isotropic formation to be misinterpreted as a strongly anisotropic formation. This sensitivity analysis should provide useful guidance on how to group the reflection events and build a suitable geological model for anisotropy parameter inversion.
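
    For reference, a sketch of fitting the quartic non-hyperbolic moveout equation (in its commonly used Alkhalifah-Tsvankin form, an assumption here) to traveltimes with mild picking noise; geometry and noise levels are illustrative.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def t2_nonhyp(x, t0, v, eta):
        """Quartic non-hyperbolic moveout (Alkhalifah-Tsvankin form):
        squared traveltime vs offset x for a VTI layer."""
        x2 = x ** 2
        return (t0**2 + x2 / v**2
                - 2 * eta * x2**2
                / (v**2 * (t0**2 * v**2 + (1 + 2 * eta) * x2)))

    rng = np.random.default_rng(0)
    offsets = np.linspace(100.0, 3000.0, 60)  # spread longer than reflector depth
    t_true = np.sqrt(t2_nonhyp(offsets, 1.0, 2000.0, 0.08))
    t_obs = t_true + rng.normal(0.0, 0.002, offsets.size)  # ~2 ms picking error

    popt, _ = curve_fit(
        lambda x, t0, v, eta: np.sqrt(t2_nonhyp(x, t0, v, eta)),
        offsets, t_obs, p0=(0.9, 1800.0, 0.0),
        bounds=([0.5, 1000.0, -0.2], [2.0, 4000.0, 0.5]))
    print("estimated t0, Vnmo, eta:", popt)
    ```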

  17. High-Resolution Source Parameter and Site Characteristics Using Near-Field Recordings - Decoding the Trade-off Problems Between Site and Source

    NASA Astrophysics Data System (ADS)

    Chen, X.; Abercrombie, R. E.; Pennington, C.

    2017-12-01

Recorded seismic waveforms include contributions from earthquake source properties and propagation effects, leading to long-standing trade-off problems between site/path effects and source effects. With near-field recordings the path effect is relatively small, so the trade-off problem can be simplified to one between source and site effects (the latter commonly referred to as the "kappa value"). This problem is especially significant for small earthquakes, whose corner frequencies fall within ranges similar to typical kappa values, so direct spectrum fitting often leads to systematic biases that depend on corner frequency and magnitude. In response to the significantly increased seismicity rate in Oklahoma, several local networks have been deployed following major earthquakes: the Prague, Pawnee and Fairview earthquakes. Each network provides dense observations within 20 km of the fault zone, recording tens of thousands of aftershocks between M1 and M3. Using near-field recordings in the Prague area, we apply a stacking approach to separate path/site and source effects. The resulting source parameters are consistent with parameters derived from ground motion and spectral ratio methods in other studies, and they exhibit spatial coherence within the fault zone for different fault patches. We apply these source parameter constraints in an analysis of kappa values for stations within 20 km of the fault zone. The resulting kappa values show significantly reduced variability compared to those from direct spectral fitting without constraints on the source spectrum, and they are not biased by earthquake magnitude. With these improvements, we plan to apply the stacking analysis to other local arrays to analyze source properties and site characteristics. For selected individual earthquakes, we will also use individual-pair empirical Green's function (EGF) analysis to validate the source parameter estimates.

  18. Parameter Estimation for a Model of Space-Time Rainfall

    NASA Astrophysics Data System (ADS)

    Smith, James A.; Karr, Alan F.

    1985-08-01

    In this paper, parameter estimation procedures, based on data from a network of rainfall gages, are developed for a class of space-time rainfall models. The models, which are designed to represent the spatial distribution of daily rainfall, have three components, one that governs the temporal occurrence of storms, a second that distributes rain cells spatially for a given storm, and a third that determines the rainfall pattern within a rain cell. Maximum likelihood and method of moments procedures are developed. We illustrate that limitations on model structure are imposed by restricting data sources to rain gage networks. The estimation procedures are applied to a 240-mi2 (621 km2) catchment in the Potomac River basin.

  19. Pairing field methods to improve inference in wildlife surveys while accommodating detection covariance.

    PubMed

    Clare, John; McKinney, Shawn T; DePue, John E; Loftin, Cynthia S

    2017-10-01

    It is common to use multiple field sampling methods when implementing wildlife surveys to compare method efficacy or cost efficiency, integrate distinct pieces of information provided by separate methods, or evaluate method-specific biases and misclassification error. Existing models that combine information from multiple field methods or sampling devices permit rigorous comparison of method-specific detection parameters, enable estimation of additional parameters such as false-positive detection probability, and improve occurrence or abundance estimates, but with the assumption that the separate sampling methods produce detections independently of one another. This assumption is tenuous if methods are paired or deployed in close proximity simultaneously, a common practice that reduces the additional effort required to implement multiple methods and reduces the risk that differences between method-specific detection parameters are confounded by other environmental factors. We develop occupancy and spatial capture-recapture models that permit covariance between the detections produced by different methods, use simulation to compare estimator performance of the new models to models assuming independence, and provide an empirical application based on American marten (Martes americana) surveys using paired remote cameras, hair catches, and snow tracking. Simulation results indicate existing models that assume that methods independently detect organisms produce biased parameter estimates and substantially understate estimate uncertainty when this assumption is violated, while our reformulated models are robust to either methodological independence or covariance. Empirical results suggested that remote cameras and snow tracking had comparable probability of detecting present martens, but that snow tracking also produced false-positive marten detections that could potentially substantially bias distribution estimates if not corrected for. Remote cameras detected marten individuals more readily than passive hair catches. Inability to photographically distinguish individual sex did not appear to induce negative bias in camera density estimates; instead, hair catches appeared to produce detection competition between individuals that may have been a source of negative bias. Our model reformulations broaden the range of circumstances in which analyses incorporating multiple sources of information can be robustly used, and our empirical results demonstrate that using multiple field-methods can enhance inferences regarding ecological parameters of interest and improve understanding of how reliably survey methods sample these parameters. © 2017 by the Ecological Society of America.

  20. Bayesian source term estimation of atmospheric releases in urban areas using LES approach.

    PubMed

    Xue, Fei; Kikumoto, Hideki; Li, Xiaofeng; Ooka, Ryozo

    2018-05-05

The estimation of source information from the limited measurements of a sensor network is a challenging inverse problem, which can be viewed as an assimilation process of the observed concentration data and the predicted concentration data. When dealing with releases in built-up areas, the predicted data are generally obtained from the Reynolds-averaged Navier-Stokes (RANS) equations, which yield building-resolving results; however, RANS-based models are outperformed by large-eddy simulation (LES) in the prediction of both airflow and dispersion. It is therefore important to explore the possibility of improving the estimation of the source parameters by using the LES approach. In this paper, a novel source term estimation method is proposed based on the LES approach using Bayesian inference. The source-receptor relationship is obtained by solving the adjoint equations constructed using the time-averaged flow field simulated by the LES approach, based on the gradient diffusion hypothesis. A wind tunnel experiment with a constant point source downwind of a single building model is used to evaluate the performance of the proposed method, which is compared with that of an existing method using a RANS model. The results show that the proposed method reduces the errors in source location and release strength by 77% and 28%, respectively. Copyright © 2018 Elsevier B.V. All rights reserved.

  1. Source parameters derived from seismic spectrum in the Jalisco block

    NASA Astrophysics Data System (ADS)

    Gutierrez, Q. J.; Escudero, C. R.; Nunez-Cornu, F. J.

    2012-12-01

Directly measuring the dimensions of an earthquake fault is a complicated task; a better approach is to use the seismic wave spectrum, from which the fault dimensions, the stress drop, and the seismic moment can be estimated. The study area comprises the complex tectonic configuration of the Jalisco block and the subduction of the Rivera plate beneath the North American plate; as a consequence, some of the most harmful earthquakes and other related natural disasters occur in Jalisco. It is therefore important to monitor the region and perform studies that help to understand the physics of the earthquake rupture mechanism in the area. The main purpose of this study is to estimate earthquake seismic source parameters. The data were recorded by the MARS (Mapping the Rivera Subduction Zone) and RESAJ networks. MARS had 51 stations installed in the Jalisco block, which is delimited by the Mesoamerican trench to the west, the Colima graben to the south, and the Tepic-Zacoalco rift to the north, and operated from January 1, 2006 until December 31, 2007; 104 events with magnitudes between 3 and 6.5 mb were taken from this network. RESAJ has 10 stations within the state of Jalisco and has been recording since October 2011. We first remove the trend, the mean, and the instrument response, then manually pick the S wave and use the multitaper method to obtain its spectrum, from which we estimate the corner frequency and the spectral level. We substitute these values into the equations of the Brune model to calculate the source parameters. The resulting source radii were between 0.1 and 2 km, and the stress drops were between 0.1 and 2 MPa.
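
    The final step, from spectral level and corner frequency to source parameters, follows the standard Brune relations; in the sketch below the density, shear velocity, radiation coefficient, and example numbers are illustrative assumptions.

    ```python
    import numpy as np

    def brune_source_parameters(omega0, fc, r, rho=2700.0, beta=3500.0,
                                radiation=0.6, k=0.37):
        """Source parameters from the S-wave spectral level and corner
        frequency via the Brune model; rho (kg/m^3), beta (m/s), the
        radiation-pattern coefficient and the Brune constant k are
        illustrative assumptions.

        omega0 : low-frequency displacement spectral level (m*s)
        fc     : corner frequency (Hz)
        r      : hypocentral distance (m)
        """
        m0 = 4 * np.pi * rho * beta**3 * r * omega0 / radiation  # N*m
        radius = k * beta / fc                                   # m
        stress_drop = 7 * m0 / (16 * radius**3)                  # Pa
        mw = (2.0 / 3.0) * (np.log10(m0) - 9.1)
        return m0, radius, stress_drop, mw

    m0, a, dsigma, mw = brune_source_parameters(omega0=2e-8, fc=8.0, r=50e3)
    print(f"M0={m0:.2e} N*m, radius={a/1e3:.2f} km, "
          f"stress drop={dsigma/1e6:.2f} MPa, Mw={mw:.1f}")
    ```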

  2. Parameter estimation for the exponential-normal convolution model for background correction of affymetrix GeneChip data.

    PubMed

    McGee, Monnie; Chen, Zhongxue

    2006-01-01

    There are many methods of correcting microarray data for non-biological sources of error. Authors routinely supply software or code so that interested analysts can implement their methods. Even with a thorough reading of associated references, it is not always clear how requisite parts of the method are calculated in the software packages. However, it is important to have an understanding of such details, as this understanding is necessary for proper use of the output, or for implementing extensions to the model. In this paper, the calculation of parameter estimates used in Robust Multichip Average (RMA), a popular preprocessing algorithm for Affymetrix GeneChip brand microarrays, is elucidated. The background correction method for RMA assumes that the perfect match (PM) intensities observed result from a convolution of the true signal, assumed to be exponentially distributed, and a background noise component, assumed to have a normal distribution. A conditional expectation is calculated to estimate signal. Estimates of the mean and variance of the normal distribution and the rate parameter of the exponential distribution are needed to calculate this expectation. Simulation studies show that the current estimates are flawed; therefore, new ones are suggested. We examine the performance of preprocessing under the exponential-normal convolution model using several different methods to estimate the parameters.
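
    The background-correction step discussed here has a well-known closed form: the conditional expectation of the exponential signal given the observed PM intensity. A minimal sketch, with parameter values as placeholders for the estimates the paper examines:

    ```python
    import numpy as np
    from scipy.stats import norm

    def rma_background_correct(pm, mu, sigma, alpha):
        """Conditional expectation E[signal | PM] under the RMA convolution
        model: exponential(rate alpha) signal plus Normal(mu, sigma^2)
        background noise."""
        a = pm - mu - alpha * sigma**2
        b = sigma
        num = norm.pdf(a / b) - norm.pdf((pm - a) / b)
        den = norm.cdf(a / b) + norm.cdf((pm - a) / b) - 1.0
        return a + b * num / den

    # Illustrative parameter values, not estimates from real GeneChip data
    pm = np.array([80.0, 150.0, 400.0])
    print(rma_background_correct(pm, mu=100.0, sigma=20.0, alpha=0.01))
    ```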

  3. Bayesian Modeling of Exposure and Airflow Using Two-Zone Models

    PubMed Central

    Zhang, Yufen; Banerjee, Sudipto; Yang, Rui; Lungu, Claudiu; Ramachandran, Gurumurthy

    2009-01-01

    Mathematical modeling is being increasingly used as a means for assessing occupational exposures. However, predicting exposure in real settings is constrained by lack of quantitative knowledge of exposure determinants. Validation of models in occupational settings is, therefore, a challenge. Not only do the model parameters need to be known, the models also need to predict the output with some degree of accuracy. In this paper, a Bayesian statistical framework is used for estimating model parameters and exposure concentrations for a two-zone model. The model predicts concentrations in a zone near the source and far away from the source as functions of the toluene generation rate, air ventilation rate through the chamber, and the airflow between near and far fields. The framework combines prior or expert information on the physical model along with the observed data. The framework is applied to simulated data as well as data obtained from the experiments conducted in a chamber. Toluene vapors are generated from a source under different conditions of airflow direction, the presence of a mannequin, and simulated body heat of the mannequin. The Bayesian framework accounts for uncertainty in measurement as well as in the unknown rate of airflow between the near and far fields. The results show that estimates of the interzonal airflow are always close to the estimated equilibrium solutions, which implies that the method works efficiently. The predictions of near-field concentration for both the simulated and real data show nice concordance with the true values, indicating that the two-zone model assumptions agree with the reality to a large extent and the model is suitable for predicting the contaminant concentration. Comparison of the estimated model and its margin of error with the experimental data thus enables validation of the physical model assumptions. The approach illustrates how exposure models and information on model parameters together with the knowledge of uncertainty and variability in these quantities can be used to not only provide better estimates of model outputs but also model parameters. PMID:19403840
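
    The steady state of the two-zone model has a simple closed form, which is what the estimated interzonal airflows are compared against; a minimal sketch with illustrative numbers:

    ```python
    def two_zone_equilibrium(G, Q, beta):
        """Steady-state near- and far-field concentrations for the two-zone
        exposure model: the far field sees the generation rate diluted by
        the room ventilation Q, and the near field adds the interzonal
        airflow term.

        G    : contaminant generation rate (mg/min)
        Q    : room ventilation rate (m^3/min)
        beta : airflow between near and far fields (m^3/min)
        """
        c_far = G / Q
        c_near = c_far + G / beta
        return c_near, c_far

    # Illustrative toluene release: 100 mg/min into a chamber ventilated at
    # 2 m^3/min with 1 m^3/min of interzonal airflow (assumed values)
    print(two_zone_equilibrium(G=100.0, Q=2.0, beta=1.0))  # (150.0, 50.0) mg/m^3
    ```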

  4. Two-Component Structure of the Radio Source 0014+813 from VLBI Observations within the CONT14 Program

    NASA Astrophysics Data System (ADS)

    Titov, O. A.; Lopez, Yu. R.

    2018-03-01

    We consider a method of reconstructing the structure delay of extended radio sources without constructing their radio images. The residuals derived after the adjustment of geodetic VLBI observations are used for this purpose. We show that the simplest model of a radio source consisting of two point components can be represented by four parameters (the angular separation of the components, the mutual orientation relative to the poleward direction, the flux-density ratio, and the spectral index difference) that are determined for each baseline of a multi-baseline VLBI network. The efficiency of this approach is demonstrated by estimating the coordinates of the radio source 0014+813 observed during the two-week CONT14 program organized by the International VLBI Service (IVS) in May 2014. Large systematic deviations have been detected in the residuals of the observations for the radio source 0014+813. The averaged characteristics of the radio structure of 0014+813 at a frequency of 8.4 GHz can be calculated from these deviations. Our modeling using four parameters has confirmed that the source consists of two components at an angular separation of 0.5 mas in the north-south direction. Using the structure delay when adjusting the CONT14 observations leads to a correction of the average declination estimate for the radio source 0014+813 by 0.070 mas.

  5. A Model for Remote Depth Estimation of Buried Radioactive Wastes Using CdZnTe Detector

    PubMed Central

    2018-01-01

    This paper presents the results of an attenuation model for remote depth estimation of buried radioactive wastes using a Cadmium Zinc Telluride (CZT) detector. Previous research using an organic liquid scintillator detector system showed that the model is able to estimate the depth of a 329-kBq Cs-137 radioactive source buried up to 12 cm in sand with an average count rate of 100 cps. The results presented in this paper showed that the use of the CZT detector extended the maximum detectable depth of the same radioactive source to 18 cm in sand with a significantly lower average count rate of 14 cps. Furthermore, the model also successfully estimated the depth of a 9-kBq Co-60 source buried up to 3 cm in sand. This confirms that this remote depth estimation method can be used with other radionuclides and wastes with very low activity. Finally, the paper proposes a performance parameter for evaluating radiation detection systems that implement this remote depth estimation method. PMID:29783644
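
    In its simplest reading, the attenuation model reduces to inverting an exponential count-rate decay with depth. A minimal sketch, assuming C(d) = C0*exp(-mu*d) with hypothetical values for the surface count rate and the effective attenuation coefficient of sand:

    ```python
    # Minimal sketch of the exponential-attenuation depth estimate, assuming
    # C(d) = C0 * exp(-mu * d): count rate C at burial depth d, surface rate C0,
    # and an effective linear attenuation coefficient mu (value hypothetical).
    import numpy as np

    def depth_from_count_rate(c_obs, c0, mu):
        """Invert C = C0*exp(-mu*d) for the burial depth d (cm)."""
        return np.log(c0 / c_obs) / mu

    mu_sand = 0.12  # cm^-1, hypothetical effective value for Cs-137 in sand
    c0 = 100.0      # cps with the source at the surface (hypothetical)
    for c in (50.0, 25.0, 14.0):
        d = depth_from_count_rate(c, c0, mu_sand)
        print(f"count rate {c:5.1f} cps -> depth ~ {d:.1f} cm")
    ```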

  6. Improving the precision of lake ecosystem metabolism estimates by identifying predictors of model uncertainty

    USGS Publications Warehouse

    Rose, Kevin C.; Winslow, Luke A.; Read, Jordan S.; Read, Emily K.; Solomon, Christopher T.; Adrian, Rita; Hanson, Paul C.

    2014-01-01

    Diel changes in dissolved oxygen are often used to estimate gross primary production (GPP) and ecosystem respiration (ER) in aquatic ecosystems. Despite the widespread use of this approach to understand ecosystem metabolism, we are only beginning to understand the degree and underlying causes of uncertainty for metabolism model parameter estimates. Here, we present a novel approach to improve the precision and accuracy of ecosystem metabolism estimates by identifying physical metrics that indicate when metabolism estimates are highly uncertain. Using datasets from seventeen instrumented GLEON (Global Lake Ecological Observatory Network) lakes, we discovered that many physical characteristics correlated with uncertainty, including PAR (photosynthetically active radiation, 400-700 nm), daily variance in Schmidt stability, and wind speed. Low PAR was a consistent predictor of high variance in GPP model parameters, but also corresponded with low ER model parameter variance. We identified a threshold (30% of clear sky PAR) below which GPP parameter variance increased rapidly and was significantly greater in nearly all lakes compared with variance on days with PAR levels above this threshold. The relationship between daily variance in Schmidt stability and GPP model parameter variance depended on trophic status, whereas daily variance in Schmidt stability was consistently positively related to ER model parameter variance. Wind speeds in the range of ~0.8-3 m s–1 were consistent predictors of high variance for both GPP and ER model parameters, with greater uncertainty in eutrophic lakes. Our findings can be used to reduce ecosystem metabolism model parameter uncertainty and identify potential sources of that uncertainty.

  7. Extreme longevity in freshwater mussels revisited: sources of bias in age estimates derived from mark-recapture experiments

    Treesearch

    Wendell R. Haag

    2009-01-01

    There may be bias associated with mark–recapture experiments used to estimate age and growth of freshwater mussels. Using subsets of a mark–recapture dataset for Quadrula pustulosa, I examined how age and growth parameter estimates are affected by (i) the range and skew of the data and (ii) growth reduction due to handling. I compared predictions...

  8. Urban air quality estimation study, phase 1

    NASA Technical Reports Server (NTRS)

    Diamante, J. M.; Englar, T. S., Jr.; Jazwinski, A. H.

    1976-01-01

    Possibilities are explored for applying estimation theory to the analysis, interpretation, and use of air quality measurements in conjunction with simulation models to provide a cost effective method of obtaining reliable air quality estimates for wide urban areas. The physical phenomenology of real atmospheric plumes from elevated localized sources is discussed. A fluctuating plume dispersion model is derived. Individual plume parameter formulations are developed along with associated a priori information. Individual measurement models are developed.

  9. Refinement of Regional Distance Seismic Moment Tensor and Uncertainty Analysis for Source-Type Identification

    DTIC Science & Technology

    2011-09-01

    a NSS that lies in this negative explosion positive CLVD quadrant due to the large degree of tectonic release in this event that reversed the phase...Mellman (1986) in their analysis of fundamental model Love and Rayleigh wave amplitude and phase for nuclear and tectonic release source terms, and...1986). Estimating explosion and tectonic release source parameters of underground nuclear explosions from Rayleigh and Love wave observations, Air

  10. Localization of transient gravitational wave sources: beyond triangulation

    NASA Astrophysics Data System (ADS)

    Fairhurst, Stephen

    2018-05-01

    Rapid, accurate localization of gravitational wave transient events has proved critical to successful electromagnetic followup. In previous papers we have shown that localization estimates can be obtained through triangulation based on timing information at the detector sites. In practice, detailed parameter estimation routines use additional information and provide better localization than is possible based on timing information alone. In this paper, we extend the timing based localization approximation to incorporate consistency of observed signals with two gravitational wave polarizations, and an astrophysically motivated distribution of sources. Both of these provide significant improvements to source localization, allowing many sources to be restricted to a single sky region, with an area 40% smaller than predicted by timing information alone. Furthermore, we show that the vast majority of sources will be reconstructed to be circularly polarized or, equivalently, indistinguishable from face-on.

  11. Influence of conservative corrections on parameter estimation for extreme-mass-ratio inspirals

    NASA Astrophysics Data System (ADS)

    Huerta, E. A.; Gair, Jonathan R.

    2009-04-01

    We present an improved numerical kludge waveform model for circular, equatorial extreme-mass-ratio inspirals (EMRIs). The model is based on true Kerr geodesics, augmented by radiative self-force corrections derived from perturbative calculations, and in this paper for the first time we include conservative self-force corrections that we derive by comparison to post-Newtonian results. We present results of a Monte Carlo simulation of parameter estimation errors computed using the Fisher matrix and also assess the theoretical errors that would arise from omitting the conservative correction terms we include here. We present results for three different types of system, namely, the inspirals of black holes, neutron stars, or white dwarfs into a supermassive black hole (SMBH). The analysis shows that for a typical source (a 10 M⊙ compact object captured by a 10^6 M⊙ SMBH at a signal-to-noise ratio of 30) we expect to determine the two masses to within a fractional error of ~10^-4, measure the spin parameter q to ~10^-4.5, and determine the location of the source on the sky and the spin orientation to within 10^-3 steradians. We show that, for this kludge model, omitting the conservative corrections leads to a small error over much of the parameter space, i.e., the ratio R of the theoretical model error to the Fisher matrix error is R<1 for all ten parameters in the model. For the few systems with larger errors typically R<3 and hence the conservative corrections can be marginally ignored. In addition, we use our model and first-order self-force results for Schwarzschild black holes to estimate the error that arises from omitting the second-order radiative piece of the self-force. This indicates that it may not be necessary to go beyond first order to recover accurate parameter estimates.
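
    The Fisher-matrix step in such an analysis is generic: parameter covariances follow from inner products of waveform derivatives. A minimal sketch with a toy chirp signal and a white-noise inner product (not the numerical-kludge EMRI waveform) follows.

    ```python
    # Generic Fisher-matrix error estimation, assuming a toy chirp waveform and
    # white noise: Gamma_ij = (dh/dtheta_i | dh/dtheta_j); parameter errors are
    # sqrt(diag(Gamma^-1)). Signal model and values are hypothetical.
    import numpy as np

    t = np.linspace(0, 10, 4000)
    dt = t[1] - t[0]
    sigma_n = 1.0  # white-noise standard deviation per sample (assumed)

    def waveform(theta):
        amp, f0, fdot = theta  # toy chirp parameters
        return amp * np.sin(2 * np.pi * (f0 * t + 0.5 * fdot * t**2))

    def inner(a, b):
        return np.sum(a * b) * dt / sigma_n**2

    theta0 = np.array([1.0, 1.5, 0.02])
    derivs = []
    for i in range(len(theta0)):
        h = 1e-6 * abs(theta0[i])  # central finite-difference step
        dp, dm = theta0.copy(), theta0.copy()
        dp[i] += h
        dm[i] -= h
        derivs.append((waveform(dp) - waveform(dm)) / (2 * h))

    fisher = np.array([[inner(di, dj) for dj in derivs] for di in derivs])
    errors = np.sqrt(np.diag(np.linalg.inv(fisher)))
    print("1-sigma errors on (amp, f0, fdot):", errors)
    ```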

  12. Residential Demand Module - NEMS Documentation

    EIA Publications

    2017-01-01

    Model Documentation - Documents the objectives, analytical approach, and development of the National Energy Modeling System (NEMS) Residential Sector Demand Module. The report catalogues and describes the model assumptions, computational methodology, parameter estimation techniques, and FORTRAN source code.

  13. Mapping Curie temperature depth in the western United States with a fractal model for crustal magnetization

    USGS Publications Warehouse

    Bouligand, C.; Glen, J.M.G.; Blakely, R.J.

    2009-01-01

    We have revisited the problem of mapping depth to the Curie temperature isotherm from magnetic anomalies in an attempt to provide a measure of crustal temperatures in the western United States. Such methods are based on the estimation of the depth to the bottom of magnetic sources, which is assumed to correspond to the temperature at which rocks lose their spontaneous magnetization. In this study, we test and apply a method based on the spectral analysis of magnetic anomalies. Early spectral analysis methods assumed that crustal magnetization is a completely uncorrelated function of position. Our method incorporates a more realistic representation where magnetization has a fractal distribution defined by three independent parameters: the depths to the top and bottom of magnetic sources and a fractal parameter related to the geology. The predictions of this model are compatible with radial power spectra obtained from aeromagnetic data in the western United States. Model parameters are mapped by estimating their value within a sliding window swept over the study area. The method works well on synthetic data sets when one of the three parameters is specified in advance. The application of this method to western United States magnetic compilations, assuming a constant fractal parameter, allowed us to detect robust long-wavelength variations in the depth to the bottom of magnetic sources. Depending on the geologic and geophysical context, these features may result from variations in depth to the Curie temperature isotherm, depth to the mantle, depth to the base of volcanic rocks, or geologic settings that affect the value of the fractal parameter. Depth to the bottom of magnetic sources shows several features correlated with prominent heat flow anomalies. It also shows some features absent in the map of heat flow. Independent geophysical and geologic data sets are examined to determine their origin, thereby providing new insights on the thermal and geologic crustal structure of the western United States.
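
    For illustration, the sketch below applies the classical centroid variant of spectral depth estimation, which assumes random (non-fractal) magnetization: the depth to the top follows from the high-wavenumber slope of the log amplitude spectrum, and the centroid depth from the low-wavenumber slope, giving the depth to the bottom. The paper's actual method fits a three-parameter fractal-magnetization model instead; the synthetic spectrum and fitting bands here are hypothetical.

    ```python
    # Illustration only: classical centroid method for depth to the top (Zt)
    # and bottom (Zb) of magnetic sources from a radially averaged power
    # spectrum Phi(k), assuming random magnetization. The low-wavenumber slope
    # is approximate, so recovered Zb is itself approximate.
    import numpy as np

    def centroid_depths(k, phi, k_low, k_high):
        """Zt from the high-k slope of ln(sqrt(Phi)); centroid Z0 from the
        low-k slope of ln(sqrt(Phi)/k); Zb = 2*Z0 - Zt. Depths in km."""
        hi = (k >= k_high[0]) & (k <= k_high[1])
        lo = (k >= k_low[0]) & (k <= k_low[1])
        zt = -np.polyfit(k[hi], 0.5 * np.log(phi[hi]), 1)[0]
        z0 = -np.polyfit(k[lo], 0.5 * np.log(phi[lo]) - np.log(k[lo]), 1)[0]
        return zt, 2 * z0 - zt

    # Synthetic spectrum for sources between 1 km and 10 km depth
    k = np.linspace(0.02, 1.0, 200)  # rad/km
    zt_true, zb_true = 1.0, 10.0
    phi = np.exp(-2 * k * zt_true) * (1 - np.exp(-k * (zb_true - zt_true))) ** 2

    zt, zb = centroid_depths(k, phi, k_low=(0.02, 0.08), k_high=(0.5, 1.0))
    print(f"estimated Zt ~ {zt:.2f} km, Zb ~ {zb:.2f} km")
    ```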

  14. Robust estimation procedure in panel data model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shariff, Nurul Sima Mohamad; Hamzah, Nor Aishah

    2014-06-19

    The panel data modeling has received a great attention in econometric research recently. This is due to the availability of data sources and the interest to study cross sections of individuals observed over time. However, the problems may arise in modeling the panel in the presence of cross sectional dependence and outliers. Even though there are few methods that take into consideration the presence of cross sectional dependence in the panel, the methods may provide inconsistent parameter estimates and inferences when outliers occur in the panel. As such, an alternative method that is robust to outliers and cross sectional dependencemore » is introduced in this paper. The properties and construction of the confidence interval for the parameter estimates are also considered in this paper. The robustness of the procedure is investigated and comparisons are made to the existing method via simulation studies. Our results have shown that robust approach is able to produce an accurate and reliable parameter estimates under the condition considered.« less

  15. Sensitivity of drainage morphometry based hydrological response (GIUH) of a river basin to the spatial resolution of DEM data

    NASA Astrophysics Data System (ADS)

    Sahoo, Ramendra; Jain, Vikrant

    2018-02-01

    Drainage network pattern and its associated morphometric ratios are some of the important plan form attributes of a drainage basin. Extraction of these attributes for any basin is usually done by spatial analysis of the elevation data of that basin. These planform attributes are further used as input data for studying numerous process-response interactions within the physical domain of the basin. One important use of the morphometric ratios is in deriving the hydrologic response of a basin using the GIUH concept. Hence, the accuracy of the basin hydrological response to any storm event depends upon the accuracy with which the morphometric ratios can be estimated. This, in turn, is affected by the spatial resolution of the source data, i.e. the digital elevation model (DEM). We have estimated the sensitivity of the morphometric ratios and the GIUH derived hydrograph parameters to the resolution of the source data, using a 30 m and a 90 m DEM. The analysis has been carried out for 50 drainage basins in a mountainous catchment. A simple and comprehensive algorithm has been developed for estimation of the morphometric indices from a stream network. We have calculated all the morphometric parameters and the hydrograph parameters for each of these basins extracted from the two DEMs with different spatial resolutions. Paired t-test and Sign test were used for the comparison. Our results did not show any statistically significant difference for any of the parameters calculated from the two source datasets. Along with the comparative study, a first-hand empirical analysis of the frequency distribution of the morphometric and hydrologic response parameters is also presented. Further, a comparison with other hydrological models suggests that the planform-morphometry-based GIUH model is more robust to resolution variability than topography-based hydrological models.
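
    As a sketch of the quantities involved, the code below estimates Horton ratios from log-linear fits of stream-order statistics and plugs them into the standard GIUH peak-flow expressions of Rodríguez-Iturbe and Valdés (1979); the stream-order statistics and velocity are hypothetical, and the paper's own algorithm is not reproduced here.

    ```python
    # Sketch: Horton ratios from stream-order statistics (hypothetical values),
    # then the standard GIUH peak discharge qp (1/h) and time to peak tp (h).
    import numpy as np

    order = np.array([1, 2, 3, 4])
    n_streams = np.array([120, 28, 7, 1])         # streams per order (hypothetical)
    mean_len = np.array([0.6, 1.3, 2.9, 6.5])     # mean stream length, km (hypothetical)
    mean_area = np.array([0.8, 3.5, 15.0, 64.0])  # mean contributing area, km^2 (hypothetical)

    # Horton ratios from slopes of log-linear fits against order
    RB = np.exp(-np.polyfit(order, np.log(n_streams), 1)[0])  # bifurcation ratio
    RL = np.exp(np.polyfit(order, np.log(mean_len), 1)[0])    # length ratio
    RA = np.exp(np.polyfit(order, np.log(mean_area), 1)[0])   # area ratio

    L_omega = mean_len[-1]  # highest-order stream length (km)
    V = 1.5                 # characteristic velocity (m/s), assumed

    # GIUH peak characteristics; L_omega/(3.6*V) converts km and m/s to hours
    qp = 1.31 * RL**0.43 * V / (L_omega / 3.6)
    tp = 0.44 * (L_omega / 3.6 / V) * (RB / RA)**0.55 * RL**(-0.38)
    print(f"RB={RB:.2f} RL={RL:.2f} RA={RA:.2f}  qp={qp:.3f} 1/h, tp={tp:.2f} h")
    ```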

  16. Accounting for uncertainty in model-based prevalence estimation: paratuberculosis control in dairy herds.

    PubMed

    Davidson, Ross S; McKendrick, Iain J; Wood, Joanna C; Marion, Glenn; Greig, Alistair; Stevenson, Karen; Sharp, Michael; Hutchings, Michael R

    2012-09-10

    A common approach to the application of epidemiological models is to determine a single (point estimate) parameterisation using the information available in the literature. However, in many cases there is considerable uncertainty about parameter values, reflecting both the incomplete nature of current knowledge and natural variation, for example between farms. Furthermore model outcomes may be highly sensitive to different parameter values. Paratuberculosis is an infection for which many of the key parameter values are poorly understood and highly variable, and for such infections there is a need to develop and apply statistical techniques which make maximal use of available data. A technique based on Latin hypercube sampling combined with a novel reweighting method was developed which enables parameter uncertainty and variability to be incorporated into a model-based framework for estimation of prevalence. The method was evaluated by applying it to a simulation of paratuberculosis in dairy herds which combines a continuous time stochastic algorithm with model features such as within herd variability in disease development and shedding, which have not been previously explored in paratuberculosis models. Generated sample parameter combinations were assigned a weight, determined by quantifying the model's resultant ability to reproduce prevalence data. Once these weights are generated the model can be used to evaluate other scenarios such as control options. To illustrate the utility of this approach these reweighted model outputs were used to compare standard test and cull control strategies both individually and in combination with simple husbandry practices that aim to reduce infection rates. The technique developed has been shown to be applicable to a complex model incorporating realistic control options. For models where parameters are not well known or subject to significant variability, the reweighting scheme allowed estimated distributions of parameter values to be combined with additional sources of information, such as that available from prevalence distributions, resulting in outputs which implicitly handle variation and uncertainty. This methodology allows for more robust predictions from modelling approaches by allowing for parameter uncertainty and combining different sources of information, and is thus expected to be useful in application to a large number of disease systems.
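
    The core of the reweighting idea can be sketched compactly: draw parameter sets by Latin hypercube sampling, weight each by the likelihood that the model reproduces observed prevalence, and use the weighted ensemble for subsequent predictions. The toy prevalence model and data below are placeholders for the paper's stochastic herd simulation.

    ```python
    # Sketch of Latin hypercube sampling plus likelihood reweighting, with a toy
    # prevalence model standing in for the herd simulation; values hypothetical.
    import numpy as np
    from scipy.stats import qmc

    # Sample two uncertain parameters (transmission rate, test sensitivity)
    sampler = qmc.LatinHypercube(d=2, seed=1)
    unit = sampler.random(n=2000)
    params = qmc.scale(unit, l_bounds=[0.01, 0.5], u_bounds=[0.5, 0.95])

    def model_prevalence(beta, sens):
        # Toy stand-in for the model's predicted apparent prevalence
        return sens * beta / (beta + 0.1)

    pred = model_prevalence(params[:, 0], params[:, 1])

    # Weight each parameter set by its likelihood of reproducing observed
    # prevalence data (binomial: 30 positives of 400 tested, hypothetical)
    k, n = 30, 400
    log_w = k * np.log(pred) + (n - k) * np.log1p(-pred)
    w = np.exp(log_w - log_w.max())
    w /= w.sum()

    # Reweighted summaries: posterior-like estimates that combine the sampled
    # prior ranges with the prevalence information
    print("weighted mean transmission rate:", np.sum(w * params[:, 0]))
    print("effective sample size:", 1.0 / np.sum(w**2))
    ```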

  17. An adaptive Bayesian inference algorithm to estimate the parameters of a hazardous atmospheric release

    NASA Astrophysics Data System (ADS)

    Rajaona, Harizo; Septier, François; Armand, Patrick; Delignon, Yves; Olry, Christophe; Albergel, Armand; Moussafir, Jacques

    2015-12-01

    In the event of an accidental or intentional atmospheric release, the reconstruction of the source term using measurements from a set of sensors is an important and challenging inverse problem. A rapid and accurate estimation of the source allows faster and more efficient action for first-response teams, in addition to providing better damage assessment. This paper presents a Bayesian probabilistic approach to estimate the location and the temporal emission profile of a pointwise source. The release rate is evaluated analytically by using a Gaussian assumption on its prior distribution, and is enhanced with a positivity constraint to improve the estimation. The source location is obtained by means of an advanced iterative Monte-Carlo technique called Adaptive Multiple Importance Sampling (AMIS), which uses a recycling process at each iteration to accelerate its convergence. The proposed methodology is tested using synthetic and real concentration data in the framework of the Fusion Field Trials 2007 (FFT-07) experiment. The quality of the obtained results is comparable to those coming from the Markov Chain Monte Carlo (MCMC) algorithm, a popular Bayesian method used for source estimation. Moreover, the adaptive processing of the AMIS provides a better sampling efficiency by reusing all the generated samples.
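
    A simplified sketch of the AMIS recycling scheme: at each iteration, samples are drawn from the current Gaussian proposal, all past samples are reweighted against the deterministic mixture of all proposals used so far, and the proposal moments are re-estimated from the weighted sample. The toy target below stands in for the posterior over source location implied by the dispersion model and sensor data.

    ```python
    # Simplified AMIS sketch for a 2-D source location with a Gaussian proposal
    # and deterministic-mixture reweighting; the target is a toy stand-in.
    import numpy as np
    from scipy.stats import multivariate_normal as mvn

    rng = np.random.default_rng(2)

    def log_target(xy):
        # Toy log-posterior over source location (in practice derived from the
        # dispersion model and sensor measurements)
        return mvn.logpdf(xy, mean=[3.0, -1.0], cov=[[1.0, 0.6], [0.6, 2.0]])

    mean, cov = np.zeros(2), 25.0 * np.eye(2)  # broad initial proposal
    proposals, samples = [], []
    n_per_iter, n_iter = 500, 10

    for t in range(n_iter):
        proposals.append((mean.copy(), cov.copy()))
        samples.append(rng.multivariate_normal(mean, cov, size=n_per_iter))
        x = np.vstack(samples)
        # Deterministic-mixture weights: recycle every sample from all iterations
        mix = np.mean([mvn.pdf(x, m, c) for m, c in proposals], axis=0)
        w = np.exp(log_target(x)) / mix
        w /= w.sum()
        # Adapt the proposal to the weighted sample moments
        mean = w @ x
        cov = (x - mean).T @ ((x - mean) * w[:, None]) + 1e-6 * np.eye(2)

    print("posterior mean estimate:", mean)
    ```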

  18. Analytical magmatic source modelling from a joint inversion of ground deformation and focal mechanisms data

    NASA Astrophysics Data System (ADS)

    Cannavo', Flavio; Scandura, Danila; Palano, Mimmo; Musumeci, Carla

    2014-05-01

    Seismicity and ground deformation represent the principal geophysical methods for volcano monitoring and provide important constraints on subsurface magma movements. The occurrence of migrating seismic swarms, as observed at several volcanoes worldwide, is commonly associated with dike intrusions. In addition, on active volcanoes, (de)pressurization and/or intrusion of magmatic bodies stress and deform the surrounding crustal rocks, often causing earthquakes randomly distributed in time within a volume extending about 5-10 km from the walls of the magmatic bodies. Although advances in space-based geodetic and seismic networks have significantly improved volcano monitoring over the last decades at an increasing number of volcanoes worldwide, quantitative models relating deformation and seismicity are not common. The observation of several episodes of volcanic unrest throughout the world, where the movement of magma through the shallow crust was able to produce local rotation of the ambient stress field, introduces an opportunity to improve the estimate of the parameters of a deformation source. In particular, during these episodes of volcanic unrest a radial pattern of P-axes of the focal mechanism solutions, similar to that of ground deformation, has been observed. Therefore, taking into account additional information from focal mechanism data, we propose a novel approach to volcanic source modeling based on the joint inversion of deformation and focal plane solutions, assuming that both observations are due to the same source. The methodology is first verified against a synthetic dataset of surface deformation and strain within the medium, and then applied to real data from an unrest episode that occurred before the May 13th 2008 eruption at Mt. Etna (Italy). The main results clearly indicate that the joint inversion improves the accuracy of the estimated source parameters by about 70%. The statistical tests indicate that the source depth is the parameter with the highest increase in accuracy. In addition, a sensitivity analysis confirms that displacement data are more useful for constraining the pressure and the horizontal location of the source than its depth, while the P-axes better constrain the depth estimation.
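
    As a sketch of the deformation half of such an inversion, the code below implements the Mogi point-source forward model and a least-squares fit to synthetic surface displacements; the joint use of focal-mechanism P-axis orientations, which is the paper's contribution, is omitted, and the geometry and noise level are hypothetical.

    ```python
    # Sketch: Mogi point-source forward model and least-squares inversion of
    # synthetic surface displacements (deformation data only).
    import numpy as np
    from scipy.optimize import least_squares

    nu = 0.25  # Poisson's ratio

    def mogi_disp(x, y, xs, ys, d, dV):
        """Surface displacements (ux, uy, uz) of a Mogi source at (xs, ys),
        depth d, volume change dV: u = (1-nu)*dV/pi * (dx, dy, d)/R^3."""
        dx, dy = x - xs, y - ys
        R3 = (dx**2 + dy**2 + d**2) ** 1.5
        c = (1 - nu) * dV / np.pi
        return c * dx / R3, c * dy / R3, c * d / R3

    # Synthetic GPS network and data from a "true" source
    xg, yg = np.meshgrid(np.linspace(-10e3, 10e3, 5), np.linspace(-10e3, 10e3, 5))
    x, y = xg.ravel(), yg.ravel()
    true = (1e3, -2e3, 4e3, 2e6)  # xs, ys, depth (m), dV (m^3)
    obs = np.concatenate(mogi_disp(x, y, *true))
    obs += np.random.default_rng(3).normal(0, 1e-3, obs.size)  # ~1 mm noise

    def residuals(p):
        return np.concatenate(mogi_disp(x, y, *p)) - obs

    fit = least_squares(residuals, x0=[0.0, 0.0, 5e3, 1e6])
    print("estimated xs, ys, depth, dV:", fit.x)
    ```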

  19. Identification of sensitive parameters in the modeling of SVOC reemission processes from soil to atmosphere.

    PubMed

    Loizeau, Vincent; Ciffroy, Philippe; Roustan, Yelva; Musson-Genon, Luc

    2014-09-15

    Semi-volatile organic compounds (SVOCs) are subject to Long-Range Atmospheric Transport because of successive transport-deposition-reemission processes. Several experimental data available in the literature suggest that soil is a non-negligible contributor of SVOCs to the atmosphere. Coupling soil and atmosphere in integrated models and simulating reemission processes can therefore be essential for estimating the atmospheric concentrations of several pollutants. However, the sources of uncertainty and variability are multiple (soil properties, meteorological conditions, chemical-specific parameters) and can significantly influence the determination of reemissions. In order to identify the key parameters in reemission modeling and their effect on global modeling uncertainty, we conducted a sensitivity analysis targeted on the 'reemission' output variable. Different parameters were tested, including soil properties, partition coefficients and meteorological conditions. We performed EFAST sensitivity analysis for four chemicals (benzo-a-pyrene, hexachlorobenzene, PCB-28 and lindane) and different spatial scenarios (regional and continental scales). Partition coefficients between air, solid and water phases are influential, depending on the precision of the data and the global behavior of the chemical. Reemissions showed lower sensitivity to soil parameters (soil organic matter and water contents at field capacity and wilting point). A mapping of these parameters at a regional scale is sufficient to correctly estimate reemissions when compared to other sources of uncertainty.
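
    An EFAST analysis of this kind can be set up with the SALib package, as sketched below with a toy reemission function; the parameter names, bounds, and model are placeholders rather than the paper's configuration.

    ```python
    # Sketch of an EFAST sensitivity analysis using SALib, with a toy
    # reemission function in place of the soil-atmosphere exchange model.
    import numpy as np
    from SALib.sample import fast_sampler
    from SALib.analyze import fast

    problem = {
        "num_vars": 3,
        "names": ["log_Koa", "soil_organic_matter", "soil_water_content"],
        "bounds": [[6.0, 12.0], [0.01, 0.10], [0.05, 0.40]],  # placeholders
    }

    X = fast_sampler.sample(problem, 1000)

    def toy_reemission(x):
        log_koa, f_om, theta_w = x
        # Toy volatilization flux: decreases with sorption, modulated by moisture
        return np.exp(-0.5 * log_koa) * (1.0 / f_om) * (1.0 + theta_w)

    Y = np.apply_along_axis(toy_reemission, 1, X)
    Si = fast.analyze(problem, Y)
    for name, s1, st in zip(problem["names"], Si["S1"], Si["ST"]):
        print(f"{name:22s} S1={s1:.2f} ST={st:.2f}")
    ```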

  20. Estimation of gross land-use change and its uncertainty using a Bayesian data assimilation approach

    NASA Astrophysics Data System (ADS)

    Levy, Peter; van Oijen, Marcel; Buys, Gwen; Tomlinson, Sam

    2018-03-01

    We present a method for estimating land-use change using a Bayesian data assimilation approach. The approach provides a general framework for combining multiple disparate data sources with a simple model. This allows us to constrain estimates of gross land-use change with reliable national-scale census data, whilst retaining the detailed information available from several other sources. Eight different data sources, with three different data structures, were combined in our posterior estimate of land use and land-use change, and other data sources could easily be added in future. The tendency for observations to underestimate gross land-use change is accounted for by allowing for a skewed distribution in the likelihood function. The data structure produced has high temporal and spatial resolution, and is appropriate for dynamic process-based modelling. Uncertainty is propagated appropriately into the output, so we have a full posterior distribution of output and parameters. The data are available in the widely used netCDF file format from http://eidc.ceh.ac.uk/.

  1. In Search of the Largest Possible Tsunami: An Example Following the 2011 Japan Tsunami

    NASA Astrophysics Data System (ADS)

    Geist, E. L.; Parsons, T.

    2012-12-01

    Many tsunami hazard assessments focus on estimating the largest possible tsunami: i.e., the worst-case scenario. This is typically performed by examining historic and prehistoric tsunami data or by estimating the largest source that can produce a tsunami. We demonstrate that worst-case assessments derived from tsunami and tsunami-source catalogs are greatly affected by sampling bias. Both tsunami and tsunami sources are well represented by a Pareto distribution. It is intuitive to assume that there is some limiting size (i.e., runup or seismic moment) for which a Pareto distribution is truncated or tapered. Likelihood methods are used to determine whether a limiting size can be determined from existing catalogs. Results from synthetic catalogs indicate that several observations near the limiting size are needed for accurate parameter estimation. Accordingly, the catalog length needed to empirically determine the limiting size is dependent on the difference between the limiting size and the observation threshold, with larger catalog lengths needed for larger limiting-threshold size differences. Most, if not all, tsunami catalogs and regional tsunami source catalogs are of insufficient length to determine the upper bound on tsunami runup. As an example, estimates of the empirical tsunami runup distribution are obtained from the Miyako tide gauge station in Japan, which recorded the 2011 Tohoku-oki tsunami as the largest tsunami among 51 other events. Parameter estimation using a tapered Pareto distribution is made both with and without the Tohoku-oki event. The catalog without the 2011 event appears to have a low limiting tsunami runup. However, this is an artifact of undersampling. Including the 2011 event, the catalog conforms more to a pure Pareto distribution with no confidence in estimating a limiting runup. Estimating the size distribution of regional tsunami sources is subject to the same sampling bias. Physical attenuation mechanisms such as wave breaking likely limit the maximum tsunami runup at a particular site. However, historic and prehistoric data alone cannot determine the upper bound on tsunami runup. Because of problems endemic to sampling Pareto distributions of tsunamis and their sources, we recommend that tsunami hazard assessment be based on a specific design probability of exceedance following a pure Pareto distribution, rather than attempting to determine the worst-case scenario.
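
    The tapered Pareto fitting step can be sketched as follows, using the usual form with survival function S(x) = (xt/x)^beta * exp((xt - x)/theta) above a completeness threshold xt; the synthetic catalog below stands in for the tide-gauge runup data.

    ```python
    # Sketch: maximum-likelihood fit of a tapered Pareto distribution,
    # f(x) = (beta/x + 1/theta) * (xt/x)^beta * exp((xt - x)/theta), x >= xt.
    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(4)
    xt = 0.5  # observation threshold (completeness level), assumed

    # Exact synthetic sampling: the minimum of a pure Pareto variate and a
    # shifted exponential variate has the tapered Pareto survival function
    beta_true, theta_true = 1.0, 10.0
    n = 5000
    x_par = xt * rng.uniform(size=n) ** (-1 / beta_true)
    x_exp = xt + rng.exponential(theta_true, n)
    x = np.minimum(x_par, x_exp)

    def nll(p):
        beta, theta = p
        if beta <= 0 or theta <= 0:
            return np.inf
        return -np.sum(np.log(beta / x + 1 / theta)
                       + beta * np.log(xt / x) + (xt - x) / theta)

    fit = minimize(nll, x0=[0.5, 5.0], method="Nelder-Mead")
    print("estimated beta, theta (taper/corner size):", fit.x)
    ```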

  2. The structure of the ISM in the Zone of Avoidance by high-resolution multi-wavelength observations

    NASA Astrophysics Data System (ADS)

    Tóth, L. V.; Doi, Y.; Pinter, S.; Kovács, T.; Zahorecz, S.; Bagoly, Z.; Balázs, L. G.; Horvath, I.; Racz, I. I.; Onishi, T.

    2018-05-01

    We estimate the column density of the Galactic foreground interstellar medium (GFISM) in the direction of extragalactic sources. All-sky AKARI FIS infrared survey data can be used to trace the GFISM with a resolution of 2 arcminutes. The AKARI-based GFISM hydrogen column density estimates are compared with similar quantities based on HI 21cm measurements of various resolutions and with Planck results. High spatial resolution observations of the GFISM may be important for recalculating the physical parameters of gamma-ray burst (GRB) host galaxies using the updated foreground parameters.

  3. Using a pseudo-dynamic source inversion approach to improve earthquake source imaging

    NASA Astrophysics Data System (ADS)

    Zhang, Y.; Song, S. G.; Dalguer, L. A.; Clinton, J. F.

    2014-12-01

    Imaging a high-resolution spatio-temporal slip distribution of an earthquake rupture is a core research goal in seismology. In general, we expect to obtain a higher quality source image by improving the observational input data (e.g. using more, higher-quality near-source stations). However, recent studies show that increasing the surface station density alone does not significantly improve source inversion results (Custodio et al. 2005; Zhang et al. 2014). We introduce correlation structures between the kinematic source parameters: slip, rupture velocity, and peak slip velocity (Song et al. 2009; Song and Dalguer 2013) in the non-linear source inversion. The correlation structures are physical constraints derived from rupture dynamics that effectively regularize the model space and may improve source imaging. We name this approach pseudo-dynamic source inversion. We investigate the effectiveness of this pseudo-dynamic source inversion method by inverting low frequency velocity waveforms from a synthetic dynamic rupture model of a buried vertical strike-slip event (Mw 6.5) in a homogeneous half space. In the inversion, we use a genetic algorithm in a Bayesian framework (Monelli et al. 2008), and a dynamically consistent regularized Yoffe function (Tinti et al. 2005) is used as a single-window slip velocity function. We search for local rupture velocity directly in the inversion, and calculate the rupture time using a ray-tracing technique. We implement both auto- and cross-correlation of slip, rupture velocity, and peak slip velocity in the prior distribution. Our results suggest that the kinematic source model estimates capture the major features of the target dynamic model. The estimated rupture velocity closely matches the target distribution from the dynamic rupture model, and the derived rupture time is smoother than the one searched for directly. By implementing both auto- and cross-correlation of kinematic source parameters, in comparison to traditional smoothing constraints, we are in effect regularizing the model space in a more physics-based manner without losing resolution of the source image. Further investigation is needed to tune the related parameters of pseudo-dynamic source inversion and the relative weighting between the prior and the likelihood function in the Bayesian inversion.

  4. The impacts of non-renewable and renewable energy on CO2 emissions in Turkey.

    PubMed

    Bulut, Umit

    2017-06-01

    As a result of great increases in CO2 emissions in the last few decades, many papers in the energy economics literature have examined the relationship between renewable energy and CO2 emissions, because as a clean energy source, renewable energy can reduce CO2 emissions and solve environmental problems stemming from increases in CO2 emissions. A review of these papers shows that they employ fixed parameter estimation methods and ignore the time-varying effects of non-renewable and renewable energy consumption/production on greenhouse gas emissions. To fill this gap in the literature, this paper examines the effects of non-renewable and renewable energy on CO2 emissions in Turkey over the period 1970-2013 by employing both fixed parameter and time-varying parameter estimation methods. The estimation methods reveal that CO2 emissions are positively related to both non-renewable and renewable energy in Turkey. Since policy makers expect renewable energy to decrease CO2 emissions, this paper argues that renewable energy is not able to satisfy the expectations of policy makers, even though producing electricity from renewable sources emits less CO2. In conclusion, the paper argues that policy makers should implement long-term energy policies in Turkey.

  5. SNPGenie: estimating evolutionary parameters to detect natural selection using pooled next-generation sequencing data.

    PubMed

    Nelson, Chase W; Moncla, Louise H; Hughes, Austin L

    2015-11-15

    New applications of next-generation sequencing technologies use pools of DNA from multiple individuals to estimate population genetic parameters. However, no publicly available tools exist to analyse single-nucleotide polymorphism (SNP) calling results directly for evolutionary parameters important in detecting natural selection, including nucleotide diversity and gene diversity. We have developed SNPGenie to fill this gap. The user submits a FASTA reference sequence(s), a Gene Transfer Format (.GTF) file with CDS information and a SNP report(s) in an increasing selection of formats. The program estimates nucleotide diversity, distance from the reference and gene diversity. Sites are flagged for multiple overlapping reading frames, and are categorized by polymorphism type: nonsynonymous, synonymous, or ambiguous. The results allow single nucleotide, single codon, sliding window, whole gene and whole genome/population analyses that aid in the detection of positive and purifying natural selection in the source population. SNPGenie version 1.2 is a Perl program with no additional dependencies. It is free, open-source, and available for download at https://github.com/hugheslab/snpgenie. Supplementary data are available at Bioinformatics online.

  6. JuPOETs: a constrained multiobjective optimization approach to estimate biochemical model ensembles in the Julia programming language.

    PubMed

    Bassen, David M; Vilkhovoy, Michael; Minot, Mason; Butcher, Jonathan T; Varner, Jeffrey D

    2017-01-25

    Ensemble modeling is a promising approach for obtaining robust predictions and coarse-grained population behavior in deterministic mathematical models. Ensemble approaches address model uncertainty by using parameter or model families instead of single best-fit parameters or fixed model structures. Parameter ensembles can be selected based upon simulation error, along with other criteria such as diversity or steady-state performance. Simulations using parameter ensembles can estimate confidence intervals on model variables, and robustly constrain model predictions, despite having many poorly constrained parameters. In this software note, we present a multiobjective-based technique to estimate parameter or model ensembles, the Pareto Optimal Ensemble Technique in the Julia programming language (JuPOETs). JuPOETs integrates simulated annealing with Pareto optimality to estimate ensembles on or near the optimal tradeoff surface between competing training objectives. We demonstrate JuPOETs on a suite of multiobjective problems, including test functions with parameter bounds and system constraints, as well as the identification of a proof-of-concept biochemical model with four conflicting training objectives. JuPOETs identified optimal or near-optimal solutions approximately six-fold faster than a corresponding implementation in Octave for the suite of test functions. For the proof-of-concept biochemical model, JuPOETs produced an ensemble of parameters that captured the mean of the training data for conflicting data sets, while simultaneously estimating parameter sets that performed well on each of the individual objective functions. JuPOETs is a promising approach for the estimation of parameter and model ensembles using multiobjective optimization. JuPOETs can be adapted to solve many problem types, including mixed binary and continuous variable types, bilevel optimization problems and constrained problems without altering the base algorithm. JuPOETs is open source, available under an MIT license, and can be installed using the Julia package manager from the JuPOETs GitHub repository.

  7. Data Sources for Estimating Environment-Related Diseases

    PubMed Central

    Walker, Bailus

    1984-01-01

    Relating current morbidity and mortality to environmental and occupational factors requires information on parameters of environmental exposure for practitioners of medicine and other health scientists. A fundamental source of that information is the exposure history recorded in hospitals, clinics, and other points of entry to the health care system. The qualitative and quantitative aspects of this issue are reviewed. PMID:6716500

  8. Estimation of the dynamics and rate of transmission of classical swine fever (hog cholera) in wild pigs.

    PubMed Central

    Hone, J.; Pech, R.; Yip, P.

    1992-01-01

    Infectious diseases establish in a population of wildlife hosts when the number of secondary infections is greater than or equal to one. Estimating whether establishment will occur requires extensive experience or a mathematical model of disease dynamics and estimates of the parameters of the disease model. The latter approach is explored here. Methods for estimating key model parameters, the transmission coefficient (beta) and the basic reproductive rate (R0), are described using classical swine fever (hog cholera) in wild pigs as an example. The tentative results indicate that an acute infection of classical swine fever will establish in a small population of wild pigs. Data required for estimation of disease transmission rates are reviewed, and sources of bias and alternative methods are discussed. A comprehensive evaluation of the biases and efficiencies of the methods is needed. PMID:1582476
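
    Once the rates are estimated, the establishment criterion reduces to a threshold computation. A minimal sketch, using the standard wildlife-disease expression R0 = beta*N/(alpha + b + gamma) with purely hypothetical rate values (not the paper's estimates for classical swine fever):

    ```python
    # Minimal sketch of the establishment threshold for a wildlife host
    # population; all rate values are hypothetical.
    beta = 0.0035   # transmission coefficient (per host per day), hypothetical
    alpha = 0.10    # disease-induced mortality rate (per day), hypothetical
    b = 0.001       # natural mortality rate (per day), hypothetical
    gamma = 0.05    # recovery rate (per day), hypothetical
    N = 100         # host population size

    R0 = beta * N / (alpha + b + gamma)
    N_T = (alpha + b + gamma) / beta  # threshold host density for establishment
    print(f"R0 = {R0:.2f}; establishes (R0 >= 1): {R0 >= 1}; threshold N_T = {N_T:.0f}")
    ```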

  9. World Energy Projection System Plus Model Documentation: Coal Module

    EIA Publications

    2011-01-01

    This report documents the objectives, analytical approach and development of the World Energy Projection System Plus (WEPS+) Coal Model. It also catalogues and describes critical assumptions, computational methodology, parameter estimation techniques, and model source code.

  10. World Energy Projection System Plus Model Documentation: Transportation Module

    EIA Publications

    2017-01-01

    This report documents the objectives, analytical approach and development of the World Energy Projection System Plus (WEPS+) International Transportation model. It also catalogues and describes critical assumptions, computational methodology, parameter estimation techniques, and model source code.

  11. World Energy Projection System Plus Model Documentation: Residential Module

    EIA Publications

    2016-01-01

    This report documents the objectives, analytical approach and development of the World Energy Projection System Plus (WEPS+) Residential Model. It also catalogues and describes critical assumptions, computational methodology, parameter estimation techniques, and model source code.

  12. World Energy Projection System Plus Model Documentation: Refinery Module

    EIA Publications

    2016-01-01

    This report documents the objectives, analytical approach and development of the World Energy Projection System Plus (WEPS+) Refinery Model. It also catalogues and describes critical assumptions, computational methodology, parameter estimation techniques, and model source code.

  13. World Energy Projection System Plus Model Documentation: Main Module

    EIA Publications

    2016-01-01

    This report documents the objectives, analytical approach and development of the World Energy Projection System Plus (WEPS+) Main Model. It also catalogues and describes critical assumptions, computational methodology, parameter estimation techniques, and model source code.

  14. Transportation Sector Module - NEMS Documentation

    EIA Publications

    2017-01-01

    Documents the objectives, analytical approach and development of the National Energy Modeling System (NEMS) Transportation Model (TRAN). The report catalogues and describes the model assumptions, computational methodology, parameter estimation techniques, model source code, and forecast results generated by the model.

  15. World Energy Projection System Plus Model Documentation: Electricity Module

    EIA Publications

    2017-01-01

    This report documents the objectives, analytical approach and development of the World Energy Projection System Plus (WEPS+) World Electricity Model. It also catalogues and describes critical assumptions, computational methodology, parameter estimation techniques, and model source code.

  16. Maximum-likelihood methods in wavefront sensing: stochastic models and likelihood functions

    PubMed Central

    Barrett, Harrison H.; Dainty, Christopher; Lara, David

    2008-01-01

    Maximum-likelihood (ML) estimation in wavefront sensing requires careful attention to all noise sources and all factors that influence the sensor data. We present detailed probability density functions for the output of the image detector in a wavefront sensor, conditional not only on wavefront parameters but also on various nuisance parameters. Practical ways of dealing with nuisance parameters are described, and final expressions for likelihoods and Fisher information matrices are derived. The theory is illustrated by discussing Shack–Hartmann sensors, and computational requirements are discussed. Simulation results show that ML estimation can significantly increase the dynamic range of a Shack–Hartmann sensor with four detectors and that it can reduce the residual wavefront error when compared with traditional methods. PMID:17206255

  17. Generation of dynamo waves by spatially separated sources in the Earth and other celestial bodies

    NASA Astrophysics Data System (ADS)

    Popova, E.

    2017-12-01

    The amplitude and the spatial configuration of planetary and stellar magnetic fields can change over the years. Celestial bodies can have cyclic, chaotic, or time-invariant magnetic activity, which is connected with a dynamo mechanism. This mechanism is based on the joint influence of the alpha-effect and differential rotation. Dynamo sources can be located at different depths (active layers) of the celestial body and can have different intensities. Application of this concept allows us to obtain different forms of solutions, some of which include waves propagating inside the celestial body. We showed analytically that, in the case of spatially separated sources of magnetic field, each source generates a wave whose frequency depends on the physical parameters of that source. We estimated the source parameters required for the generation of nondecaying waves. We discuss the structure of such sources and the motion of matter (including meridional circulation) in the liquid outer core of the Earth and in the active layers of other celestial bodies.

  18. An improved state-parameter analysis of ecosystem models using data assimilation

    USGS Publications Warehouse

    Chen, M.; Liu, S.; Tieszen, L.L.; Hollinger, D.Y.

    2008-01-01

    Much of the effort spent in developing data assimilation methods for carbon dynamics analysis has focused on estimating optimal values for either model parameters or state variables. The main weakness of estimating parameter values alone (i.e., without considering state variables) is that all errors from input, output, and model structure are attributed to model parameter uncertainties. On the other hand, the accuracy of estimating state variables may be lowered if the temporal evolution of parameter values is not incorporated. This research develops a smoothed ensemble Kalman filter (SEnKF) by combining the ensemble Kalman filter with a kernel smoothing technique. SEnKF has the following characteristics: (1) it estimates model states and parameters simultaneously by concatenating unknown parameters and state variables into a joint state vector; (2) it mitigates dramatic, sudden changes of parameter values in the parameter sampling and evolution process, and controls the narrowing of parameter variance, which results in filter divergence, by adjusting the smoothing factor in the kernel smoothing algorithm; (3) it assimilates data into the model recursively and thus detects possible time variation of parameters; and (4) it properly addresses the various sources of uncertainty stemming from input, output and parameter uncertainties. The SEnKF is tested by assimilating observed fluxes of carbon dioxide and environmental driving factor data from an AmeriFlux forest station located near Howland, Maine, USA, into a partition eddy flux model. Our analysis demonstrates that model parameters, such as light use efficiency, respiration coefficients, minimum and optimum temperatures for photosynthetic activity, and others, are highly constrained by eddy flux data at daily-to-seasonal time scales. The SEnKF stabilizes parameter values quickly regardless of the initial values of the parameters. Potential ecosystem light use efficiency demonstrates a strong seasonality. Results show that the simultaneous parameter estimation procedure significantly improves model predictions. Results also show that the SEnKF can dramatically reduce the variance in state variables stemming from the uncertainty of parameters and driving variables. The SEnKF is a robust and effective algorithm for evaluating and developing ecosystem models and for improving the understanding and quantification of carbon cycle parameters and processes.
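
    The joint state-parameter idea, characteristic (1) above, can be sketched with a plain augmented-state ensemble Kalman filter (omitting the paper's kernel smoothing of the parameter ensemble); the scalar toy model and all values below are hypothetical.

    ```python
    # Sketch of joint state-parameter estimation with an augmented-state EnKF:
    # the unknown parameter is appended to the state vector. Toy scalar model.
    import numpy as np

    rng = np.random.default_rng(5)
    n_ens, n_steps = 100, 200
    a_true, obs_err = 0.9, 0.5

    # Truth run and observations of the state only
    obs = np.zeros(n_steps)
    x = 1.0
    for t in range(n_steps):
        x = a_true * x + rng.normal(0, 0.1)
        obs[t] = x + rng.normal(0, obs_err)

    # Augmented ensemble: column 0 = state, column 1 = parameter a
    ens = np.column_stack([rng.normal(0, 1, n_ens), rng.normal(0.5, 0.3, n_ens)])
    for t in range(n_steps):
        # Forecast: propagate the state with each member's own parameter value
        ens[:, 0] = ens[:, 1] * ens[:, 0] + rng.normal(0, 0.1, n_ens)
        # Analysis: Kalman update of the joint vector from ensemble covariances
        H = np.array([1.0, 0.0])              # we observe the state only
        P = np.cov(ens.T)
        K = P @ H / (H @ P @ H + obs_err**2)  # Kalman gain for (state, param)
        innov = obs[t] + rng.normal(0, obs_err, n_ens) - ens[:, 0]  # perturbed obs
        ens += np.outer(innov, K)

    print(f"estimated parameter a = {ens[:, 1].mean():.3f} +/- {ens[:, 1].std():.3f}")
    ```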

  19. Simultaneous inversion of intrinsic and scattering attenuation parameters incorporating multiple scattering effect

    NASA Astrophysics Data System (ADS)

    Ogiso, M.

    2017-12-01

    Heterogeneous attenuation structure is important not only for understanding the Earth's structure and seismotectonics, but also for ground motion prediction. Attenuation of ground motion in the high-frequency range is often characterized by the distribution of intrinsic and scattering attenuation parameters (intrinsic Q and the scattering coefficient). From the viewpoint of ground motion prediction, both intrinsic and scattering attenuation affect the maximum amplitude of ground motion, while scattering attenuation also affects the duration of ground motion. Hence, estimating both attenuation parameters will help refine ground motion prediction. In this study, we estimate both parameters in southwestern Japan in a tomographic manner. We conduct envelope fitting of seismic coda, since the coda is sensitive to both intrinsic attenuation and scattering coefficients. Recently, Takeuchi (2016) successfully calculated differential envelopes for fluctuations in these parameters. We adopted his equations to calculate the partial derivatives with respect to these parameters, since they do not require assuming a homogeneous velocity structure. The matrix for the inversion of structural parameters would be too large to solve in a straightforward manner; hence, we adopted the ART-type Bayesian Reconstruction Method (Hirahara, 1998) to project the envelope differences onto the structural parameters iteratively. We conducted a checkerboard reconstruction test, assuming a checkerboard pattern with 0.4 degree intervals in the horizontal direction and 20 km intervals in the depth direction. The reconstructed structures reproduced the assumed pattern well in the shallower part but not in the deeper part. Since the inversion kernel has large sensitivity around the source and stations, resolution in the deeper part would be limited by the sparse distribution of earthquakes. To apply the inversion method described above to actual waveforms, we have to correct for the source and site amplification terms. We consider these issues in estimating the actual intrinsic and scattering structures of the target region. Acknowledgment: We used the waveforms of Hi-net, NIED. This study was supported by the Earthquake Research Institute of the University of Tokyo cooperative research program.

  20. Explanation of temporal clustering of tsunami sources using the epidemic-type aftershock sequence model

    USGS Publications Warehouse

    Geist, Eric L.

    2014-01-01

    Temporal clustering of tsunami sources is examined in terms of a branching process model. It previously was observed that there are more short interevent times between consecutive tsunami sources than expected from a stationary Poisson process. The epidemic‐type aftershock sequence (ETAS) branching process model is fitted to tsunami catalog events, using the earthquake magnitude of the causative event from the Centennial and Global Centroid Moment Tensor (CMT) catalogs and tsunami sizes above a completeness level as a mark to indicate that a tsunami was generated. The ETAS parameters are estimated using the maximum‐likelihood method. The interevent distribution associated with the ETAS model provides a better fit to the data than the Poisson model or other temporal clustering models. When tsunamigenic conditions (magnitude threshold, submarine location, dip‐slip mechanism) are applied to the Global CMT catalog, ETAS parameters are obtained that are consistent with those estimated from the tsunami catalog. In particular, the dip‐slip condition appears to result in a near zero magnitude effect for triggered tsunami sources. The overall consistency between results from the tsunami catalog and that from the earthquake catalog under tsunamigenic conditions indicates that ETAS models based on seismicity can provide the structure for understanding patterns of tsunami source occurrence. The fractional rate of triggered tsunami sources on a global basis is approximately 14%.
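
    The ETAS model underlying this analysis is compact enough to sketch: a background rate plus magnitude-scaled Omori-law kernels, with the usual point-process log-likelihood that is maximized to estimate the parameters. The catalog below is synthetic and the parameter values are illustrative only.

    ```python
    # Sketch of the ETAS conditional intensity and log-likelihood for a marked
    # catalog; times, magnitudes, and parameter values are illustrative only.
    import numpy as np

    def etas_intensity(t, times, mags, mu, K, alpha, c, p, m0):
        """lambda(t) = mu + sum_{t_i < t} K*exp(alpha*(m_i - m0))*(t - t_i + c)^-p"""
        past = times < t
        return mu + np.sum(K * np.exp(alpha * (mags[past] - m0))
                           * (t - times[past] + c) ** (-p))

    def etas_loglik(times, mags, T, mu, K, alpha, c, p, m0):
        """Point-process log-likelihood: sum log lambda(t_i) - integral of lambda."""
        log_sum = sum(np.log(etas_intensity(t, times, mags, mu, K, alpha, c, p, m0))
                      for t in times)
        # Analytic integral of each aftershock kernel over [t_i, T] (p != 1)
        integ = mu * T + np.sum(K * np.exp(alpha * (mags - m0))
                                * (c**(1 - p) - (T - times + c)**(1 - p)) / (p - 1))
        return log_sum - integ

    rng = np.random.default_rng(6)
    times = np.sort(rng.uniform(0, 1000, 60))  # synthetic event times (days)
    mags = rng.uniform(7.0, 9.0, 60)           # causative earthquake magnitudes
    print(etas_loglik(times, mags, T=1000.0,
                      mu=0.05, K=0.01, alpha=0.8, c=0.01, p=1.2, m0=7.0))
    ```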

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Murray, S. G.; Trott, C. M.; Jordan, C. H.

    We present a sophisticated statistical point-source foreground model for low-frequency radio Epoch of Reionization (EoR) experiments using the 21 cm neutral hydrogen emission line. Motivated by our understanding of the low-frequency radio sky, we enhance the realism of two model components compared with existing models: the source count distributions as a function of flux density and spatial position (source clustering), extending current formalisms for the foreground covariance of 2D power-spectral modes in 21 cm EoR experiments. The former we generalize to an arbitrarily broken power law, and the latter to an arbitrary isotropically correlated field. This paper presents expressions for the modified covariance under these extensions, and shows that for a more realistic source spatial distribution, extra covariance arises in the EoR window that was previously unaccounted for. Failure to include this contribution can bias the final power spectrum and underestimate uncertainties, potentially leading to a false detection of signal. The extent of this effect is uncertain, owing to ignorance of physical model parameters, but we show that it is dependent on the relative abundance of faint sources, such that our extension will become more important for future deep surveys. Finally, we show that under some parameter choices, ignoring source clustering can lead to false detections on large scales, due to both the induced bias and an artificial reduction in the estimated measurement uncertainty.

  2. The Atacama Cosmology Telescope: Likelihood for Small-Scale CMB Data

    NASA Technical Reports Server (NTRS)

    Dunkley, J.; Calabrese, E.; Sievers, J.; Addison, G. E.; Battaglia, N.; Battistelli, E. S.; Bond, J. R.; Das, S.; Devlin, M. J.; Dunner, R.; et al.

    2013-01-01

    The Atacama Cosmology Telescope has measured the angular power spectra of microwave fluctuations to arcminute scales at frequencies of 148 and 218 GHz, from three seasons of data. At small scales the fluctuations in the primordial Cosmic Microwave Background (CMB) become increasingly obscured by extragalactic foregrounds and secondary CMB signals. We present results from a nine-parameter model describing these secondary effects, including the thermal and kinematic Sunyaev-Zel'dovich (tSZ and kSZ) power; the clustered and Poisson-like power from Cosmic Infrared Background (CIB) sources, and their frequency scaling; the tSZ-CIB correlation coefficient; the extragalactic radio source power; and thermal dust emission from Galactic cirrus in two different regions of the sky. In order to extract cosmological parameters, we describe a likelihood function for the ACT data, fitting this model to the multi-frequency spectra in the multipole range 500 < l < 10000. We extend the likelihood to include spectra from the South Pole Telescope at frequencies of 95, 150, and 220 GHz. Accounting for different radio source levels and Galactic cirrus emission, the same model provides an excellent fit to both datasets simultaneously, with χ²/dof = 675/697 for ACT, and 96/107 for SPT. We then use the multi-frequency likelihood to estimate the CMB power spectrum from ACT in bandpowers, marginalizing over the secondary parameters. This provides a simplified 'CMB-only' likelihood in the range 500 < l < 3500 for use in cosmological parameter estimation.

  3. Hierarchical multistage MCMC follow-up of continuous gravitational wave candidates

    NASA Astrophysics Data System (ADS)

    Ashton, G.; Prix, R.

    2018-05-01

    Leveraging Markov chain Monte Carlo optimization of the F statistic, we introduce a method for the hierarchical follow-up of continuous gravitational wave candidates identified by wide-parameter space semicoherent searches. We demonstrate parameter estimation for continuous wave sources and develop a framework and tools to understand and control the effective size of the parameter space, critical to the success of the method. Monte Carlo tests of simulated signals in noise demonstrate that this method is close to the theoretical optimal performance.

  4. The effect of Earth's oblateness on the seismic moment estimation from satellite gravimetry

    NASA Astrophysics Data System (ADS)

    Dai, Chunli; Guo, Junyi; Shang, Kun; Shum, C. K.; Wang, Rongjiang

    2018-05-01

    Over the last decade, satellite gravimetry, as a new class of geodetic sensors, has been increasingly studied for its use in improving source model inversion for large undersea earthquakes. When these satellite-observed gravity change data are used to estimate source parameters such as seismic moment, the forward modelling of earthquake seismic deformation is crucial, because imperfect modelling could lead to errors in the resolved source parameters. Here, we discuss several modelling issues and focus on one deficiency in the upward continuation of gravity change: the effect of the Earth's oblateness, which is ignored in contemporary studies. For the low degree (degree 60) time-variable gravity solutions from Gravity Recovery and Climate Experiment mission data, the model-predicted gravity change would be overestimated by 9 per cent for the 2011 Tohoku earthquake, and about 6 per cent for the 2010 Maule earthquake. For high degree gravity solutions, the model-predicted gravity change at degree 240 would be overestimated by 30 per cent for the 2011 Tohoku earthquake, resulting in the seismic moment being systematically underestimated by 30 per cent.

  5. Inter-Individual Variability in High-Throughput Risk ...

    EPA Pesticide Factsheets

    We incorporate realistic human variability into an open-source high-throughput (HT) toxicokinetics (TK) modeling framework for use in a next-generation risk prioritization approach. Risk prioritization involves rapid triage of thousands of environmental chemicals, most of which have little or no existing TK data. Chemicals are prioritized based on model estimates of hazard and exposure, to decide which chemicals should be first in line for further study. Hazard may be estimated with in vitro HT screening assays, e.g., U.S. EPA's ToxCast program. Bioactive ToxCast concentrations can be extrapolated to doses that produce equivalent concentrations in body tissues using a reverse TK approach in which generic TK models are parameterized with 1) chemical-specific parameters derived from in vitro measurements and predicted from chemical structure; and 2) with physiological parameters for a virtual population. Here we draw physiological parameters from realistic estimates of distributions of demographic and anthropometric quantities in the modern U.S. population, based on the most recent CDC NHANES data. A Monte Carlo approach, accounting for the correlation structure in physiological parameters, is used to estimate ToxCast equivalent doses for the most sensitive portion of the population. To quantify risk, ToxCast equivalent doses are compared to estimates of exposure rates based on Bayesian inferences drawn from NHANES urinary analyte biomonitoring data. The inclusion

  6. Stochastic sediment property inversion in Shallow Water 06.

    PubMed

    Michalopoulou, Zoi-Heleni

    2017-11-01

    Received time series at a short distance from the source allow the identification of distinct paths; four of these are the direct path, surface and bottom reflections, and a sediment reflection. In this work, a Gibbs sampling method is used for the estimation of the arrival times of these paths and the corresponding probability density functions. The arrival times of the first three paths are then employed, along with linearization, for the estimation of source range and depth, water-column depth, and sound speed in the water. Propagating the densities of the arrival times through the linearized inverse problem, densities are also obtained for the above parameters, providing maximum a posteriori estimates. These estimates are employed to calculate densities and point estimates of sediment sound speed and thickness using a non-linear, grid-based model. Density computation is an important aspect of this work, because those densities express the uncertainty in the inversion for sediment properties.
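
    The density-propagation idea can be sketched as follows: samples standing in for the Gibbs-sampled arrival-time densities are pushed through a linearized travel-time inverse, yielding densities for range and depth. Geometry, sound speed, and noise levels are invented for illustration.

        import numpy as np

        rng = np.random.default_rng(2)
        c, zr = 1500.0, 50.0    # sound speed (m/s) and receiver depth (m), assumed

        # Travel times of the direct and surface-reflected paths for a source
        # at range r and depth zs (image method for the surface bounce).
        def times(r, zs):
            return np.array([np.hypot(r, zs - zr) / c,
                             np.hypot(r, zs + zr) / c])

        # Linearization point and finite-difference Jacobian.
        m0 = np.array([950.0, 45.0])             # initial guess: range, depth
        eps = 1e-3
        J = np.column_stack([(times(m0[0] + eps, m0[1]) - times(*m0)) / eps,
                             (times(m0[0], m0[1] + eps) - times(*m0)) / eps])

        # Samples standing in for the Gibbs-sampled arrival-time densities.
        t_samples = times(1000.0, 40.0) + rng.normal(0.0, 2e-4, size=(20000, 2))

        # Push every sample through the linearized inverse: m = m0 + J^-1 (t - t0).
        m_samples = m0 + np.linalg.solve(J, (t_samples - times(*m0)).T).T
        print("range: %.1f +/- %.1f m" % (m_samples[:, 0].mean(),
                                          m_samples[:, 0].std()))
        print("depth: %.1f +/- %.1f m" % (m_samples[:, 1].mean(),
                                          m_samples[:, 1].std()))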

  7. Benefit-cost estimation for alternative drinking water maximum contaminant levels

    NASA Astrophysics Data System (ADS)

    Gurian, Patrick L.; Small, Mitchell J.; Lockwood, John R.; Schervish, Mark J.

    2001-08-01

    A simulation model for estimating compliance behavior and resulting costs at U.S. Community Water Suppliers is developed and applied to the evaluation of a more stringent maximum contaminant level (MCL) for arsenic. Probability distributions of source water arsenic concentrations are simulated using a statistical model conditioned on system location (state) and source water type (surface water or groundwater). This model is fit to two recent national surveys of source waters, then applied with the model explanatory variables for the population of U.S. Community Water Suppliers. Existing treatment types and arsenic removal efficiencies are also simulated. Utilities with finished water arsenic concentrations above the proposed MCL are assumed to select the least cost option compatible with their existing treatment from among 21 available compliance strategies and processes for meeting the standard. Estimated costs and arsenic exposure reductions at individual suppliers are aggregated to estimate the national compliance cost, arsenic exposure reduction, and resulting bladder cancer risk reduction. Uncertainties in the estimates are characterized based on uncertainties in the occurrence model parameters, existing treatment types, treatment removal efficiencies, costs, and the bladder cancer dose-response function for arsenic.
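
    A schematic version of the compliance logic, each non-compliant system choosing the least-cost option that meets the MCL, is sketched below; the occurrence distribution and the three stand-in treatment options are invented, not the fitted statistical model or the 21 real strategies.

        import numpy as np

        rng = np.random.default_rng(3)

        mcl = 10.0                               # ug/L, candidate arsenic MCL
        n_systems = 5000

        # Assumed lognormal source-water occurrence and existing removal.
        raw = rng.lognormal(mean=1.2, sigma=1.0, size=n_systems)
        finished = raw * (1.0 - rng.uniform(0.0, 0.5, size=n_systems))

        # Stand-in compliance options: (annualized cost in $k, removal fraction).
        options = [(15.0, 0.60), (40.0, 0.90), (90.0, 0.98)]

        total_cost = 0.0
        for conc in finished[finished > mcl]:
            feasible = [o for o in options if conc * (1 - o[1]) <= mcl]
            cost, rem = min(feasible) if feasible else max(options)
            total_cost += cost                   # least-cost compliant choice

        print("systems above MCL:", int((finished > mcl).sum()))
        print("aggregate compliance cost: $%.0fk/yr" % total_cost)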

  8. Trans-dimensional and hierarchical Bayesian approaches toward rigorous estimation of seismic sources and structures in Northeast Asia

    NASA Astrophysics Data System (ADS)

    Kim, Seongryong; Tkalčić, Hrvoje; Mustać, Marija; Rhie, Junkee; Ford, Sean

    2016-04-01

    A framework is presented within which we provide rigorous estimations of seismic sources and structures in Northeast Asia. We use Bayesian inversion methods, which enable statistical estimation of models and their uncertainties based on the information in the data. Ambiguities in error statistics and model parameterizations are addressed by hierarchical and trans-dimensional (trans-D) techniques, which can be inherently implemented in Bayesian inversions. Hence, reliable estimation of model parameters and their uncertainties is possible without arbitrary regularizations and parameterizations. Hierarchical and trans-D inversions are performed to develop a three-dimensional velocity model using ambient noise data. To further improve the model, we perform joint inversions with receiver-function data using a newly developed Bayesian method. For the source estimation, a novel moment tensor inversion method is presented and applied to regional waveform data from the North Korean nuclear explosion tests. Through the combination of new Bayesian techniques and the structural model, coupled with meaningful uncertainties related to each of the processes, more quantitative monitoring and discrimination of seismic events is possible.

  9. FRAGS: estimation of coding sequence substitution rates from fragmentary data

    PubMed Central

    Swart, Estienne C; Hide, Winston A; Seoighe, Cathal

    2004-01-01

    Background Rates of substitution in protein-coding sequences can provide important insights into evolutionary processes that are of biomedical and theoretical interest. Increased availability of coding sequence data has enabled researchers to estimate more accurately the coding sequence divergence of pairs of organisms. However, the use of different data sources, alignment protocols and methods to estimate substitution rates leads to widely varying estimates of key parameters that define the coding sequence divergence of orthologous genes. Although complete genome sequence data are not available for all organisms, fragmentary sequence data can provide accurate estimates of substitution rates, provided that an appropriate and consistent methodology is used and that differences in the estimates obtainable from different data sources are taken into account. Results We have developed FRAGS, an application framework that uses existing, freely available software components to construct in-frame alignments and estimate coding substitution rates from fragmentary sequence data. Coding sequence substitution estimates for human and chimpanzee sequences, generated by FRAGS, reveal that methodological differences can give rise to significantly different estimates of important substitution parameters. The estimated substitution rates were also used to infer upper bounds on the amount of sequencing error in the datasets that we have analysed. Conclusion We have developed a system that performs robust estimation of substitution rates for orthologous sequences from a pair of organisms. Our system can be used when fragmentary genomic or transcript data are available from one of the organisms and the other is a completely sequenced genome within the Ensembl database. As well as estimating substitution statistics, our system enables the user to manage and query alignment and substitution data. PMID:15005802

  10. Near-Field Source Localization by Using Focusing Technique

    NASA Astrophysics Data System (ADS)

    He, Hongyang; Wang, Yide; Saillard, Joseph

    2008-12-01

    We discuss two fast algorithms for localizing multiple sources in the near field. The symmetry-based method proposed by Zhi and Chia (2007) is first improved by implementing a search-free procedure to reduce the computational cost. We then present a focusing-based method which does not require a symmetric array configuration. By using the focusing technique, the near-field signal model is transformed into a model possessing the same structure as in the far-field situation, which allows bearing estimation with well-studied far-field methods. With the estimated bearing, the range estimate of each source is then obtained by using the 1D MUSIC method without parameter pairing. The performance of the improved symmetry-based method and the proposed focusing-based method is compared by Monte Carlo simulations, as well as against the Cramér-Rao bound. Unlike other near-field algorithms, these two approaches require neither high computational cost nor high-order statistics.
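
    Once the model has far-field structure, bearing estimation proceeds with standard subspace methods. The sketch below is plain far-field MUSIC on a uniform linear array (not the focusing transformation itself); array geometry, SNR, and source bearings are illustrative.

        import numpy as np
        from scipy.signal import find_peaks

        rng = np.random.default_rng(4)

        # Far-field narrowband MUSIC on a uniform linear array,
        # half-wavelength spacing, two uncorrelated sources in white noise.
        M, d, K = 8, 0.5, 2
        thetas = np.deg2rad([-20.0, 35.0])
        snaps = 200

        def steering(t):
            return np.exp(2j * np.pi * d * np.arange(M) * np.sin(t))

        A = np.column_stack([steering(t) for t in thetas])
        S = (rng.normal(size=(K, snaps))
             + 1j * rng.normal(size=(K, snaps))) / np.sqrt(2)
        N = 0.1 * (rng.normal(size=(M, snaps)) + 1j * rng.normal(size=(M, snaps)))
        X = A @ S + N

        R = X @ X.conj().T / snaps               # sample covariance
        w, V = np.linalg.eigh(R)                 # ascending eigenvalues
        En = V[:, : M - K]                       # noise subspace

        grid = np.deg2rad(np.linspace(-90, 90, 1801))
        p = np.array([1.0 / np.linalg.norm(En.conj().T @ steering(t)) ** 2
                      for t in grid])
        peaks, _ = find_peaks(p)
        best = peaks[np.argsort(p[peaks])[-K:]]
        print("estimated bearings (deg):", np.sort(np.rad2deg(grid[best])))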

  11. On the power output of some idealized source configurations with one or more characteristic dimensions

    NASA Technical Reports Server (NTRS)

    Levine, H.

    1982-01-01

    The calculation of the power output from a (finite) linear array of equidistant point sources is investigated, with allowance for a relative phase shift and particular focus on the circumstances of small/large individual source separation. A key role is played by the estimates found for a two-parameter definite integral that involves the Fejér kernel functions of order N, where N denotes a (positive) integer; these results also permit a quantitative accounting of the energy partition between the principal and secondary lobes of the array pattern. Continuously distributed sources along a finite line segment or an open-ended circular cylindrical shell are considered, and estimates for the relatively lower output in the latter configuration are made explicit when the shell radius is small compared to the wavelength. A systematic reduction of diverse integrals which characterize the energy output from specific line and strip sources is investigated.

  12. Accuracy of Estimating Highly Eccentric Binary Black Hole Parameters with Gravitational-wave Detections

    NASA Astrophysics Data System (ADS)

    Gondán, László; Kocsis, Bence; Raffai, Péter; Frei, Zsolt

    2018-03-01

    Mergers of stellar-mass black holes on highly eccentric orbits are among the targets for ground-based gravitational-wave detectors, including LIGO, VIRGO, and KAGRA. These sources may commonly form through gravitational-wave emission in high-velocity-dispersion systems or through the secular Kozai–Lidov mechanism in triple systems. Gravitational waves carry information about the binaries' orbital parameters and source location. Using the Fisher matrix technique, we determine the measurement accuracy with which the LIGO–VIRGO–KAGRA network could measure the source parameters of eccentric binaries using a matched-filtering search of the repeated-burst and eccentric-inspiral phases of the waveform. We account for general relativistic precession and the evolution of the orbital eccentricity and frequency during the inspiral. We find that the signal-to-noise ratio and the parameter measurement accuracy may be significantly higher for eccentric sources than for circular sources. This increase is sensitive to the initial pericenter distance, the initial eccentricity, and the component masses. For instance, compared to a 30 M⊙–30 M⊙ non-spinning circular binary, the chirp mass and sky-localization accuracy can improve by a factor of ~129 (38) and ~2 (11) for an initially highly eccentric binary, assuming an initial pericenter distance of 20 Mtot (10 Mtot).
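
    The Fisher matrix technique itself is compact: the parameter covariance is the inverse of F = D^T D / σ², with D the matrix of waveform partial derivatives. The toy below uses a monochromatic two-parameter signal in white noise rather than an eccentric waveform model; all values are illustrative.

        import numpy as np

        # Fisher-matrix error forecast for a toy two-parameter signal in white
        # noise; the covariance (Cramer-Rao bound) is the inverse Fisher matrix.
        t = np.linspace(0.0, 1.0, 4096)
        sigma = 0.5                              # per-sample noise std, assumed

        def model(theta):
            amp, freq = theta
            return amp * np.sin(2 * np.pi * freq * t)

        theta0 = np.array([1.0, 50.0])           # fiducial amplitude, frequency

        def partial(i, h=1e-6):
            up, dn = theta0.copy(), theta0.copy()
            up[i] += h
            dn[i] -= h
            return (model(up) - model(dn)) / (2 * h)

        D = np.column_stack([partial(i) for i in range(2)])
        cov = np.linalg.inv(D.T @ D / sigma**2)

        for name, var in zip(("amplitude", "frequency"), np.diag(cov)):
            print("1-sigma error on %s: %.2e" % (name, np.sqrt(var)))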

  13. Measurement of the PPN parameter γ by testing the geometry of near-Earth space

    NASA Astrophysics Data System (ADS)

    Luo, Jie; Tian, Yuan; Wang, Dian-Hong; Qin, Cheng-Gang; Shao, Cheng-Gang

    2016-06-01

    The Beyond Einstein Advanced Coherent Optical Network (BEACON) mission was designed to achieve an accuracy of 10^{-9} in measuring the Eddington parameter γ, which is perhaps the most fundamental parametrized post-Newtonian parameter. However, this ideal accuracy was estimated only as the ratio of the measurement accuracy of the inter-spacecraft distances to the magnitude of the departure from Euclidean geometry. Based on the BEACON concept, we construct a measurement model to estimate the parameter γ with the least-squares method. Influences of the measurement noise and the out-of-plane error on the estimation accuracy are evaluated based on a white-noise model. Though the BEACON mission does not require expensive drag-free systems and avoids physical dynamical models of the spacecraft, the relatively low accuracy of the initial inter-spacecraft distances poses a great challenge, reducing the estimation accuracy by about two orders of magnitude. The noise requirements may thus need to be more stringent in the design to achieve the target accuracy, as demonstrated in this work. Accordingly, we give limits on the power spectral density of both noise sources for the target accuracy of 10^{-9}.

  14. MATLAB-implemented estimation procedure for model-based assessment of hepatic insulin degradation from standard intravenous glucose tolerance test data.

    PubMed

    Di Nardo, Francesco; Mengoni, Michele; Morettini, Micaela

    2013-05-01

    The present study provides a novel MATLAB-based parameter estimation procedure for the individual assessment of the hepatic insulin degradation (HID) process from standard frequently-sampled intravenous glucose tolerance test (FSIGTT) data. Direct access to the source code, offered by MATLAB, enabled us to design an optimization procedure based on the alternating use of the Gauss-Newton and Levenberg-Marquardt algorithms, which assures the full convergence of the process and contains the computational time. Reliability was tested by direct comparison with the application, in eighteen non-diabetic subjects, of the well-known kinetic analysis software package SAAM II, and by application to different data. Agreement between MATLAB and SAAM II was supported by intraclass correlation coefficients ≥0.73, no significant differences between corresponding mean parameter estimates and predictions of the HID rate, and consistent residual analysis. Moreover, the MATLAB optimization procedure resulted in a significant 51% reduction of the CV% for the parameter estimated worst by SAAM II, and maintained all model-parameter CV% <20%. In conclusion, our MATLAB-based procedure is suggested as a suitable tool for the individual assessment of the HID process. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
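
    A Python analogue of the fitting step (not the authors' MATLAB code) can be built on Levenberg-Marquardt via scipy; the two-exponential kinetic form, the parameter values, and the noise level are assumptions for illustration. Parameter CV% values are then read off the Jacobian at the solution.

        import numpy as np
        from scipy.optimize import least_squares

        rng = np.random.default_rng(5)

        # Fit a two-exponential kinetic decay with Levenberg-Marquardt;
        # all parameter values and the 3% noise level are assumed.
        t = np.linspace(2, 180, 30)                        # minutes
        true = np.array([120.0, 0.15, 40.0, 0.015])        # A1, k1, A2, k2

        def decay(p, t):
            a1, k1, a2, k2 = p
            return a1 * np.exp(-k1 * t) + a2 * np.exp(-k2 * t)

        y = decay(true, t) * (1 + 0.03 * rng.normal(size=t.size))

        res = least_squares(lambda p: decay(p, t) - y,
                            x0=[100, 0.1, 30, 0.01], method="lm")

        # Parameter CV% from the Jacobian at the solution.
        s2 = 2 * res.cost / (t.size - res.x.size)          # residual variance
        cov = s2 * np.linalg.inv(res.jac.T @ res.jac)
        cv = 100 * np.sqrt(np.diag(cov)) / np.abs(res.x)
        print("estimates:", np.round(res.x, 4))
        print("CV%:      ", np.round(cv, 1))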

  15. Possibility of determining cosmological parameters from measurements of gravitational waves emitted by coalescing, compact binaries

    NASA Astrophysics Data System (ADS)

    Marković, Dragoljub

    1993-11-01

    We explore the feasibility of using LIGO and/or VIRGO gravitational-wave measurements of coalescing neutron-star-neutron-star (NS-NS) binaries and black-hole-neutron-star (BH-NS) binaries at cosmological distances to determine the cosmological parameters of our Universe. From the observed gravitational waveforms one can infer, as direct observables, the luminosity distance D of the source and the binary's two "redshifted masses," M'1 ≡ M1(1+z) and M'2 ≡ M2(1+z), where Mi are the actual masses and z ≡ Δλ/λ is the binary's cosmological redshift. Assuming that the NS mass spectrum is sharply peaked about 1.4 M⊙, as binary pulsar and x-ray source observations suggest, the redshift can be estimated as z = M'NS/(1.4 M⊙) − 1. The actual distance-redshift relation D(z) for our Universe is strongly dependent on its cosmological parameters [the Hubble constant H0, or h0 ≡ H0/(100 km s⁻¹ Mpc⁻¹); the mean mass density ρm, or density parameter Ω0 ≡ (8πG/3H0²)ρm; and the cosmological constant Λ, or λ0 ≡ Λ/(3H0²)], so by a statistical study of (necessarily noisy) measurements of D and z for a large number of binaries, one can deduce the cosmological parameters. The various noise sources that will plague such a cosmological study are discussed and estimated, and the accuracies of the inferred parameters are determined as functions of the detectors' noise characteristics, the number of binaries observed, and the neutron-star mass spectrum. The dominant source of error is the detectors' intrinsic noise, though stochastic gravitational lensing of the waves by intervening matter might significantly influence the inferred cosmological constant λ0 when the detectors reach "advanced" stages of development. The estimated errors of parameters inferred from BH-NS measurements can be described by the following rough analytic fits: Δh0/h0 ≈ 0.02 (N/h0)(τR)^(−1/2) (for N/h0 ≲ 2), where N is the detector's noise level (strain/√Hz) in units of the "advanced LIGO" noise level, R is the event rate in units of the best-estimate value, 100 yr⁻¹ Gpc⁻³, and τ is the observation time in years. In a "high density" universe (Ω0 = 1, λ0 = 0), ΔΩ0 ≈ 0.3 (N/h0)²(τR)^(−1/2) and Δλ0 ≈ 0.4 (N/h0)^1.5 (τR)^(−1/2), for N/h0 ≲ 1. In a "low density" universe (Ω0 = 0.2, λ0 = 0), ΔΩ0 ≈ 0.5 (N/h0)³(τR)^(−1/2) and Δλ0 ≈ 0.7 (N/h0)^2.5 (τR)^(−1/2), also for N/h0 ≲ 1. These formulas indicate that, if event rates are those currently estimated (~3 per year out to 200 Mpc), then when the planned LIGO and/or VIRGO detectors get to be about as sensitive as the so-called "advanced detector level" (presumably in the early 2000s), interesting cosmological measurements can begin.
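
    The statistical core, fitting a distance-redshift relation to noisy (z, D) pairs, can be sketched in a few lines; the example below fixes a flat universe with Ωm = 0.3 and fits only H0, a deliberate simplification of the three-parameter study described above.

        import numpy as np
        from scipy.integrate import quad
        from scipy.optimize import curve_fit

        rng = np.random.default_rng(6)

        c = 299792.458                           # km/s
        omega_m = 0.3                            # fixed; flat universe assumed

        def d_lum(z, h0):                        # luminosity distance, Mpc
            I = quad(lambda x: 1.0 / np.sqrt(omega_m * (1 + x) ** 3
                                             + 1 - omega_m), 0.0, z)[0]
            return (1 + z) * (c / h0) * I

        d_vec = np.vectorize(d_lum)

        # Noisy (z, D) pairs: z from the redshifted NS masses, D from the
        # waveform amplitude, with an assumed 10% distance scatter.
        z = rng.uniform(0.1, 1.0, size=200)
        d_obs = d_vec(z, 70.0) * (1 + 0.1 * rng.normal(size=z.size))

        h0, h0_cov = curve_fit(d_vec, z, d_obs, p0=[60.0])
        print("H0 = %.1f +/- %.1f km/s/Mpc" % (h0[0], np.sqrt(h0_cov[0, 0])))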

  16. Understanding identifiability as a crucial step in uncertainty assessment

    NASA Astrophysics Data System (ADS)

    Jakeman, A. J.; Guillaume, J. H. A.; Hill, M. C.; Seo, L.

    2016-12-01

    The topic of identifiability analysis offers concepts and approaches to identify why unique model parameter values cannot be identified, and can suggest possible responses that either increase uniqueness or help to understand the effect of non-uniqueness on predictions. Identifiability analysis typically involves evaluation of the model equations and the parameter estimation process. Non-identifiability can have a number of undesirable effects. In terms of model parameters these effects include: parameters not being estimated uniquely even with ideal data; wildly different values being returned for different initialisations of a parameter optimisation algorithm; and parameters not being physically meaningful in a model attempting to represent a process. This presentation illustrates some of the drastic consequences of ignoring model identifiability analysis. It argues for a more cogent framework and use of identifiability analysis as a way of understanding model limitations and systematically learning about sources of uncertainty and their importance. The presentation specifically distinguishes between five sources of parameter non-uniqueness (and hence uncertainty) within the modelling process, pragmatically capturing key distinctions within existing identifiability literature. It enumerates many of the various approaches discussed in the literature. Admittedly, improving identifiability is often non-trivial. It requires thorough understanding of the cause of non-identifiability, and the time, knowledge and resources to collect or select new data, modify model structures or objective functions, or improve conditioning. But ignoring these problems is not a viable solution. Even simple approaches such as fixing parameter values or naively using a different model structure may have significant impacts on results which are too often overlooked because identifiability analysis is neglected.

  17. World Energy Projection System Plus Model Documentation: Greenhouse Gases Module

    EIA Publications

    2011-01-01

    This report documents the objectives, analytical approach and development of the World Energy Projection System Plus (WEPS+) Greenhouse Gases Model. It also catalogues and describes critical assumptions, computational methodology, parameter estimation techniques, and model source code.

  18. World Energy Projection System Plus Model Documentation: Natural Gas Module

    EIA Publications

    2011-01-01

    This report documents the objectives, analytical approach and development of the World Energy Projection System Plus (WEPS+) Natural Gas Model. It also catalogues and describes critical assumptions, computational methodology, parameter estimation techniques, and model source code.

  19. PROGRAM PARAMS USERS GUIDE

    EPA Science Inventory

    PARAMS is a Windows-based computer program that implements 30 methods for estimating the parameters in indoor emissions source models, which are an essential component of indoor air quality (IAQ) and exposure models. These methods fall into eight categories: (1) the properties o...

  20. World Energy Projection System Plus Model Documentation: District Heat Module

    EIA Publications

    2017-01-01

    This report documents the objectives, analytical approach and development of the World Energy Projection System Plus (WEPS+) District Heat Model. It also catalogues and describes critical assumptions, computational methodology, parameter estimation techniques, and model source code.

  1. World Energy Projection System Plus Model Documentation: Industrial Module

    EIA Publications

    2016-01-01

    This report documents the objectives, analytical approach and development of the World Energy Projection System Plus (WEPS+) World Industrial Model (WIM). It also catalogues and describes critical assumptions, computational methodology, parameter estimation techniques, and model source code.

  2. The rotation-powered nature of some soft gamma-ray repeaters and anomalous X-ray pulsars

    NASA Astrophysics Data System (ADS)

    Coelho, Jaziel G.; Cáceres, D. L.; de Lima, R. C. R.; Malheiro, M.; Rueda, J. A.; Ruffini, R.

    2017-03-01

    Context. Soft gamma-ray repeaters (SGRs) and anomalous X-ray pulsars (AXPs) are slowly rotating isolated pulsars whose energy reservoir is still a matter of debate. Adopting fiducial neutron star (NS) parameters (mass M = 1.4 M⊙, radius R = 10 km, and moment of inertia I = 10⁴⁵ g cm²), the rotational energy loss, Ėrot, is lower than the observed luminosity (dominated by the X-rays), LX, for many of the sources. Aims: We investigate the possibility that some members of this family could be canonical rotation-powered pulsars, using realistic NS structure parameters instead of fiducial values. Methods: We compute the NS mass, radius, moment of inertia and angular momentum from numerical integration of the axisymmetric general relativistic equations of equilibrium. We then compute the entire range of allowed values of the rotational energy loss, Ėrot, for the observed values of rotation period P and spin-down rate Ṗ. We also estimate the surface magnetic field using a general relativistic model of a rotating magnetic dipole. Results: We show that realistic NS parameters lower the estimated values of the magnetic field and radiation efficiency, LX/Ėrot, with respect to estimates based on fiducial NS parameters. We show that nine SGRs/AXPs can be described as canonical pulsars driven by the NS rotational energy, for LX computed in the soft (2-10 keV) X-ray band. We compute the range of NS masses for which LX/Ėrot < 1. We discuss the observed hard X-ray emission in three sources of the group of nine potentially rotation-powered NSs. This additional hard X-ray component dominates over the soft one, leading to LX/Ėrot > 1 in two of them. Conclusions: We show that nine SGRs/AXPs can be rotation-powered NSs if we analyze their X-ray luminosity in the soft 2-10 keV band. Interestingly, four of them show radio emission and six have been associated with supernova remnants (including Swift J1834.9-0846, the first SGR observed with a surrounding wind nebula). These observations give additional support to a natural explanation of these sources in terms of ordinary pulsars. Including the hard X-ray emission observed in three sources of the group of potential rotation-powered NSs, the number of sources with LX/Ėrot < 1 becomes seven. It remains to be verified (1) whether the estimated distances are accurate and (2) whether the associated supernova remnants contribute to the hard X-ray emission.
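
    The rotation-powered test hinges on two textbook formulas, Ėrot = 4π²IṖ/P³ and the vacuum-dipole field estimate B ≈ 3.2×10¹⁹ (PṖ)^(1/2) G (the classical version, not the general relativistic model used in the paper); the P, Ṗ and LX values below are hypothetical.

        import numpy as np

        I = 1.0e45                     # g cm^2, fiducial moment of inertia
        P, Pdot = 9.0, 2.0e-11         # s, s/s: hypothetical slow rotator

        E_rot = 4 * np.pi**2 * I * Pdot / P**3       # erg/s, spin-down luminosity
        B_dip = 3.2e19 * np.sqrt(P * Pdot)           # G, classical vacuum dipole

        L_x = 1.0e35                   # erg/s, assumed soft X-ray luminosity
        print("Edot_rot = %.2e erg/s" % E_rot)
        print("B_dipole = %.2e G" % B_dip)
        print("L_x / Edot_rot = %.1f" % (L_x / E_rot))  # > 1 disfavors rotation power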

  3. Parameter estimation for slit-type scanning sensors

    NASA Technical Reports Server (NTRS)

    Fowler, J. W.; Rolfe, E. G.

    1981-01-01

    The Infrared Astronomical Satellite, scheduled for launch into a 900 km near-polar orbit in August 1982, will perform an infrared point source survey by scanning the sky with slit-type sensors. The description of position information is shown to require the use of a non-Gaussian random variable. Methods are described for deciding whether separate detections stem from a single common source, and a formalism is developed for the scan-to-scan problems of identifying multiple sightings of inertially fixed point sources and combining their individual measurements into a refined estimate. Several cases are given where the general theory yields results which are quite different from the corresponding Gaussian applications, showing that argument by Gaussian analogy would lead to error.

  4. Electromagnetic Characterization of Inhomogeneous Media

    DTIC Science & Technology

    2012-03-22

    … fun is the code that contains the theoretical formulation of S11, and beta0 is the initial constitutive parameter estimate found in the laboratory data …

  5. Human Systems Integration (HSI) in Acquisition. HSI Domain Guide

    DTIC Science & Technology

    2009-08-01

    … job simulation that includes posture data, force parameters, and anthropometry. Output includes the percentage of men and women who have the strength …

  6. An Improved Statistical Point-source Foreground Model for the Epoch of Reionization

    NASA Astrophysics Data System (ADS)

    Murray, S. G.; Trott, C. M.; Jordan, C. H.

    2017-08-01

    We present a sophisticated statistical point-source foreground model for low-frequency radio Epoch of Reionization (EoR) experiments using the 21 cm neutral hydrogen emission line. Motivated by our understanding of the low-frequency radio sky, we enhance the realism of two model components compared with existing models: the source count distributions as a function of flux density and spatial position (source clustering), extending current formalisms for the foreground covariance of 2D power-spectral modes in 21 cm EoR experiments. The former we generalize to an arbitrarily broken power law, and the latter to an arbitrary isotropically correlated field. This paper presents expressions for the modified covariance under these extensions, and shows that for a more realistic source spatial distribution, extra covariance arises in the EoR window that was previously unaccounted for. Failure to include this contribution can yield bias in the final power spectrum and underestimated uncertainties, potentially leading to a false detection of signal. The extent of this effect is uncertain, owing to ignorance of physical model parameters, but we show that it is dependent on the relative abundance of faint sources, to the effect that our extension will become more important for future deep surveys. Finally, we show that under some parameter choices, ignoring source clustering can lead to false detections on large scales, due to both the induced bias and an artificial reduction in the estimated measurement uncertainty.
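
    Sampling from a broken power-law source count, the generalization mentioned above, can be done segment by segment with inverse-CDF draws; the slopes, break, and flux limits below are assumed, not fitted values.

        import numpy as np

        rng = np.random.default_rng(7)

        # dN/dS ~ S^-a below the break S_b and ~ S^-b above it (continuous).
        s_lo, s_b, s_hi = 1e-3, 1.0, 100.0       # Jy, assumed limits and break
        a, b = 1.6, 2.5                          # assumed slopes

        def seg_integral(lo, hi, g):
            return (hi ** (1 - g) - lo ** (1 - g)) / (1 - g)

        def seg_sample(lo, hi, g, n):            # inverse-CDF draw in a segment
            u = rng.uniform(size=n)
            return (lo ** (1 - g)
                    + u * (hi ** (1 - g) - lo ** (1 - g))) ** (1 / (1 - g))

        w_lo = seg_integral(s_lo, s_b, a)
        w_hi = s_b ** (b - a) * seg_integral(s_b, s_hi, b)   # continuity factor
        n = 100_000
        n_lo = rng.binomial(n, w_lo / (w_lo + w_hi))

        fluxes = np.concatenate([seg_sample(s_lo, s_b, a, n_lo),
                                 seg_sample(s_b, s_hi, b, n - n_lo)])
        print("fraction of faint sources (S < S_b): %.3f" % np.mean(fluxes < s_b))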

  7. Assimilating multi-source uncertainties of a parsimonious conceptual hydrological model using hierarchical Bayesian modeling

    Treesearch

    Wei Wu; James Clark; James Vose

    2010-01-01

    Hierarchical Bayesian (HB) modeling allows for multiple sources of uncertainty by factoring complex relationships into conditional distributions that can be used to draw inference and make predictions. We applied an HB model to estimate the parameters and state variables of a parsimonious hydrological model – GR4J – by coherently assimilating the uncertainties from the...

  8. Building a Predictive Capability for Decision-Making that Supports MultiPEM

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carmichael, Joshua Daniel

    Multi-phenomenological explosion monitoring (multiPEM) is a developing science that uses multiple geophysical signatures of explosions to better identify and characterize their sources. MultiPEM researchers seek to integrate explosion signatures together to provide stronger detection, parameter estimation, or screening capabilities between different sources or processes. This talk will address forming a predictive capability for screening waveform explosion signatures to support multiPEM.

  9. An evolutive real-time source inversion based on a linear inverse formulation

    NASA Astrophysics Data System (ADS)

    Sanchez Reyes, H. S.; Tago, J.; Cruz-Atienza, V. M.; Metivier, L.; Contreras Zazueta, M. A.; Virieux, J.

    2016-12-01

    Finite source inversion is a stepping stone to unveiling earthquake rupture. It is used for ground-motion prediction, and its results shed light on the seismic cycle for better tectonic understanding. It is not yet used for quasi-real-time analysis. Nowadays, significant progress has been made on approaches to earthquake imaging, thanks to new data acquisition and methodological advances. However, most of these techniques are posterior procedures applied once seismograms are available. Incorporating source parameter estimation into early warning systems would require updating the source build-up while recording data. In order to move toward this dynamic estimation, we developed a kinematic source inversion formulated in the time domain, for which seismograms are linearly related to the slip distribution on the fault through convolutions with Green's functions previously estimated and stored (Perton et al., 2016). These convolutions are performed in the time domain as we progressively increase the time window of records at each station specifically. The selected unknowns are the spatio-temporal slip-rate distribution, which keeps the forward problem linear with respect to the unknowns, as promoted by Fan and Shearer (2014). Through the spatial extension of the expected rupture zone, we progressively build up the slip rate when adding new data by assuming rupture causality. This formulation is based on the adjoint-state method for efficiency (Plessix, 2006). The inverse problem is non-unique and, in most cases, underdetermined. While standard regularization terms are used for stabilizing the inversion, we avoid strategies based on parameter reduction, which would lead to an unwanted non-linear relationship between parameters and seismograms in our progressive build-up. Rise time, rupture velocity and other quantities can be extracted later as attributes from the slip-rate inversion we perform. Satisfactory results are obtained on a synthetic example (Figure 1) proposed by the Source Inversion Validation project (Mai et al. 2011). A real case application is currently being explored. Our specific formulation, combined with simple prior information, as well as the numerical results obtained so far, yields interesting perspectives for a real-time implementation.
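
    The linear core of such a formulation is that the data are convolutions of stored Green's functions with the slip rate, d = Gm, so a damped least-squares solve recovers the model; the sketch below uses synthetic kernels and a fixed regularization weight, and omits the adjoint-state machinery and progressive time windowing.

        import numpy as np

        rng = np.random.default_rng(8)

        nt, nm = 400, 80               # data samples, slip-rate samples

        # Synthetic convolution matrix: one smooth, causally delayed kernel per
        # model sample (stored Green's functions play this role in practice).
        kernel = np.exp(-0.5 * ((np.arange(40) - 20) / 6.0) ** 2)
        G = np.zeros((nt, nm))
        for j in range(nm):
            i0 = 3 * j
            seg = kernel[: min(len(kernel), nt - i0)]
            G[i0:i0 + len(seg), j] = seg

        m_true = np.sin(np.linspace(0, np.pi, nm)) ** 2      # synthetic slip rate
        d = G @ m_true + 0.01 * rng.normal(size=nt)

        lam = 0.1                      # Tikhonov weight, assumed
        m_est = np.linalg.solve(G.T @ G + lam * np.eye(nm), G.T @ d)
        print("relative model error: %.3f"
              % (np.linalg.norm(m_est - m_true) / np.linalg.norm(m_true)))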

  10. Source Parameters and High Frequency Characteristics of Local Events (0.5 ≤ M L ≤ 2.9) Around Bilaspur Region of the Himachal Himalaya

    NASA Astrophysics Data System (ADS)

    Vandana; Kumar, Ashwani; Gupta, S. C.; Mishra, O. P.; Kumar, Arjun; Sandeep

    2017-04-01

    Source parameters of 41 local events (0.5 ≤ ML ≤ 2.9) that occurred around the Bilaspur region of the Himachal Lesser Himalaya from May 2013 to March 2014 have been estimated adopting the Brune model. The estimated source parameters include seismic moments (M0), source radii (r), and stress drops (Δσ), which are found to vary from 4.9 × 10¹⁹ to 7 × 10²¹ dyne-cm, from about 187 to 518 m, and from less than 1 bar to 51 bars, respectively. The decay of high-frequency acceleration spectra at frequencies above fmax has been modelled using two functions: a high-cut filter and a κ factor. Stress drops of 11 events, with M0 between 1 × 10²¹ and 7 × 10²¹ dyne-cm, vary from 11 to 51 bars with an average of 22 bars. From the variation of the maximum stress drop with focal depth, it appears that the strength of the upper crust decreases below 20 km. A scaling law M0 = 2 × 10²² fc^(-3.03) between M0 and corner frequency (fc) has been developed for the region. This law almost agrees with that for the Kameng region of the Arunachal Lesser Himalaya. fc is found to be source dependent, whereas fmax is source independent and seems to indicate that the size of the cohesive zone is not sensitive to the earthquake size. At four sites, fmax is found to vary from 14 to 23, 11 to 19, 9 to 23 and 4 to 11 Hz, respectively. The κ is found to vary from 0.01 to 0.035 s with an average of 0.02 s. This range of variation is large compared to the κ variation between 0.023 and 0.07 s for the Garhwal and Kumaon Himalaya. For various regions of the world, κ varies over a broad range from 0.003 to 0.08 s, and for the Bilaspur region the κ estimates are found to be consistent with other regions of the world.
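
    The Brune-model bookkeeping can be reproduced directly from the quantities quoted above: r = 2.34β/(2πfc) and Δσ = 7M0/(16r³), with fc taken from the regional scaling law; the shear-wave speed β below is an assumed crustal value, so the numbers are only indicative.

        import numpy as np

        beta = 3.5e5                   # cm/s, assumed crustal shear-wave speed

        def fc_from_scaling(M0):       # regional law M0 = 2e22 * fc^-3.03
            return (M0 / 2e22) ** (-1 / 3.03)

        def brune(M0, fc):             # M0 in dyne-cm
            r = 2.34 * beta / (2 * np.pi * fc)          # source radius, cm
            dsigma = 7 * M0 / (16 * r ** 3)             # dyne/cm^2
            return r / 100.0, dsigma / 1e6              # metres, bars

        for M0 in (4.9e19, 1e21, 7e21):                 # spans the catalogue
            fc = fc_from_scaling(M0)
            r_m, ds = brune(M0, fc)
            print("M0=%.1e dyne-cm  fc=%.1f Hz  r=%.0f m  stress drop=%.1f bar"
                  % (M0, fc, r_m, ds))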

  11. Approaches to highly parameterized inversion: Pilot-point theory, guidelines, and research directions

    USGS Publications Warehouse

    Doherty, John E.; Fienen, Michael N.; Hunt, Randall J.

    2011-01-01

    Pilot points have been used in geophysics and hydrogeology for at least 30 years as a means to bridge the gap between estimating a parameter value in every cell of a model and subdividing models into a small number of homogeneous zones. Pilot points serve as surrogate parameters at which values are estimated in the inverse-modeling process, and their values are interpolated onto the modeling domain in such a way that heterogeneity can be represented at a much lower computational cost than trying to estimate parameters in every cell of a model. Although the use of pilot points is increasingly common, there are few works documenting the mathematical implications of their use and even fewer sources of guidelines for their implementation in hydrogeologic modeling studies. This report describes the mathematics of pilot-point use, provides guidelines for their use in the parameter-estimation software suite (PEST), and outlines several research directions. Two key attributes for pilot-point definitions are highlighted. First, the difference between the information contained in the every-cell parameter field and the surrogate parameter field created using pilot points should be in the realm of parameters which are not informed by the observed data (the null space). Second, the interpolation scheme for projecting pilot-point values onto model cells ideally should be orthogonal. These attributes are informed by the mathematics and have important ramifications for both the guidelines and suggestions for future research.

  12. Offline Performance of the Filter Bank EEW Algorithm in the 2014 M6.0 South Napa Earthquake

    NASA Astrophysics Data System (ADS)

    Meier, M. A.; Heaton, T. H.; Clinton, J. F.

    2014-12-01

    Medium-size events like the M6.0 South Napa earthquake are very challenging for EEW: the damage such events produce can be severe, but it is generally confined to relatively small zones around the epicenter and the shaking duration is short. This leaves a very short window for timely EEW alerts. Algorithms that wait for several stations to trigger before sending out EEW alerts are typically not fast enough for these kinds of events, because their blind zone (the zone where strong ground motions start before the warnings arrive) typically covers all or most of the area that experiences strong ground motions. At the same time, single-station algorithms are often too unreliable to provide useful alerts. The filter bank EEW algorithm is a new algorithm designed to provide maximally accurate and precise earthquake parameter estimates with minimum data input, with the goal of producing reliable EEW alerts when only a very small number of stations have been reached by the p-wave. It combines the strengths of single-station and network-based algorithms in that it starts parameter estimation as soon as 0.5 seconds of data are available from the first station, but then perpetually incorporates additional data from the same or from any number of other stations. The algorithm analyzes the time-dependent frequency content of real-time waveforms with a filter bank. It then uses an extensive training data set to find earthquake records from the past that have had similar frequency content at a given time since the p-wave onset. The source parameters of the most similar events are used to parameterize a likelihood function for the source parameters of the ongoing event, which can then be maximized to find the most likely parameter estimates. Our preliminary results show that the filter bank EEW algorithm correctly estimated the magnitude of the South Napa earthquake to be ~M6 with only 1 second's worth of data at the station nearest to the epicenter. This estimate is then confirmed as updates based on more data from stations at farther distances become available. Because these early estimates saturate at ~M6.5, however, the magnitude estimate might have to be treated as a minimum bound.
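
    The filter-bank feature idea can be mimicked with a bank of band-pass filters and a nearest-neighbour lookup in a training set; the band edges, window length, synthetic "training" records, and the kNN step below are all assumptions, not the algorithm's published configuration.

        import numpy as np
        from scipy.signal import butter, sosfilt

        rng = np.random.default_rng(11)

        fs = 100.0                                   # Hz
        bands = [(0.5, 1), (1, 2), (2, 4), (4, 8), (8, 16), (16, 32)]

        def features(wave):                          # log peak amplitude per band
            out = []
            for lo, hi in bands:
                sos = butter(4, [lo, hi], btype="band", fs=fs, output="sos")
                out.append(np.log10(np.max(np.abs(sosfilt(sos, wave))) + 1e-12))
            return np.array(out)

        # Toy training set: amplitude (hence every band feature) grows with M.
        train_mags = rng.uniform(3.0, 7.0, 500)
        train = np.array([features(10 ** (m - 5) * rng.normal(size=100))
                          for m in train_mags])

        snippet = 10 ** (6.0 - 5) * rng.normal(size=100)     # 1 s of "data"
        dist = np.linalg.norm(train - features(snippet), axis=1)
        print("kNN magnitude estimate: %.2f"
              % train_mags[np.argsort(dist)[:10]].mean())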

  13. Fully probabilistic seismic source inversion - Part 2: Modelling errors and station covariances

    NASA Astrophysics Data System (ADS)

    Stähler, Simon C.; Sigloch, Karin

    2016-11-01

    Seismic source inversion, a central task in seismology, is concerned with the estimation of earthquake source parameters and their uncertainties. Estimating uncertainties is particularly challenging because source inversion is a non-linear problem. In a companion paper, Stähler and Sigloch (2014) developed a method of fully Bayesian inference for source parameters, based on measurements of waveform cross-correlation between broadband, teleseismic body-wave observations and their modelled counterparts. This approach yields not only depth and moment tensor estimates but also source time functions. A prerequisite for Bayesian inference is the proper characterisation of the noise afflicting the measurements, a problem we address here. We show that, for realistic broadband body-wave seismograms, the systematic error due to an incomplete physical model affects waveform misfits more strongly than random, ambient background noise. In this situation, the waveform cross-correlation coefficient CC, or rather its decorrelation D = 1 − CC, performs more robustly as a misfit criterion than ℓp norms, which are more commonly used and measure misfit sample by sample as distances between individual time samples. From a set of over 900 user-supervised, deterministic earthquake source solutions treated as a quality-controlled reference, we derive the noise distribution on the signal decorrelation D = 1 − CC of the broadband seismogram fits between observed and modelled waveforms. The noise on D is found to approximately follow a log-normal distribution, a fortunate fact that readily accommodates the formulation of an empirical likelihood function for D for our multivariate problem. The first and second moments of this multivariate distribution are shown to depend mostly on the signal-to-noise ratio (SNR) of the CC measurements and on the back-azimuthal distances of seismic stations. By identifying and quantifying this likelihood function, we make D, and thus waveform cross-correlation measurements, usable for fully probabilistic sampling strategies, in source inversion and related applications such as seismic tomography.
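
    The misfit and its likelihood are easy to state in code: D = 1 − CC between observed and modelled traces, scored under a log-normal density; the distribution parameters below are fixed by hand, whereas in the paper they depend on SNR and station coverage, and the zero-lag CC is a simplification.

        import numpy as np

        def decorrelation(obs, syn):                 # D = 1 - CC (zero lag)
            cc = np.dot(obs, syn) / (np.linalg.norm(obs) * np.linalg.norm(syn))
            return 1.0 - cc

        def log_likelihood(D, mu=np.log(0.2), sigma=0.5):
            # log-normal log-density for the decorrelation misfit
            return (-np.log(D * sigma * np.sqrt(2 * np.pi))
                    - (np.log(D) - mu) ** 2 / (2 * sigma ** 2))

        t = np.linspace(0, 60, 1500)
        obs = np.sin(2 * np.pi * 0.050 * t) * np.exp(-t / 30)
        syn = np.sin(2 * np.pi * 0.055 * t) * np.exp(-t / 28)   # imperfect model

        D = decorrelation(obs, syn)
        print("D = %.3f, log-likelihood = %.2f" % (D, log_likelihood(D)))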

  14. Estimating unknown input parameters when implementing the NGA ground-motion prediction equations in engineering practice

    USGS Publications Warehouse

    Kaklamanos, James; Baise, Laurie G.; Boore, David M.

    2011-01-01

    The ground-motion prediction equations (GMPEs) developed as part of the Next Generation Attenuation of Ground Motions (NGA-West) project in 2008 are becoming widely used in seismic hazard analyses. However, these new models are considerably more complicated than previous GMPEs, and they require several more input parameters. When employing the NGA models, users routinely face situations in which some of the required input parameters are unknown. In this paper, we present a framework for estimating the unknown source, path, and site parameters when implementing the NGA models in engineering practice, and we derive geometrically-based equations relating the three distance measures found in the NGA models. Our intent is for the content of this paper not only to make the NGA models more accessible, but also to help with the implementation of other present or future GMPEs.

  15. Etalon (standard) for surface potential distribution produced by electric activity of the heart.

    PubMed

    Szathmáry, V; Ruttkay-Nedecký, I

    1981-01-01

    The authors submit etalon (standard) equipotential maps as an aid in the evaluation of maps of surface potential distributions in living subjects. They were obtained by measuring potentials on the surface of an electrolytic tank shaped like the thorax. The individual etalon maps were determined in such a way that the parameters of the physical dipole forming the source of the electric field in the tank corresponded to the mean vectorcardiographic parameters measured in a healthy population sample. The technique also allows a quantitative estimate of the degree of non-dipolarity of the heart as the source of the electric field.

  16. Observation model and parameter partials for the JPL VLBI parameter estimation software MODEST/1991

    NASA Technical Reports Server (NTRS)

    Sovers, O. J.

    1991-01-01

    This document is a revision of MASTERFIT-1987, which it supersedes. Changes made during 1988 to 1991 include the introduction of the octupole component of solid Earth tides, the NUVEL tectonic motion model, partial derivatives for the precession constant and source position rates, the option to correct for source structure, a refined model for antenna offsets, modeling of the unique antenna at Richmond, FL, an improved nutation series due to Zhu, Groten, and Reigber, and reintroduction of the old (Woolard) nutation series for simulation purposes. The text describing the relativistic transformations and gravitational contributions to the delay model was also revised to reflect the computer code more faithfully.

  17. Quantitative estimation of minimum offset for multichannel surface-wave survey with actively exciting source

    USGS Publications Warehouse

    Xu, Y.; Xia, J.; Miller, R.D.

    2006-01-01

    Multichannel analysis of surface waves is a developing method widely used in shallow subsurface investigations. The field procedures and related parameters are very important for successful applications. Among these parameters, the source-receiver offset range is seldom discussed in theory and is normally determined by empirical or semi-quantitative methods in current practice. This paper discusses the problem from a theoretical perspective. A formula for quantitatively evaluating a layered homogeneous elastic model was developed. The analytical results based on simple models and experimental data demonstrate that the formula is correct for surface-wave surveys in near-surface applications. © 2005 Elsevier B.V. All rights reserved.

  18. Maximum entropy approach to statistical inference for an ocean acoustic waveguide.

    PubMed

    Knobles, D P; Sagers, J D; Koch, R A

    2012-02-01

    A conditional probability distribution suitable for estimating the statistical properties of ocean seabed parameter values inferred from acoustic measurements is derived from a maximum entropy principle. The specification of the expectation value for an error function constrains the maximization of an entropy functional. This constraint determines the sensitivity factor (β) to the error function of the resulting probability distribution, which is a canonical form that provides a conservative estimate of the uncertainty of the parameter values. From the conditional distribution, marginal distributions for individual parameters can be determined from integration over the other parameters. The approach is an alternative to obtaining the posterior probability distribution without an intermediary determination of the likelihood function followed by an application of Bayes' rule. In this paper the expectation value that specifies the constraint is determined from the values of the error function for the model solutions obtained from a sparse number of data samples. The method is applied to ocean acoustic measurements taken on the New Jersey continental shelf. The marginal probability distribution for the values of the sound speed ratio at the surface of the seabed and the source levels of a towed source are examined for different geoacoustic model representations. © 2012 Acoustical Society of America
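
    A grid version of the construction: the posterior is the canonical form p(θ) ∝ exp(−βE(θ)), with the sensitivity β fixed so that ⟨E⟩ matches the expectation value supplied by the data; the error-function values and the target below are invented for illustration.

        import numpy as np
        from scipy.optimize import brentq

        rng = np.random.default_rng(9)

        # Candidate sound-speed ratios and their error-function values
        # (here synthetic; in practice from forward-model/data comparisons).
        grid = np.linspace(1.0, 1.3, 301)
        E = (grid - 1.12) ** 2 / 0.001 + rng.normal(0, 0.3, grid.size) ** 2

        E_target = 2.0                 # expectation constraint, assumed

        def mean_E(beta):              # <E> under p ~ exp(-beta * E)
            w = np.exp(-beta * (E - E.min()))
            return np.sum(w * E) / np.sum(w)

        beta = brentq(lambda b: mean_E(b) - E_target, 1e-6, 100.0)
        p = np.exp(-beta * (E - E.min()))
        p /= np.trapz(p, grid)         # normalized marginal density

        print("sensitivity beta = %.3f" % beta)
        print("posterior mean ratio = %.4f" % np.trapz(grid * p, grid))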

  19. Mass discharge estimation from contaminated sites: Multi-model solutions for assessment of conceptual uncertainty

    NASA Astrophysics Data System (ADS)

    Thomsen, N. I.; Troldborg, M.; McKnight, U. S.; Binning, P. J.; Bjerg, P. L.

    2012-04-01

    Mass discharge estimates are increasingly being used in the management of contaminated sites. Such estimates have proven useful for supporting decisions related to the prioritization of contaminated sites in a groundwater catchment. Potential management options can be categorised as follows: (1) leave as is, (2) clean up, or (3) further investigation needed. However, mass discharge estimates are often very uncertain, which may hamper the management decisions. If option 1 is incorrectly chosen, soil and water quality will decrease, threatening or destroying drinking water resources. The risk of choosing option 2 is to spend money on remediating a site that does not pose a problem. Choosing option 3 will often be safest, but may not be the optimal economic solution. Quantification of the uncertainty in mass discharge estimates can therefore greatly improve the foundation for selecting the appropriate management option. The uncertainty of mass discharge estimates depends greatly on the extent of the site characterization. A good approach for uncertainty estimation will be flexible with respect to the investigation level, and account for both parameter and conceptual model uncertainty. We propose a method for quantifying the uncertainty of dynamic mass discharge estimates from contaminant point sources on the local scale. The method considers both parameter and conceptual uncertainty through a multi-model approach. The multi-model approach evaluates multiple conceptual models for the same site. The different conceptual models consider different source characterizations and hydrogeological descriptions. The idea is to include a set of essentially different conceptual models, where each model is believed to be a realistic representation of the given site, based on the current level of information. Parameter uncertainty is quantified using Monte Carlo simulations. For each conceptual model we calculate a transient mass discharge estimate with uncertainty bounds resulting from the parametric uncertainty. To quantify the conceptual uncertainty for a given site, we combine the outputs from the different conceptual models using Bayesian model averaging. The weight for each model is obtained by integrating available data and expert knowledge using Bayesian belief networks. The multi-model approach is applied to a contaminated site. At the site, a DNAPL (dense non-aqueous phase liquid) spill consisting of PCE (perchloroethylene) has contaminated a fractured clay till aquitard overlying a limestone aquifer. The exact shape and nature of the source is unknown, and so is the importance of transport in the fractures. The result of the multi-model approach is a visual representation of the uncertainty of the mass discharge estimates for the site, which can be used to support the management options.
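
    The multi-model combination step reduces to a weighted mixture of per-model predictive distributions; the three conceptual models, their lognormal discharge distributions, and the belief-network weights below are stand-ins for illustration.

        import numpy as np

        rng = np.random.default_rng(10)

        n = 50_000     # Monte Carlo draws per conceptual model (parametric part)
        draws = {
            "matrix-dominated":   rng.lognormal(np.log(2.0), 0.5, n),   # g/day
            "fracture-dominated": rng.lognormal(np.log(8.0), 0.8, n),
            "depleted source":    rng.lognormal(np.log(0.5), 0.4, n),
        }
        weights = [0.5, 0.3, 0.2]      # stand-in Bayesian-belief-network weights

        # BMA predictive distribution = weighted mixture of the model ensembles.
        counts = rng.multinomial(n, weights)
        bma = np.concatenate([d[:c] for d, c in zip(draws.values(), counts)])

        for q in (5, 50, 95):
            print("%2dth percentile mass discharge: %6.2f g/day"
                  % (q, np.percentile(bma, q)))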

  20. Uncertainty in predictions of forest carbon dynamics: separating driver error from model error.

    PubMed

    Spadavecchia, L; Williams, M; Law, B E

    2011-07-01

    We present an analysis of the relative magnitude and contribution of parameter and driver uncertainty to the confidence intervals on estimates of net carbon fluxes. Model parameters may be difficult or impractical to measure, while driver fields are rarely complete, with data gaps due to sensor failure and sparse observational networks. Parameters are generally derived through some optimization method, while driver fields may be interpolated from available data sources. For this study, we used data from a young ponderosa pine stand at Metolius, Central Oregon, and a simple daily model of coupled carbon and water fluxes (DALEC). An ensemble of acceptable parameterizations was generated using an ensemble Kalman filter and eddy covariance measurements of net C exchange. Geostatistical simulations generated an ensemble of meteorological driving variables for the site, consistent with the spatiotemporal autocorrelations inherent in the observational data from 13 local weather stations. Simulated meteorological data were propagated through the model to derive the uncertainty on the CO2 flux resulting from driver uncertainty typical of spatially extensive modeling studies. Furthermore, the model uncertainty was partitioned between temperature and precipitation. With at least one meteorological station within 25 km of the study site, driver uncertainty was relatively small (<10% of the total net flux), while parameterization uncertainty was larger, ~50% of the total net flux. The largest source of driver uncertainty was due to temperature (~8% of the total flux). The combined effect of parameter and driver uncertainty was 57% of the total net flux. However, when the nearest meteorological station was >100 km from the study site, the uncertainty in net ecosystem exchange (NEE) predictions introduced by meteorological drivers increased by 88%. Precipitation estimates were a larger source of bias in NEE estimates than were temperature estimates, although the biases partly compensated for each other. The time scales on which precipitation errors occurred in the simulations were shorter than the temporal scales over which drought developed in the model, so drought events were reasonably simulated. The approach outlined here provides a means to assess the uncertainty and bias introduced by meteorological drivers in regional-scale ecological forecasting.

  1. Application of empirical and dynamical closure methods to simple climate models

    NASA Astrophysics Data System (ADS)

    Padilla, Lauren Elizabeth

    This dissertation applies empirically- and physically-based methods for closure of uncertain parameters and processes to three model systems that lie on the simple end of climate model complexity. Each model isolates one of three sources of closure uncertainty: uncertain observational data, large dimension, and wide-ranging length scales. They serve as efficient test systems toward extension of the methods to more realistic climate models. The empirical approach uses the Unscented Kalman Filter (UKF) to estimate the transient climate sensitivity (TCS) parameter in a globally-averaged energy balance model. Uncertainty in climate forcing and historical temperature make TCS difficult to determine. A range of probabilistic estimates of TCS computed for various assumptions about past forcing and natural variability corroborate ranges reported in the IPCC AR4 found by different means. Also computed are estimates of how quickly uncertainty in TCS may be expected to diminish in the future as additional observations become available. For higher system dimensions the UKF approach may become prohibitively expensive. A modified UKF algorithm is developed in which the error covariance is represented by a reduced-rank approximation, substantially reducing the number of model evaluations required to provide probability densities for unknown parameters. The method estimates the state and parameters of an abstract atmospheric model, known as Lorenz 96, with accuracy close to that of a full-order UKF for 30-60% rank reduction. The physical approach to closure uses the Multiscale Modeling Framework (MMF) to demonstrate closure of small-scale, nonlinear processes that would not be resolved directly in climate models. A one-dimensional, abstract test model with a broad spatial spectrum is developed. The test model couples the Kuramoto-Sivashinsky equation to a transport equation that includes cloud formation and precipitation-like processes. In the test model, three main sources of MMF error are evaluated independently. Loss of nonlinear multi-scale interactions and periodic boundary conditions in closure models were the dominant sources of error. Using a reduced-order modeling approach to maximize energy content allowed reduction of the closure model dimension by up to 75% without loss in accuracy. MMF and a comparable alternative model performed equally well compared to direct numerical simulation.

  2. PAGER-CAT: A composite earthquake catalog for calibrating global fatality models

    USGS Publications Warehouse

    Allen, T.I.; Marano, K.D.; Earle, P.S.; Wald, D.J.

    2009-01-01

    We have described the compilation and contents of PAGER-CAT, an earthquake catalog developed principally for calibrating earthquake fatality models. It brings together information from a range of sources in a comprehensive, easy-to-use digital format. Earthquake source information (e.g., origin time, hypocenter, and magnitude) contained in PAGER-CAT has been used to develop an Atlas of ShakeMaps of historical earthquakes (Allen et al. 2008) that can subsequently be used to estimate the population exposed to various levels of ground shaking (Wald et al. 2008). These measures will ultimately yield improved earthquake loss models employing the uniform hazard mapping methods of ShakeMap. Currently PAGER-CAT does not consistently contain indicators of landslide and liquefaction occurrence prior to 1973. In future PAGER-CAT releases we plan to better document the incidence of these secondary hazards. This information is contained in some existing global catalogs but is far from complete and often difficult to parse. Landslide and liquefaction hazards can be important factors contributing to earthquake losses (e.g., Marano et al. unpublished). Consequently, the absence of secondary hazard indicators in PAGER-CAT, particularly for events prior to 1973, could be misleading to some users concerned with ground-shaking-related losses. We have applied our best judgment in the selection of PAGER-CAT's preferred source parameters and earthquake effects. We acknowledge that the creation of a composite catalog always requires subjective decisions, but we believe PAGER-CAT represents a significant step forward in bringing together the best available estimates of earthquake source parameters and reports of earthquake effects. All information considered in PAGER-CAT is stored as provided in its native catalog so that other users can modify PAGER preferred parameters based on their specific needs or opinions. As with all catalogs, the values of some parameters listed in PAGER-CAT are highly uncertain, particularly the casualty numbers, which must be regarded as estimates rather than firm numbers for many earthquakes. Consequently, we encourage contributions from the seismology and earthquake engineering communities to further improve this resource via the Wikipedia page and personal communications, for the benefit of the whole community.

  3. Study of a generalized birks formula for the scintillation response of a CaMoO4 crystal

    NASA Astrophysics Data System (ADS)

    Lee, J. Y.; Kim, H. J.; Kang, Sang Jun; Lee, M. H.

    2017-12-01

    We have investigated the scintillation characteristics of CaMoO4 (CMO) crystals by using a gamma source and various internal alpha sources. A 137Cs source with 662-keV gamma-rays was used for the gamma-quanta light-yield calibration. Internal radioactive contaminations provided alpha particles with different energies from 5.41 to 7.88 MeV. We developed a C++ program based on the ROOT package for the fitting of parameters in a generalized Birks semi-empirical formula by combining the experimental and the simulation data. The fitted Birks parameters are kB1 = 3.3 × 10⁻³ (g/MeV cm²) for the first parameter and kB2 = 7.9 × 10⁻⁵ (g/MeV cm²)² for the second parameter, with a χ²/n.d.f. (number of degrees of freedom) of 0.1/4. We were able to estimate the 238U and 234U contaminations in a CMO crystal by using the generalized Birks semi-empirical formula.
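
    The generalized Birks relation dL/dE = S / (1 + kB1(dE/dx) + kB2(dE/dx)²) can be integrated numerically with the fitted parameters quoted above; the alpha stopping power below is a crude power-law stand-in (real work would use tabulated dE/dx), so the quenching factors are only indicative.

        import numpy as np
        from scipy.integrate import quad

        kb1, kb2 = 3.3e-3, 7.9e-5      # fitted values quoted in the abstract
        S = 1.0                        # scintillation efficiency, arbitrary units

        def dedx_alpha(E):             # MeV cm^2/g, crude power-law stand-in
            return 700.0 * E ** (-0.6)

        def light_yield(E_alpha):      # integrate dL/dE up to the alpha energy
            f = lambda E: S / (1 + kb1 * dedx_alpha(E)
                               + kb2 * dedx_alpha(E) ** 2)
            return quad(f, 1e-3, E_alpha)[0]

        for E in (5.41, 7.88):         # alpha energies bounding the quoted range
            L = light_yield(E)
            print("E_alpha = %.2f MeV -> light = %.3f, quenching ~ %.2f"
                  % (E, L, L / E))     # L/E approximates the alpha/gamma ratio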

  4. Adding source positions to the IVS Combination

    NASA Astrophysics Data System (ADS)

    Bachmann, S.; Thaller, D.

    2016-12-01

    Simultaneous estimation of source positions, Earth orientation parameters (EOPs) and station positions in one common adjustment is crucial for a consistent generation of celestial and terrestrial reference frame (CRF and TRF, respectively). VLBI is the only technique to guarantee this consistency. Previous publications showed that the VLBI intra-technique combination could improve the quality of the EOPs and station coordinates compared to the individual contributions. By now, the combination of EOP and station coordinates is well established within the IVS and in combination with other space geodetic techniques (e.g. inter-technique combined TRF like the ITRF). Most of the contributing IVS Analysis Centers (AC) now provide source positions as a third parameter type (besides EOP and station coordinates), which have not been used for an operational combined solution yet. A strategy for the combination of source positions has been developed and integrated into the routine IVS combination. Investigations are carried out to compare the source positions derived from different IVS ACs with the combined estimates to verify whether the source positions are improved by the combination, as it has been proven for EOP and station coordinates. Furthermore, global solutions of source positions, i.e., so-called catalogues describing a CRF, are generated consistently with the TRF similar to the IVS operational combined quarterly solution. The combined solutions of the source positions time series and the consistently generated TRF and CRF are compared internally to the individual solutions of the ACs as well as to external CRF catalogues and TRFs. Additionally, comparisons of EOPs based on different CRF solutions are presented as an outlook for consistent EOP, CRF and TRF realizations.

  5. Time-Dependent Moment Tensors of the First Four Source Physics Experiments (SPE) Explosions

    NASA Astrophysics Data System (ADS)

    Yang, X.

    2015-12-01

    We use mainly vertical-component geophone data within 2 km of the epicenter to invert for time-dependent moment tensors of the first four SPE explosions: SPE-1, SPE-2, SPE-3 and SPE-4Prime. We employ a one-dimensional (1D) velocity model developed from P- and Rg-wave travel times for Green's function calculations; the attenuation structure of the model is developed from P- and Rg-wave amplitudes. We select data for the inversion based on the criterion that they show travel times and amplitude behavior consistent with those predicted by the 1D model. Due to the limited azimuthal coverage of the sources and the mostly vertical-component-only nature of the dataset, only the long-period, diagonal components of the moment tensors are well constrained. Nevertheless, the moment tensors, particularly their isotropic components, provide reasonable estimates of the long-period source amplitudes as well as estimates of corner frequencies, albeit with larger uncertainties. The estimated corner frequencies are nonetheless consistent with estimates from ratios of seismogram spectra from different explosions. These long-period source amplitudes and corner frequencies cannot be fit by classical P-wave explosion source models. The results motivate the development of new P-wave source models suitable for these chemical explosions. To that end, we fit the inverted moment-tensor spectra by modifying the classical explosion model using regressions of estimated source parameters. Although the number of data points used in the regression is small, the approach suggests a path for developing the new model as more data are collected.

  6. Development of FWIGPR, an open-source package for full-waveform inversion of common-offset GPR data

    NASA Astrophysics Data System (ADS)

    Jazayeri, S.; Kruse, S.

    2017-12-01

    We introduce a package for full-waveform inversion (FWI) of Ground Penetrating Radar (GPR) data based on a combination of open-source programs. The FWI requires a good starting model, based on direct knowledge of field conditions or on traditional ray-based inversion methods. With a good starting model, the FWI can improve the resolution of selected subsurface features. The package will be made available for general use in educational and research activities. The FWIGPR package consists of four main components: 3D-to-2D data conversion, source wavelet estimation, forward modeling, and inversion. (These four components additionally require the development, by the user, of a good starting model.) A major challenge with GPR data is the unknown form of the waveform emitted by the transmitter held close to the ground surface. We apply a blind deconvolution method to estimate the source wavelet, based on a sparsity assumption about the reflectivity series of the subsurface model (Gholami and Sacchi 2012): the estimated wavelet is the one whose deconvolution from the data yields the sparsest reflectivity series, with the fewest reflectors. The gprMax code (www.gprmax.com) is used as the forward modeling tool and the PEST parameter estimation package (www.pesthomepage.com) for the inversion. To reduce computation time, the field data are converted to an effective 2D equivalent, and the gprMax code can be run in 2D mode. In the first step, the user must create a good starting model of the data, presumably using ray-based methods. This estimated model is introduced to the FWI process as an initial model. Next, the 3D data are converted to 2D, and the user estimates the source wavelet that best fits the observed data under the sparsity assumption on the earth's response. Last, PEST runs gprMax with the initial model, calculates the misfit between the synthetic and observed data, and, using an iterative algorithm that calls gprMax several times in each iteration, finds successive models that better fit the data. To gauge whether the iterative process has arrived at a local or global minimum, the process can be repeated with a range of starting models. Tests have shown that this package can successfully improve estimates of selected subsurface model parameters for simple synthetic and real data. Ongoing research will focus on FWI of more complex scenarios.
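
    As a rough sketch of the loop logic only: the inversion repeatedly perturbs the model, reruns the forward solver, and minimizes the data misfit. The toy forward_model below is a hypothetical stand-in for a gprMax run (a single reflector parameterized by depth and relative permittivity), not the real solver or the package's interface.

      import numpy as np
      from scipy.optimize import least_squares

      def forward_model(params, t):
          # Hypothetical stand-in for a 2D gprMax run: one reflector described
          # by depth (m) and relative permittivity; returns a toy reflected wavelet.
          depth, eps_r = params
          delay = 2.0 * depth * np.sqrt(eps_r) / 0.3   # two-way time, ns (c = 0.3 m/ns)
          return np.exp(-((t - delay) / 4.0) ** 2)

      t = np.linspace(0.0, 60.0, 600)                  # time axis, ns
      observed = forward_model([1.5, 9.0], t)          # synthetic "observed" trace

      # Start from a ray-based initial model and iterate to reduce the misfit,
      # mimicking what PEST does around repeated gprMax calls.
      result = least_squares(lambda p: forward_model(p, t) - observed, x0=[1.3, 8.0])
      print("estimated depth, eps_r:", result.x)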

  7. Using Predictive Uncertainty Analysis to Assess Hydrologic Model Performance for a Watershed in Oregon

    NASA Astrophysics Data System (ADS)

    Brannan, K. M.; Somor, A.

    2016-12-01

    A variety of statistics are used to assess watershed model performance, but these statistics do not directly answer the question: what is the uncertainty of my prediction? Understanding predictive uncertainty is important when using a watershed model to develop a Total Maximum Daily Load (TMDL). TMDLs are a key component of the US Clean Water Act and specify the amount of a pollutant that can enter a waterbody while the waterbody still meets water quality criteria. TMDL developers use watershed models to estimate pollutant loads from nonpoint sources of pollution. We are developing a TMDL for bacteria impairments in a watershed in the Coastal Range of Oregon. We set up an HSPF model of the watershed and used the calibration software PEST to estimate HSPF hydrologic parameters and then perform predictive uncertainty analysis of stream flow. We used Monte Carlo simulation to run the model with 1,000 different parameter sets and assess predictive uncertainty. To reduce the chance of specious parameter sets, we accounted for the relationships among parameter values by using mathematically based regularization techniques and an estimate of the parameter covariance when generating random parameter sets. We used a novel approach to select flow data for predictive uncertainty analysis: we set aside flow data that occurred on days when bacteria samples were collected and did not use these flows in the estimation of the model parameters. We calculated a percent uncertainty for each flow observation based on the 1,000 model runs. We also used several methods to visualize results, with an emphasis on making the data accessible to both technical and general audiences. We will use the predictive uncertainty estimates in the next phase of our work, simulating bacteria fate and transport in the watershed.
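
    A minimal sketch of the covariance-aware Monte Carlo step described above, with hypothetical parameter names, means, covariance and a stand-in for the HSPF run:

      import numpy as np

      rng = np.random.default_rng(42)

      # Hypothetical calibrated HSPF parameter means and covariance (e.g. from PEST).
      names = ["LZSN", "UZSN", "INFILT"]
      mean = np.array([5.0, 1.2, 0.08])
      cov = np.array([[0.25,  0.02,  0.001],
                      [0.02,  0.04,  0.0005],
                      [0.001, 0.0005, 0.0004]])

      # Drawing from the multivariate normal preserves parameter correlations,
      # reducing the chance of specious (physically inconsistent) parameter sets.
      param_sets = rng.multivariate_normal(mean, cov, size=1000)

      def run_model(params):
          # Stand-in for an HSPF run returning simulated flow on a held-out day.
          return params @ np.array([2.0, -1.0, 50.0]) + 10.0

      flows = np.array([run_model(p) for p in param_sets])
      lo, hi = np.percentile(flows, [2.5, 97.5])
      print(f"95% predictive interval: {lo:.2f} - {hi:.2f}")
      print(f"percent uncertainty: {100 * (hi - lo) / flows.mean():.1f}%")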

  8. Comparison of maximum runup through analytical and numerical approaches for different fault parameter estimates

    NASA Astrophysics Data System (ADS)

    Kanoglu, U.; Wronna, M.; Baptista, M. A.; Miranda, J. M. A.

    2017-12-01

    The one-dimensional analytical runup theory in combination with near-shore synthetic waveforms is a promising tool for tsunami rapid early warning systems. Its application in realistic cases with complex bathymetry and an initial wave condition from inverse modelling has shown that maximum runup values can be estimated reasonably well. In this study we generate simplified bathymetry domains that resemble realistic near-shore features, and we investigate the accuracy of the analytical runup formulae under variations of the fault source parameters and near-shore bathymetric features. To do this we systematically vary the fault plane parameters to compute the initial tsunami wave condition. Subsequently, we use the initial conditions to run the numerical tsunami model on a coupled system of four nested grids and compare the results to the analytical estimates. Variation of the dip angle of the fault plane showed that analytical estimates differ by less than 10% for angles of 5-45 degrees in a simple bathymetric domain. These results show that the use of analytical formulae for fast runup estimates constitutes a very promising approach in a simple bathymetric domain and might be implemented in hazard mapping and early warning.
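
    For a flavor of what a one-dimensional analytical runup estimate looks like (the paper's exact formulae are not reproduced here), the classical solitary-wave runup law of Synolakis (1987) can be evaluated directly; the input values below are illustrative only.

      import numpy as np

      def solitary_runup(H, d, beta):
          # Classical non-breaking solitary-wave runup law (Synolakis 1987):
          # R/d = 2.831 * sqrt(cot(beta)) * (H/d)**(5/4),
          # for a wave of height H in offshore depth d climbing a plane
          # beach of slope angle beta (radians).
          return d * 2.831 * np.sqrt(1.0 / np.tan(beta)) * (H / d) ** 1.25

      # Example: 1 m wave in 10 m depth on a 1:5 beach slope.
      R = solitary_runup(1.0, 10.0, np.arctan(1.0 / 5.0))
      print(f"maximum runup: {R:.2f} m")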

  9. Constraining uncertainties in water supply reliability in a tropical data scarce basin

    NASA Astrophysics Data System (ADS)

    Kaune, Alexander; Werner, Micha; Rodriguez, Erasmo; de Fraiture, Charlotte

    2015-04-01

    Assessing the water supply reliability in river basins is essential for adequate planning and development of irrigated agriculture and urban water systems. In many cases hydrological models are applied to determine the surface water availability in river basins. However, surface water availability and variability is often not appropriately quantified due to epistemic uncertainties, leading to water supply insecurity. The objective of this research is to determine the water supply reliability in order to support planning and development of irrigated agriculture in a tropical, data-scarce environment. The approach proposed uses a simple hydrological model, but explicitly includes model parameter uncertainty. A transboundary river basin in the tropical region of Colombia and Venezuela with an area of approximately 2100 km² was selected as a case study. The Budyko hydrological framework was extended to consider climatological input variability and model parameter uncertainty, and through this the surface water reliability to satisfy the irrigation and urban demand was estimated. This provides a spatial estimate of the water supply reliability across the basin. For the middle basin the reliability was found to be less than 30% for most months when water is extracted from an upstream source. Conversely, the monthly water supply reliability was high (>98%) in the lower basin irrigation areas when water was withdrawn from a source located further downstream. Including model parameter uncertainty provides a more complete estimate of the water supply reliability, but that estimate is influenced by the uncertainty in the model. Reducing the uncertainty in the model through improved data, and perhaps an improved model structure, will improve the estimate of the water supply reliability, allowing better planning of irrigated agriculture and dependable water allocation decisions.
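
    A minimal sketch of propagating Budyko-type parameter uncertainty into a reliability estimate, here using Fu's one-parameter curve as an assumed form and hypothetical monthly numbers; the study's actual formulation may differ.

      import numpy as np

      rng = np.random.default_rng(1)

      def fu_evaporation_ratio(phi, omega):
          # Fu's one-parameter Budyko curve: E/P as a function of aridity phi = PET/P.
          return 1.0 + phi - (1.0 + phi**omega) ** (1.0 / omega)

      P, PET, demand = 150.0, 120.0, 40.0         # monthly values, mm (hypothetical)
      omega_samples = rng.normal(2.6, 0.3, 5000)  # assumed parameter uncertainty

      # Reliability = fraction of parameter draws for which runoff meets demand.
      runoff = P * (1.0 - fu_evaporation_ratio(PET / P, omega_samples))
      reliability = np.mean(runoff >= demand)
      print(f"water supply reliability: {100 * reliability:.1f}%")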

  10. A Theoretical Framework for Calibration in Computer Models: Parametrization, Estimation and Convergence Properties

    DOE PAGES

    Tuo, Rui; Jeff Wu, C. F.

    2016-07-19

    Calibration parameters in deterministic computer experiments are those attributes that cannot be measured or are otherwise unavailable in physical experiments. Here, an approach is presented to estimate them by using data from physical experiments and computer simulations. A theoretical framework is given which allows us to study the issues of parameter identifiability and estimation. We define L2-consistency for calibration as a justification for calibration methods. It is shown that a simplified version of the original KO method leads to asymptotically L2-inconsistent calibration. This L2-inconsistency can be remedied by modifying the original estimation procedure. A novel calibration method, called L2 calibration, is proposed and proven to be L2-consistent and to enjoy an optimal convergence rate. Furthermore, a numerical example and some mathematical analysis are used to illustrate the source of the L2-inconsistency problem.

  11. Simultaneous estimation of plasma parameters from spectroscopic data of neutral helium using least square fitting of CR-model

    NASA Astrophysics Data System (ADS)

    Jain, Jalaj; Prakash, Ram; Vyas, Gheesa Lal; Pal, Udit Narayan; Chowdhuri, Malay Bikas; Manchanda, Ranjana; Halder, Nilanjan; Choyal, Yaduvendra

    2015-12-01

    In the present work an effort has been made to simultaneously estimate plasma parameters (electron density, electron temperature, ground-state atom density, ground-state ion density and metastable-state density) from the observed visible spectra of a Penning plasma discharge (PPD) source using least-squares fitting. The analysis is performed for the prominently observed neutral helium lines. The Atomic Data and Analysis Structure (ADAS) database is used to provide the required collisional-radiative (CR) photon emissivity coefficient (PEC) values under the optically thin plasma condition. With this condition the electron temperature estimated from the PPD is found to be rather high. It is seen that the inclusion of opacity in the observed spectral lines through the PECs, and the addition of diffusion of neutrals and metastable-state species in the CR-model code, improve the electron temperature estimation in the simultaneous measurement.

  12. Biased three-intensity decoy-state scheme on the measurement-device-independent quantum key distribution using heralded single-photon sources.

    PubMed

    Zhang, Chun-Hui; Zhang, Chun-Mei; Guo, Guang-Can; Wang, Qin

    2018-02-19

    At present, most measurement-device-independent quantum key distribution (MDI-QKD) schemes are based on weak coherent sources (WCSs) and are limited in transmission distance under realistic experimental conditions, e.g., when considering finite-size-key effects. Hence, in this paper we propose a new biased decoy-state scheme using heralded single-photon sources (HSPSs) for three-intensity MDI-QKD, where we prepare the decoy pulses only in the X basis and adopt both collective constraints and joint parameter-estimation techniques. Compared with former schemes using WCSs or HSPSs, after implementing full parameter optimization our scheme gives a distinctly reduced quantum bit error rate in the X basis and thus shows excellent performance, especially when the data size is relatively small.

  13. Monte Carlo calculated TG-60 dosimetry parameters for the β⁻ emitter ¹⁵³Sm brachytherapy source

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sadeghi, Mahdi; Taghdiri, Fatemeh; Hamed Hosseini, S.

    Purpose: The formalism recommended by Task Group 60 (TG-60) of the American Association of Physicists in Medicine (AAPM) is applicable to β sources. The radioactive biocompatible and biodegradable ¹⁵³Sm glass seed without encapsulation is a β⁻-emitting radionuclide with a short half-life that delivers a high dose rate to the tumor in the millimeter range. This study presents the results of Monte Carlo calculations of the dosimetric parameters for the ¹⁵³Sm brachytherapy source. Methods: Version 5 of the Monte Carlo N-Particle (MCNP) radiation transport code was used to calculate two-dimensional dose distributions around the source. The dosimetric parameters of the AAPM TG-60 recommendations, including the reference dose rate, the radial dose function, the anisotropy function, and the one-dimensional anisotropy function, were obtained. Results: The dose-rate value at the reference point was estimated to be 9.21 ± 0.6 cGy h⁻¹ μCi⁻¹. Due to the low-energy betas emitted from ¹⁵³Sm sources, the dose fall-off profile is sharper than for other beta-emitting sources. The calculated dosimetric parameters in this study are compared to several beta- and photon-emitting seeds. Conclusions: The results show the advantage of the ¹⁵³Sm source in comparison with the other sources because of the rapid dose fall-off of beta rays and the high dose rate at short distances from the seed. The results would be helpful in the development of radioactive implants using ¹⁵³Sm seeds for brachytherapy treatment.

  14. A New Network-Based Approach for the Earthquake Early Warning

    NASA Astrophysics Data System (ADS)

    Alessandro, C.; Zollo, A.; Colombelli, S.; Elia, L.

    2017-12-01

    Here we propose a new method which allows for issuing an early warning based upon the real-time mapping of the Potential Damage Zone (PDZ), i.e. the epicentral area where the peak ground velocity is expected to exceed the damaging or strong shaking levels, with no assumption about the earthquake rupture extent and spatial variability of ground motion. The system includes the techniques for a refined estimation of the main source parameters (earthquake location and magnitude) and for an accurate prediction of the expected ground shaking level. The system processes the 3-component, real-time ground acceleration and velocity data streams at each station. For stations providing high-quality data, the characteristic P-wave period (τc) and the P-wave displacement, velocity and acceleration amplitudes (Pd, Pv and Pa) are jointly measured on a progressively expanded P-wave time window. The evolutionary estimate of these parameters at stations around the source allows prediction of the geometry and extent of the PDZ, as well as of the lower shaking-intensity regions at larger epicentral distances. This is done by correlating the measured P-wave amplitude with the Peak Ground Velocity (PGV) and Instrumental Intensity (IMM) and by interpolating the measured and predicted P-wave amplitude on a dense spatial grid, including the nodes of the accelerometer/velocimeter array deployed in the earthquake source area. Depending on the network density and spatial source coverage, this method naturally accounts for effects related to the earthquake rupture extent (e.g. source directivity) and the spatial variability of strong ground motion related to crustal wave propagation and site amplification. We have tested this system by a retrospective analysis of three earthquakes: the 2016 central Italy Mw 6.5, the 2008 Iwate-Miyagi Mw 6.9 and the 2011 Tohoku Mw 9.0 events. Source parameter characterization is stable and reliable, and the intensity maps show extended-source effects consistent with kinematic rupture models of the events.
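
    The P-wave parameters named above have standard definitions; as a generic illustration (not the authors' implementation), τc and Pd can be computed from a displacement/velocity pair over a P-wave window, with τc = 2π·sqrt(∫u² dt / ∫u̇² dt):

      import numpy as np

      def tau_c_and_pd(disp, vel, dt):
          # Characteristic P-wave period over the window:
          # tau_c = 2*pi*sqrt( int u^2 dt / int udot^2 dt ),
          # and peak P-wave displacement Pd.
          r = np.trapz(vel**2, dx=dt) / np.trapz(disp**2, dx=dt)
          tau_c = 2.0 * np.pi / np.sqrt(r)
          pd = np.max(np.abs(disp))
          return tau_c, pd

      # Toy 3-s P-wave window sampled at 100 Hz (hypothetical waveform).
      dt = 0.01
      t = np.arange(0.0, 3.0, dt)
      disp = 1e-4 * np.sin(2 * np.pi * 1.5 * t) * np.exp(-t)
      vel = np.gradient(disp, dt)

      tc, pd = tau_c_and_pd(disp, vel, dt)
      print(f"tau_c = {tc:.2f} s, Pd = {pd:.2e} m")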

  15. Preliminary evaluation of vulnerability to local tsunami and flooding at Puerto Vallarta

    NASA Astrophysics Data System (ADS)

    Trejo-Gómez, E.; Nunez-Cornu, F. J.; Ortiz, M.; Escudero, C. R.; CA-UdG-276 Sisvoc

    2013-05-01

    The Jalisco coast is susceptible to local tsunamis due to the occurrence of large earthquakes. Three of the largest earthquakes occurred in 1932; evidence suggests that one of them was caused by offshore subsidence of sediments deposited by the Armeria River. The seismic source of the 1932 tsunamis has not been studied. On October 9, 1995, a large earthquake (Mw = 8.0) produced a tsunami with run-up heights of up to 5 m. This event affected Tenacatita Bay and many small villages along the coasts of Jalisco and Colima. Using seismic source parameters, we simulated the 1995 tsunami and estimated the maximum wave height. We compared our results with 20 field measurements taken in 1995 along the southern coast of Jalisco State, from Chalacatepec to Barra de Navidad. Seismic source parameters similar to those used for the 1995 tsunami simulation were then used as a reference for simulating a hypothetical seismic source in front of Puerto Vallarta, assuming that the rupture occurs in the seismic gap off the northern coast of Jalisco. Ten sites acting as theoretical pressure sensors were distributed to cover Banderas Bay, and the maximum wave height and arrival time at the coast were estimated for each. We then delimited flood hazard zones on a digital terrain model at a scale of 1:20,000; this hypothetical-tsunami hazard information for Puerto Vallarta has already been incorporated. The flood hazard zones lie in the north of Puerto Vallarta: Ameca, El Salado, El Pitillal and Camarones. The initial wave height could reach ≤ 1 m in the El Pitillal zone 15 minutes after the earthquake. The maximum flooded area for Puerto Vallarta was estimated in the El Salado zone, extending ≤ 2 km inland, with maximum wave heights > 3 m to ≤ 4.8 m at 25 and 75 minutes. Finally, we carried out a preliminary vulnerability evaluation for local tsunami and flooding based on the spatial distribution of socio-economic data from INEGI, which indicates low vulnerability in El Salado and high vulnerability in El Pitillal and Ameca.

  16. Attenuation and source properties at the Coso Geothermal area, California

    USGS Publications Warehouse

    Hough, S.E.; Lees, J.M.; Monastero, F.

    1999-01-01

    We use a multiple-empirical Green's function method to determine source properties of small (M -0.4 to 1.3) earthquakes and P- and S-wave attenuation at the Coso Geothermal Field, California. Source properties of a previously identified set of clustered events from the Coso geothermal region are first analyzed using an empirical Green's function (EGF) method. Stress-drop values of at least 0.5-1 MPa are inferred for all of the events; in many cases, the corner frequency is outside the usable bandwidth, and the stress drop can only be constrained as being higher than 3 MPa. P- and S-wave stress-drop estimates are identical to within the resolution limits of the data. These results are indistinguishable from numerous EGF studies of M 2-5 earthquakes, suggesting a similarity in rupture processes that extends to events that are both tiny and induced, providing further support for Byerlee's Law. Whole-path Q estimates for P and S waves are determined using the multiple-empirical Green's function (MEGF) method of Hough (1997), whereby spectra from clusters of colocated events at a given station are inverted for a single attenuation parameter, t*, with source parameters constrained from EGF analysis. The t* estimates, which we infer to be resolved to within 0.01 sec or better, exhibit almost as much scatter as a function of hypocentral distance as do values from previous single-spectrum studies, for which much higher uncertainties in individual t* estimates are expected. The variability in t* estimates determined here therefore suggests real lateral variability in Q structure. Although the ray-path coverage is too sparse to yield a complete three-dimensional attenuation tomographic image, we invert the inferred t* values for three-dimensional structure using a damped least-squares method, and the results do reveal significant lateral variability in Q structure. The inferred attenuation variability corresponds to the heat-flow variations within the geothermal region. A central low-Q region corresponds well with the central high-heat-flow region; additional detailed structure is also suggested.
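
    The MEGF inversion itself is more involved, but the role of the whole-path attenuation parameter t* can be illustrated simply: for a spectrum corrected for the source, amplitude decays as exp(-π f t*), so t* follows from a linear fit to the log spectrum (the values below are synthetic).

      import numpy as np

      # Whole-path attenuation: A(f) = A0 * exp(-pi * f * t_star), so
      # ln A is linear in f with slope -pi * t_star.
      f = np.linspace(2.0, 40.0, 60)        # Hz
      t_star_true, A0 = 0.03, 1.0           # hypothetical values
      A = A0 * np.exp(-np.pi * f * t_star_true)
      A *= np.exp(0.05 * np.random.default_rng(0).standard_normal(f.size))

      slope, intercept = np.polyfit(f, np.log(A), 1)
      print(f"estimated t* = {-slope / np.pi:.4f} s")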

  17. Python-Based Applications for Hydrogeological Modeling

    NASA Astrophysics Data System (ADS)

    Khambhammettu, P.

    2013-12-01

    Python is a general-purpose, high-level programming language whose design philosophy emphasizes code readability. Add-on packages supporting fast array computation (numpy), plotting (matplotlib), and scientific/mathematical functions (scipy) have resulted in a powerful ecosystem for scientists interested in exploratory data analysis, high-performance computing and data visualization. Three examples are provided to demonstrate the applicability of the Python environment in hydrogeological applications. Python programs were used to model an aquifer test and estimate aquifer parameters at a Superfund site. The aquifer test conducted at a Groundwater Circulation Well was modeled with the Python/FORTRAN-based TTIM Analytic Element Code. The aquifer parameters were estimated with PEST such that a good match was produced between the simulated and observed drawdowns. Python scripts were written to interface with PEST and visualize the results. A convolution-based approach was used to estimate source concentration histories based on observed concentrations at receptor locations. Unit Response Functions (URFs) that relate the receptor concentrations to a unit release at the source were derived with the ATRANS code. The impact of any release at the source could then be estimated by convolving the source release history with the URFs. Python scripts were written to compute and visualize receptor concentrations for user-specified source histories. The framework provided a simple and elegant way to test various hypotheses about the site. A Python/FORTRAN-based program TYPECURVEGRID-Py was developed to compute and visualize groundwater elevations and drawdown through time in response to a regional uniform hydraulic gradient and the influence of pumping wells, using either the Theis solution for a fully confined aquifer or the Hantush-Jacob solution for a leaky confined aquifer. The program supports an arbitrary number of wells that can operate according to arbitrary schedules. The Python wrapper invokes the underlying FORTRAN layer to compute transient groundwater elevations and processes this information to create time-series and 2D plots.
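
    A minimal sketch of the convolution step described above, with a hypothetical unit response function standing in for an ATRANS-derived one:

      import numpy as np

      # Unit response function (URF): receptor concentration per unit source
      # release; here a hypothetical 10-year monthly breakthrough curve.
      months = np.arange(120)
      urf = 1e-3 * (months / 24.0) * np.exp(-months / 24.0)

      # Hypothetical source release history (kg/month).
      release = np.zeros(240)
      release[12:60] = 5.0

      # Receptor concentration is the convolution of the release history
      # with the URF, truncated to the simulation horizon.
      conc = np.convolve(release, urf)[: release.size]
      print(f"peak receptor concentration: {conc.max():.3f} (URF units x release)")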

  18. GT0 Explosion Sources for IMS Infrasound Calibration: Charge Design and Yield Estimation from Near-source Observations

    NASA Astrophysics Data System (ADS)

    Gitterman, Y.; Hofstetter, R.

    2014-03-01

    Three large-scale on-surface explosions were conducted by the Geophysical Institute of Israel (GII) at the Sayarim Military Range, Negev desert, Israel: about 82 tons of strong high explosives in August 2009, and two explosions of about 10 and 100 tons of ANFO explosives in January 2011. It was a collaborative effort between Israel, the CTBTO, the USA and several European countries, with the main goal of providing fully controlled ground-truth (GT0) infrasound sources, monitored by extensive observations, for calibration of International Monitoring System (IMS) infrasound stations in Europe, the Middle East and Asia. In all shots, the explosives were assembled like a pyramid/hemisphere on dry desert alluvium, with a complicated explosion design, different from the ideal homogeneous hemisphere used in similar experiments in the past. Strong boosters and an upward charge detonation scheme were applied to provide more energy radiated to the atmosphere. Under these conditions the evaluation of the actual explosion yield, an important source parameter, is crucial for the GT0 calibration experiment. Audio-visual, air-shock and acoustic records were utilized for the interpretation of the observed unique blast effects, and for determination of blast-wave parameters and associated relationships suited for yield estimation. High-pressure gauges were deployed at 100-600 m to record air-blast properties, evaluate the efficiency of the charge design and energy generation, and provide a reliable estimation of the charge yield. Yield estimators based on empirical scaled relations for the well-known basic air-blast parameters (the peak pressure, impulse and positive-phase duration), as well as on crater dimensions and seismic magnitudes, were analyzed. A novel empirical scaled relationship for the little-known secondary shock delay was developed, consistent over broad ranges of ANFO charges and distances, which facilitates using this stable and reliable air-blast parameter as a new potential yield estimator. The delay data of the 2009 shot with IMI explosives, characterized by a much higher detonation velocity, are clearly separated from the ANFO data, thus indicating a dependence on explosive type. This unique dual Sayarim explosion experiment (August 2009/January 2011), with the strongest GT0 sources since the establishment of the IMS network, clearly demonstrated the most favorable westward/eastward infrasound propagation up to 3,400/6,250 km according to the appropriate summer/winter weather patterns and stratospheric wind directions, respectively, and thus verified empirically common models of infrasound propagation in the atmosphere.
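
    As a generic illustration of the cube-root (Hopkinson-Cranz) scaling that underlies such air-blast yield estimators, a measured peak overpressure can be inverted for charge mass; the power-law coefficients below are hypothetical placeholders, not the empirical relations developed in the experiment.

      def yield_from_peak_pressure(R_m, dP_kPa, a=900.0, b=1.9):
          # Invert an assumed power-law peak-overpressure relation dP = a * Z**(-b),
          # where Z = R / W**(1/3) is the Hopkinson-Cranz scaled distance
          # (m/kg^(1/3)). The coefficients a, b are hypothetical placeholders.
          Z = (a / dP_kPa) ** (1.0 / b)   # scaled distance implied by measurement
          return (R_m / Z) ** 3           # charge mass W in kg

      # Example: 60 kPa peak overpressure measured at 300 m from the charge.
      print(f"estimated yield: {yield_from_peak_pressure(300.0, 60.0) / 1000.0:.0f} t")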

  19. A new methodology based on sensitivity analysis to simplify the recalibration of functional-structural plant models in new conditions.

    PubMed

    Mathieu, Amélie; Vidal, Tiphaine; Jullien, Alexandra; Wu, QiongLi; Chambon, Camille; Bayol, Benoit; Cournède, Paul-Henry

    2018-06-19

    Functional-structural plant models (FSPMs) describe explicitly the interactions between plants and their environment at organ to plant scale. However, the high level of description of the structure or model mechanisms makes this type of model very complex and hard to calibrate. A two-step methodology to facilitate the calibration process is proposed here. First, a global sensitivity analysis method was applied to the calibration loss function. It provided first-order and total-order sensitivity indexes that allow parameters to be ranked by importance in order to select the most influential ones. Second, the Akaike information criterion (AIC) was used to quantify the model's quality of fit after calibration with different combinations of selected parameters. The model with the lowest AIC gives the best combination of parameters to select. This methodology was validated by calibrating the model on an independent data set (same cultivar, another year) with the parameters selected in the second step. All the parameters were set to their nominal value; only the most influential ones were re-estimated. Sensitivity analysis applied to the calibration loss function is a relevant method to underline the most significant parameters in the estimation process. For the studied winter oilseed rape model, 11 out of 26 estimated parameters were selected. Then, the model could be recalibrated for a different data set by re-estimating only three parameters selected with the model selection method. Fitting only a small number of parameters dramatically increases the efficiency of recalibration, increases the robustness of the model and helps identify the principal sources of variation in varying environmental conditions. This innovative method still needs to be more widely validated but already gives interesting avenues to improve the calibration of FSPMs.
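
    A minimal sketch of the second step, assuming least-squares calibration with Gaussian errors so that AIC = n·ln(RSS/n) + 2k; the candidate subsets and RSS values are hypothetical:

      import numpy as np

      def aic_from_rss(rss, n_obs, n_params):
          # AIC for least-squares calibration with Gaussian errors:
          # AIC = n * ln(RSS / n) + 2k; the combination with the lowest AIC wins.
          return n_obs * np.log(rss / n_obs) + 2 * n_params

      # Hypothetical fits of the model with different re-estimated parameter subsets.
      n_obs = 120
      candidates = {"top 3 parameters": (42.1, 3),
                    "top 5 parameters": (39.8, 5),
                    "top 11 parameters": (35.5, 11)}
      for label, (rss, k) in candidates.items():
          print(f"{label}: AIC = {aic_from_rss(rss, n_obs, k):.1f}")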

  1. A computational model for biosonar echoes from foliage.

    PubMed

    Ming, Chen; Gupta, Anupam Kumar; Lu, Ruijin; Zhu, Hongxiao; Müller, Rolf

    2017-01-01

    Since many bat species thrive in densely vegetated habitats, echoes from foliage are likely to be of prime importance to the animals' sensory ecology, be it as clutter that masks prey echoes or as sources of information about the environment. To better understand the characteristics of foliage echoes, a new model for the process that generates these signals has been developed. This model takes leaf size and orientation into account by representing the leaves as circular disks of varying diameter. The two added leaf parameters are of potential importance to the sensory ecology of bats, e.g., with respect to landmark recognition and flight guidance along vegetation contours. The full model is specified by a total of three parameters: leaf density, average leaf size, and average leaf orientation. It assumes that all leaf parameters are independently and identically distributed. Leaf positions were drawn from a uniform probability density function, sizes and orientations each from a Gaussian probability function. The model was found to reproduce the first-order amplitude statistics of measured example echoes and showed time-variant echo properties that depended on foliage parameters. Parameter estimation experiments using lasso regression have demonstrated that a single foliage parameter can be estimated with high accuracy if the other two parameters are known a priori. If only one parameter is known a priori, the other two can still be estimated, but with a reduced accuracy. Lasso regression did not support simultaneous estimation of all three parameters. Nevertheless, these results demonstrate that foliage echoes contain accessible information on foliage type and orientation that could play a role in supporting sensory tasks such as landmark identification and contour following in echolocating bats.
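
    A minimal sketch of the generative idea, assuming a simplified scalar echo model (delayed copies of the pulse scaled by leaf area, projected orientation and spherical spreading); all numbers are illustrative and this is not the authors' code.

      import numpy as np

      rng = np.random.default_rng(7)
      c = 343.0                                  # speed of sound, m/s

      # Draw leaf parameters per the model's assumptions: positions uniform,
      # sizes and orientations Gaussian (all i.i.d.).
      n_leaves = 500                             # controls foliage density
      ranges = rng.uniform(1.0, 5.0, n_leaves)   # leaf distances from sonar, m
      radii = rng.normal(0.04, 0.01, n_leaves)   # leaf radius, m
      tilt = rng.normal(0.0, 0.5, n_leaves)      # leaf tilt from broadside, rad

      # Emitted pulse: short windowed tone burst.
      fs, f0 = 250e3, 60e3
      t = np.arange(0, 2e-4, 1.0 / fs)
      pulse = np.sin(2 * np.pi * f0 * t) * np.hanning(t.size)

      # Each circular disk contributes a delayed, scaled copy of the pulse.
      echo = np.zeros(int(fs * 0.04))
      for r, a, th in zip(ranges, radii, tilt):
          amp = (np.pi * a**2) * abs(np.cos(th)) / r**2
          i0 = int(2 * r / c * fs)               # two-way travel delay in samples
          echo[i0:i0 + pulse.size] += amp * pulse
      print(f"echo RMS: {np.sqrt(np.mean(echo**2)):.2e}")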

  2. Multiple Source DF (Direction Finding) Signal Processing: An Experimental System,

    DTIC Science & Technology

    The MUltiple SIgnal Characterization (MUSIC) algorithm is an implementation of the Signal Subspace Approach to provide parameter estimates of...the signal subspace (obtained from the received data) and the array manifold (obtained via array calibration). The MUSIC algorithm has been
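
    As a generic textbook illustration of the signal-subspace idea behind MUSIC (not the experimental system described in this record), the sketch below forms the sample covariance, extracts the noise subspace, and scans a uniform-linear-array manifold for pseudospectrum peaks.

      import numpy as np

      def music_spectrum(X, n_sources, scan_deg, d=0.5):
          # MUSIC pseudospectrum for a uniform linear array with element
          # spacing d (in wavelengths). X is a (sensors x snapshots) matrix.
          R = X @ X.conj().T / X.shape[1]            # sample covariance
          eigval, eigvec = np.linalg.eigh(R)         # ascending eigenvalues
          En = eigvec[:, : X.shape[0] - n_sources]   # noise subspace
          m = np.arange(X.shape[0])
          p = []
          for theta in np.deg2rad(scan_deg):
              a = np.exp(2j * np.pi * d * m * np.sin(theta))  # manifold vector
              p.append(1.0 / np.real(a.conj() @ En @ En.conj().T @ a))
          return np.array(p)

      # Two sources at -20 and 35 degrees on an 8-element half-wavelength ULA.
      rng = np.random.default_rng(3)
      m, snaps = np.arange(8), 200
      A = np.stack([np.exp(2j * np.pi * 0.5 * m * np.sin(np.deg2rad(th)))
                    for th in (-20.0, 35.0)], axis=1)
      S = rng.standard_normal((2, snaps)) + 1j * rng.standard_normal((2, snaps))
      X = A @ S + 0.1 * (rng.standard_normal((8, snaps))
                         + 1j * rng.standard_normal((8, snaps)))

      scan = np.linspace(-90, 90, 721)
      p = music_spectrum(X, 2, scan)
      is_peak = (p[1:-1] > p[:-2]) & (p[1:-1] > p[2:])   # local maxima
      peaks = scan[1:-1][is_peak]
      top2 = peaks[np.argsort(p[1:-1][is_peak])[-2:]]
      print("estimated DOAs (deg):", np.sort(top2))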

  3. Macroeconomic Activity Module - NEMS Documentation

    EIA Publications

    2016-01-01

    Documents the objectives, analytical approach, and development of the National Energy Modeling System (NEMS) Macroeconomic Activity Module (MAM) used to develop the Annual Energy Outlook for 2016 (AEO2016). The report catalogues and describes the module's assumptions, computations, methodology, parameter estimation techniques, and mainframe source code.

  4. Influence of Gridded Standoff Measurement Resolution on Numerical Bathymetric Inversion

    NASA Astrophysics Data System (ADS)

    Hesser, T.; Farthing, M. W.; Brodie, K.

    2016-02-01

    The bathymetry from the surf zone to the shoreline undergoes frequent, active change as wave energy interacts with the seafloor. Methodologies to measure bathymetry range from point-source in-situ instruments, vessel-mounted single-beam or multi-beam sonar surveys, and airborne bathymetric lidar, to inversion techniques based on standoff measurements of wave processes from video or radar imagery. Each type of measurement has unique sources of error and spatial and temporal resolution and availability. Numerical bathymetry estimation frameworks can use these disparate data types in combination with model-based inversion techniques to produce a "best estimate of bathymetry" at a given time. Understanding how the sources of error and the varying spatial or temporal resolution of each data type affect the end result is critical for determining best practices and, in turn, increasing the accuracy of bathymetry estimation techniques. In this work, we take an initial step toward a complete framework for estimating bathymetry in the nearshore by focusing on gridded standoff measurements and in-situ point observations in model-based inversion at the U.S. Army Corps of Engineers Field Research Facility in Duck, NC. The standoff measurement methods return wave parameters computed using linear wave theory from the direct measurements. These gridded datasets can have temporal and spatial resolutions that do not match the desired model parameters and could therefore reduce the accuracy of these methods. Specifically, we investigate the effect of numerical resolution on the accuracy of an Ensemble Kalman Filter bathymetric inversion technique in relation to the spatial and temporal resolution of the gridded standoff measurements. The accuracies of the bathymetric estimates are compared with both high-resolution Real Time Kinematic (RTK) single-beam surveys and alternative direct in-situ measurements using sonic altimeters.
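
    A minimal sketch of one Ensemble Kalman Filter analysis step of the kind such an inversion relies on, with a toy cross-shore depth profile and a hypothetical linear observation operator (the real technique maps depth to wave parameters nonlinearly).

      import numpy as np

      def enkf_update(ensemble, H, y_obs, obs_err_std, rng):
          # One EnKF analysis step with perturbed observations.
          # ensemble: (n_state x n_members) depth states; H: linear operator
          # mapping the state to the gridded observations.
          n_obs, n_mem = y_obs.size, ensemble.shape[1]
          X = ensemble - ensemble.mean(axis=1, keepdims=True)
          HX = H @ ensemble
          HXp = HX - HX.mean(axis=1, keepdims=True)
          Pyy = HXp @ HXp.T / (n_mem - 1) + np.eye(n_obs) * obs_err_std**2
          Pxy = X @ HXp.T / (n_mem - 1)
          K = Pxy @ np.linalg.inv(Pyy)                     # Kalman gain
          Y = y_obs[:, None] + obs_err_std * rng.standard_normal((n_obs, n_mem))
          return ensemble + K @ (Y - HX)

      rng = np.random.default_rng(0)
      truth = np.linspace(8.0, 0.5, 50)                    # toy cross-shore depths
      ens = truth[:, None] + rng.standard_normal((50, 40)) # 40-member prior
      H = np.zeros((5, 50))
      H[np.arange(5), np.arange(5) * 10] = 1.0             # observe every 10th node
      ens = enkf_update(ens, H, H @ truth, 0.2, rng)
      print("posterior RMSE:", np.sqrt(np.mean((ens.mean(axis=1) - truth) ** 2)))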

  5. Bayesian Source Attribution of Salmonellosis in South Australia.

    PubMed

    Glass, K; Fearnley, E; Hocking, H; Raupach, J; Veitch, M; Ford, L; Kirk, M D

    2016-03-01

    Salmonellosis is a significant cause of foodborne gastroenteritis in Australia, and rates of illness have increased over recent years. We adopt a Bayesian source attribution model to estimate the contribution of different animal reservoirs to illness due to Salmonella spp. in South Australia between 2000 and 2010, together with 95% credible intervals (CrI). We excluded known travel associated cases and those of rare subtypes (fewer than 20 human cases or fewer than 10 isolates from included sources over the 11-year period), and the remaining 76% of cases were classified as sporadic or outbreak associated. Source-related parameters were included to allow for different handling and consumption practices. We attributed 35% (95% CrI: 20-49) of sporadic cases to chicken meat and 37% (95% CrI: 23-53) of sporadic cases to eggs. Of outbreak-related cases, 33% (95% CrI: 20-62) were attributed to chicken meat and 59% (95% CrI: 29-75) to eggs. A comparison of alternative model assumptions indicated that biases due to possible clustering of samples from sources had relatively minor effects on these estimates. Analysis of source-related parameters showed higher risk of illness from contaminated eggs than from contaminated chicken meat, suggesting that consumption and handling practices potentially play a bigger role in illness due to eggs, considering low Salmonella prevalence on eggs. Our results strengthen the evidence that eggs and chicken meat are important vehicles for salmonellosis in South Australia. © 2015 Society for Risk Analysis.

  6. Dispersion of a Passive Scalar Fluctuating Plume in a Turbulent Boundary Layer. Part I: Velocity and Concentration Measurements

    NASA Astrophysics Data System (ADS)

    Nironi, Chiara; Salizzoni, Pietro; Marro, Massimo; Mejean, Patrick; Grosjean, Nathalie; Soulhac, Lionel

    2015-09-01

    The prediction of the probability density function (PDF) of pollutant concentration within atmospheric flows is of primary importance in estimating the hazard related to accidental releases of toxic or flammable substances and their effects on human health. This need motivates studies devoted to the characterization of the concentration statistics of pollutant dispersion in the lower atmosphere, and of their dependence on the parameters controlling the emissions. As is known from previous experimental results, concentration fluctuations are significantly influenced by the diameter of the source and its elevation. In this study, we aim to further investigate the dependence of the dispersion process on the source configuration, including source size, elevation and emission velocity. To that end we study experimentally the influence of these parameters on the statistics of the concentration of a passive scalar, measured at several distances downwind of the source. We analyze the spatial distribution of the first four moments of the concentration PDFs, with a focus on the variance, its dissipation and production, and its spectral density. The information provided by the dataset, completed by estimates of the intermittency factors, allows us to discuss the role of the main mechanisms controlling the scalar dispersion and their link to the form of the PDF. The latter is shown to be very well approximated by a Gamma distribution, irrespective of the emission conditions and the distance from the source. Concentration measurements are complemented by a detailed description of the velocity statistics, including direct estimates of the Eulerian integral length scales from two-point correlations, a measurement that has rarely been presented to date.
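
    A minimal sketch of checking such a Gamma approximation on concentration samples, using scipy's maximum-likelihood fit with the location pinned at zero (the data below are synthetic):

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(5)
      # Hypothetical normalized concentration samples at one receptor location.
      c = rng.gamma(shape=1.8, scale=0.55, size=5000)

      # Fit a Gamma PDF with location fixed at zero, as appropriate for a
      # non-negative concentration, and compare the first two moments.
      k, loc, theta = stats.gamma.fit(c, floc=0.0)
      print(f"shape k = {k:.2f}, scale = {theta:.2f}")
      print(f"sample mean/var: {c.mean():.3f}/{c.var():.3f}; "
            f"model mean/var: {k * theta:.3f}/{k * theta**2:.3f}")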

  7. On the relation of earthquake stress drop and ground motion variability

    NASA Astrophysics Data System (ADS)

    Oth, Adrien; Miyake, Hiroe; Bindi, Dino

    2017-07-01

    One of the key parameters for earthquake source physics is stress drop, since it can be directly linked to the spectral level of ground motion. Stress drop estimates from moment-corner-frequency analysis have been shown to be extremely variable, and this to a much larger degree than expected from the between-event ground motion variability. This discrepancy raises the question of whether classically determined stress drop variability is too large, which would have significant consequences for seismic hazard analysis. We use a large high-quality data set from Japan with well-studied stress drop data to address this issue. Nonparametric and parametric reference ground motion models are derived, and the relation of between-event residuals for Japan Meteorological Agency equivalent seismic intensity and peak ground acceleration with stress drop is analyzed for crustal earthquakes. We find a clear correlation of the between-event residuals with stress drop estimates; however, while the island of Kyushu is characterized by substantially larger stress drops than Honshu, the between-event residuals do not reflect this observation, leading to the appearance of two event families with different stress drop levels yet a similar range of between-event residuals. Both the within-family and between-family stress drop variations are larger than expected from the ground motion between-event variability. A systematic common analysis of these parameters holds the potential to provide important constraints on the relative robustness of different groups of data in the different parameter spaces and to improve our understanding of how much of the observed source parameter variability is likely to be true source physics variability.
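
    Stress drop estimates of the moment-corner-frequency type typically assume a circular-crack relation; a minimal sketch under Brune-type assumptions (source radius r = 0.37·β/fc and Δσ = 7·M0/(16·r³)), with illustrative numbers:

      import numpy as np

      def brune_stress_drop(M0, fc, beta=3500.0, k=0.37):
          # Stress drop (Pa) from seismic moment M0 (N m) and corner
          # frequency fc (Hz) for a circular crack, using the Brune-type
          # source radius r = k * beta / fc with shear-wave speed beta (m/s).
          r = k * beta / fc
          return 7.0 * M0 / (16.0 * r**3)

      # Example: Mw 5.0 event with a 1.2 Hz corner frequency.
      M0 = 10 ** (1.5 * 5.0 + 9.1)          # moment from moment magnitude
      print(f"stress drop: {brune_stress_drop(M0, 1.2) / 1e6:.1f} MPa")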

  8. Computation of Standard Errors

    PubMed Central

    Dowd, Bryan E; Greene, William H; Norton, Edward C

    2014-01-01

    Objectives We discuss the problem of computing the standard errors of functions involving estimated parameters and provide the relevant computer code for three different computational approaches using two popular computer packages. Study Design We show how to compute the standard errors of several functions of interest: the predicted value of the dependent variable for a particular subject, and the effect of a change in an explanatory variable on the predicted value of the dependent variable, both for an individual subject and on average for a sample of subjects. Empirical Application Using a publicly available dataset, we explain three different methods of computing standard errors: the delta method, Krinsky–Robb, and bootstrapping. We provide computer code for Stata 12 and LIMDEP 10/NLOGIT 5. Conclusions In most applications, the choice of computational method for standard errors of functions of estimated parameters is a matter of convenience. However, when computing standard errors of the sample average of functions that involve both estimated parameters and nonstochastic explanatory variables, it is important to consider the sources of variation in the function's values. PMID:24800304
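
    A minimal sketch of two of the three approaches for a function of logit estimates (the coefficients and covariance below are hypothetical): the delta method propagates the gradient through the estimated covariance, while Krinsky-Robb simulates from the estimated sampling distribution.

      import numpy as np

      def delta_method_se(grad, vcov):
          # SE of g(beta_hat) ~ sqrt(grad' V grad), grad = dg/dbeta at beta_hat.
          return float(np.sqrt(grad @ vcov @ grad))

      # Hypothetical logit estimates and covariance; g = predicted probability
      # at x = (1, 2.5) (intercept plus one covariate).
      beta = np.array([-1.2, 0.6])
      vcov = np.array([[0.04, -0.01], [-0.01, 0.01]])
      x = np.array([1.0, 2.5])

      p = 1.0 / (1.0 + np.exp(-(x @ beta)))      # predicted probability
      grad = p * (1.0 - p) * x                   # chain rule through the logit link
      print(f"p = {p:.3f}, delta-method SE = {delta_method_se(grad, vcov):.3f}")

      # Krinsky-Robb alternative: simulate betas, take the SD of g(beta).
      draws = np.random.default_rng(0).multivariate_normal(beta, vcov, 10000)
      p_draws = 1.0 / (1.0 + np.exp(-(draws @ x)))
      print(f"Krinsky-Robb SE = {p_draws.std():.3f}")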

  9. Real-time monitoring of a microbial electrolysis cell using an electrical equivalent circuit model.

    PubMed

    Hussain, S A; Perrier, M; Tartakovsky, B

    2018-04-01

    Efforts in developing microbial electrolysis cells (MECs) resulted in several novel approaches for wastewater treatment and bioelectrosynthesis. Practical implementation of these approaches necessitates the development of an adequate system for real-time (on-line) monitoring and diagnostics of MEC performance. This study describes a simple MEC equivalent electrical circuit (EEC) model and a parameter estimation procedure, which enable such real-time monitoring. The proposed approach involves MEC voltage and current measurements during its operation with periodic power supply connection/disconnection (on/off operation) followed by parameter estimation using either numerical or analytical solution of the model. The proposed monitoring approach is demonstrated using a membraneless MEC with flow-through porous electrodes. Laboratory tests showed that changes in the influent carbon source concentration and composition significantly affect MEC total internal resistance and capacitance estimated by the model. Fast response of these EEC model parameters to changes in operating conditions enables the development of a model-based approach for real-time monitoring and fault detection.
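
    A minimal sketch of how the resistance and capacitance of a simple R-C equivalent circuit could be recovered from the voltage relaxation after a supply disconnection; the circuit form, data and numbers are assumptions for illustration, not the paper's estimation procedure.

      import numpy as np
      from scipy.optimize import curve_fit

      def relaxation(t, v_inf, dv, tau):
          # Voltage transient after disconnecting the power supply for a
          # simple R-C equivalent circuit: V(t) = v_inf + dv*exp(-t/tau),
          # with tau = R * C.
          return v_inf + dv * np.exp(-t / tau)

      # Hypothetical on/off test relaxing toward the open-circuit voltage.
      t = np.linspace(0.0, 60.0, 240)                     # s
      v = relaxation(t, 0.35, 0.65, 12.0)
      v += 0.005 * np.random.default_rng(2).standard_normal(t.size)

      (v_inf, dv, tau), _ = curve_fit(relaxation, t, v, p0=[0.3, 0.6, 10.0])
      i_applied = 0.02                   # A, current before the off step (assumed)
      R_int = dv / i_applied             # total internal resistance
      print(f"R_int = {R_int:.1f} ohm, C = {tau / R_int:.3f} F")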

  10. Optimisation of dispersion parameters of Gaussian plume model for CO₂ dispersion.

    PubMed

    Liu, Xiong; Godbole, Ajit; Lu, Cheng; Michal, Guillaume; Venton, Philip

    2015-11-01

    Carbon capture and storage (CCS) and enhanced oil recovery (EOR) projects entail the possibility of accidental release of carbon dioxide (CO2) into the atmosphere. To quantify the spread of CO2 following such a release, the 'Gaussian' dispersion model is often used to estimate the resulting CO2 concentration levels in the surroundings. The Gaussian model enables quick estimates of the concentration levels. However, the traditionally recommended values of the 'dispersion parameters' in the Gaussian model may not be directly applicable to CO2 dispersion. This paper presents an optimisation technique to obtain the dispersion parameters in order to achieve a quick estimation of CO2 concentration levels in the atmosphere following CO2 blowouts. The optimised dispersion parameters enable the Gaussian model to produce quick estimates of CO2 concentration levels, precluding the necessity to set up and run much more complicated models. Computational fluid dynamics (CFD) models were employed to produce reference CO2 dispersion profiles for various atmospheric stability classes (ASCs), different 'source strengths' and degrees of ground roughness. The performance of the CFD models was validated against the 'Kit Fox' field measurements, involving dispersion over a flat horizontal terrain with both low- and high-roughness regions. An optimisation model employing a genetic algorithm (GA) to determine the best dispersion parameters in the Gaussian plume model was set up. Optimum values of the dispersion parameters for different ASCs that can be used in the Gaussian plume model for predicting CO2 dispersion were obtained.
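
    For reference, the Gaussian plume expression being calibrated has the standard ground-reflected form; the sketch below evaluates it with power-law dispersion parameters σ = a·x^b whose coefficients are hypothetical stand-ins for the optimised values.

      import numpy as np

      def gaussian_plume(x, y, z, Q, u, H, a, b):
          # Ground-reflected Gaussian plume concentration (kg/m^3) for a
          # continuous point source of strength Q (kg/s) at effective height
          # H (m) in wind speed u (m/s), with sigma_y = a[0]*x**b[0] and
          # sigma_z = a[1]*x**b[1] (coefficients are hypothetical).
          sig_y, sig_z = a[0] * x ** b[0], a[1] * x ** b[1]
          lateral = np.exp(-y**2 / (2 * sig_y**2))
          vertical = (np.exp(-(z - H) ** 2 / (2 * sig_z**2))
                      + np.exp(-(z + H) ** 2 / (2 * sig_z**2)))
          return Q / (2 * np.pi * u * sig_y * sig_z) * lateral * vertical

      # CO2 concentration 500 m downwind on the plume centreline at the ground.
      c = gaussian_plume(500.0, 0.0, 0.0, Q=50.0, u=4.0, H=2.0,
                         a=(0.22, 0.12), b=(0.9, 0.85))
      print(f"concentration: {c:.2e} kg/m^3")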

  11. A DERATING METHOD FOR THERAPEUTIC APPLICATIONS OF HIGH INTENSITY FOCUSED ULTRASOUND

    PubMed Central

    Bessonova, O.V.; Khokhlova, V.A.; Canney, M.S.; Bailey, M.R.; Crum, L.A.

    2010-01-01

    Current methods of determining high intensity focused ultrasound (HIFU) fields in tissue rely on extrapolation of measurements in water assuming linear wave propagation both in water and in tissue. Neglecting nonlinear propagation effects in the derating process can result in significant errors. In this work, a new method based on scaling the source amplitude is introduced to estimate focal parameters of nonlinear HIFU fields in tissue. Focal values of acoustic field parameters in absorptive tissue are obtained from a numerical solution to a KZK-type equation and are compared to those simulated for propagation in water. Focal waveforms, peak pressures, and intensities are calculated over a wide range of source outputs and linear focusing gains. Our modeling indicates, that for the high gain sources which are typically used in therapeutic medical applications, the focal field parameters derated with our method agree well with numerical simulation in tissue. The feasibility of the derating method is demonstrated experimentally in excised bovine liver tissue. PMID:20582159

  14. Aquifer thermal-energy-storage costs with a seasonal-chill source

    NASA Astrophysics Data System (ADS)

    Brown, D. R.

    1983-01-01

    The cost of energy supplied by an aquifer thermal energy storage (ATES) system from a seasonal chill source was investigated. Costs were estimated for point-demand and residential-development ATES systems using the computer code AQUASTOR, developed at PNL specifically for the economic analysis of ATES systems. In this analysis the cost effect of varying a wide range of technical and economic parameters was examined. The parameters exhibiting a substantial influence on the cost of ATES-delivered chill were: system size, well flow rate, transmission distance, source temperature, well depth, and cost of capital. The effects of each parameter are discussed. Two primary constraints on ATES chill systems are the extremely low energy density of the storage fluid and the prohibitive cost of lengthy pipelines for delivering chill to residential users. This economic analysis concludes that ATES-delivered chill will not be competitive for residential cooling applications; the otherwise marginal attractiveness of ATES chill systems vanishes under the extremely low load factors characteristic of residential cooling systems.

  15. Data integration for inference about spatial processes: A model-based approach to test and account for data inconsistency

    PubMed Central

    Pedrini, Paolo; Bragalanti, Natalia; Groff, Claudio

    2017-01-01

    Recently developed methods that integrate multiple data sources arising from the same ecological processes have typically utilized structured data from well-defined sampling protocols (e.g., capture-recapture and telemetry). Despite this new methodological focus, the value of opportunistic data for improving inference about spatial ecological processes is unclear and, perhaps more importantly, no procedures are available to formally test whether parameter estimates are consistent across data sources and whether they are suitable for integration. Using data collected on the reintroduced brown bear population in the Italian Alps, a population of conservation importance, we combined data from three sources: traditional spatial capture-recapture data, telemetry data, and opportunistic data. We developed a fully integrated spatial capture-recapture (SCR) model that includes a model-based test for data consistency, first comparing model estimates using different combinations of data and then, by acknowledging data-type differences, evaluating parameter consistency. We demonstrate that opportunistic data lend themselves naturally to integration within the SCR framework and highlight the value of opportunistic data for improving inference about space use and population size. This is particularly relevant in studies of rare or elusive species, where the number of spatial encounters is usually small and where additional observations are of high value. In addition, our results highlight the importance of testing and accounting for inconsistencies in spatial information from structured and unstructured data so as to avoid the risk of spurious or averaged estimates of space use and, consequently, of population size. Our work supports the use of a single modeling framework to combine spatially referenced data while also accounting for parameter consistency. PMID:28973034

  16. Estimating lithospheric properties at Atla Regio, Venus

    NASA Technical Reports Server (NTRS)

    Phillips, Roger J.

    1994-01-01

    Magellan spherical harmonic gravity and topography models are used to estimate lithospheric properties at Atla Regio, Venus, a proposed hotspot with dynamic support from mantle plume(s). Global spherical harmonic and local representations of the gravity field share common properties in the Atla region in terms of their spectral behavior over a wavelength band from approximately 2100 to approximately 700 km. The estimated free-air admittance spectrum displays a rather featureless long-wavelength portion followed by a sharp rise at wavelengths shorter than about 1000 km. This sharp rise requires significant flexural support of short-wavelength structures. The Bouguer coherence also displays a sharp drop in this wavelength band, indicating a finite flexural rigidity of the lithosphere. A simple model for lithospheric loading from above and below is introduced (D. W. Forsyth, 1985) with four parameters: f, the ratio of bottom loading to top loading; z_m, the crustal thickness; z_l, the depth to the bottom loading source; and T_e, the elastic lithosphere thickness. A dual-mode compensation model is introduced in which the shorter wavelengths (λ ≲ 1000 km) might be explained best by a predominance of top loading by the large shield volcanoes Maat Mons, Ozza Mons, and Sapas Mons, and the longer wavelengths (λ ≳ 1500 km) might be explained best by a deep depth of compensation, possibly representing bottom loading by a dynamic source. A Monte Carlo inversion technique is introduced to thoroughly search the four-space of the model parameters and to examine parameter correlation in the solutions. Venus either is considerably deficient in heat sources relative to Earth, or the thermal lithosphere is overthickened in response to an earlier episode of significant heat loss from the planet.

  17. Comparison of different Kalman filter approaches in deriving time varying connectivity from EEG data.

    PubMed

    Ghumare, Eshwar; Schrooten, Maarten; Vandenberghe, Rik; Dupont, Patrick

    2015-08-01

    Kalman filter approaches are widely applied to derive time-varying effective connectivity from electroencephalographic (EEG) data. For multi-trial data, a classical Kalman filter (CKF), designed for the estimation of single-trial data, can be implemented by trial-averaging the data or by averaging single-trial estimates. A general linear Kalman filter (GLKF) provides an extension for multi-trial data. In this work, we studied the performance of the different Kalman filtering approaches for different values of signal-to-noise ratio (SNR), numbers of trials and numbers of EEG channels. We used a simulated model from which we calculated scalp recordings. From these recordings, we estimated cortical sources. Multivariate autoregressive model parameters and partial directed coherence were calculated for the estimated sources and compared with the ground truth. The results showed an overall superior performance of the GLKF, except for low levels of SNR and numbers of trials.
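
    A minimal single-channel sketch of the CKF idea, tracking time-varying AR coefficients with a random-walk state model (the cited work uses multivariate AR models across many channels and trials; all numbers here are illustrative):

      import numpy as np

      def kalman_tvar(y, p=2, q=1e-4, r=1e-2):
          # Classical Kalman filter tracking time-varying AR(p) coefficients
          # of a single signal. q and r are the assumed state- and
          # observation-noise variances; returns the coefficient paths.
          n = y.size
          a = np.zeros(p)                        # AR coefficient state estimate
          P = np.eye(p)                          # state covariance
          path = np.zeros((n, p))
          for t in range(p, n):
              h = y[t - p:t][::-1]               # regressor: p past samples
              P = P + q * np.eye(p)              # predict (random walk)
              s = h @ P @ h + r                  # innovation variance
              k = P @ h / s                      # Kalman gain
              a = a + k * (y[t] - h @ a)         # update coefficients
              P = P - np.outer(k, h) @ P
              path[t] = a
          return path

      # Toy nonstationary AR(2) signal with a drifting first coefficient.
      rng = np.random.default_rng(4)
      n, y = 2000, np.zeros(2000)
      for t in range(2, n):
          a1 = 1.2 - 0.4 * t / n
          y[t] = a1 * y[t - 1] - 0.5 * y[t - 2] + 0.1 * rng.standard_normal()
      print("final AR estimates:", kalman_tvar(y)[-1])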

  18. Study of the uncertainty in estimation of the exposure of non-human biota to ionising radiation.

    PubMed

    Avila, R; Beresford, N A; Agüero, A; Broed, R; Brown, J; Iospje, M; Robles, B; Suañez, A

    2004-12-01

    Uncertainty in estimations of the exposure of non-human biota to ionising radiation may arise from a number of sources including values of the model parameters, empirical data, measurement errors and biases in the sampling. The significance of the overall uncertainty of an exposure assessment will depend on how the estimated dose compares with reference doses used for risk characterisation. In this paper, we present the results of a study of the uncertainty in estimation of the exposure of non-human biota using some of the models and parameters recommended in the FASSET methodology. The study was carried out for semi-natural terrestrial, agricultural and marine ecosystems, and for four radionuclides (137Cs, 239Pu, 129I and 237Np). The parameters of the radionuclide transfer models showed the highest sensitivity and contributed the most to the uncertainty in the predictions of doses to biota. The most important ones were related to the bioavailability and mobility of radionuclides in the environment, for example soil-to-plant transfer factors, the bioaccumulation factors for marine biota and the gut uptake fraction for terrestrial mammals. In contrast, the dose conversion coefficients showed low sensitivity and contributed little to the overall uncertainty. Radiobiological effectiveness contributed to the overall uncertainty of the dose estimations for alpha emitters although to a lesser degree than a number of transfer model parameters.

  19. Quantifying the uncertainties of China's emission inventory for industrial sources: From national to provincial and city scales

    NASA Astrophysics Data System (ADS)

    Zhao, Yu; Zhou, Yaduan; Qiu, Liping; Zhang, Jie

    2017-09-01

    A comprehensive uncertainty analysis was conducted on emission inventories for industrial sources at national (China), provincial (Jiangsu), and city (Nanjing) scales for 2012. Based on various methods and data sources, Monte Carlo simulation was applied at sector level for the national inventory, and at plant level (whenever possible) for the provincial and city inventories. The uncertainties of the national inventory were estimated at -17-37% (expressed as 95% confidence intervals, CIs), -21-35%, -19-34%, -29-40%, -22-47%, -21-54%, -33-84%, and -32-92% for SO2, NOX, CO, TSP (total suspended particles), PM10, PM2.5, black carbon (BC), and organic carbon (OC) emissions, respectively. At provincial and city levels, the uncertainties of the corresponding pollutant emissions were estimated at -15-18%, -18-33%, -16-37%, -20-30%, -23-45%, -26-50%, -33-79%, and -33-71% for Jiangsu, and -17-22%, -10-33%, -23-75%, -19-36%, -23-41%, -28-48%, -45-82%, and -34-96% for Nanjing, respectively. Emission factors (or associated parameters) were identified as the biggest contributors to the uncertainties of emissions for most source categories except iron & steel production in the national inventory. Compared to the national inventory, uncertainties of total emissions in the provincial and city-scale inventories were not significantly reduced for most species, with the exception of SO2. For power and other industrial boilers, the uncertainties were reduced, and the plant-specific parameters played a more important role in the uncertainties. Much larger PM10 and PM2.5 emissions for Jiangsu were estimated in this provincial inventory than in other studies, implying large discrepancies in the data sources for emission factors and activity data between local and national inventories. Although the uncertainty analysis of bottom-up emission inventories at national and local scales partly supported the "top-down" estimates using observations and/or chemistry transport models, detailed investigations and field measurements are recommended to further improve the emission estimates and reduce the uncertainty of inventories at local and regional scales, for both industrial and other sectors.
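    As an illustration of the Monte Carlo step (with invented distributions, not the paper's data): the sector emission E = EF x A is sampled, and the 95% CI is then expressed relative to the central estimate, mirroring the -x%..+y% convention used above.

        import numpy as np

        rng = np.random.default_rng(0)

        # Propagate emission-factor and activity-level uncertainty to E = EF * A.
        N = 100_000
        ef = rng.lognormal(mean=np.log(2.0), sigma=0.3, size=N)   # emission factor, g/kg (assumed)
        act = rng.normal(loc=1.0e6, scale=5.0e4, size=N)          # activity data, kg (assumed)
        emis = ef * act

        best = np.median(emis)
        lo, hi = np.percentile(emis, [2.5, 97.5])
        # Report the 95% CI relative to the central estimate, as in the inventory.
        print(f"central: {best:.3e}, 95% CI: {100*(lo/best-1):+.0f}% .. {100*(hi/best-1):+.0f}%")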

  20. Approach to identifying pollutant source and matching flow field

    NASA Astrophysics Data System (ADS)

    Liping, Pang; Yu, Zhang; Hongquan, Qu; Tao, Hu; Wei, Wang

    2013-07-01

    Accidental pollution events often threaten people's health and lives, and it is necessary to identify a pollutant source rapidly so that prompt action can be taken to prevent the spread of pollution. This identification, however, is a difficult inverse problem. This paper carries out some studies on this issue. An approach using noisy single-sensor information was developed to identify a sudden, continuous-emission trace pollutant source in a steady velocity field. The approach first compares the characteristic distance of the measured concentration sequence to multiple hypothetical concentration sequences at the sensor position, which are obtained from multiple hypotheses over the three source parameters. Source identification is then realized by globally searching for the optimal values, with the maximum location probability as the objective function. Considering the large computational load of this global search, a local fine-mesh source search method based on prior coarse-mesh location probabilities is further used to improve the efficiency of identification. The studies show that the flow field has a very important influence on source identification; we therefore also discuss the impact of non-matching flow fields with estimation deviations on identification. Based on this analysis, a method for matching an accurate flow field is presented to improve the accuracy of identification. To verify the practical application of the above method, an experimental system simulating a sudden pollution process in a steady flow field was set up, and experiments were conducted with a known diffusion coefficient. The studies showed that the three parameters of the pollutant source in the experiment (position, emission strength and initial emission time) can be estimated by using the flow-field matching and source identification method.
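    The hypothesis-testing idea can be sketched as follows (an illustrative 1-D advection-diffusion stand-in for the real dispersion model; the velocity u, diffusion coefficient D and all grids are assumed values): forward-model the sensor concentration for each hypothesized (position, strength, start time) triple and keep the hypothesis that best explains the noisy measurement.

        import numpy as np
        from itertools import product

        u, D, dt = 0.5, 0.1, 1.0
        t_grid = np.arange(1.0, 120.0, dt)

        def forward(x_sensor, x0, q, t0):
            """1-D advection-diffusion, continuous release from t0 (superposition of puffs)."""
            c = np.zeros_like(t_grid)
            for i, t in enumerate(t_grid):
                tau = np.arange(t0, t, dt)
                if tau.size:
                    a = 4.0 * D * (t - tau)
                    c[i] = np.sum(q * dt / np.sqrt(np.pi * a)
                                  * np.exp(-(x_sensor - x0 - u * (t - tau)) ** 2 / a))
            return c

        rng = np.random.default_rng(1)
        measured = forward(x_sensor=50.0, x0=10.0, q=2.0, t0=5.0) \
                   + rng.normal(0, 0.05, t_grid.size)

        # Coarse grid search over the three source parameters (the paper refines locally).
        best = min(product(np.arange(0, 30, 2.0),      # x0 hypotheses
                           np.arange(0.5, 4.0, 0.5),   # q hypotheses
                           np.arange(0, 20, 2.5)),     # t0 hypotheses
                   key=lambda h: np.sum((measured - forward(50.0, *h)) ** 2))
        print("estimated (x0, q, t0):", best)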

  1. A novel Bayesian approach to accounting for uncertainty in fMRI-derived estimates of cerebral oxygen metabolism fluctuations

    PubMed Central

    Simon, Aaron B.; Dubowitz, David J.; Blockley, Nicholas P.; Buxton, Richard B.

    2016-01-01

    Calibrated blood oxygenation level dependent (BOLD) imaging is a multimodal functional MRI technique designed to estimate changes in cerebral oxygen metabolism from measured changes in cerebral blood flow and the BOLD signal. This technique addresses fundamental ambiguities associated with quantitative BOLD signal analysis; however, its dependence on biophysical modeling creates uncertainty in the resulting oxygen metabolism estimates. In this work, we developed a Bayesian approach to estimating the oxygen metabolism response to a neural stimulus and used it to examine the uncertainty that arises in calibrated BOLD estimation due to the presence of unmeasured model parameters. We applied our approach to estimate the CMRO2 response to a visual task using the traditional hypercapnia calibration experiment as well as to estimate the metabolic response to both a visual task and hypercapnia using the measurement of baseline apparent R2′ as a calibration technique. Further, in order to examine the effects of cerebrospinal fluid (CSF) signal contamination on the measurement of apparent R2′, we examined the effects of measuring this parameter with and without CSF-nulling. We found that the two calibration techniques provided consistent estimates of the metabolic response on average, with a median R2′-based estimate of the metabolic response to CO2 of 1.4%, and R2′- and hypercapnia-calibrated estimates of the visual response of 27% and 24%, respectively. However, these estimates were sensitive to different sources of estimation uncertainty. The R2′-calibrated estimate was highly sensitive to CSF contamination and to uncertainty in unmeasured model parameters describing flow-volume coupling, capillary bed characteristics, and the iso-susceptibility saturation of blood. The hypercapnia-calibrated estimate was relatively insensitive to these parameters but highly sensitive to the assumed metabolic response to CO2. PMID:26790354

  2. A novel Bayesian approach to accounting for uncertainty in fMRI-derived estimates of cerebral oxygen metabolism fluctuations.

    PubMed

    Simon, Aaron B; Dubowitz, David J; Blockley, Nicholas P; Buxton, Richard B

    2016-04-01

    Calibrated blood oxygenation level dependent (BOLD) imaging is a multimodal functional MRI technique designed to estimate changes in cerebral oxygen metabolism from measured changes in cerebral blood flow and the BOLD signal. This technique addresses fundamental ambiguities associated with quantitative BOLD signal analysis; however, its dependence on biophysical modeling creates uncertainty in the resulting oxygen metabolism estimates. In this work, we developed a Bayesian approach to estimating the oxygen metabolism response to a neural stimulus and used it to examine the uncertainty that arises in calibrated BOLD estimation due to the presence of unmeasured model parameters. We applied our approach to estimate the CMRO2 response to a visual task using the traditional hypercapnia calibration experiment as well as to estimate the metabolic response to both a visual task and hypercapnia using the measurement of baseline apparent R2' as a calibration technique. Further, in order to examine the effects of cerebrospinal fluid (CSF) signal contamination on the measurement of apparent R2', we examined the effects of measuring this parameter with and without CSF-nulling. We found that the two calibration techniques provided consistent estimates of the metabolic response on average, with a median R2'-based estimate of the metabolic response to CO2 of 1.4%, and R2'- and hypercapnia-calibrated estimates of the visual response of 27% and 24%, respectively. However, these estimates were sensitive to different sources of estimation uncertainty. The R2'-calibrated estimate was highly sensitive to CSF contamination and to uncertainty in unmeasured model parameters describing flow-volume coupling, capillary bed characteristics, and the iso-susceptibility saturation of blood. The hypercapnia-calibrated estimate was relatively insensitive to these parameters but highly sensitive to the assumed metabolic response to CO2. Copyright © 2016 Elsevier Inc. All rights reserved.
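    A minimal sketch of propagating unmeasured-parameter uncertainty, using the classical Davis model as a stand-in for the richer biophysical model used in these papers; the priors, the measured BOLD change and the CBF ratio below are all illustrative assumptions.

        import numpy as np

        rng = np.random.default_rng(2)

        # Davis model: delta_BOLD = M * (1 - f**(alpha - beta) * r**beta); invert for
        # r = CMRO2 ratio while sampling the unmeasured parameters from assumed priors.
        N = 50_000
        alpha = rng.normal(0.38, 0.05, N)         # flow-volume coupling (assumed prior)
        beta = rng.normal(1.3, 0.2, N)            # field/vessel exponent (assumed prior)
        M = rng.normal(0.08, 0.01, N)             # calibration parameter (assumed prior)

        db, f = 0.015, 1.6                        # measured BOLD change and CBF ratio (illustrative)
        r = ((1.0 - db / M) / f ** (alpha - beta)) ** (1.0 / beta)

        print("CMRO2 ratio: median %.3f, 95%% CI (%.3f, %.3f)"
              % (np.median(r), *np.percentile(r, [2.5, 97.5])))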

  3. Quantifying uncertainty in geoacoustic inversion. II. Application to broadband, shallow-water data.

    PubMed

    Dosso, Stan E; Nielsen, Peter L

    2002-01-01

    This paper applies the new method of fast Gibbs sampling (FGS) to estimate the uncertainties of seabed geoacoustic parameters in a broadband, shallow-water acoustic survey, with the goal of interpreting the survey results and validating the method for experimental data. FGS applies a Bayesian approach to geoacoustic inversion based on sampling the posterior probability density to estimate marginal probability distributions and parameter covariances. This requires knowledge of the statistical distribution of the data errors, including both measurement and theory errors, which is generally not available. Invoking the simplifying assumption of independent, identically distributed Gaussian errors allows a maximum-likelihood estimate of the data variance and leads to a practical inversion algorithm. However, it is necessary to validate these assumptions, i.e., to verify that the parameter uncertainties obtained represent meaningful estimates. To this end, FGS is applied to a geoacoustic experiment carried out at a site off the west coast of Italy where previous acoustic and geophysical studies have been performed. The parameter uncertainties estimated via FGS are validated by comparison with: (i) the variability in the results of inverting multiple independent data sets collected during the experiment; (ii) the results of FGS inversion of synthetic test cases designed to simulate the experiment and data errors; and (iii) the available geophysical ground truth. Comparisons are carried out for a number of different source bandwidths, ranges, and levels of prior information, and indicate that FGS provides reliable and stable uncertainty estimates for the geoacoustic inverse problem.
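    The profiled-variance likelihood at the heart of such schemes can be sketched as follows (a toy forward model and a plain Metropolis sampler, not the actual FGS algorithm): with iid Gaussian errors of unknown variance, maximizing the likelihood over the variance gives log L(theta) = -(N/2) * log SSE(theta), and marginals are read off the chain.

        import numpy as np

        rng = np.random.default_rng(3)

        x = np.linspace(0.0, 1.0, 50)
        def g(theta):                       # stand-in for the acoustic forward model
            c, a = theta
            return c * np.exp(-a * x)

        truth = np.array([1.5, 2.0])
        data = g(truth) + rng.normal(0, 0.05, x.size)

        def loglike(theta):
            sse = np.sum((data - g(theta)) ** 2)
            return -0.5 * x.size * np.log(sse)   # variance profiled out analytically

        theta, ll, chain = np.array([1.0, 1.0]), None, []
        ll = loglike(theta)
        for _ in range(20_000):
            prop = theta + rng.normal(0, 0.05, 2)    # symmetric proposal, flat prior
            ll_prop = loglike(prop)
            if np.log(rng.random()) < ll_prop - ll:
                theta, ll = prop, ll_prop
            chain.append(theta)
        chain = np.array(chain)[5_000:]              # drop burn-in
        print("posterior means:", chain.mean(axis=0))
        print("posterior std (uncertainty):", chain.std(axis=0))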

  4. Update on developments at SNIF

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zacks, J., E-mail: jamie.zacks@ccfe.ac.uk; Turner, I.; Day, I.

    The Small Negative Ion Facility (SNIF) at CCFE has been undergoing continuous development and enhancement to both improve operational reliability and increase diagnostic capability. SNIF uses a CW 13.56 MHz, 5 kW RF-driven volume source with a 30 kV triode accelerator. Improvement and characterisation work includes: installation of a new “L”-type RF matching unit, used to calculate the load on the RF generator; use of the electron-suppressing biased insert as a Langmuir probe under different beam extraction conditions; measurement of the hydrogen Fulcher molecular spectrum, used to calculate the gas temperature in the source; beam optimisation through parameter scans, using a copper target plate and visible cameras, with results compared with AXCEL-INP to provide a beam current estimate; and modelling of the beam power density profile on the target plate using ANSYS to estimate beam power and provide another estimate of beam current. This work is described, and has allowed an estimation of the extracted beam current of approximately 6 mA (4 mA/cm2) at 3.5 kW RF power and a source pressure of 0.6 Pa.

  5. A Bayesian approach to multi-messenger astronomy: identification of gravitational-wave host galaxies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fan, XiLong; Messenger, Christopher; Heng, Ik Siong

    We present a general framework for incorporating astrophysical information into Bayesian parameter estimation techniques used by gravitational wave data analysis to facilitate multi-messenger astronomy. Since the progenitors of transient gravitational wave events, such as compact binary coalescences, are likely to be associated with a host galaxy, improvements to the source sky location estimates through the use of host galaxy information are explored. To demonstrate how host galaxy properties can be included, we simulate a population of compact binary coalescences and show that for ∼8.5% of simulations within 200 Mpc, the top 10 most likely galaxies account for ∼50% of the total probability of hosting a gravitational wave source. The true gravitational wave source host galaxy is in the top 10 galaxy candidates ∼10% of the time. Furthermore, we show that by including host galaxy information, a better estimate of the inclination angle of a compact binary gravitational wave source can be obtained. We also demonstrate the flexibility of our method by incorporating the use of either the B or K band into our analysis.

  6. Real Time Tracking of Magmatic Intrusions by means of Ground Deformation Modeling during Volcanic Crises.

    PubMed

    Cannavò, Flavio; Camacho, Antonio G; González, Pablo J; Mattia, Mario; Puglisi, Giuseppe; Fernández, José

    2015-06-09

    Volcano observatories provide near real-time information and, ultimately, forecasts about volcano activity. For this reason, multiple physical and chemical parameters are continuously monitored. Here, we present a new method to efficiently estimate the location and evolution of magmatic sources based on a stream of real-time surface deformation data, such as High-Rate GPS, and a free-geometry magmatic source model. The tool allows tracking inflation and deflation sources in time, providing estimates of where a volcano might erupt, which is important in understanding an on-going crisis. We show a successful simulated application to the pre-eruptive period of May 2008, at Mount Etna (Italy). The proposed methodology is able to track the fast dynamics of the magma migration by inverting the real-time data within seconds. This general method is suitable for integration in any volcano observatory. The method provides first order unsupervised and realistic estimates of the locations of magmatic sources and of potential eruption sites, information that is especially important for civil protection purposes.

  7. Real Time Tracking of Magmatic Intrusions by means of Ground Deformation Modeling during Volcanic Crises

    PubMed Central

    Cannavò, Flavio; Camacho, Antonio G.; González, Pablo J.; Mattia, Mario; Puglisi, Giuseppe; Fernández, José

    2015-01-01

    Volcano observatories provide near real-time information and, ultimately, forecasts about volcano activity. For this reason, multiple physical and chemical parameters are continuously monitored. Here, we present a new method to efficiently estimate the location and evolution of magmatic sources based on a stream of real-time surface deformation data, such as High-Rate GPS, and a free-geometry magmatic source model. The tool allows tracking inflation and deflation sources in time, providing estimates of where a volcano might erupt, which is important in understanding an on-going crisis. We show a successful simulated application to the pre-eruptive period of May 2008, at Mount Etna (Italy). The proposed methodology is able to track the fast dynamics of the magma migration by inverting the real-time data within seconds. This general method is suitable for integration in any volcano observatory. The method provides first order unsupervised and realistic estimates of the locations of magmatic sources and of potential eruption sites, information that is especially important for civil protection purposes. PMID:26055494

  8. Fundamental properties of Fanaroff-Riley type II radio galaxies investigated via Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Kapińska, A. D.; Uttley, P.; Kaiser, C. R.

    2012-08-01

    Radio galaxies and quasars are among the largest and most powerful single objects known and are believed to have had a significant impact on the evolving Universe and its large-scale structure. We explore the intrinsic and extrinsic properties of the population of Fanaroff-Riley type II (FR II) objects, i.e. their kinetic luminosities, lifetimes and the central densities of their environments. In particular, the radio and kinetic luminosity functions of these powerful radio sources are investigated using the complete, flux-limited radio catalogues of the Third Cambridge Revised Revised Catalogue (3CRR) and Best et al. We construct multidimensional Monte Carlo simulations using semi-analytical models of FR II source time evolution to create artificial samples of radio galaxies. Unlike previous studies, we compare radio luminosity functions found with both the observed and simulated data to explore the best-fitting fundamental source parameters. The new Monte Carlo method we present here allows us to (i) set better limits on the predicted fundamental parameters, for which confidence intervals estimated over broad ranges are presented, and (ii) generate the most plausible underlying parent populations of these radio sources. Moreover, as has not been done before, we allow the source physical properties (kinetic luminosities, lifetimes and central densities) to co-evolve with redshift, and we find that all the investigated parameters most likely undergo cosmological evolution. Strikingly, we find that the break in the kinetic luminosity function must undergo redshift evolution of at least (1 + z)^3. The fundamental parameters are strongly degenerate, and independent constraints are necessary to draw more precise conclusions. We use the estimated kinetic luminosity functions to set constraints on the duty cycles of these powerful radio sources. A comparison of the duty cycles of powerful FR IIs with those determined from radiative luminosities of active galactic nuclei of comparable black hole mass suggests a transition in behaviour from high to low redshifts, corresponding to either a drop in the typical black hole mass of powerful FR IIs at low redshifts, or a transition to a kinetically dominated, radiatively inefficient FR II population.

  9. A distributed transmit beamforming synchronization strategy for multi-element radar systems

    NASA Astrophysics Data System (ADS)

    Xiao, Manlin; Li, Xingwen; Xu, Jikang

    2017-02-01

    Distributed transmit beamforming has recently been discussed as an energy-effective technique in wireless communication systems. Common to various techniques is that the destination node transmits a beacon signal or feedback to assist the source nodes in synchronizing their signals. However, this approach is not appropriate for a radar system, since the destination is a non-cooperative target at an unknown location. In this paper, we propose a novel synchronization strategy for a distributed multiple-element beamforming radar system. Source nodes estimate parameters of the beacon signals transmitted by the other nodes to obtain their local synchronization information. The channel information of the phase propagation delay is conveyed to the nodes via the reflected beacon signals as well. Next, each node generates appropriate parameters to form a beamforming signal at the target. The transmit beamforming signals of all nodes then combine coherently at the target, compensating for the different propagation delays. We analyse the influence of the local oscillator accuracy and the parameter estimation errors on the performance of the proposed synchronization scheme. The results of numerical simulations illustrate that this synchronization scheme is effective in enabling transmit beamforming in a distributed multi-element radar system.

  10. Commercial Demand Module - NEMS Documentation

    EIA Publications

    2017-01-01

    Documents the objectives, analytical approach and development of the National Energy Modeling System (NEMS) Commercial Sector Demand Module. The report catalogues and describes the model assumptions, computational methodology, parameter estimation techniques, model source code, and forecast results generated through the synthesis and scenario development based on these components.

  11. A stepwise, multi-objective, multi-variable parameter optimization method for the APEX model

    USDA-ARS?s Scientific Manuscript database

    Proper parameterization enables hydrological models to make reliable estimates of non-point source pollution for effective control measures. The automatic calibration of hydrologic models requires significant computational power limiting its application. The study objective was to develop and eval...

  12. Identifiability and estimation of multiple transmission pathways in cholera and waterborne disease.

    PubMed

    Eisenberg, Marisa C; Robertson, Suzanne L; Tien, Joseph H

    2013-05-07

    Cholera and many waterborne diseases exhibit multiple characteristic timescales or pathways of infection, which can be modeled as direct and indirect transmission. A major public health issue for waterborne diseases involves understanding the modes of transmission in order to improve control and prevention strategies. An important epidemiological question is: given data for an outbreak, can we determine the role and relative importance of direct vs. environmental/waterborne routes of transmission? We examine whether parameters for a differential equation model of waterborne disease transmission dynamics can be identified, both in the ideal setting of noise-free data (structural identifiability) and in the more realistic setting in the presence of noise (practical identifiability). We used a differential algebra approach together with several numerical approaches, with a particular emphasis on identifiability of the transmission rates. To examine these issues in a practical public health context, we apply the model to a recent cholera outbreak in Angola (2006). Our results show that the model parameters, including both water and person-to-person transmission routes, are globally structurally identifiable, although they become unidentifiable when the environmental transmission timescale is fast. Even for water dynamics within the identifiable range, when noisy data are considered, only a combination of the water transmission parameters can practically be estimated. This makes the waterborne transmission parameters difficult to estimate, leading to inaccurate estimates of important epidemiological parameters such as the basic reproduction number (R0). However, measurements of pathogen persistence time in environmental water sources or measurements of pathogen concentration in the water can improve model identifiability and allow for more accurate estimation of waterborne transmission pathway parameters as well as R0. Parameter estimates for the Angola outbreak suggest that both transmission pathways are needed to explain the observed cholera dynamics. These results highlight the importance of incorporating environmental data when examining waterborne disease. Copyright © 2013 Elsevier Ltd. All rights reserved.
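    A minimal SIWR-type sketch of the two pathways discussed above, with direct (beta_I) and waterborne (beta_W) transmission and pathogen decay xi in the water compartment; all parameter values are illustrative, not the Angola estimates.

        import numpy as np
        from scipy.integrate import odeint

        def siwr(y, t, beta_I, beta_W, xi, alpha, gamma):
            S, I, W, R = y
            infection = beta_I * S * I + beta_W * S * W
            dS = -infection
            dI = infection - gamma * I
            dW = alpha * I - xi * W        # shedding into, and decay of, the water reservoir
            dR = gamma * I
            return [dS, dI, dW, dR]

        t = np.linspace(0, 120, 500)
        y0 = [0.999, 0.001, 0.0, 0.0]
        sol = odeint(siwr, y0, t, args=(0.25, 0.25, 0.1, 0.1, 0.1))
        print("epidemic peak (fraction infectious): %.4f" % sol[:, 1].max())

        # Identifiability intuition: as xi grows large, W tracks (alpha/xi)*I and only
        # the combination beta_I + beta_W*alpha/xi is constrained by incidence data.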

  13. Assessment of spatial distribution of porosity and aquifer geohydraulic parameters in parts of the Tertiary-Quaternary hydrogeoresource of south-eastern Nigeria

    NASA Astrophysics Data System (ADS)

    George, N. J.; Akpan, A. E.; Akpan, F. S.

    2017-12-01

    Information deduced from an extensive surface resistivity study in three Local Government Areas of Akwa Ibom State, Nigeria, and data from hydrogeological sources obtained from water boreholes were integrated to economically estimate porosity and the coefficient of permeability/hydraulic conductivity in parts of the clastic Tertiary-Quaternary sediments of the Niger Delta region. Generally, these parameters are predominantly estimated from empirical laboratory analysis of core samples and pumping test data generated from boreholes. However, such analysis is not only costly and time consuming, but also limited in areal coverage. The chosen technique employs surface resistivity data, core samples and pumping test data in order to estimate porosity and aquifer hydraulic parameters (transverse resistance, hydraulic conductivity and transmissivity). In correlating the two sets of results, porosity and hydraulic conductivity were observed to be more elevated near the riverbanks. Empirical models utilising the Archie, Waxman-Smits and Kozeny-Carman-Bear relations were employed to characterise the formation parameters, with good fits obtained. The effect of surface conduction caused by clay, usually disregarded or ignored in Archie's model, was estimated to be 2.58 × 10⁻⁵ siemens. This conductance can be used as a corrective factor to the conduction values obtained from Archie's equation. Interpretation aids such as graphs, mathematical models and maps, geared towards realistic conclusions about the interrelationship between porosity and the other aquifer parameters, were generated. The hydraulic conductivity estimated from the Waxman-Smits model was approximately 9.6 × 10⁻⁵ m/s everywhere, indicating that there is no pronounced change in the quality of the saturating fluid or in the geological formations that serve as aquifers, even though the porosities varied. The deciphered parameter relations can be used to estimate geohydraulic parameters in other locations with little or no borehole data.
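    The Archie-type step can be sketched as follows; the cementation constants a and m, the resistivities and the Kozeny-Carman inputs below are illustrative values, not the field data above.

        # Porosity from the formation factor F = rho_bulk / rho_water via F = a * phi**(-m).
        def porosity_archie(rho_bulk, rho_water, a=1.0, m=2.0):
            F = rho_bulk / rho_water          # formation factor
            return (a / F) ** (1.0 / m)

        # Kozeny-Carman: K = (rho*g/mu) * d50**2/180 * phi**3/(1-phi)**2  [m/s]
        def hydraulic_conductivity_kc(phi, d50=1e-4, rho_g_over_mu=9.81e6):
            return rho_g_over_mu * d50 ** 2 / 180.0 * phi ** 3 / (1.0 - phi) ** 2

        phi = porosity_archie(rho_bulk=400.0, rho_water=20.0)   # resistivities in ohm-m
        print(f"estimated porosity: {phi:.2f}")                 # ~0.22 for these inputs
        print(f"estimated K: {hydraulic_conductivity_kc(phi):.2e} m/s")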

  14. Nonspinning numerical relativity waveform surrogates: assessing the model

    NASA Astrophysics Data System (ADS)

    Field, Scott; Blackman, Jonathan; Galley, Chad; Scheel, Mark; Szilagyi, Bela; Tiglio, Manuel

    2015-04-01

    Recently, multi-modal gravitational waveform surrogate models have been built directly from data numerically generated by the Spectral Einstein Code (SpEC). I will describe ways in which the surrogate model error can be quantified. This task, in turn, requires (i) characterizing differences between waveforms computed by SpEC with those predicted by the surrogate model and (ii) estimating errors associated with the SpEC waveforms from which the surrogate is built. Both pieces can have numerous sources of numerical and systematic errors. We make an attempt to study the most dominant error sources and, ultimately, the surrogate model's fidelity. These investigations yield information about the surrogate model's uncertainty as a function of time (or frequency) and parameter, and could be useful in parameter estimation studies which seek to incorporate model error. Finally, I will conclude by comparing the numerical relativity surrogate model to other inspiral-merger-ringdown models. A companion talk will cover the building of multi-modal surrogate models.

  15. Open-source LCA tool for estimating greenhouse gas emissions from crude oil production using field characteristics.

    PubMed

    El-Houjeiri, Hassan M; Brandt, Adam R; Duffy, James E

    2013-06-04

    Existing transportation fuel cycle emissions models are either general and calculate nonspecific values of greenhouse gas (GHG) emissions from crude oil production, or are not available for public review and auditing. We have developed the Oil Production Greenhouse Gas Emissions Estimator (OPGEE) to provide open-source, transparent, rigorous GHG assessments for use in scientific assessment, regulatory processes, and analysis of GHG mitigation options by producers. OPGEE uses petroleum engineering fundamentals to model emissions from oil and gas production operations. We introduce OPGEE and explain the methods and assumptions used in its construction. We run OPGEE on a small set of fictional oil fields and explore model sensitivity to selected input parameters. Results show that upstream emissions from petroleum production operations can vary from 3 gCO2/MJ to over 30 gCO2/MJ using realistic ranges of input parameters. Significant drivers of emissions variation are steam injection rates, water handling requirements, and rates of flaring of associated gas.

  16. Tuning into Scorpius X-1: adapting a continuous gravitational-wave search for a known binary system

    NASA Astrophysics Data System (ADS)

    Meadors, Grant David; Goetz, Evan; Riles, Keith

    2016-05-01

    We describe how the TwoSpect data analysis method for continuous gravitational waves (GWs) has been tuned for directed sources such as the low-mass X-ray binary (LMXB), Scorpius X-1 (Sco X-1). A comparison of five search algorithms generated simulations of the orbital and GW parameters of Sco X-1. Whereas that comparison focused on relative performance, here the simulations help quantify the sensitivity enhancement and parameter estimation abilities of this directed method, derived from an all-sky search for unknown sources, using doubly Fourier-transformed data. Sensitivity is shown to be enhanced when the source sky location and period are known, because we can run a fully templated search, bypassing the all-sky hierarchical stage using an incoherent harmonic sum. The GW strain and frequency, as well as the projected semi-major axis of the binary system, are recovered and uncertainty estimated, for simulated signals that are detected. Upper limits for GW strain are set for undetected signals. Applications to future GW observatory data are discussed. Robust against spin-wandering and computationally tractable despite an unknown frequency, this directed search is an important new tool for finding gravitational signals from LMXBs.

  17. The Importance of Behavioral Thresholds and Objective Functions in Contaminant Transport Uncertainty Analysis

    NASA Astrophysics Data System (ADS)

    Sykes, J. F.; Kang, M.; Thomson, N. R.

    2007-12-01

    The TCE release from The Lockformer Company in Lisle, Illinois resulted in a plume in a confined aquifer that is more than 4 km long and impacted more than 300 residential wells. Many of the wells are on the fringe of the plume and have concentrations that did not exceed 5 ppb. The settlement for the Chapter 11 bankruptcy protection of Lockformer involved the establishment of a trust fund that compensates individuals with cancers, with payments based on cancer type, estimated TCE concentration in the well, and the duration of exposure to TCE. The estimation of early arrival times, and hence low-likelihood events, is critical in determining the eligibility of an individual for compensation. Thus, an emphasis must be placed on the accuracy of the leading tail region in the likelihood distribution of possible arrival times at a well. The estimation of TCE arrival time, using a three-dimensional analytical solution, involved parameter estimation and uncertainty analysis. Parameters in the model included TCE source parameters, groundwater velocities, dispersivities and the TCE decay coefficient for both the confining layer and the bedrock aquifer. Numerous objective functions, which include the well-known L2-estimator, robust estimators (L1-estimators and M-estimators), penalty functions, and dead zones, were incorporated in the parameter estimation process to treat insufficiencies in both the model and observational data due to errors, biases, and limitations. The concept of equifinality was adopted and multiple maximum likelihood parameter sets were accepted if pre-defined physical criteria were met. The criteria ensured that a valid solution predicted TCE concentrations for all TCE-impacted areas. Monte Carlo samples were found to be inadequate for uncertainty analysis of this case study due to their inability to find parameter sets that meet the predefined physical criteria. Successful results were achieved using a Dynamically-Dimensioned Search sampling methodology that inherently accounts for parameter correlations and does not require assumptions regarding parameter distributions. For uncertainty analysis, multiple parameter sets were obtained using a modified Cauchy's M-estimator. Penalty functions had to be incorporated into the objective function definitions to generate a sufficient number of acceptable parameter sets. The combined effect of optimization and the application of the physical criteria performs the function of behavioral thresholds by reducing anomalies and by removing parameter sets with high objective function values. The factors that are important to the creation of an uncertainty envelope for TCE arrival at wells are outlined in this work. In general, greater uncertainty appears to be present at the tails of the distribution. For a refinement of the uncertainty envelopes, the application of additional physical criteria or behavioral thresholds is recommended.

  18. Post-blasting seismicity in Rudna copper mine, Poland - source parameters analysis.

    NASA Astrophysics Data System (ADS)

    Caputa, Alicja; Rudziński, Łukasz; Talaga, Adam

    2017-04-01

    A major hazard in Polish copper mines is high seismicity and the associated rockbursts. Many methods are used to reduce the seismic hazard; among them, the most effective is preventive blasting in potentially hazardous mining panels. The method is expected to provoke small to moderate tremors (up to M2.0) and thereby reduce stress accumulation in the rock mass. This work presents an analysis of post-blasting events in the Rudna copper mine, Poland. Using full moment tensor (MT) inversion and seismic spectra analysis, we try to find characteristic features of post-blasting seismic sources. Source parameters estimated for post-blasting events are compared with the parameters of not-provoked mining events that occurred in the vicinity of the provoked sources. Our studies show that the focal mechanisms of events which occurred after blasts have similar MT decompositions, namely a comparatively strong isotropic component relative to that of not-provoked events. Source parameters obtained from spectral analysis also show that provoked seismicity has a specific source physics; among other indicators, this is visible in the S- to P-wave energy ratio, which is higher for not-provoked events. The comparison of all our results reveals three possible groups of sources: (a) events occurring just after blasts, (b) events occurring from 5 min to 24 h after blasts, and (c) not-provoked seismicity (more than 24 h after blasting). Acknowledgements: This work was supported within statutory activities No3841/E-41/S/2016 of the Ministry of Science and Higher Education of Poland.

  19. Estimation of effective brain connectivity with dual Kalman filter and EEG source localization methods.

    PubMed

    Rajabioun, Mehdi; Nasrabadi, Ali Motie; Shamsollahi, Mohammad Bagher

    2017-09-01

    Effective connectivity is one of the most important considerations in brain functional mapping via EEG. It demonstrates the effects of a particular active brain region on others. In this paper, a new method is proposed which is based on the dual Kalman filter. In this method, a source localization method (standardized low-resolution brain electromagnetic tomography) is first applied to the EEG signal to extract the active regions, and an appropriate temporal model (a multivariate autoregressive model) is fitted to the extracted active sources to evaluate their activity and the time dependence between them. Then, a dual Kalman filter is used to estimate the model parameters, i.e., the effective connectivity between the active regions. The advantage of this method is that the activity of different brain regions is estimated simultaneously with the calculation of the effective connectivity between them: by combining the dual Kalman filter with source localization methods, the source activity is updated over time in addition to the connectivity estimates. The performance of the proposed method was evaluated first on simulated EEG signals with known interactions between the active regions. Noisy simulated signals with different signal-to-noise ratios were used to evaluate the method's sensitivity to noise and to compare its performance with other methods. The method was then applied to real signals, and the estimation error over a sliding window was calculated. Across both simulated and real signals, the proposed method gives acceptable results with the lowest mean square error under noisy or real conditions.

  20. Estimation of viscoelastic parameters in Prony series from shear wave propagation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jung, Jae-Wook; Hong, Jung-Wuk, E-mail: j.hong@kaist.ac.kr, E-mail: jwhong@alum.mit.edu; Lee, Hyoung-Ki

    2016-06-21

    To acquire accurate ultrasonic images, the mechanical properties of the soft tissue must be estimated precisely. This study investigates and estimates the viscoelastic properties of tissue by analyzing shear waves generated through an acoustic radiation force. The shear waves originate from a localized pushing force acting for a certain duration, and the generated waves travel horizontally. The wave velocities depend on the mechanical properties of the tissue, such as the shear modulus and the viscoelastic parameters; we can therefore inversely calculate the properties of the tissue through parametric studies.
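    For reference, the Prony-series representation and a direct least-squares fit to a synthetic relaxation record are sketched below; the paper instead estimates these parameters inversely from shear-wave propagation, and all values here are illustrative.

        import numpy as np
        from scipy.optimize import curve_fit

        # Two-term Prony series: G(t) = G_inf + sum_i G_i * exp(-t / tau_i)
        def prony2(t, G_inf, G1, tau1, G2, tau2):
            return G_inf + G1 * np.exp(-t / tau1) + G2 * np.exp(-t / tau2)

        t = np.linspace(0.01, 10.0, 200)
        rng = np.random.default_rng(4)
        g_obs = prony2(t, 1.0, 0.8, 0.1, 0.4, 2.0) + rng.normal(0, 0.005, t.size)

        p0 = [0.5, 0.5, 0.05, 0.5, 1.0]                     # initial guess
        popt, _ = curve_fit(prony2, t, g_obs, p0=p0, maxfev=20_000)
        print("G_inf, G1, tau1, G2, tau2 =", np.round(popt, 3))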

  1. Biquaternion beamspace with its application to vector-sensor array direction findings and polarization estimations

    NASA Astrophysics Data System (ADS)

    Li, Dan; Xu, Feng; Jiang, Jing Fei; Zhang, Jian Qiu

    2017-12-01

    In this paper, a biquaternion beamspace, constructed by projecting the original data of an electromagnetic vector-sensor array into a subspace of lower dimension via a quaternion transformation matrix, is first proposed. To estimate the direction and polarization angles of sources, biquaternion beamspace multiple signal classification (BB-MUSIC) estimators are then formulated. The analytical results show that the biquaternion beamspaces offer additional degrees of freedom to achieve three goals simultaneously. One is to save memory space for storing the data covariance matrix and to reduce the computational effort of the eigen-decomposition. Another is to decouple the estimation of the sources' polarization parameters from that of their direction angles. The third is to blindly whiten the coherent noise of the six constituent antennas in each vector-sensor. It is also shown that the existing biquaternion multiple signal classification (BQ-MUSIC) estimator is a specific case of our BB-MUSIC estimators. The simulation results verify the correctness and effectiveness of the analytical ones.

  2. Statistical aspects of carbon fiber risk assessment modeling. [fire accidents involving aircraft

    NASA Technical Reports Server (NTRS)

    Gross, D.; Miller, D. R.; Soland, R. M.

    1980-01-01

    The probabilistic and statistical aspects of the carbon fiber risk assessment modeling of fire accidents involving commercial aircraft are examined. Three major sources of uncertainty in the modeling effort are identified. These are: (1) imprecise knowledge in establishing the model; (2) parameter estimation; and (3) Monte Carlo sampling error. All three sources of uncertainty are treated and statistical procedures are utilized and/or developed to control them wherever possible.

  3. Real-time realizations of the Bayesian Infrasonic Source Localization Method

    NASA Astrophysics Data System (ADS)

    Pinsky, V.; Arrowsmith, S.; Hofstetter, A.; Nippress, A.

    2015-12-01

    The Bayesian Infrasonic Source Localization method (BISL), introduced by Modrak et al. (2010) and upgraded by Marcillo et al. (2014), is designed for accurate estimation of atmospheric event origins at local, regional and global scales by seismic and infrasonic networks and arrays. BISL is based on probabilistic models of the source-station infrasonic signal propagation time, picking time and azimuth estimate, merged with prior knowledge of the celerity distribution. It requires, at each hypothetical source location, integration of the product of the corresponding source-station likelihood functions, multiplied by a prior probability density function of celerity, over the multivariate parameter space. The present BISL realization is a generally time-consuming procedure based on numerical integration. The computational scheme proposed here simplifies the target function so that the integrals are taken exactly and are represented via standard functions. This makes the procedure much faster and realizable in real time without practical loss of accuracy. The procedure, implemented as Python/Fortran code, demonstrates high performance on a set of model and real data.

  4. Crustal dynamics project data analysis fixed station VLBI geodetic results

    NASA Technical Reports Server (NTRS)

    Ryan, J. W.; Ma, C.

    1985-01-01

    The Goddard VLBI group reports the results of analyzing the fixed observatory VLBI data available to the Crustal Dynamics Project through the end of 1984. All POLARIS/IRIS full-day data are included. The mobile site at Platteville, Colorado is also included since its occupation bears on the study of plate stability. Data from 1980 through 1984 were used to obtain the catalog of site and radio source positions labeled S284C. Using this catalog two types of one-day solutions were made: (1) to estimate site and baseline motions; and (2) to estimate Earth rotation parameters. A priori Earth rotation parameters were interpolated to the epoch of each observation from BIH Circular D.

  5. Force Limited Vibration Testing: Computation C2 for Real Load and Probabilistic Source

    NASA Astrophysics Data System (ADS)

    Wijker, J. J.; de Boer, A.; Ellenbroek, M. H. M.

    2014-06-01

    To prevent over-testing of the test item during random vibration testing, Scharton proposed and discussed force limited random vibration testing (FLVT) in a number of publications, in which the factor C2 is, besides the random vibration specification, the total mass and the turnover frequency of the load (test item), a very important parameter. A number of computational methods to estimate C2 are described in the literature, i.e. the simple and the complex two-degrees-of-freedom systems, STDFS and CTDFS, respectively. Both the STDFS and the CTDFS describe in a very reduced (simplified) manner the load and the source (the adjacent structure transferring the excitation forces to the test item, e.g. a spacecraft supporting an instrument). The motivation of this work is to establish a method for the computation of a realistic value of C2 to perform a representative random vibration test based on force limitation, when the description of the adjacent structure (source) is more or less unknown. Marchand formulated a conservative estimate of C2 based on the maximum modal effective mass and damping of the test item (load), when no description of the supporting structure (source) is available [13]. Marchand discussed the formal description of obtaining C2, using the maximum PSD of the acceleration and the maximum PSD of the force, both at the interface between load and source, in combination with the apparent mass and the total mass of the load. This method is very convenient for computing the factor C2; however, finite element models are needed to compute the PSD spectra of both the acceleration and the force at the interface between load and source. Stevens presented the coupled systems modal approach (CSMA), where simplified asparagus patch models (parallel-oscillator representations) of load and source are connected, consisting of the modal effective masses and the spring stiffnesses associated with the natural frequencies. When the random acceleration vibration specification is given, the CSMA method is suitable to compute the value of the parameter C2. When no mathematical model of the source can be made available, estimates of the value of C2 can be found in the literature. In this paper a probabilistic mathematical representation of the unknown source is proposed, such that the asparagus patch model of the source can be approximated. The computation of the value of C2 can then be done in conjunction with the CSMA method, knowing the apparent mass of the load and the random acceleration specification at the interface between load and source. Strength and stiffness design rules for spacecraft, instrumentation, units, etc., as mentioned in ECSS Standards and Handbooks, launch vehicle user's manuals, papers, books, etc., are followed. A probabilistic description of the design parameters is foreseen. As an example, a simple experiment has been worked out.

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tuo, Rui; Jeff Wu, C. F.

    Calibration parameters in deterministic computer experiments are those attributes that cannot be measured or are unavailable in physical experiments. Here, an approach is presented to estimate them by using data from physical experiments and computer simulations. A theoretical framework is given which allows us to study the issues of parameter identifiability and estimation. We define L2-consistency for calibration as a justification for calibration methods. It is shown that a simplified version of the original KO method leads to asymptotically L2-inconsistent calibration. This L2-inconsistency can be remedied by modifying the original estimation procedure. A novel calibration method, called the L2 calibration, is proposed and proven to be L2-consistent and to enjoy an optimal convergence rate. Furthermore, a numerical example and some mathematical analysis are used to illustrate the source of the L2-inconsistency problem.
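    A toy sketch of the L2-calibration criterion under stated assumptions (the forward model, discrepancy and data below are invented for illustration): the calibration parameter is chosen to minimize the L2 distance between the physical response and the computer model.

        import numpy as np
        from scipy.optimize import minimize_scalar

        x = np.linspace(0.0, 1.0, 200)
        def eta(x, theta):                   # computer model (illustrative)
            return np.sin(theta * x)

        rng = np.random.default_rng(5)
        # Physical data = computer model at theta=2 plus a small model discrepancy and noise.
        y_phys = np.sin(2.0 * x) + 0.05 * x + rng.normal(0, 0.02, x.size)

        # L2 calibration: theta* = argmin ||y_phys - eta(., theta)||_{L2} (discrete version)
        res = minimize_scalar(lambda th: np.sum((y_phys - eta(x, th)) ** 2),
                              bounds=(0.5, 4.0), method="bounded")
        print("L2-calibrated theta: %.3f" % res.x)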

  7. Assessing the differences in public health impact of salmonella subtypes using a bayesian microbial subtyping approach for source attribution.

    PubMed

    Pires, Sara M; Hald, Tine

    2010-02-01

    Salmonella is a major cause of human gastroenteritis worldwide. To prioritize interventions and assess the effectiveness of efforts to reduce illness, it is important to attribute salmonellosis to the responsible sources. Studies have suggested that some Salmonella subtypes have a higher health impact than others. Likewise, some food sources appear to have a higher impact than others. Knowledge of variability in the impact of subtypes and sources may provide valuable added information for research, risk management, and public health strategies. We developed a Bayesian model that attributes illness to specific sources and allows for a better estimation of the differences in the ability of Salmonella subtypes and food types to result in reported salmonellosis. The model accommodates data for multiple years and is based on the Danish Salmonella surveillance. The number of sporadic cases caused by different Salmonella subtypes is estimated as a function of the prevalence of these subtypes in the animal-food sources, the amount of food consumed, subtype-related factors, and source-related factors. Our results showed relative differences between Salmonella subtypes in their ability to cause disease. These differences presumably represent multiple factors, such as differences in survivability through the food chain and/or pathogenicity. The relative importance of the source-dependent factors varied considerably over the years, reflecting, among others, variability in the surveillance programs for the different animal sources. The presented model requires estimation of fewer parameters than a previously developed model, and thus allows for a better estimation of the ability of these factors to result in reported human disease. In addition, a comparison of the results of the same model using different sets of typing data revealed that the model can be applied to data with less discriminatory power, which is the only data available in many countries. In conclusion, the model allows for the estimation of relative differences between Salmonella subtypes and sources, providing results that will benefit future risk assessment or risk ranking purposes.
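    The expected-case structure of such Hald-type attribution models can be sketched as follows (all numbers invented for illustration; in the full Bayesian model the subtype and source factors get priors and are sampled, e.g. by MCMC).

        import numpy as np

        rng = np.random.default_rng(8)

        # lambda[i, j] = p[i, j] * M[j] * q[i] * a[j]: expected sporadic cases of
        # subtype i from source j; cases ~ Poisson(lambda).
        n_types, n_sources = 5, 3
        p = rng.uniform(0, 0.1, (n_types, n_sources))   # subtype prevalence per source (illustrative)
        M = np.array([2.0e7, 1.5e7, 0.8e7])             # amount of each food consumed (illustrative)
        q = rng.lognormal(0, 0.5, n_types)              # subtype-related factors
        a = rng.lognormal(-12, 0.5, n_sources)          # source-related factors

        lam = p * M[None, :] * q[:, None] * a[None, :]
        cases = rng.poisson(lam)
        attribution = lam.sum(axis=0) / lam.sum()       # share of illness per source
        print("simulated cases by source:", cases.sum(axis=0))
        print("expected attribution by source:", np.round(attribution, 3))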

  8. Physics issues in diffraction limited storage ring design

    NASA Astrophysics Data System (ADS)

    Fan, Wei; Bai, ZhengHe; Gao, WeiWei; Feng, GuangYao; Li, WeiMin; Wang, Lin; He, DuoHui

    2012-05-01

    Diffraction limited electron storage ring is considered a promising candidate for future light sources, whose main characteristics are higher brilliance, better transverse coherence and better stability. The challenge of diffraction limited storage ring design is how to achieve the ultra low beam emittance with acceptable nonlinear performance. Effective linear and nonlinear parameter optimization methods based on Artificial Intelligence were developed for the storage ring physical design. As an example of application, partial physical design of HALS (Hefei Advanced Light Source), which is a diffraction limited VUV and soft X-ray light source, was introduced. Severe emittance growth due to the Intra Beam Scattering effect, which is the main obstacle to achieve ultra low emittance, was estimated quantitatively and possible cures were discussed. It is inspiring that better performance of diffraction limited storage ring can be achieved in principle with careful parameter optimization.

  9. Stochastic point-source modeling of ground motions in the Cascadia region

    USGS Publications Warehouse

    Atkinson, G.M.; Boore, D.M.

    1997-01-01

    A stochastic model is used to develop preliminary ground motion relations for the Cascadia region for rock sites. The model parameters are derived from empirical analyses of seismographic data from the Cascadia region. The model is based on a Brune point source characterized by a stress parameter of 50 bars. The model predictions are compared to ground-motion data from the Cascadia region and to data from large earthquakes in other subduction zones. The point-source simulations match the observations from moderate events, but overpredict the motions from larger events at distances beyond about 100 km. The discrepancy at large magnitudes suggests further work on modeling finite-fault effects and regional attenuation is warranted. In the meantime, the preliminary equations are satisfactory for predicting motions from events of M < 7 and provide conservative estimates of motions from larger events at distances less than 100 km.
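    For concreteness, here is a sketch of the Brune omega-squared point-source acceleration spectrum that such stochastic models build on; the constants below (CGS units, 20 km distance, magnitude 6, the 50-bar stress parameter quoted above) are illustrative, and attenuation, duration and site terms are omitted.

        import numpy as np

        # Fourier acceleration spectrum:
        #   A(f) = C * M0 * (2*pi*f)**2 / (1 + (f/fc)**2) / R
        rho, beta = 2.8, 3.5e5                  # density (g/cm^3), shear velocity (cm/s)
        R = 20.0e5                              # hypocentral distance: 20 km in cm
        C = 0.55 * 2.0 * 0.707 / (4.0 * np.pi * rho * beta ** 3)  # radiation, free surface, partition

        M = 6.0
        M0 = 10.0 ** (1.5 * M + 16.05)          # seismic moment, dyne-cm (Hanks & Kanamori)
        fc = 4.9e6 * 3.5 * (50.0 / M0) ** (1.0 / 3.0)  # corner frequency, Hz (beta in km/s, stress in bars)

        f = np.logspace(-1.0, 1.3, 200)
        A = C * M0 * (2.0 * np.pi * f) ** 2 / (1.0 + (f / fc) ** 2) / R
        print("fc = %.2f Hz, high-frequency spectral level = %.2f cm/s" % (fc, A[-1]))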

  10. Informed spectral analysis: audio signal parameter estimation using side information

    NASA Astrophysics Data System (ADS)

    Fourer, Dominique; Marchand, Sylvain

    2013-12-01

    Parametric models are of great interest for representing and manipulating sounds. However, the quality of the resulting signals depends on the precision of the parameters. When the signals are available, these parameters can be estimated, but the presence of noise decreases the resulting precision of the estimation. Furthermore, the Cramér-Rao bound shows the minimal error reachable with the best estimator, which can be insufficient for demanding applications. These limitations can be overcome by using the coding approach, which consists of directly transmitting the parameters with the best precision using the minimal bitrate. However, this approach does not take advantage of the information provided by the estimation from the signal and may require a larger bitrate and a loss of compatibility with existing file formats. The purpose of this article is to propose a compromise approach, called the 'informed approach,' which combines analysis with (coded) side information in order to increase the precision of parameter estimation using a lower bitrate than pure coding approaches, the audio signal being known. Thus, the analysis problem is presented in a coder/decoder configuration where the side information is computed and inaudibly embedded into the mixture signal at the coder. At the decoder, the extra information is extracted and is used to assist the analysis process. This study proposes applying this approach to audio spectral analysis using sinusoidal modeling, which is a well-known model with practical applications and where theoretical bounds have been calculated. This work aims at uncovering new approaches for audio quality-based applications. It provides a solution for challenging problems like active listening of music, source separation, and realistic sound transformations.

  11. Seismic source parameters of the induced seismicity at The Geysers geothermal area, California, by a generalized inversion approach

    NASA Astrophysics Data System (ADS)

    Picozzi, Matteo; Oth, Adrien; Parolai, Stefano; Bindi, Dino; De Landro, Grazia; Amoroso, Ortensia

    2017-04-01

    The accurate determination of stress drop, seismic efficiency and how source parameters scale with earthquake size is important for seismic hazard assessment of induced seismicity. We propose an improved non-parametric, data-driven strategy suitable for monitoring induced seismicity, which combines the generalized inversion technique with genetic algorithms. In the first step of the analysis, the generalized inversion technique allows for an effective correction of waveforms for the attenuation and site contributions. Then, the retrieved source spectra are inverted by a non-linear sensitivity-driven inversion scheme that allows accurate estimation of source parameters. We investigate the earthquake source characteristics of 633 induced earthquakes (ML 2-4.5) recorded at The Geysers geothermal field (California) by a dense seismic network (i.e., 32 stations of the Lawrence Berkeley National Laboratory Geysers/Calpine surface seismic network, more than 17,000 velocity records). We find for most of the events a non-self-similar behavior: empirical source spectra that require an ω^γ source model with γ > 2 to be well fitted, and small radiation efficiencies ηSW. All these findings suggest different dynamic rupture processes for smaller and larger earthquakes, and that the proportion of high-frequency energy radiation and the amount of energy required to overcome friction or to create new fracture surface change with earthquake size. Furthermore, we also observe two distinct families of events with peculiar source parameters that, in one case, suggest the reactivation of deep structures linked to the regional tectonics, while in the other support the idea of an important role of steeply dipping faults in fluid pressure diffusion.

  12. Application of an extension of the MAI method to the Acapulco City, Mexico

    NASA Astrophysics Data System (ADS)

    Contreras, M.; Aguirre, J.

    2001-12-01

    The site effects and the source parameters are inverted from the Fourier displacement spectra of seismograms corrected for geometrical spreading and the regional attenuation valid for south-central Mexico (Ordaz and Singh, 1992). We use Genetic Algorithms (GA) to perform the non-linear inversion, as in the MAI method (Moya et al., 2000). GAs have proved to produce better results than other traditional methods, which are frequently trapped in a local minimum. A GA mimics the laws of evolution in living creatures: the best individuals reproduce and develop with every generation. In our case, each individual corresponds to one source and the genes correspond to the source parameters. As in nature, the best sources remain and are improved with each iteration. We assume that the site effect at each station is the same regardless of the earthquake; we can therefore search for the combination of sources that produces the smallest standard deviation of the site effects estimated from the different Fourier displacement earthquake spectra. We then use the obtained site effects to generate the Fourier displacement spectrum of an earthquake scenario, from which we compute the response spectra by means of random vibration theory (Reinoso et al., 1990). We apply this method to four stations in Acapulco City, Mexico, that recorded four earthquakes with epicenters in the Guerrero subduction zone. The site effect estimated for one of the stations, ACAZ, shows good agreement with that estimated by Chávez-García et al. (1994) using spectral ratios between the ACAZ station and a rock reference site. We also compare the response spectra of another earthquake obtained by this method with the response spectra computed from the acceleration record, and find an acceptable correlation between them. References: Chávez-García, J. Cuenca and M. Cárdenas (1994), "Estudio complementario de efectos de sitio en Acapulco, Guerrero", technical report, Instituto de Ingeniería, UNAM, project 4503. Moya, A., J. Aguirre and K. Irikura (2000), "Inversion of Source Parameters and Site Effects from Strong Ground Motion Records using Genetic Algorithms", Bull. Seism. Soc. Am., 90, pp. 977-992. Ordaz, M. and S. K. Singh (1992), "Source spectra and spectral attenuation of seismic waves from Mexican earthquakes, and evidence of amplification in the hill zone of Mexico City", Bull. Seism. Soc. Am., 82, pp. 24-43. Reinoso, E., M. Ordaz and F. Sánchez-Sesma (1990), "A note on the fast computation of response spectra estimates", Earthquake Engineering and Structural Dynamics, 19, pp. 971-976.
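
    A minimal real-coded genetic algorithm in the spirit of this search (our illustration, not the authors' code) is sketched below: it looks for per-event Brune source parameters that make the implied station site terms as event-independent as possible. Absolute plateau levels trade off against site-term levels without a reference-site constraint, so the corner frequencies are the well-resolved quantities here.

      import numpy as np

      rng = np.random.default_rng(2)
      f = np.logspace(-0.5, 1, 20)                 # frequency band, Hz
      nev, nst, npop = 4, 4, 80

      def brune(logO0, fc):                        # log10 omega-square source spectrum
          return logO0 - np.log10(1.0 + (f / fc)**2)

      # Synthetic path-corrected log spectra: source(event) + site(station) + noise
      true = np.c_[rng.uniform(1, 3, nev), rng.uniform(1, 5, nev)]
      site = rng.normal(0, 0.3, (nst, f.size))
      obs = np.array([[brune(*true[e]) + site[s] for s in range(nst)]
                      for e in range(nev)])
      obs += rng.normal(0, 0.02, obs.shape)

      def misfit(x):
          src = np.array([brune(*p) for p in x.reshape(nev, 2)])
          z = obs - src[:, None, :]                # implied site term per event/station
          return z.std(axis=0).sum()               # true site terms are event-independent

      lo, hi = np.tile([0.0, 0.5], nev), np.tile([4.0, 10.0], nev)
      pop = rng.uniform(lo, hi, (npop, 2 * nev))
      for gen in range(200):
          fit = np.array([misfit(p) for p in pop])
          best = pop[np.argmin(fit)].copy()        # elitism
          i, j = rng.integers(0, npop, (2, npop))  # tournament selection
          parents = pop[np.where(fit[i] < fit[j], i, j)]
          a = rng.uniform(size=(npop, 1))          # blend crossover
          pop = a * parents + (1 - a) * parents[rng.permutation(npop)]
          pop += rng.normal(0, 0.03, pop.shape) * (hi - lo)   # Gaussian mutation
          pop = np.clip(pop, lo, hi)
          pop[0] = best

      print("best-fit (logO0, fc) per event:\n", pop[0].reshape(nev, 2).round(2))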

  13. Estimation of Release History of Pollutant Source and Dispersion Coefficient of Aquifer Using Trained ANN Model

    NASA Astrophysics Data System (ADS)

    Srivastava, R.; Ayaz, M.; Jain, A.

    2013-12-01

    Knowledge of the release history of a groundwater pollutant source is critical for predicting the future trend of pollutant movement and for choosing an effective remediation strategy. Moreover, for source sites that have undergone an ownership change, the estimated release history can be used to appropriately allocate remediation costs among the different parties who may be responsible for the contamination. Estimation of the release history from concentration data is an inverse problem that becomes ill-posed because of the irreversible nature of the dispersion process. Breakthrough curves represent the temporal variation of pollutant concentration at a particular location and contain significant information about the source and the release history. Several methodologies have been developed to solve the inverse problem of estimating the source and/or porous-medium properties using breakthrough curves as a known input. A common problem in this use of breakthrough curves is that, in most field situations, we have little or no information about the time of measurement of the breakthrough curve relative to the time when the pollutant source became active. We develop an Artificial Neural Network (ANN) model to estimate the release history of a groundwater pollutant source from breakthrough curves. It is assumed that the source location is known but that the time-dependent contaminant source strength is unknown. This temporal variation of the strength of the pollutant source is the output of the ANN model, which is trained using the Levenberg-Marquardt algorithm with synthetically generated breakthrough curves as inputs. A single hidden layer was used in the neural network and, to use just sufficient information and reduce the required sampling duration, only the upper half of the curve is used as the input pattern. The second objective of this work was to identify the aquifer parameters: an ANN model was developed to estimate the longitudinal and transverse dispersion coefficients following a philosophy similar to the one used earlier. Performance of the trained ANN model is evaluated for a three-dimensional case, first with perfect data and then with erroneous data with an error level of up to 10 percent. Since the solution is highly sensitive to errors in the input data, instead of using the raw data we smooth the upper half of the erroneous breakthrough curve by approximating it with a fourth-order polynomial, which is then used as the input pattern for the ANN model. The main advantage of the proposed model is that it requires only the upper half of the breakthrough curve and, in addition to minimizing the effect of uncertainties in the tails of the breakthrough curve, is capable of estimating both the release history and the aquifer parameters reasonably well. Results for erroneous data with different error levels demonstrate the practical applicability and robustness of the ANN models. It is observed that as the error level increases, the correlation coefficient of the training, testing and validation regressions tends to decrease, although it stays within acceptable limits even for reasonably large error levels.
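
    As a stand-in for this workflow (ours, not the authors' model; the solver and physical settings are assumptions for illustration), the sketch below trains a small MLP to map sampled breakthrough curves back to a pulsed release history, with curves generated from the one-dimensional advection-dispersion impulse response; scikit-learn's 'lbfgs' solver substitutes for the Levenberg-Marquardt training used in the paper.

      import numpy as np
      from sklearn.neural_network import MLPRegressor

      rng = np.random.default_rng(3)
      v, D, xo = 1.0, 0.5, 20.0                    # velocity, dispersion, observation point
      t = np.linspace(1, 60, 80)                   # sampling times of the curve
      tau = np.array([0.0, 5.0, 10.0, 15.0])       # known pulse release times

      def curve(strengths):                        # superpose ADE impulse responses
          c = np.zeros_like(t)
          for s, tk in zip(strengths, tau):
              dt = t - tk
              ok = dt > 0
              c[ok] += s * xo / np.sqrt(4 * np.pi * D * dt[ok]**3) * \
                       np.exp(-(xo - v * dt[ok])**2 / (4 * D * dt[ok]))
          return c

      S = rng.uniform(0, 1, (3000, tau.size))      # random release histories
      X = np.array([curve(s) for s in S])
      X += rng.normal(0, 0.002, X.shape)           # measurement noise

      net = MLPRegressor(hidden_layer_sizes=(20,), solver="lbfgs", max_iter=2000)
      net.fit(X[:2500], S[:2500])
      err = net.predict(X[2500:]) - S[2500:]
      print("test RMSE on release strengths:", np.sqrt(np.mean(err**2)).round(3))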

  14. Multiscale estimation of excess mass from gravity data

    NASA Astrophysics Data System (ADS)

    Castaldo, Raffaele; Fedi, Maurizio; Florio, Giovanni

    2014-06-01

    We describe a multiscale method to estimate the excess mass of gravity anomaly sources, based on the theory of source moments. Using a multipole expansion of the potential field and considering only the data along the vertical direction, a system of linear equations is obtained. The choice of inverting data along a vertical profile helps to reduce interference effects due to nearby anomalies and allows a local estimate of the source parameters. A criterion is established for selecting the optimal highest altitude of the vertical-profile data and the truncation order of the series expansion. The inversion provides an estimate of the total anomalous mass and of the depth to the centre of mass. The method has several advantages with respect to classical methods, such as Gauss's method: (i) only a 1-D inversion is needed to obtain the estimates, since the inverted data are sampled along a single vertical profile; (ii) the resolution may be readily enhanced by using vertical derivatives; (iii) the centre of mass is estimated in addition to the excess mass; (iv) the method is very robust to noise; (v) the profile may be chosen so as to minimize the effects of interfering anomalies or side effects due to a limited survey area. The multiscale estimation of excess mass can be used in various fields of application. Here, we analyse the gravity anomaly generated by a sulphide body in the Skellefteå ore district, northern Sweden, obtaining source mass and volume estimates in agreement with the known information. We also show that these estimates are substantially improved with respect to those obtained with the classical approach.
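
    As a much-simplified illustration of why a vertical profile constrains both quantities (our sketch, not the authors' multipole formulation): on the axis above a compact body, gz(h) ≈ GM/(z0 + h)^2, so a few measurements of gz at different altitudes determine the excess mass M and the centre-of-mass depth z0.

      import numpy as np
      from scipy.optimize import curve_fit

      G = 6.674e-11                                 # m^3 kg^-1 s^-2

      def gz(h, logM, z0):                          # on-axis attraction of a point mass
          return G * 10**logM / (z0 + h)**2

      h = np.linspace(0, 2000, 30)                  # profile altitudes above ground, m
      obs = gz(h, np.log10(5e10), 600.0)            # "true" body: 5e10 kg at 600 m depth
      obs += np.random.default_rng(4).normal(0, 1e-7, h.size)   # ~0.01 mGal noise

      p, _ = curve_fit(gz, h, obs, p0=(10.0, 300.0))
      print(f"excess mass ~ {10**p[0]:.2e} kg, centre-of-mass depth ~ {p[1]:.0f} m")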

  15. In-duct identification of a rotating sound source with high spatial resolution

    NASA Astrophysics Data System (ADS)

    Heo, Yong-Ho; Ih, Jeong-Guon; Bodén, Hans

    2015-11-01

    To understand and reduce flow-noise generation from in-duct fluid machines, it is necessary to identify the acoustic source characteristics precisely. In this work, a source identification technique that can identify the strengths and positions of the major sound radiators in the source plane is studied for an in-duct rotating source. A linear acoustic theory including the effects of evanescent modes and source rotation is formulated based on the modal summation method, which is the underlying theory for the inverse source reconstruction. A validation experiment is conducted on a duct system excited by a loudspeaker in static and rotating conditions, at two different speeds, in the absence of flow. Due to the source rotation, the measured pressure spectra reveal the Doppler effect, and the amount of frequency shift corresponds to the product of the circumferential mode order and the rotation speed. Amplitudes of participating modes are estimated at the shifted frequencies in the stationary reference frame, and the modal amplitude set including the effect of source rotation is collected to investigate the source behavior in the rotating reference frame. Using the estimated modal amplitudes, the near-field pressure is recalculated and compared with the measured pressure; the maximum relative error is about -25 and -10 dB for rotation speeds of 300 and 600 rev/min, respectively. The spatial distribution of acoustic source parameters is restored from the estimated modal amplitude set. The result clearly shows that the position and magnitude of the main sound source can be identified with high spatial resolution in the rotating reference frame.
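
    The reported Doppler relation is easy to check numerically (our illustration; the source frequency is arbitrary): in the stationary frame, a component radiated into circumferential mode m appears shifted by m times the rotation frequency.

      # observed frequency in the stationary frame for a source at f_src (Hz)
      f_src = 1200.0
      for rpm in (300, 600):
          f_rot = rpm / 60.0                        # rotation frequency, Hz
          for m in (-2, -1, 0, 1, 2):               # circumferential mode order
              print(f"rpm={rpm:4d}  m={m:+d}  f_obs={f_src + m * f_rot:7.1f} Hz")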

  16. Blind Source Parameters for Performance Evaluation of Despeckling Filters.

    PubMed

    Biradar, Nagashettappa; Dewal, M L; Rohit, ManojKumar; Gowre, Sanjaykumar; Gundge, Yogesh

    2016-01-01

    The speckle noise is inherent to transthoracic echocardiographic images, and a standard noise-free reference echocardiographic image does not exist. The evaluation of filters based on traditional parameters such as peak signal-to-noise ratio, mean square error, and the structural similarity index may therefore not reflect true filter performance on echocardiographic images. Instead, despeckling performance can be evaluated using blind assessment metrics such as the speckle suppression index, the speckle suppression and mean preservation index (SMPI), and the beta metric; these three parameters overcome the need for a noise-free reference image. This paper presents a comprehensive analysis and evaluation of eleven types of despeckling filters for echocardiographic images in terms of blind and traditional performance parameters, along with clinical validation. The noise is effectively suppressed using logarithmic neighborhood shrinkage (NeighShrink) embedded with Stein's unbiased risk estimation (SURE). The SMPI is three times more effective compared to the wavelet-based generalized likelihood estimation approach. The quantitative evaluation and clinical validation reveal that filters such as the nonlocal mean, posterior sampling based Bayesian estimation, hybrid median, and probabilistic patch based filters are acceptable, whereas the median, anisotropic diffusion, fuzzy, and Ripplet nonlinear approximation filters have limited applications for echocardiographic images.
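
    Two of the blind metrics are simple to compute. The sketch below is our implementation of the commonly used definitions (with the noisy input serving as the reference, since no clean image exists), applied to a median filter on a toy image with synthetic multiplicative speckle.

      import numpy as np
      from scipy.ndimage import laplace, median_filter

      rng = np.random.default_rng(5)
      clean = np.zeros((128, 128)); clean[32:96, 32:96] = 1.0       # toy bright region
      noisy = clean * rng.gamma(shape=4.0, scale=0.25, size=clean.shape)  # speckle
      filt = median_filter(noisy, size=5)

      # Speckle suppression index: ratio of coefficients of variation (filtered/noisy)
      ssi = (filt.std() / filt.mean()) / (noisy.std() / noisy.mean())

      # Beta metric: correlation of high-pass (Laplacian) content, gauging edges
      beta = np.corrcoef(laplace(noisy).ravel(), laplace(filt).ravel())[0, 1]

      print(f"SSI  = {ssi:.2f}  (below 1 indicates speckle suppression)")
      print(f"beta = {beta:.2f}  (closer to 1 indicates better edge preservation)")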

  17. Estimation of dynamic rotor loads for the rotor systems research aircraft: Methodology development and validation

    NASA Technical Reports Server (NTRS)

    Duval, R. W.; Bahrami, M.

    1985-01-01

    The Rotor Systems Research Aircraft uses load cells to isolate the rotor/transmission system from the fuselage. A mathematical model relating applied rotor loads and inertial loads of the rotor/transmission system to the load cell response is required to allow the load cells to be used to estimate rotor loads from flight data. Such a model is derived analytically by applying a force and moment balance to the isolated rotor/transmission system. The model is tested by comparing its estimated values of applied rotor loads with measured values obtained from a ground-based shake test. Discrepancies in the comparison are used to isolate sources of unmodeled external loads. Once the structure of the mathematical model has been validated by comparison with experimental data, the parameters must be identified. Since the parameters may vary with flight condition, it is desirable to identify them directly from the flight data. A maximum likelihood identification algorithm is derived for this purpose and tested using a computer simulation of load cell data. The identification is found to converge within 10 samples. This rapid convergence facilitates tracking of time-varying parameters of the load cell model in flight.
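
    Under Gaussian noise and a linear measurement model, maximum-likelihood identification reduces to (recursive) least squares; the sketch below (ours, with made-up numbers) shows the recursive form converging within a handful of samples, consistent with the rapid convergence reported above.

      import numpy as np

      rng = np.random.default_rng(6)
      theta_true = np.array([2.0, -0.5, 1.2])       # unknown load-cell model parameters
      theta = np.zeros(3)                            # initial estimate
      P = np.eye(3) * 100.0                          # large prior covariance

      for k in range(15):
          h = rng.normal(size=3)                     # regressor: applied loads this sample
          y = h @ theta_true + rng.normal(0, 0.05)   # measured load-cell response
          K = P @ h / (1.0 + h @ P @ h)              # RLS gain (unit noise weighting)
          theta = theta + K * (y - h @ theta)        # update estimate
          P = P - np.outer(K, h) @ P                 # update covariance
          if k in (4, 9, 14):
              print(f"k={k + 1:2d}  theta = {np.round(theta, 3)}")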

  18. Blind Source Parameters for Performance Evaluation of Despeckling Filters

    PubMed Central

    Biradar, Nagashettappa; Dewal, M. L.; Rohit, ManojKumar; Gowre, Sanjaykumar; Gundge, Yogesh

    2016-01-01

    The speckle noise is inherent to transthoracic echocardiographic images, and a standard noise-free reference echocardiographic image does not exist. The evaluation of filters based on traditional parameters such as peak signal-to-noise ratio, mean square error, and the structural similarity index may therefore not reflect true filter performance on echocardiographic images. Instead, despeckling performance can be evaluated using blind assessment metrics such as the speckle suppression index, the speckle suppression and mean preservation index (SMPI), and the beta metric; these three parameters overcome the need for a noise-free reference image. This paper presents a comprehensive analysis and evaluation of eleven types of despeckling filters for echocardiographic images in terms of blind and traditional performance parameters, along with clinical validation. The noise is effectively suppressed using logarithmic neighborhood shrinkage (NeighShrink) embedded with Stein's unbiased risk estimation (SURE). The SMPI is three times more effective compared to the wavelet-based generalized likelihood estimation approach. The quantitative evaluation and clinical validation reveal that filters such as the nonlocal mean, posterior sampling based Bayesian estimation, hybrid median, and probabilistic patch based filters are acceptable, whereas the median, anisotropic diffusion, fuzzy, and Ripplet nonlinear approximation filters have limited applications for echocardiographic images. PMID:27298618

  19. Dark Energy Survey Year 1 Results: redshift distributions of the weak-lensing source galaxies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hoyle, B.; Gruen, D.; Bernstein, G. M.

    We describe the derivation and validation of redshift distribution estimates and their uncertainties for the galaxies used as weak lensing sources in the Dark Energy Survey (DES) Year 1 cosmological analyses. The Bayesian Photometric Redshift (BPZ) code is used to assign galaxies to four redshift bins between z = 0.2 and 1.3, and to produce initial estimates of the lensing-weighted redshift distributions $n^i_{\rm PZ}(z)$ for bin i. Accurate determination of cosmological parameters depends critically on knowledge of $n^i$ but is insensitive to bin assignments or redshift errors for individual galaxies. The cosmological analyses allow for shifts $n^i(z) = n^i_{\rm PZ}(z - \Delta z^i)$ to correct the mean redshift of $n^i(z)$ for biases in $n^i_{\rm PZ}$. The $\Delta z^i$ are constrained by comparison of independently estimated 30-band photometric redshifts of galaxies in the COSMOS field to BPZ estimates made from the DES griz fluxes, for a sample matched in fluxes, pre-seeing size, and lensing weight to the DES weak-lensing sources. In companion papers, the $\Delta z^i$ are further constrained by the angular clustering of the source galaxies around red galaxies with secure photometric redshifts at 0.15…

  20. Dark Energy Survey Year 1 Results: redshift distributions of the weak-lensing source galaxies

    DOE PAGES

    Hoyle, B.; Gruen, D.; Bernstein, G. M.; ...

    2018-04-18

    We describe the derivation and validation of redshift distribution estimates and their uncertainties for the galaxies used as weak lensing sources in the Dark Energy Survey (DES) Year 1 cosmological analyses. The Bayesian Photometric Redshift (BPZ) code is used to assign galaxies to four redshift bins between z = 0.2 and 1.3, and to produce initial estimates of the lensing-weighted redshift distributions $n^i_{\rm PZ}(z)$ for bin i. Accurate determination of cosmological parameters depends critically on knowledge of $n^i$ but is insensitive to bin assignments or redshift errors for individual galaxies. The cosmological analyses allow for shifts $n^i(z) = n^i_{\rm PZ}(z - \Delta z^i)$ to correct the mean redshift of $n^i(z)$ for biases in $n^i_{\rm PZ}$. The $\Delta z^i$ are constrained by comparison of independently estimated 30-band photometric redshifts of galaxies in the COSMOS field to BPZ estimates made from the DES griz fluxes, for a sample matched in fluxes, pre-seeing size, and lensing weight to the DES weak-lensing sources. In companion papers, the $\Delta z^i$ are further constrained by the angular clustering of the source galaxies around red galaxies with secure photometric redshifts at 0.15…

  1. Dark Energy Survey Year 1 Results: Redshift distributions of the weak lensing source galaxies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hoyle, B.; et al.

    2017-08-04

    We describe the derivation and validation of redshift distribution estimates and their uncertainties for the galaxies used as weak lensing sources in the Dark Energy Survey (DES) Year 1 cosmological analyses. The Bayesian Photometric Redshift (BPZ) code is used to assign galaxies to four redshift bins between z = 0.2 and 1.3, and to produce initial estimates of the lensing-weighted redshift distributions $n^i_{\rm PZ}(z)$ for bin i. Accurate determination of cosmological parameters depends critically on knowledge of $n^i$ but is insensitive to bin assignments or redshift errors for individual galaxies. The cosmological analyses allow for shifts $n^i(z) = n^i_{\rm PZ}(z - \Delta z^i)$ to correct the mean redshift of $n^i(z)$ for biases in $n^i_{\rm PZ}$. The $\Delta z^i$ are constrained by comparison of independently estimated 30-band photometric redshifts of galaxies in the COSMOS field to BPZ estimates made from the DES griz fluxes, for a sample matched in fluxes, pre-seeing size, and lensing weight to the DES weak-lensing sources. In companion papers, the $\Delta z^i$ are further constrained by the angular clustering of the source galaxies around red galaxies with secure photometric redshifts at 0.15…

  2. Bayesian Immunological Model Development from the Literature: Example Investigation of Recent Thymic Emigrants

    PubMed Central

    Holmes, Tyson H.; Lewis, David B.

    2014-01-01

    Bayesian estimation techniques offer a systematic and quantitative approach for synthesizing data drawn from the literature to model immunological systems. As detailed here, the practitioner begins with a theoretical model and then sequentially draws information from source data sets and/or published findings to inform estimation of model parameters. Options are available to weigh these various sources of information differentially per objective measures of their corresponding scientific strengths. This approach is illustrated in depth through a carefully worked example for a model of decline in T-cell receptor excision circle content of peripheral T cells during development and aging. Estimates from this model indicate that 21 years of age is plausible for the developmental timing of mean age of onset of decline in T-cell receptor excision circle content of peripheral T cells. PMID:25179832

  3. The Approximate Bayesian Computation methods in the localization of the atmospheric contamination source

    NASA Astrophysics Data System (ADS)

    Kopka, P.; Wawrzynczak, A.; Borysiewicz, M.

    2015-09-01

    In many areas of application, a central problem is the solution of an inverse problem, especially the estimation of unknown model parameters so that the underlying dynamics of a physical system can be modeled precisely. In this situation, Bayesian inference is a powerful tool to combine observed data with prior knowledge to obtain the probability distribution of the searched parameters. We have applied the modern methodology named Sequential Approximate Bayesian Computation (S-ABC) to the problem of tracing an atmospheric contaminant source. ABC is a technique commonly used in the Bayesian analysis of complex models and dynamic systems; sequential methods can significantly increase its efficiency. In the presented algorithm, the input data are the online-arriving concentrations of the released substance registered by a distributed sensor network from the OVER-LAND ATMOSPHERIC DISPERSION (OLAD) experiment. The algorithm outputs are the probability distributions of the contamination source parameters, i.e., its location, release rate, speed and direction of movement, start time, and duration. The stochastic approach presented in this paper is completely general and can be used in other fields where model parameters that best fit the observed data must be found.
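
    The plain (non-sequential) rejection form of ABC is easy to sketch. Below is our toy version: the Gaussian-shaped forward model and sensor geometry are invented solely to show the draw-simulate-compare-accept loop, and only location and release rate are estimated.

      import numpy as np

      rng = np.random.default_rng(7)
      sensors = rng.uniform(0, 10, (12, 2))                 # sensor coordinates

      def forward(src_xy, q):                               # toy dispersion model
          d2 = np.sum((sensors - src_xy)**2, axis=1)
          return q * np.exp(-d2 / 8.0)

      true_src, true_q = np.array([6.0, 3.0]), 5.0
      data = forward(true_src, true_q) + rng.normal(0, 0.05, len(sensors))

      accepted = []
      for _ in range(100000):
          cand = rng.uniform([0.0, 0.0, 0.1], [10.0, 10.0, 10.0])  # prior: x, y, rate
          sim = forward(cand[:2], cand[2])
          if np.sqrt(np.mean((sim - data)**2)) < 0.2:       # ABC distance + tolerance
              accepted.append(cand)

      post = np.array(accepted)
      print(f"{len(post)} accepted; posterior mean (x, y, q) = {post.mean(0).round(2)}")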

  4. Exploration of DGVM Parameter Solution Space Using Simulated Annealing: Implications for Forecast Uncertainties

    NASA Astrophysics Data System (ADS)

    Wells, J. R.; Kim, J. B.

    2011-12-01

    Parameters in dynamic global vegetation models (DGVMs) are thought to be weakly constrained and can be a significant source of errors and uncertainties. DGVMs use between 5 and 26 plant functional types (PFTs) to represent the average plant life form in each simulated plot, and each PFT typically has a dozen or more parameters that define the way it uses resources and responds to the simulated growing environment. Sensitivity analysis explores how varying parameters affects the output, but does not fully explore the parameter solution space. The solution space for DGVM parameter values is thought to be complex and non-linear, and multiple sets of acceptable parameters may exist. In published studies, PFT parameters are estimated from the literature, often from a single published value, and are then "tuned" using somewhat arbitrary trial-and-error methods. BIOMAP is a new DGVM created by fusing the MAPSS biogeography model with Biome-BGC. It represents the vegetation of North America using 26 PFTs. We are using simulated annealing, a global search method, to systematically and objectively explore the solution space for the BIOMAP PFTs and the system parameters important for plant water use. We defined the boundaries of the solution space by obtaining maximum and minimum values from the published literature and, where those were not available, by using +/-20% of current values. We used stratified random sampling to select a set of grid cells representing the vegetation of the conterminous USA. The simulated annealing algorithm is applied to the parameters for spin-up and a transient run during the historical period 1961-1990. A set of parameter values is considered acceptable if the associated simulation run produces a modern potential vegetation distribution map that is as accurate as one produced by trial-and-error calibration. We expect to confirm that the solution space is non-linear and complex, and that multiple acceptable parameter sets exist. Further, we expect to demonstrate that the multiple parameter sets produce significantly divergent future forecasts in NEP, C storage, ET, and runoff, thereby identifying a highly important source of DGVM uncertainty.
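
    The search strategy itself is compact. Below is a generic simulated-annealing sketch over a bounded parameter box (ours; the toy cost merely stands in for the map-agreement score used to judge BIOMAP runs).

      import numpy as np

      rng = np.random.default_rng(8)
      lo, hi = np.zeros(3), np.ones(3)               # normalized parameter bounds

      def cost(p):                                   # toy non-linear, multi-modal cost
          return np.sum((p - 0.3)**2) + 0.1 * np.sin(25 * p).sum()

      p = rng.uniform(lo, hi)
      c, T = cost(p), 1.0
      for it in range(5000):
          q = np.clip(p + rng.normal(0, 0.05 * T, 3), lo, hi)   # shrinking steps
          cq = cost(q)
          if cq < c or rng.uniform() < np.exp(-(cq - c) / T):   # Metropolis rule
              p, c = q, cq
          T *= 0.999                                            # geometric cooling
      print("best parameters:", p.round(3), " cost:", round(c, 4))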

  5. Weighted recalibration of the Rosetta pedotransfer model with improved estimates of hydraulic parameter distributions and summary statistics (Rosetta3)

    NASA Astrophysics Data System (ADS)

    Zhang, Yonggen; Schaap, Marcel G.

    2017-04-01

    Pedotransfer functions (PTFs) have been widely used to predict soil hydraulic parameters in place of expensive laboratory or field measurements. Rosetta (Schaap et al., 2001; denoted here as Rosetta1) is one of many PTFs and is based on artificial neural network (ANN) analysis coupled with bootstrap re-sampling, which allows the estimation of van Genuchten water retention parameters (van Genuchten, 1980; abbreviated here as VG), saturated hydraulic conductivity (Ks), and their uncertainties. In this study, we present an improved set of hierarchical pedotransfer functions (Rosetta3) that unifies the water retention and Ks submodels into one. Parameter uncertainty of the fit of the VG curve to the original retention data is used in the ANN calibration procedure to reduce the bias of parameters predicted by the new PTF. One thousand bootstrap replicas were used to calibrate the new models, compared to 60 or 100 in Rosetta1, allowing the univariate and bivariate probability distributions of predicted parameters to be quantified in greater detail. We determined the optimal weights for VG parameters and Ks, the optimal number of hidden nodes in the ANN, and the number of bootstrap replicas required for statistically stable estimates. Results show that matric potential-dependent bias was reduced significantly, while the RMSE for water content was reduced modestly; the RMSE for Ks increased by 0.9% (H3w) to 3.3% (H5w) in the new models on the log scale of Ks compared with the Rosetta1 model. It was found that the estimated distributions of parameters were mildly non-Gaussian and could instead be described rather well with heavy-tailed α-stable distributions. On the other hand, arithmetic means had only a small estimation bias for most textures when compared with the mean-like "shift" parameter of the α-stable distributions. Arithmetic means and (co-)variances are therefore still recommended as summary statistics of the estimated distributions. However, it may be necessary to parameterize the distributions in different ways if the new estimates are used in stochastic analyses of vadose zone flow and transport. Rosetta1 and Rosetta3 were implemented in the Python programming language, and the source code as well as additional documentation is available at http://www.cals.arizona.edu/research/rosettav3.html.

  6. Estimation of Source and Attenuation Parameters from Ground Motion Observations for Induced Seismicity in Alberta

    NASA Astrophysics Data System (ADS)

    Novakovic, M.; Atkinson, G. M.

    2015-12-01

    We use a generalized inversion to solve for site response, regional source and attenuation parameters, in order to define a region-specific ground-motion prediction equation (GMPE) from ground motion observations in Alberta, following the method of Atkinson et al. (2015 BSSA). The database is compiled from over 200 small to moderate seismic events (M 1 to 4.2) recorded at ~50 regional stations (distances from 30 to 500 km), over the last few years; almost all of the events have been identified as being induced by oil and gas activity. We remove magnitude scaling and geometric spreading functions from observed ground motions and invert for stress parameter, regional attenuation and site amplification. Resolving these parameters allows for the derivation of a regionally-calibrated GMPE that can be used to accurately predict amplitudes across the region in real time, which is useful for ground-motion-based alerting systems and traffic light protocols. The derived GMPE has further applications for the evaluation of hazards from induced seismicity.

  7. Computation of probabilistic hazard maps and source parameter estimation for volcanic ash transport and dispersion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Madankan, R.; Pouget, S.; Singla, P., E-mail: psingla@buffalo.edu

    Volcanic ash advisory centers are charged with forecasting the movement of volcanic ash plumes for aviation, health, and safety preparation. Deterministic mathematical equations model the advection and dispersion of these plumes. However, initial plume conditions – height, profile of particle location, volcanic vent parameters – are known only approximately at best, and other features of the governing system, such as the wind field, are stochastic. These uncertainties make forecasting plume motion difficult, and ash advisories based on a deterministic approach tend to be conservative, often over- or underestimating the extent of a plume. This paper presents an end-to-end framework for a probabilistic approach to ash plume forecasting. The framework uses an ensemble of solutions, guided by the Conjugate Unscented Transform (CUT) method for evaluating expectation integrals. This ensemble is used to construct a polynomial chaos expansion that can be sampled cheaply to provide a probabilistic model forecast. The CUT method is then combined with a minimum variance condition to provide a full posterior pdf of the uncertain source parameters, based on observed satellite imagery. The April 2010 eruption of the Eyjafjallajökull volcano in Iceland is employed as a test example. The puff advection/dispersion model is used to hindcast the motion of the ash plume through time, concentrating on the period 14-16 April 2010. Variability in the height and particle loading of that eruption is introduced through a volcano column model called bent. Output uncertainty due to the assumed uncertain input parameter probability distributions, and a probabilistic spatial-temporal estimate of ash presence, are computed.

  8. PyCoTools: A Python Toolbox for COPASI.

    PubMed

    Welsh, Ciaran M; Fullard, Nicola; Proctor, Carole J; Martinez-Guimera, Alvaro; Isfort, Robert J; Bascom, Charles C; Tasseff, Ryan; Przyborski, Stefan A; Shanley, Daryl P

    2018-05-22

    COPASI is an open source software package for constructing, simulating and analysing dynamic models of biochemical networks. COPASI is primarily intended to be used with a graphical user interface, but it is often desirable to access COPASI features programmatically through a high-level interface. PyCoTools is a Python package aimed at providing a high-level interface to COPASI tasks, with an emphasis on model calibration. PyCoTools enables the construction of COPASI models and the execution of a subset of COPASI tasks, including time courses, parameter scans and parameter estimations. Additional 'composite' tasks, which use COPASI tasks as building blocks, are available for increasing parameter-estimation throughput, performing identifiability analysis and performing model selection. PyCoTools supports exploratory data analysis on parameter estimation data to assist with troubleshooting model calibrations. We demonstrate PyCoTools by posing a model selection problem designed to showcase PyCoTools within a realistic scenario. The aim of the model selection problem is to test the feasibility of three alternative hypotheses in explaining experimental data derived from neonatal dermal fibroblasts in response to TGF-β over time. PyCoTools is used to critically analyse the parameter estimations and propose strategies for model improvement. PyCoTools can be downloaded from the Python Package Index (PyPI) using the command 'pip install pycotools' or directly from GitHub (https://github.com/CiaranWelsh/pycotools). Documentation is available at http://pycotools.readthedocs.io. Supplementary data are available at Bioinformatics.

  9. Application of square-root filtering for spacecraft attitude control

    NASA Technical Reports Server (NTRS)

    Sorensen, J. A.; Schmidt, S. F.; Goka, T.

    1978-01-01

    Suitable digital algorithms are developed and tested for providing on-board precision attitude estimation and pointing control for potential use in the Landsat-D spacecraft. These algorithms provide pointing accuracy better than 0.01 deg. To obtain the necessary precision with efficient software, a six-state-variable square-root Kalman filter combines two star-tracker measurements to update attitude estimates obtained from processing three gyro outputs. The validity of the estimation and control algorithms is established, and the sensitivity of their performance to various error sources and software parameters is investigated by detailed digital simulation. Spacecraft computer memory, cycle time, and accuracy requirements are estimated.
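
    A single-axis toy version of such a gyro-plus-star-tracker filter is sketched below (ours, not the flight algorithm); for brevity it uses the conventional covariance form rather than the square-root formulation, which is algebraically equivalent but numerically more robust.

      import numpy as np

      rng = np.random.default_rng(9)
      dt = 0.1
      A = np.array([[1.0, -dt], [0.0, 1.0]])        # state: [attitude angle, gyro bias]
      B = np.array([dt, 0.0])
      H = np.array([1.0, 0.0])                      # star tracker measures the angle
      Q = np.diag([1e-8, 1e-10]); R = 1e-6

      x = np.zeros(2); P = np.diag([1e-2, 1e-4])
      theta_true, b_true = 0.0, 5e-4
      for k in range(600):
          w = 0.01                                  # true body rate, rad/s
          theta_true += w * dt
          u = w + b_true + rng.normal(0, 1e-4)      # gyro output: rate + bias + noise
          x = A @ x + B * u                         # propagate attitude with the gyro
          P = A @ P @ A.T + Q
          if k % 10 == 9:                           # periodic star-tracker update
              z = theta_true + rng.normal(0, np.sqrt(R))
              K = P @ H / (H @ P @ H + R)           # Kalman gain
              x = x + K * (z - H @ x)
              P = (np.eye(2) - np.outer(K, H)) @ P

      print(f"attitude error = {x[0] - theta_true:+.2e} rad, "
            f"estimated gyro bias = {x[1]:.2e} rad/s (true {b_true:.1e})")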

  10. Automated Source Depth Estimation Using Array Processing Techniques

    DTIC Science & Technology

    2009-10-14

    [Abstract fragment] …station for each depth cell, whose width is a user-defined parameter, n [Bonner et al., 2002; Murphy et al., 1999]. The largest peak in the stack…

  11. Experimental Method Development for Estimating Solid-phase Diffusion Coefficients and Material/Air Partition Coefficients of SVOCs

    EPA Science Inventory

    The solid-phase diffusion coefficient (Dm) and material-air partition coefficient (Kma) are key parameters for characterizing the sources and transport of semivolatile organic compounds (SVOCs) in the indoor environment. In this work, a new experimental method was developed to es...

  12. What drives uncertainty in model diagnoses of carbon dynamics in southern US forests: climate, vegetation, disturbance, or model parameters?

    NASA Astrophysics Data System (ADS)

    Zhou, Y.; Gu, H.; Williams, C. A.

    2017-12-01

    Results from terrestrial carbon cycle models have multiple sources of uncertainty, each with its own behavior and range. Their relative importance, and how they combine, has received little attention. This study investigates how various sources of uncertainty propagate, temporally and spatially, in CASA-Disturbance (CASA-D). CASA-D simulates the impact of climatic forcing and disturbance legacies on forest carbon dynamics in the following steps. First, we infer annual growth and mortality rates from measured biomass stocks (FIA) over time and from disturbance (e.g., fire, harvest, bark beetle) to represent annual post-disturbance carbon flux trajectories across forest types and site productivity settings. Then, annual carbon fluxes are estimated from these trajectories using the time since disturbance, which is inferred from biomass (NBCD 2000) and disturbance maps (NAFD, MTBS and ADS). Finally, we apply monthly climatic scalars derived from default CASA to distribute the annual carbon fluxes to each month. This study assesses carbon flux uncertainty from two sources: the driving data, including climatic and forest biomass inputs, and the three most sensitive parameters in CASA-D, namely the maximum light use efficiency, the temperature sensitivity of soil respiration (Q10) and the optimum temperature, identified using EFAST (Extended Fourier Amplitude Sensitivity Testing). We quantify model uncertainties from each and report their relative importance in estimating the forest carbon sink/source in the southeastern United States from 2003 to 2010.

  13. Validation and calibration of structural models that combine information from multiple sources.

    PubMed

    Dahabreh, Issa J; Wong, John B; Trikalinos, Thomas A

    2017-02-01

    Mathematical models that attempt to capture structural relationships between their components and combine information from multiple sources are increasingly used in medicine. Areas covered: We provide an overview of methods for model validation and calibration and survey studies comparing alternative approaches. Expert commentary: Model validation entails a confrontation of models with data, background knowledge, and other models, and can inform judgments about model credibility. Calibration involves selecting parameter values to improve the agreement of model outputs with data. When the goal of modeling is quantitative inference on the effects of interventions or forecasting, calibration can be viewed as estimation. This view clarifies issues related to parameter identifiability and facilitates formal model validation and the examination of consistency among different sources of information. In contrast, when the goal of modeling is the generation of qualitative insights about the modeled phenomenon, calibration is a rather informal process for selecting inputs that result in model behavior that roughly reproduces select aspects of the modeled phenomenon and cannot be equated to an estimation procedure. Current empirical research on validation and calibration methods consists primarily of methodological appraisals or case-studies of alternative techniques and cannot address the numerous complex and multifaceted methodological decisions that modelers must make. Further research is needed on different approaches for developing and validating complex models that combine evidence from multiple sources.

  14. Atmospheric inverse modeling via sparse reconstruction

    NASA Astrophysics Data System (ADS)

    Hase, Nils; Miller, Scot M.; Maaß, Peter; Notholt, Justus; Palm, Mathias; Warneke, Thorsten

    2017-10-01

    Many applications in atmospheric science involve ill-posed inverse problems. A crucial component of many inverse problems is the proper formulation of a priori knowledge about the unknown parameters. In most cases, this knowledge is expressed as a Gaussian prior. This formulation often performs well at capturing smoothed, large-scale processes but is often ill-equipped to capture localized structures like large point sources or localized hot spots. Over the last decade, scientists from a diverse array of applied mathematics and engineering fields have developed sparse reconstruction techniques to identify localized structures. In this study, we present a new regularization approach for ill-posed inverse problems in atmospheric science. It is based on Tikhonov regularization with a sparsity constraint and allows bounds on the parameters. We enforce sparsity using a dictionary representation system. We analyze its performance in an atmospheric inverse modeling scenario by estimating anthropogenic US methane (CH4) emissions from simulated atmospheric measurements. Different measures indicate that our sparse reconstruction approach is better able to capture large point sources or localized hot spots than other methods commonly used in atmospheric inversions. It captures the overall signal equally well but adds detail on the grid scale. This feature can be of value for any inverse problem with point or spatially discrete sources. We show an example of source estimation for synthetic methane emissions from the Barnett shale formation.
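
    The core of a Tikhonov inversion with a sparsity constraint can be sketched with iterative soft thresholding (ISTA). The version below is ours and omits the paper's dictionary representation, keeping only the L1 penalty and a non-negativity bound on a toy emission-estimation problem.

      import numpy as np

      rng = np.random.default_rng(10)
      m, n = 60, 200                                # observations vs unknown sources
      A = rng.normal(size=(m, n)) / np.sqrt(m)      # toy transport/observation operator
      x_true = np.zeros(n)
      x_true[rng.choice(n, 5, replace=False)] = rng.uniform(1, 3, 5)   # point sources
      y = A @ x_true + rng.normal(0, 0.01, m)

      lam = 0.02                                    # sparsity (L1) weight
      L = np.linalg.norm(A, 2)**2                   # Lipschitz constant of the gradient
      x = np.zeros(n)
      for _ in range(500):
          x = x - A.T @ (A @ x - y) / L             # gradient step on the data misfit
          x = np.sign(x) * np.maximum(np.abs(x) - lam / L, 0.0)   # soft threshold
          x = np.maximum(x, 0.0)                    # bound constraint: emissions >= 0

      print("true support   :", np.flatnonzero(x_true))
      print("recovered peaks:", np.flatnonzero(x > 0.5))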

  15. Source term model evaluations for the low-level waste facility performance assessment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yim, M.S.; Su, S.I.

    1995-12-31

    The estimation of release of radionuclides from various waste forms to the bottom boundary of the waste disposal facility (source term) is one of the most important aspects of LLW facility performance assessment. In this work, several currently used source term models are comparatively evaluated for the release of carbon-14 based on a test case problem. The models compared include PRESTO-EPA-CPG, IMPACTS, DUST and NEFTRAN-II. Major differences in assumptions and approaches between the models are described and key parameters are identified through sensitivity analysis. The source term results from different models are compared and other concerns or suggestions are discussed.

  16. Class Enumeration and Parameter Recovery of Growth Mixture Modeling and Second-Order Growth Mixture Modeling in the Presence of Measurement Noninvariance between Latent Classes

    PubMed Central

    Kim, Eun Sook; Wang, Yan

    2017-01-01

    Population heterogeneity in growth trajectories can be detected with growth mixture modeling (GMM). It is common that researchers compute composite scores of repeated measures and use them as multiple indicators of growth factors (baseline performance and growth) assuming measurement invariance between latent classes. Considering that the assumption of measurement invariance does not always hold, we investigate the impact of measurement noninvariance on class enumeration and parameter recovery in GMM through a Monte Carlo simulation study (Study 1). In Study 2, we examine the class enumeration and parameter recovery of the second-order growth mixture modeling (SOGMM) that incorporates measurement models at the first order level. Thus, SOGMM estimates growth trajectory parameters with reliable sources of variance, that is, common factor variance of repeated measures and allows heterogeneity in measurement parameters between latent classes. The class enumeration rates are examined with information criteria such as AIC, BIC, sample-size adjusted BIC, and hierarchical BIC under various simulation conditions. The results of Study 1 showed that the parameter estimates of baseline performance and growth factor means were biased to the degree of measurement noninvariance even when the correct number of latent classes was extracted. In Study 2, the class enumeration accuracy of SOGMM depended on information criteria, class separation, and sample size. The estimates of baseline performance and growth factor mean differences between classes were generally unbiased but the size of measurement noninvariance was underestimated. Overall, SOGMM is advantageous in that it yields unbiased estimates of growth trajectory parameters and more accurate class enumeration compared to GMM by incorporating measurement models. PMID:28928691

  17. Estimating the Relevance of World Disturbances to Explain Savings, Interference and Long-Term Motor Adaptation Effects

    PubMed Central

    Berniker, Max; Kording, Konrad P.

    2011-01-01

    Recent studies suggest that motor adaptation is the result of multiple, perhaps linear processes each with distinct time scales. While these models are consistent with some motor phenomena, they can neither explain the relatively fast re-adaptation after a long washout period, nor savings on a subsequent day. Here we examined if these effects can be explained if we assume that the CNS stores and retrieves movement parameters based on their possible relevance. We formalize this idea with a model that infers not only the sources of potential motor errors, but also their relevance to the current motor circumstances. In our model adaptation is the process of re-estimating parameters that represent the body and the world. The likelihood of a world parameter being relevant is then based on the mismatch between an observed movement and that predicted when not compensating for the estimated world disturbance. As such, adapting to large motor errors in a laboratory setting should alert subjects that disturbances are being imposed on them, even after motor performance has returned to baseline. Estimates of this external disturbance should be relevant both now and in future laboratory settings. Estimated properties of our bodies on the other hand should always be relevant. Our model demonstrates savings, interference, spontaneous rebound and differences between adaptation to sudden and gradual disturbances. We suggest that many issues concerning savings and interference can be understood when adaptation is conditioned on the relevance of parameters. PMID:21998574

  18. Assessment and modeling of the groundwater hydrogeochemical quality parameters via geostatistical approaches

    NASA Astrophysics Data System (ADS)

    Karami, Shawgar; Madani, Hassan; Katibeh, Homayoon; Fatehi Marj, Ahmad

    2018-03-01

    Geostatistical methods are among the advanced techniques used for interpolation of groundwater quality data, and their results help decision makers adopt suitable remedial measures to protect the quality of groundwater sources. Data used in this study were collected from 78 wells in the Varamin plain aquifer, located southeast of Tehran, Iran, in 2013. The ordinary kriging method was used to evaluate groundwater quality parameters. Seven main quality parameters, i.e. total dissolved solids (TDS), sodium adsorption ratio (SAR), electrical conductivity (EC), sodium (Na+), total hardness (TH), chloride (Cl-) and sulfate (SO42-), were analyzed and interpreted by statistical and geostatistical methods. After normal-score (Nscore) transformation of the data in WinGslib, experimental variograms characterizing the spatial correlation were computed and plotted in GS+. The best theoretical model was then fitted to each variogram based on the minimum residual sum of squares (RSS). Cross-validation was used to determine the accuracy of the estimated data. Finally, estimation maps of groundwater quality were prepared in WinGslib, and estimation variance and estimation error maps were produced to evaluate the quality of the estimation at each estimated point. Results showed that the kriging method is more accurate than traditional interpolation methods.
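
    The ordinary-kriging step itself is a small linear solve. The sketch below is ours, with an assumed exponential variogram and synthetic well data; it returns both the estimate and the kriging variance that underlies the estimation variance and error maps.

      import numpy as np

      rng = np.random.default_rng(11)
      pts = rng.uniform(0, 10, (30, 2))                     # well locations (km)
      vals = np.sin(pts[:, 0]) + 0.1 * rng.normal(size=30)  # e.g. normal-scored TDS

      def vario(h, sill=1.0, rang=3.0, nugget=0.05):        # exponential model
          return nugget + (sill - nugget) * (1.0 - np.exp(-h / rang))

      n = len(pts)
      d = np.linalg.norm(pts[:, None] - pts[None], axis=2)
      K = np.ones((n + 1, n + 1))                           # OK system with Lagrange row
      K[:n, :n] = vario(d)
      np.fill_diagonal(K[:n, :n], 0.0)                      # gamma(0) = 0, exact at data
      K[n, n] = 0.0

      def krige(x0):
          k = np.append(vario(np.linalg.norm(pts - x0, axis=1)), 1.0)
          w = np.linalg.solve(K, k)                         # weights + multiplier
          return w[:n] @ vals, w @ k                        # estimate, kriging variance

      est, var = krige(np.array([5.0, 5.0]))
      print(f"kriged value = {est:.2f}, kriging variance = {var:.3f}")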

  19. Global Source Parameters from Regional Spectral Ratios for Yield Transportability Studies

    NASA Astrophysics Data System (ADS)

    Phillips, W. S.; Fisk, M. D.; Stead, R. J.; Begnaud, M. L.; Rowe, C. A.

    2016-12-01

    We use source parameters such as moment, corner frequency and high frequency rolloff as constraints in amplitude tomography, ensuring that spectra of well-studied earthquakes are recovered using the ensuing attenuation and site term model. We correct explosion data for path and site effects using such models, which allows us to test transportability of yield estimation techniques based on our best source spectral estimates. To develop a background set of source parameters, we applied spectral ratio techniques to envelopes of a global set of regional distance recordings from over 180,000 crustal events. Corner frequencies and moment ratios were determined via inversion using all event pairs within predetermined clusters, shifting to absolute levels using independently determined regional and teleseismic moments. The moment and corner frequency results can be expressed as stress drop, which has considerable scatter, yet shows dramatic regional patterns. We observe high stress in subduction zones along S. America, S. Mexico, the Banda Sea, and associated with the Yakutat Block in Alaska. We also observe high stress at the Himalayan syntaxes, the Pamirs, eastern Iran, the Caspian, the Altai-Sayan, and the central African rift. Low stress is observed along mid ocean spreading centers, the Afar rift, patches of convergence zones such as Nicaragua, the Zagros, Tibet, and the Tien Shan, among others. Mine blasts appear as low stress events due to their low corners and steep rolloffs. Many of these anomalies have been noted by previous studies, and we plan to compare results directly. As mentioned, these results will be used to constrain tomographic imaging, but can also be used in model validation procedures similar to the use of ground truth in location problems, and, perhaps most importantly, figure heavily in quality control of local and regional distance amplitude measurements.

  20. Impact of Submarine Groundwater Discharge on Marine Water Quality and Reef Biota of Maui

    PubMed Central

    Bishop, James M.

    2016-01-01

    Generally unseen and infrequently measured, submarine groundwater discharge (SGD) can transport potentially large loads of nutrients and other land-based contaminants to coastal ecosystems. To examine this linkage we employed algal bioassays, benthic community analysis, and geochemical methods to examine water quality and community parameters of nearshore reefs adjacent to a variety of potential, land-based nutrient sources on Maui. Three common reef algae, Acanthophora spicifera, Hypnea musciformis, and Ulva spp. were collected and/or deployed at six locations with SGD. Algal tissue nitrogen (N) parameters (δ15N, N %, and C:N) were compared with nutrient and δ15N-nitrate values of coastal groundwater and nearshore surface water at all locations. Benthic community composition was estimated for ten 10-m transects per location. Reefs adjacent to sugarcane farms had the greatest abundance of macroalgae, low species diversity, and the highest concentrations of N in algal tissues, coastal groundwater, and marine surface waters compared to locations with low anthropogenic impact. Based on δ15N values of algal tissues, we estimate ca. 0.31 km2 of Kahului Bay is impacted by effluent injected underground at the Kahului Wastewater Reclamation Facility (WRF); this region is barren of corals and almost entirely dominated by colonial zoanthids. Significant correlations among parameters of algal tissue N with adjacent surface and coastal groundwater N indicate that these bioassays provided a useful measure of nutrient source and loading. A conceptual model that uses Ulva spp. tissue δ15N and N % to identify potential N source(s) and relative N loading is proposed for Hawaiʻi. These results indicate that SGD can be a significant transport pathway for land-based nutrients with important biogeochemical and ecological implications in tropical, oceanic islands. PMID:27812171

  1. Impact of Submarine Groundwater Discharge on Marine Water Quality and Reef Biota of Maui.

    PubMed

    Amato, Daniel W; Bishop, James M; Glenn, Craig R; Dulai, Henrietta; Smith, Celia M

    2016-01-01

    Generally unseen and infrequently measured, submarine groundwater discharge (SGD) can transport potentially large loads of nutrients and other land-based contaminants to coastal ecosystems. To examine this linkage we employed algal bioassays, benthic community analysis, and geochemical methods to examine water quality and community parameters of nearshore reefs adjacent to a variety of potential, land-based nutrient sources on Maui. Three common reef algae, Acanthophora spicifera, Hypnea musciformis, and Ulva spp. were collected and/or deployed at six locations with SGD. Algal tissue nitrogen (N) parameters (δ15N, N %, and C:N) were compared with nutrient and δ15N-nitrate values of coastal groundwater and nearshore surface water at all locations. Benthic community composition was estimated for ten 10-m transects per location. Reefs adjacent to sugarcane farms had the greatest abundance of macroalgae, low species diversity, and the highest concentrations of N in algal tissues, coastal groundwater, and marine surface waters compared to locations with low anthropogenic impact. Based on δ15N values of algal tissues, we estimate ca. 0.31 km2 of Kahului Bay is impacted by effluent injected underground at the Kahului Wastewater Reclamation Facility (WRF); this region is barren of corals and almost entirely dominated by colonial zoanthids. Significant correlations among parameters of algal tissue N with adjacent surface and coastal groundwater N indicate that these bioassays provided a useful measure of nutrient source and loading. A conceptual model that uses Ulva spp. tissue δ15N and N % to identify potential N source(s) and relative N loading is proposed for Hawai'i. These results indicate that SGD can be a significant transport pathway for land-based nutrients with important biogeochemical and ecological implications in tropical, oceanic islands.

  2. Quasar spectral variability from the XMM-Newton serendipitous source catalogue

    NASA Astrophysics Data System (ADS)

    Serafinelli, R.; Vagnetti, F.; Middei, R.

    2017-04-01

    Context. X-ray spectral variability analyses of active galactic nuclei (AGN) with moderate luminosities and redshifts typically show a "softer when brighter" behaviour. Such a trend has rarely been investigated for high-luminosity AGNs (Lbol ≳ 1044 erg/s), nor for a wider redshift range (e.g. 0 ≲ z ≲ 5). Aims: We present an analysis of spectral variability based on a large sample of 2700 quasars, measured at several different epochs, extracted from the fifth release of the XMM-Newton Serendipitous Source Catalogue. Methods: We quantified the spectral variability through the parameter β defined as the ratio between the change in the photon index Γ and the corresponding logarithmic flux variation, β = -ΔΓ/Δlog FX. Results: Our analysis confirms a softer when brighter behaviour for our sample, extending the previously found general trend to high luminosity and redshift. We estimate an ensemble value of the spectral variability parameter β = -0.69 ± 0.03. We do not find dependence of β on redshift, X-ray luminosity, black hole mass or Eddington ratio. A subsample of radio-loud sources shows a smaller spectral variability parameter. There is also some change with the X-ray flux, with smaller β (in absolute value) for brighter sources. We also find significant correlations for a small number of individual sources, indicating more negative values for some sources.

  3. Bottom friction optimization for a better barotropic tide modelling

    NASA Astrophysics Data System (ADS)

    Boutet, Martial; Lathuilière, Cyril; Son Hoang, Hong; Baraille, Rémy

    2015-04-01

    At a regional scale, barotropic tides are the dominant source of variability in currents and water heights. A precise representation of these processes is essential because of their great impact on human activities (submersion risks, marine renewable energies, ...). The identified sources of error for tide modelling at a regional scale are the following: bathymetry, boundary forcing and dissipation due to bottom friction. Bathymetric databases are nowadays known with good accuracy, especially over shelves, and global tide model performance is better than ever; the most promising improvement is thus the representation of bottom friction. The method used to estimate bottom friction is simultaneous perturbation stochastic approximation (SPSA), which approximates the gradient from a fixed number of cost-function measurements, regardless of the dimension of the vector to be estimated; each cost-function measurement is obtained by randomly perturbing every component of the parameter vector. An important feature of SPSA is its relative ease of implementation; in particular, the method does not require the development of tangent-linear and adjoint versions of the circulation model. Experiments are carried out to estimate bottom friction with the HYbrid Coordinate Ocean Model (HYCOM) in barotropic mode (one isopycnal layer). The study area is the Northeastern Atlantic margin, which is characterized by strong currents and intense dissipation. Bottom friction is parameterized with a quadratic term, and the friction coefficient is computed from the water height and the bottom roughness; the latter parameter is the one to be estimated. The assimilated data are the available tide gauge observations. First, the bottom roughness is estimated taking into account bottom sediment type and bathymetric range. Then, it is estimated with geographical degrees of freedom. Finally, the impact of estimating a mixed quadratic/linear friction is evaluated.
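
    The generic SPSA iteration named above is short enough to show directly (our sketch; the quadratic misfit merely stands in for the model-versus-tide-gauge cost). Note the two cost evaluations per iteration, independent of the parameter dimension.

      import numpy as np

      rng = np.random.default_rng(12)
      target = np.array([0.8, 0.2, 0.5, 0.9])          # "true" roughness per region

      def cost(theta):                                  # stand-in for the tide misfit
          return np.sum((theta - target)**2)

      theta = np.full(4, 0.5)                           # first guess
      for k in range(1, 400):
          a_k = 0.1 / k**0.602                          # standard SPSA gain sequences
          c_k = 0.05 / k**0.101
          delta = rng.choice([-1.0, 1.0], size=4)       # simultaneous +/-1 perturbation
          g = (cost(theta + c_k * delta) -
               cost(theta - c_k * delta)) / (2 * c_k * delta)
          theta -= a_k * g
      print("estimated parameters:", theta.round(3))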

  4. Integration of measurements with atmospheric dispersion models: Source term estimation for dispersal of (239)Pu due to non-nuclear detonation of high explosive

    NASA Astrophysics Data System (ADS)

    Edwards, L. L.; Harvey, T. F.; Freis, R. P.; Pitovranov, S. E.; Chernokozhin, E. V.

    1992-10-01

    The accuracy associated with assessing the environmental consequences of an accidental release of radioactivity is highly dependent on knowledge of the source term characteristics and, when the radioactivity is condensed on particles, on the particle size distribution, all of which are generally poorly known. This paper reports on the development of a numerical technique that integrates radiological measurements with atmospheric dispersion modeling, resulting in more accurate estimates of the particle-size distribution and particle injection height when compared with measurements of high-explosive dispersal of (239)Pu. The estimation model is based on a non-linear least-squares regression scheme coupled with the ARAC three-dimensional atmospheric dispersion models. The viability of the approach is evaluated by estimating ADPIC model input parameters such as the mean aerodynamic diameter of the particle-size distribution, its geometric standard deviation, and the largest particle size. Additionally, we estimate an optimal 'coupling coefficient' between the particles and an explosive cloud-rise model. The experimental data are taken from the Clean Slate 1 field experiment conducted during 1963 at the Tonopah Test Range in Nevada. The regression technique optimizes the agreement between the measured and model-predicted concentrations of (239)Pu by varying the model input parameters within their respective ranges of uncertainty. The technique generally estimated the measured concentrations within a factor of 1.5, with the worst estimate being within a factor of 5; this is very good in view of the complexity of the concentration measurements, the uncertainties associated with the meteorological data, and the limitations of the models. The best fit also suggests a smaller mean diameter and a smaller geometric standard deviation for the particle size, as well as a slightly weaker particle-to-cloud coupling, than previously reported.

  5. Locating and Modeling Regional Earthquakes with Broadband Waveform Data

    NASA Astrophysics Data System (ADS)

    Tan, Y.; Zhu, L.; Helmberger, D.

    2003-12-01

    Retrieving the source parameters of small earthquakes (Mw < 4.5), including mechanism, depth, location and origin time, relies on local and regional seismic data. Although source characterization for such small events is satisfactory in places with a dense seismic network, such as TriNet in Southern California, revisiting historical events in these places, or investigating small events effectively and in real time in the many other places where normally only a few local waveforms plus some short-period recordings are available, is still a problem. To address this issue, we introduce a new type of approach that estimates location, depth, origin time and fault parameters based on 3-component waveform matching in terms of separated Pnl, Rayleigh and Love waves. We show that most local waveforms can be well modeled by a regionalized 1-D model plus different timing corrections for Pnl, Rayleigh and Love waves at relatively long periods, i.e., 4-100 s for Pnl and 8-100 s for surface waves, except for a few anomalous paths involving greater structural complexity. Meanwhile, these timing corrections reveal similar azimuthal patterns for well-located cluster events, despite their different focal mechanisms. Thus, we can calibrate the paths separately for Pnl, Rayleigh and Love waves with the timing corrections from well-determined events widely recorded by a dense modern seismic network or a temporary PASSCAL experiment. In return, we can locate events and extract their fault parameters by waveform matching for the available waveform data, which can come from as few as two stations, assuming the timing corrections from the calibration. The accuracy of the obtained source parameters is subject to the error carried by the events used for the calibration. The method requires a Green's function library constructed from a regionalized 1-D model together with the necessary calibration information, and adopts a grid search strategy for both hypocenter and focal mechanism. We show that the whole process can be easily automated to routinely provide reliable source parameter estimates with a couple of broadband stations. Two applications, in the Tibet Plateau and Southern California, will be presented along with comparisons of results against other methods.
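
    The grid-search core of such a scheme can be sketched as follows; `synthetic` (the Green's function library lookup) and the per-station, per-phase timing-correction table `shifts` are placeholders for the calibration products described above, not the authors' implementation.

        import numpy as np
        from itertools import product

        def misfit(obs, syn):
            o = obs / np.linalg.norm(obs)               # amplitude-normalized L2 misfit
            s = syn / np.linalg.norm(syn)
            return np.sum((o - s) ** 2)

        def locate(records, synthetic, shifts, depths, strikes, dips, rakes):
            best = (np.inf, None)
            for dep, stk, dip, rak in product(depths, strikes, dips, rakes):
                total = 0.0
                for sta, phases in records.items():     # phases: {'Pnl': trace, ...}
                    for name, obs in phases.items():
                        syn = synthetic(sta, name, dep, stk, dip, rak)
                        syn = np.roll(syn, shifts[sta][name])   # calibrated time shift
                        total += misfit(obs, syn)
                if total < best[0]:
                    best = (total, (dep, stk, dip, rak))
            return best   # (misfit, (depth, strike, dip, rake))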

  6. Parameter Estimation for GRACE-FO Geometric Ranging Errors

    NASA Astrophysics Data System (ADS)

    Wegener, H.; Mueller, V.; Darbeheshti, N.; Naeimi, M.; Heinzel, G.

    2017-12-01

    Onboard GRACE-FO, the novel Laser Ranging Instrument (LRI) serves as a technology demonstrator, but it is a fully functional instrument that provides an additional high-precision measurement of the primary mission observable: the biased range between the two spacecraft. Its two (expectedly) largest error sources are laser frequency noise and tilt-to-length (TTL) coupling. While not much can be done about laser frequency noise, the mechanics of the TTL error are well understood. They depend, however, on unknown parameters. In order to improve the quality of the ranging data, it is hence essential to accurately estimate these parameters and remove the resulting TTL error from the data. Means to do so will be discussed. In particular, the possibility of using calibration maneuvers, the utility of the attitude information provided by the LRI via Differential Wavefront Sensing (DWS), and the benefit of combining LRI ranging data with data from the established microwave ranging will be mentioned.

  7. SED Modeling of 20 Massive Young Stellar Objects

    NASA Astrophysics Data System (ADS)

    Tanti, Kamal Kumar

    In this paper, we present spectral energy distribution (SED) modeling of twenty massive young stellar objects (MYSOs) and subsequently estimate physical and structural/geometrical parameters for each of the twenty central YSO outflow candidates, along with their associated circumstellar disks and infalling envelopes. The SED of each MYSO has been reconstructed using 2MASS, MSX, IRAS, IRAC & MIPS, SCUBA, WISE, SPIRE and IRAM data, with the help of an SED fitting tool that uses a grid of 2D radiative transfer models. The detailed analysis of the SEDs and the subsequent estimation of physical and geometrical parameters for the central YSO sources, their circumstellar disks and their envelopes allow the cumulative distributions of the stellar, disk and envelope parameters to be analyzed. This leads to a better understanding of massive star formation processes in the respective star-forming regions of different molecular clouds.

  8. Dynamic Source Inversion of a M6.5 Intraslab Earthquake in Mexico: Application of a New Parallel Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Díaz-Mojica, J. J.; Cruz-Atienza, V. M.; Madariaga, R.; Singh, S. K.; Iglesias, A.

    2013-05-01

    We introduce a novel approach for imaging earthquake rupture dynamics from ground motion records, based on a parallel genetic algorithm (GA). The method follows the elliptical dynamic-rupture-patch approach introduced by Di Carli et al. (2010) and has been carefully verified through different numerical tests (Díaz-Mojica et al., 2012). Apart from the five model parameters defining the patch geometry, our dynamic source description has four more parameters: the stress drop inside the nucleation and elliptical patches, and two friction parameters, the slip-weakening distance and the change of the friction coefficient. These parameters are constant within the rupture surface. The forward dynamic source problem inside the GA inverse method uses a highly accurate computational solver, namely the staggered-grid split-node method. The synthetic inversion presented here shows that the source model parameterization is suitable for the GA, and that short-scale source dynamic features are well resolved in spite of low-pass filtering of the data for periods comparable to the source duration. Since there is always uncertainty in the propagation medium as well as in the source location and focal mechanism, we have introduced a statistical approach that generates a set of solution models such that the envelope of the corresponding synthetic waveforms explains the observed data as well as possible. We applied the method to the 2012 Mw 6.5 intraslab Zumpango, Mexico earthquake and determined several fundamental source parameters that are in accordance with different, completely independent estimates for Mexican and worldwide earthquakes. Our weighted-average final model satisfactorily explains the eastward rupture directivity observed in the recorded data. Parameters found for the Zumpango earthquake include Δτ = 30.2±6.2 MPa, Er = (0.68±0.36)×10^15 J, G = (1.74±0.44)×10^15 J, η = 0.27±0.11, Vr/Vs = 0.52±0.09 and Mw = 6.64±0.07, for the stress drop, radiated energy, fracture energy, radiation efficiency, rupture velocity and moment magnitude, respectively. (Figure: location of the Mw 6.5 intraslab Zumpango earthquake, station locations, and the tectonic setting in central Mexico.)
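
    A schematic of the parallel GA machinery (not the authors' code): the fitness evaluation, which in the real application is a dynamic rupture simulation plus waveform misfit, is farmed out to worker processes. The toy cost, the nine normalized parameters, and the population and mutation settings are illustrative assumptions.

        import numpy as np
        from multiprocessing import Pool

        def forward_misfit(model):
            # Placeholder cost; a dynamic rupture solver + waveform misfit goes here.
            return float(np.sum((model - 0.5) ** 2))

        def evolve(pop, fits, rng, elite=4, sigma=0.05):
            order = np.argsort(fits)                      # lower misfit is better
            parents = pop[order[: len(pop) // 2]]         # truncation selection
            children = []
            while len(children) < len(pop) - elite:
                a, b = parents[rng.integers(len(parents), size=2)]
                cut = rng.integers(1, pop.shape[1])       # one-point crossover
                child = np.concatenate([a[:cut], b[cut:]])
                child += rng.normal(0, sigma, size=child.size)  # Gaussian mutation
                children.append(np.clip(child, 0, 1))
            return np.vstack([pop[order[:elite]], children])    # keep elites

        if __name__ == "__main__":
            rng = np.random.default_rng(1)
            pop = rng.random((64, 9))                     # 9 dynamic source parameters
            with Pool() as pool:
                for gen in range(50):
                    fits = pool.map(forward_misfit, list(pop))  # parallel evaluation
                    pop = evolve(pop, np.asarray(fits), rng)
            print(pop[0])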

  9. SELFI: an object-based, Bayesian method for faint emission line source detection in MUSE deep field data cubes

    NASA Astrophysics Data System (ADS)

    Meillier, Céline; Chatelain, Florent; Michel, Olivier; Bacon, Roland; Piqueras, Laure; Bacher, Raphael; Ayasso, Hacheme

    2016-04-01

    We present SELFI, the Source Emission Line FInder, a new Bayesian method optimized for the detection of faint galaxies in Multi Unit Spectroscopic Explorer (MUSE) deep fields. MUSE is the new panoramic integral field spectrograph at the Very Large Telescope (VLT) that has unique capabilities for spectroscopic investigation of the deep sky. It has provided data cubes with 324 million voxels over a single 1 arcmin^2 field of view. To address the challenge of faint-galaxy detection in these large data cubes, we developed a new method that processes the 3D data either for modeling or for estimation and extraction of source configurations. This object-based approach yields a natural sparse representation of the sources in massive data fields such as MUSE data cubes. In the Bayesian framework, the parameters that describe the observed sources are considered random variables. The Bayesian model leads to a general and robust algorithm in which the parameters are estimated in a fully data-driven way. This detection algorithm was applied to the MUSE observation of the Hubble Deep Field-South. With 27 h total integration time, these observations provide a catalog of 189 sources of various categories with secure redshifts. The algorithm retrieved 91% of the galaxies with only 9% false detections. The method also allowed the discovery of three new Lyα emitters and one [OII] emitter, all without any Hubble Space Telescope counterpart. We analyzed the reasons for failure for some targets and found that the most important limitation of the method arises when faint sources lie in the vicinity of bright, spatially resolved galaxies that cannot be approximated by a Sérsic elliptical profile. The software and its documentation are available on the MUSE science web service (muse-vlt.eu/science).

  10. Magnitude and Rupture Area Scaling Relationships of Seismicity at The Northwest Geysers EGS Demonstration Project

    NASA Astrophysics Data System (ADS)

    Dreger, D. S.; Boyd, O. S.; Taira, T.; Gritto, R.

    2017-12-01

    Enhanced Geothermal System (EGS) resource development requires knowledge of subsurface physical parameters to quantify the evolution of fracture networks. Spatio-temporal source properties of induced seismicity, including source dimension, rupture area, slip, rupture speed, and slip velocity, are of interest at The Geysers geothermal field, northern California, for mapping the coseismic fracture density of the EGS swarm. In this investigation we extend our previous finite-source analysis of selected M>4 earthquakes to examine source properties of smaller-magnitude seismicity located in the Northwest Geysers EGS demonstration project. Moment rate time histories of the source are found by empirical Green's function (eGf) deconvolution, using the method of Mori (1993) as implemented by Dreger et al. (2007). The moment rate functions (MRFs) from data recorded by the Lawrence Berkeley National Laboratory (LBNL) short-period geophone network are inverted for finite-source parameters including the spatial distribution of fault slip, the rupture velocity, and the orientation of the causative fault plane. The results show complexity in the MRFs of the studied earthquakes. Thus far, the estimated rupture areas and the magnitude-area trend of the smaller-magnitude Geysers seismicity are found to agree with the empirical relationships of Wells and Coppersmith (1994) and Leonard (2010), which were developed for much larger M>5.5 earthquakes worldwide, indicating self-similar behavior extending to M2 earthquakes. We will present finite-source inversion results for the micro-earthquakes, attempting to extend the analysis to sub Mw, and demonstrate their magnitude-area scaling. The extension of the scaling laws will then enable the mapping of coseismic fracture density of the EGS swarm in the Northwest Geysers based on catalog moment magnitude estimates.
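
    The eGf deconvolution step can be sketched as a water-level spectral division; this is a generic illustration, not the Mori (1993) implementation, and the water-level value is an arbitrary choice.

        import numpy as np

        def egf_deconvolve(target, egf, water_level=0.01):
            """Relative moment-rate function by water-level spectral division.

            The record of a small colocated event (the eGf) is treated as the
            path/site response and divided out of the target-event record.
            """
            n = 2 ** int(np.ceil(np.log2(len(target) + len(egf))))
            T = np.fft.rfft(target, n)
            E = np.fft.rfft(egf, n)
            # Clip the eGf spectrum from below to stabilize the division.
            floor = water_level * np.max(np.abs(E))
            E_safe = np.where(np.abs(E) < floor, floor * np.exp(1j * np.angle(E)), E)
            mrf = np.fft.irfft(T / E_safe, n)
            return mrf[: len(target)]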

  11. Estimating stage-specific daily survival probabilities of nests when nest age is unknown

    USGS Publications Warehouse

    Stanley, T.R.

    2004-01-01

    Estimation of daily survival probabilities of nests is common in studies of avian populations. Since the introduction of Mayfield's (1961, 1975) estimator, numerous models have been developed to relax Mayfield's assumptions and account for biologically important sources of variation. Stanley (2000) presented a model for estimating stage-specific (e.g. incubation stage, nestling stage) daily survival probabilities of nests that conditions on “nest type” and requires that nests be aged when they are found. Because aging nests typically requires handling the eggs, there may be situations where nests cannot or should not be aged, and the Stanley (2000) model will be inapplicable. Here, I present a model for estimating stage-specific daily survival probabilities that conditions on nest stage for active nests, thereby obviating the need to age nests when they are found. Specifically, I derive the maximum likelihood function for the model, evaluate the model's performance using Monte Carlo simulations, and provide software for estimating parameters (along with an example). For sample sizes as low as 50 nests, bias was small and confidence interval coverage was close to the nominal rate, especially when a reduced-parameter model was used for estimation.
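
    For intuition, a minimal single-stage, Mayfield-type sketch of such a likelihood is shown below (the Stanley model adds stage structure and conditioning on nest stage); the exposure intervals and outcomes are invented.

        import numpy as np
        from scipy.optimize import minimize_scalar

        # Each nest contributes an exposure interval of `days` and an outcome
        # `survived` (1 if still active at the next visit, 0 if failed).
        def neg_log_lik(s, days, survived):
            p = s ** days                       # P(survive the whole interval)
            return -np.sum(np.where(survived, np.log(p), np.log(1 - p)))

        days = np.array([3, 5, 4, 7, 2, 6])
        survived = np.array([1, 1, 0, 1, 1, 0])

        res = minimize_scalar(lambda s: neg_log_lik(s, days, survived),
                              bounds=(1e-6, 1 - 1e-6), method="bounded")
        print(f"daily survival estimate: {res.x:.3f}")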

  12. Accounting for nonsampling error in estimates of HIV epidemic trends from antenatal clinic sentinel surveillance

    PubMed Central

    Eaton, Jeffrey W.; Bao, Le

    2017-01-01

    Objectives The aim of the study was to propose and demonstrate an approach that allows for additional nonsampling uncertainty about HIV prevalence measured at antenatal clinic sentinel surveillance (ANC-SS) in model-based inferences about trends in HIV incidence and prevalence. Design Mathematical model fitted to surveillance data with Bayesian inference. Methods We introduce a variance inflation parameter σ²_infl that accounts for the uncertainty of nonsampling errors in ANC-SS prevalence. It is additive to the sampling error variance. Three approaches are tested for estimating σ²_infl using ANC-SS and household survey data from 40 subnational regions in nine countries in sub-Saharan Africa, as defined in UNAIDS 2016 estimates. Methods were compared using in-sample fit and out-of-sample prediction of ANC-SS data, fit to household survey prevalence data, and the computational implications. Results Introducing the additional variance parameter σ²_infl increased the error variance around ANC-SS prevalence observations by a median factor of 2.7 (interquartile range 1.9-3.8). Using only sampling error in ANC-SS prevalence (σ²_infl = 0), coverage of 95% prediction intervals was 69% in out-of-sample prediction tests. This increased to 90% after introducing the additional variance parameter σ²_infl. The revised probabilistic model improved the fit to household survey prevalence and increased epidemic uncertainty intervals most during the early epidemic period before 2005. Estimating σ²_infl did not increase the computational cost of model fitting. Conclusions We recommend estimating nonsampling error in ANC-SS as an additional parameter in Bayesian inference using the Estimation and Projection Package model. This approach may prove useful for incorporating other data sources, such as routine prevalence from prevention of mother-to-child transmission testing, into future epidemic estimates. PMID:28296801
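
    The variance-inflation idea reduces to adding a common term to each observation's sampling variance before evaluating the likelihood. A minimal sketch, with invented probit-scale values standing in for the epidemic model's predictions and the ANC-SS observations:

        import numpy as np
        from scipy.stats import norm
        from scipy.optimize import minimize_scalar

        probit_prev = np.array([-1.2, -0.9, -1.1, -0.7])    # observed, probit scale
        model_prev = np.array([-1.0, -1.0, -1.0, -1.0])     # model prediction
        sampling_var = np.array([0.01, 0.02, 0.015, 0.01])  # from clinic sample sizes

        def neg_log_lik(sigma2_infl):
            sd = np.sqrt(sampling_var + sigma2_infl)        # additive inflation
            return -np.sum(norm.logpdf(probit_prev, loc=model_prev, scale=sd))

        res = minimize_scalar(neg_log_lik, bounds=(0.0, 1.0), method="bounded")
        print(f"estimated sigma2_infl: {res.x:.4f}")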

  13. A new approach to estimating trends in chlamydia incidence.

    PubMed

    Ali, Hammad; Cameron, Ewan; Drovandi, Christopher C; McCaw, James M; Guy, Rebecca J; Middleton, Melanie; El-Hayek, Carol; Hocking, Jane S; Kaldor, John M; Donovan, Basil; Wilson, David P

    2015-11-01

    Directly measuring disease incidence in a population is difficult and not feasible to do routinely. We describe the development and application of a new method for estimating, at a population level, the number of incident genital chlamydia infections, and the corresponding incidence rates, by age and sex, using routine surveillance data. A Bayesian statistical approach was developed to calibrate the parameters of a decision-pathway tree against national data on the numbers of notifications and tests conducted (2001-2013). Independent beta probability density functions were adopted as priors on the time-independent parameters; the shapes of these beta distributions were chosen to match prior estimates sourced from peer-reviewed literature or expert opinion. To best facilitate the calibration, multivariate Gaussian priors on (the logistic transforms of) the time-dependent parameters were adopted, using the Matérn covariance function to favour small changes over consecutive years and across adjacent age cohorts. The model outcomes were validated by comparing them with other independent empirical epidemiological measures, that is, prevalence and incidence as reported by other studies. Model-based estimates suggest that the total number of people acquiring chlamydia per year in Australia has increased by ~120% over 12 years. Nationally, an estimated 356 000 people acquired chlamydia in 2013, which is 4.3 times the number of reported diagnoses. This corresponds to an annual chlamydia incidence estimate of 1.54% in 2013, up from 0.81% in 2001 (a ~90% increase). We developed a statistical method that uses routine surveillance (notification and testing) data to produce estimates of the extent of, and trends in, chlamydia incidence.
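
    As one small, concrete piece of such a calibration, beta prior shapes can be chosen numerically to match a literature-sourced mean and upper bound. The sketch below illustrates that prior-matching idea with invented numbers; it is not the paper's code.

        import numpy as np
        from scipy.optimize import brentq
        from scipy.stats import beta

        def beta_from_mean_upper(mean, upper, q=0.975):
            # For a fixed mean, b = a * (1 - mean) / mean; solve for the `a`
            # whose q-quantile equals the requested upper bound.
            f = lambda a: beta.ppf(q, a, a * (1 - mean) / mean) - upper
            a = brentq(f, 1e-3, 1e4)
            return a, a * (1 - mean) / mean

        a, b = beta_from_mean_upper(mean=0.60, upper=0.80)
        print(f"Beta({a:.2f}, {b:.2f})")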

  14. Uncertainty estimations for moment tensor inversions: the issue of the 2012 May 20 Emilia earthquake

    NASA Astrophysics Data System (ADS)

    Scognamiglio, Laura; Magnoni, Federica; Tinti, Elisa; Casarotti, Emanuele

    2016-08-01

    The seismic moment tensor is one of the most important source parameters, defining the earthquake size and the style of the activated fault. Geoscientists routinely use moment tensor catalogues; however, few attempts have been made to assess the possible impact of moment magnitude uncertainties on their analyses. The 2012 May 20 Emilia main shock is a representative event, since it is defined in the literature with a moment magnitude (Mw) spanning between 5.63 and 6.12. A variability of ~0.5 magnitude units leads to a controversial knowledge of the real size of the event and reveals how poorly constrained the solutions can be. In this work, we investigate the stability of the moment tensor solution for this earthquake, studying the effects of five different 1-D velocity models and of the number and distribution of the stations used in the inversion procedure. We also introduce a 3-D velocity model to account for structural heterogeneity. We finally estimate the uncertainties associated with the computed focal planes and the obtained Mw. We conclude that our reliable source solutions provide a moment magnitude ranging from 5.87 (1-D model) to 5.96 (3-D model), reducing the variability in the literature to ~0.1. We stress that estimating the seismic moment from moment tensor solutions, as well as estimating the other kinematic source parameters, requires disclosed assumptions and explicit processing workflows. Finally, and probably more importantly, when a moment tensor solution is used for secondary analyses it has to be combined with the same main boundary conditions (e.g. the wave-velocity propagation model) to avoid conflicting results.

  15. Teleseismic Body Wave Analysis for the 27 September 2003 Altai, Earthquake (Mw7.4) and Large Aftershocks

    NASA Astrophysics Data System (ADS)

    Gomez-Gonzalez, J. M.; Mellors, R.

    2007-05-01

    We investigate the kinematics of the rupture process of the September 27, 2003, Mw 7.3 Altai earthquake and its associated large aftershocks. This is the largest earthquake to strike the Altai mountains in the last 50 years, and it provides important constraints on the ongoing tectonics. The fault plane solution obtained by teleseismic body waveform modeling indicates a predominantly strike-slip event (strike=130, dip=75, rake=170). The scalar moment for the main shock ranges from 0.688×10^20 to 1.196×10^20 N m, with a source duration of about 20 to 42 s and an average centroid depth of 10 km. The source duration would indicate a fault length of about 130-270 km. The main shock was followed closely by two aftershocks (Mw 5.7, Mw 6.4) that occurred the same day; another aftershock (Mw 6.7) occurred on 1 October 2003. We also modeled the second aftershock (Mw 6.4) to assess geometric similarities between the respective rupture processes. This aftershock occurred spatially very close to the mainshock and possesses a similar fault plane solution (strike=128, dip=71, rake=154) and centroid depth (13 km). Several local conditions, such as the crustal model and fault geometry, affect the correct estimation of some source parameters. We perform a sensitivity evaluation of several parameters, including centroid depth, scalar moment and source duration, based on point- and finite-source modeling. The point-source approximation results serve as starting parameters for the finite-source exploration. We evaluate the different reported parameters to discard poorly constrained models. In addition, deformation data acquired by InSAR are also included in the analysis.

  16. Deformation data modeling through numerical models: an efficient method for tracking magma transport

    NASA Astrophysics Data System (ADS)

    Charco, M.; Gonzalez, P. J.; Galán del Sastre, P.

    2017-12-01

    Nowadays, multivariate collected data and robust physical models at volcano observatories are becoming crucial for providing effective volcano monitoring. Nevertheless, forecasting volcanic eruptions is notoriously difficult. Within this framework, one of the most promising methods for evaluating volcano hazard is the use of surface ground deformation, and in the last decades many developments in the field of deformation modeling have been achieved. In particular, numerical modeling allows realistic media features such as topography and crustal heterogeneities to be included, although solving the inverse problem for near-real-time interpretations is still very time-consuming. Here, we present a method that can be used to efficiently estimate the location and evolution of magmatic sources based on real-time surface deformation data and Finite Element (FE) models. Generally, the search for the best-fitting magmatic (point) source(s) is conducted over an array of 3-D locations extending below a predefined volume region, and the Green functions for all the array components have to be precomputed. We propose an FE model for the pre-computation of Green functions in a mechanically heterogeneous domain, which eventually will lead to a better description of the status of the volcanic area. The number of Green functions is reduced here to the number of observation points by using their reciprocity relationship. We present and test this methodology with an optimization method based on a Genetic Algorithm. Following synthetic and sensitivity tests to estimate the uncertainty of the model parameters, we apply the tool to magma tracking during the 2007 Kilauea volcano intrusion and eruption. We show how data inversion with numerical models can speed up source parameter estimation for a given volcano showing signs of unrest.

  17. Variable anelastic attenuation and site effect in estimating source parameters of various major earthquakes including the Mw 7.8 Nepal and Mw 7.5 Hindu Kush earthquakes by using far-field strong-motion data

    NASA Astrophysics Data System (ADS)

    Kumar, Naresh; Kumar, Parveen; Chauhan, Vishal; Hazarika, Devajit

    2017-10-01

    Strong-motion records of the recent Gorkha, Nepal earthquake (Mw 7.8), its strong aftershocks and seismic events in the Hindu Kush region have been analysed to estimate source parameters. The Mw 7.8 Gorkha, Nepal earthquake of 25 April 2015 and six of its aftershocks in the magnitude range 5.3-7.3 were recorded at the Multi-Parametric Geophysical Observatory, Ghuttu, Garhwal Himalaya (India), >600 km west of the epicentre of the Gorkha main shock. Acceleration data from eight earthquakes in the Hindu Kush region were also recorded at this observatory, which is located >1000 km east of the epicentre of the Mw 7.5 Hindu Kush earthquake of 26 October 2015. The shear-wave spectra of the acceleration records are corrected for the possible effects of anelastic attenuation at both the source and the recording site, as well as for site amplification. Strong-motion data from six local earthquakes are used to estimate the site amplification and the shear-wave quality factor (Qβ) at the recording site. The frequency-dependent relation Qβ(f) = 124 f^0.98 is computed at the Ghuttu station by using an inversion technique. The corrected spectrum is compared with the theoretical spectrum obtained from Brune's circular model for the horizontal components using a grid search algorithm. The computed seismic moments, stress drops and source radii of the earthquakes used in this work range over 8.20 × 10^16 to 5.72 × 10^20 N m, 7.1 to 50.6 bars and 3.55 to 36.70 km, respectively. The results match the available values obtained by other agencies.
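
    The Brune-model grid search can be sketched as follows; the medium constants and the synthetic spectrum are illustrative, and the closing lines give the standard conversions from the fitted plateau and corner frequency to seismic moment, source radius and stress drop.

        import numpy as np

        rho, beta_v, R = 2700.0, 3500.0, 100e3   # density, S velocity, distance (SI)

        def brune(f, omega0, fc):
            return omega0 / (1 + (f / fc) ** 2)  # omega-squared displacement spectrum

        def fit_brune(freqs, spec):
            # Log-space grid search over plateau level and corner frequency.
            omega0s = np.logspace(np.log10(spec.max() / 3), np.log10(spec.max() * 3), 60)
            fcs = np.logspace(-1, 1.3, 60)
            best, params = np.inf, None
            for o0 in omega0s:
                for fc in fcs:
                    err = np.sum((np.log(spec) - np.log(brune(freqs, o0, fc))) ** 2)
                    if err < best:
                        best, params = err, (o0, fc)
            return params

        freqs = np.logspace(-1, 1, 50)
        spec = brune(freqs, 2e-2, 1.5) * np.exp(0.05 * np.random.default_rng(0).normal(size=50))
        omega0, fc = fit_brune(freqs, spec)

        # Standard conversions (radiation pattern 0.55, free surface 2, partition 1/sqrt 2):
        M0 = 4 * np.pi * rho * beta_v ** 3 * R * omega0 / (0.55 * 2 / np.sqrt(2))
        r = 2.34 * beta_v / (2 * np.pi * fc)          # Brune source radius
        stress_drop = 7 * M0 / (16 * r ** 3)
        print(f"M0={M0:.2e} N m, fc={fc:.2f} Hz, r={r:.0f} m, stress drop={stress_drop:.2e} Pa")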

  18. V2676 Oph: Estimating Physical Parameters of a Moderately Fast Nova

    NASA Astrophysics Data System (ADS)

    Raj, A.; Pavana, M.; Kamath, U. S.; Anupama, G. C.; Walter, F. M.

    2018-03-01

    Using our previously reported observations, we derive some physical parameters of the moderately fast nova V2676 Oph 2012 #1. The best-fit Cloudy model of the nebular spectrum obtained on 2015 May 8 shows a hot white dwarf source with T_BB ≈ 1.0×10^5 K and a luminosity of 1.0×10^38 erg/s. Our abundance analysis shows that the ejecta are significantly enhanced relative to solar: He/H = 2.14, O/H = 2.37, S/H = 6.62 and Ar/H = 3.25. The ejecta mass is estimated to be 1.42×10^-5 M⊙. The nova showed a pronounced dust formation phase beginning about 90 d after discovery. The J-H and H-K colors were very large compared with other molecule- and dust-forming novae in recent years. The dust temperature and mass at two epochs have been estimated from spectral energy distribution fits to infrared photometry.

  19. Accounting for measurement error in log regression models with applications to accelerated testing.

    PubMed

    Richardson, Robert; Tolley, H Dennis; Evenson, William E; Lunt, Barry M

    2018-01-01

    In regression settings, parameter estimates will be biased when the explanatory variables are measured with error. This bias can significantly affect modeling goals. In particular, accelerated lifetime testing involves an extrapolation of the fitted model, and a small amount of bias in parameter estimates may result in a significant increase in the bias of the extrapolated predictions. Additionally, bias may arise when the stochastic component of a log regression model is assumed to be multiplicative when the actual underlying stochastic component is additive. To account for these possible sources of bias, a log regression model with measurement error and additive error is approximated by a weighted regression model which can be estimated using Iteratively Re-weighted Least Squares. Using the reduced Eyring equation in an accelerated testing setting, the model is compared to previously accepted approaches to modeling accelerated testing data with both simulations and real data.
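
    The IRLS mechanics can be illustrated with a standard robust (Huber-weight) variant; the paper's weights derive from its specific measurement-error and additive-error model, so the recipe below is a generic sketch only.

        import numpy as np

        def irls_huber(X, y, k=1.345, n_iter=20):
            beta = np.linalg.lstsq(X, y, rcond=None)[0]        # ordinary LS start
            for _ in range(n_iter):
                r = y - X @ beta
                s = np.median(np.abs(r)) / 0.6745 + 1e-12      # robust scale (MAD)
                w = np.minimum(1.0, k * s / (np.abs(r) + 1e-12))  # Huber weights
                beta = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y))
            return beta

        rng = np.random.default_rng(0)
        X = np.column_stack([np.ones(100), rng.uniform(0, 10, 100)])
        y = X @ np.array([1.0, 2.0]) + rng.normal(0, 0.5, 100)
        y[::10] += 8.0                                         # a few gross outliers
        print(irls_huber(X, y))                                # close to [1, 2]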

  20. Statistical methods for thermonuclear reaction rates and nucleosynthesis simulations

    NASA Astrophysics Data System (ADS)

    Iliadis, Christian; Longland, Richard; Coc, Alain; Timmes, F. X.; Champagne, Art E.

    2015-03-01

    Rigorous statistical methods for estimating thermonuclear reaction rates and nucleosynthesis are becoming increasingly established in nuclear astrophysics. The main challenge being faced is that experimental reaction rates are highly complex quantities derived from a multitude of different measured nuclear parameters (e.g., astrophysical S-factors, resonance energies and strengths, particle and γ-ray partial widths). We discuss the application of the Monte Carlo method to two distinct, but related, questions. First, given a set of measured nuclear parameters, how can one best estimate the resulting thermonuclear reaction rates and associated uncertainties? Second, given a set of appropriate reaction rates, how can one best estimate the abundances from nucleosynthesis (i.e., reaction network) calculations? The techniques described here provide probability density functions that can be used to derive statistically meaningful reaction rates and final abundances for any desired coverage probability. Examples are given for applications to s-process neutron sources, core-collapse supernovae, classical novae, and Big Bang nucleosynthesis.
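
    The Monte Carlo idea can be sketched for a single narrow resonance: sample each measured input from a distribution encoding its uncertainty (lognormal for manifestly positive quantities) and read rate percentiles off the resulting distribution. All input values below are invented, and real rates sum many resonant and nonresonant terms.

        import numpy as np

        rng = np.random.default_rng(42)
        n = 100_000
        T9 = 0.1  # temperature in GK

        # Illustrative inputs: resonance energy (Gaussian) and strength (lognormal).
        Er = rng.normal(0.150, 0.005, n)                  # MeV
        wg = rng.lognormal(np.log(1.0e-6), 0.3, n)        # MeV

        # A narrow-resonance rate is proportional to wg * exp(-11.605 * Er / T9).
        rate = wg * np.exp(-11.605 * Er / T9)

        lo, med, hi = np.percentile(rate, [16, 50, 84])
        print(f"rate: {med:.3e} (+{hi - med:.2e} / -{med - lo:.2e})")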

  1. BIASES IN PHYSICAL PARAMETER ESTIMATES THROUGH DIFFERENTIAL LENSING MAGNIFICATION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Er Xinzhong; Ge Junqiang; Mao Shude, E-mail: xer@nao.cas.cn

    2013-06-20

    We study the lensing magnification effect on background galaxies. Differential magnification due to different magnifications of different source regions of a galaxy will change the lensed composite spectra. The derived properties of the background galaxies are therefore biased. For simplicity, we model galaxies as a superposition of an axisymmetric bulge and a face-on disk in order to study the differential magnification effect on the composite spectra. We find that some properties derived from the spectra (e.g., velocity dispersion, star formation rate, and metallicity) are modified. Depending on the relative positions of the source and the lens, the inferred results can be either over- or underestimates of the true values. In general, for an extended source in strong lensing regions with high magnifications, the inferred physical parameters (e.g., metallicity) can be strongly biased. Therefore, detailed lens modeling is necessary to obtain the true properties of the lensed galaxies.

  2. A mixing-model approach to quantifying sources of organic matter to salt marsh sediments

    NASA Astrophysics Data System (ADS)

    Bowles, K. M.; Meile, C. D.

    2010-12-01

    Salt marshes are highly productive ecosystems, where autochthonous production drives an intricate exchange of carbon and energy among organisms. The major sources of organic carbon to these systems include 1) autochthonous production of vascular plant matter, 2) import of allochthonous plant material, and 3) phytoplankton biomass. Quantifying the relative contributions of organic matter sources to a salt marsh is important for understanding the fate and transformation of organic carbon in these systems, which also affects the timing and magnitude of carbon export to the coastal ocean. A common approach to quantifying organic matter source contributions to mixtures is the use of linear mixing models. To estimate the relative contributions of endmember materials to total organic matter in the sediment, the problem is formulated as a constrained linear least-squares problem. However, the type of data utilized in such mixing models, the uncertainties in endmember compositions and the temporal dynamics of non-conservative entities can have varying effects on the results. Making use of a comprehensive data set that encompasses several endmember characteristics, including a yearlong degradation experiment, we study the impact of these factors on estimates of the origin of sedimentary organic carbon in a salt marsh located in the SE United States. We first evaluate the sensitivity of linear mixing models to the type of data employed by analyzing a series of mixing models that utilize various combinations of parameters (i.e. endmember characteristics such as δ13C of organic carbon, C/N ratios or lignin content). Next, we assess the importance of using more than the minimum number of parameters required to estimate endmember contributions to the total organic matter pool. Then, we quantify the impact of data uncertainty on the outcome of the analysis using Monte Carlo simulations, accounting for the uncertainty in endmember characteristics. Finally, as biogeochemical processes can alter endmember characteristics over time, we investigate the effect of early diagenesis on the chosen parameters, an analysis that entails an assessment of the organic matter age distribution. Estimates of the relative contributions of phytoplankton, C3 and C4 plants to bulk sediment organic matter thus depend not only on environmental characteristics that affect reactivity, but also on sediment mixing processes.
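
    The constrained linear least-squares formulation can be sketched as follows; the endmember tracer values are invented, and the sum-to-one constraint is imposed here as a heavily weighted extra equation (a quadratic program would enforce it exactly).

        import numpy as np
        from scipy.optimize import lsq_linear

        # Rows: tracers (delta13C, C/N); columns: endmembers
        # (phytoplankton, C3 plants, C4 plants).  Values are illustrative.
        A = np.array([[-21.0, -28.0, -13.0],     # delta13C (permil)
                      [  7.0,  25.0,  40.0]])    # C/N (molar)
        b = np.array([-19.0, 22.0])              # measured sediment values

        # Append the sum-to-one constraint as a weighted equation; bounds give
        # non-negativity of the endmember fractions.
        w = 100.0
        A_aug = np.vstack([A, w * np.ones(3)])
        b_aug = np.append(b, w * 1.0)

        res = lsq_linear(A_aug, b_aug, bounds=(0, 1))
        print("fractions:", np.round(res.x, 3))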

  3. Towards a comprehensive knowledge of the open cluster Haffner 9

    NASA Astrophysics Data System (ADS)

    Piatti, Andrés E.

    2017-03-01

    We turn our attention to Haffner 9, a Milky Way open cluster whose previous fundamental parameter estimates are far from being in agreement. In order to provide accurate estimates, we present high-quality Washington CT1 and Johnson BVI photometry of the cluster field. We took particular care in statistically cleaning the colour-magnitude diagrams (CMDs) of field star contamination, which we found to be a common source of the discordant fundamental parameter estimates in previous works. The resulting cluster CMD fiducial features were confirmed by a proper motion membership analysis. Haffner 9 is a moderately young object (age ∼350 Myr) located in the Perseus arm, at a heliocentric distance of ∼3.2 kpc, with a lower limit on its present mass of ∼160 M⊙ and a nearly solar metal content. The combination of the cluster's structural and fundamental parameters suggests that it is in an advanced stage of internal dynamical evolution, possibly in the phase typical of clusters with mass segregation in their core regions. However, the cluster still keeps its mass function close to that of Salpeter's law.

  4. Trajectory Reconstruction and Uncertainty Analysis Using Mars Science Laboratory Pre-Flight Scale Model Aeroballistic Testing

    NASA Technical Reports Server (NTRS)

    Lugo, Rafael A.; Tolson, Robert H.; Schoenenberger, Mark

    2013-01-01

    As part of the Mars Science Laboratory (MSL) trajectory reconstruction effort at NASA Langley Research Center, free-flight aeroballistic experiments with instrumented MSL scale models were conducted at Aberdeen Proving Ground in Maryland. The models carried an inertial measurement unit (IMU) and a flush air data system (FADS) similar to the MSL Entry Atmospheric Data System (MEADS), which provided data types similar to those from the MSL entry. Multiple sources of redundant data were available, including tracking radar and on-board magnetometers. These experimental data enabled the testing and validation of the various tools and methodologies that will be used for MSL trajectory reconstruction. The aerodynamic parameters Mach number, angle of attack, and sideslip angle were estimated using minimum variance with a priori information to combine the pressure data and pre-flight computational fluid dynamics (CFD) data. Both linear and non-linear pressure model terms were also estimated for each pressure transducer as a measure of the errors introduced by CFD and transducer calibration. Parameter uncertainties were estimated using a "consider parameters" approach.

  5. Impact of Next-to-Leading Order Contributions to Cosmic Microwave Background Lensing.

    PubMed

    Marozzi, Giovanni; Fanizza, Giuseppe; Di Dio, Enea; Durrer, Ruth

    2017-05-26

    In this Letter we study the impact on cosmological parameter estimation, from present and future surveys, of lensing corrections to cosmic microwave background temperature and polarization anisotropies beyond leading order. In particular, we show how post-Born corrections, large-scale structure effects, and the correction due to the change in the polarization direction between emission at the source and detection at the observer are non-negligible in the determination of the polarization spectra. They have to be taken into account for an accurate estimation of cosmological parameters sensitive to, or even based on, these spectra. We study in detail the impact of higher-order lensing on the determination of the tensor-to-scalar ratio r and on the estimation of the effective number of relativistic species N_eff. We find that neglecting higher-order lensing terms can lead to misinterpreting these corrections as a primordial tensor-to-scalar ratio of about O(10^-3). Furthermore, it leads to a shift of the parameter N_eff by nearly 2σ, considering the level of accuracy aimed at by future S4 surveys.

  6. OVoG Inversion for the Retrieval of Agricultural Crop Structure by Means of Multi-Baseline Polarimetric SAR Interferometry

    NASA Astrophysics Data System (ADS)

    Pichierri, Manuele; Hajnsek, Irena

    2015-04-01

    In this work, the potential of multi-baseline Pol-InSAR for crop parameter estimation (e.g. crop height and extinction coefficients) is explored. To this end, a novel Oriented Volume over Ground (OVoG) inversion scheme is developed, which makes use of multi-baseline observables to estimate the whole stack of model parameters. The proposed algorithm was initially validated on a set of randomly generated OVoG scenarios to assess its stability against crop structure changes and its robustness against volume decorrelation and other decorrelation sources. It was then applied to a collection of multi-baseline repeat-pass SAR data acquired over a rural area in Germany by DLR's F-SAR.

  7. Quantification of Uncertainty in Full-Waveform Moment Tensor Inversion for Regional Seismicity

    NASA Astrophysics Data System (ADS)

    Jian, P.; Hung, S.; Tseng, T.

    2013-12-01

    Routinely and instantaneously determined moment tensor solutions deliver basic information for investigating the faulting nature of earthquakes and regional tectonic structure. The accuracy of full-waveform moment tensor inversion mostly relies on the azimuthal coverage of stations, data quality and the previously known earth structure (i.e., impulse responses or Green's functions). However, intrinsically imperfect station distributions, noise-contaminated waveform records and uncertain earth structure can often result in large deviations of the retrieved source parameters from the true ones, which prohibits the use of routinely reported earthquake catalogs for further structural and tectonic inferences. Duputel et al. (2012) first systematically addressed the significance of statistical uncertainty estimation in earthquake source inversion and exemplified that the data covariance matrix, if prescribed properly to account for data dependence and uncertainty due to incomplete and erroneous data and hypocenter mislocation, can not only be mapped onto the uncertainty estimates of the resulting source parameters, but also aids in obtaining more stable and reliable results. Over the past decade, BATS (Broadband Array in Taiwan for Seismology) has steadily devoted itself to building a database of good-quality centroid moment tensor (CMT) solutions for moderate to large magnitude earthquakes in the Taiwan area. Because of the lack of uncertainty quantification and reliability analysis, it remains controversial to use the reported CMT catalog directly for further investigation of regional tectonics, near-source strong ground motions, and seismic hazard assessment. In this study, we develop a statistical procedure to make quantitative and reliable estimates of uncertainty in regional full-waveform CMT inversion. A linearized inversion scheme adopting efficient estimation of the covariance matrices associated with oversampled noisy waveform data and errors in biased centroid positions is implemented and inspected for improving source parameter determination of regional seismicity in Taiwan. Synthetic inversion tests demonstrate that the resolved moment tensors better match the hypothetical CMT solutions, and tend to suppress spurious non-double-couple components and reduce the trade-off between focal mechanism and centroid depth, if individual signal-to-noise ratios and correlation lengths of the 3-component seismograms at each station and mislocation uncertainties are properly taken into account. We further test the capability of our scheme in retrieving robust CMT information for mid-sized (Mw~3.5) and offshore earthquakes in Taiwan, which offers immediate and broad applications in detailed modelling of the regional stress field and deformation pattern and in mapping subsurface velocity structures.

  8. Forward and Inverse Modeling of Near-Field Seismic Waveforms from Underground Nuclear Explosions for Effective Source Functions and Structure Parameters.

    DTIC Science & Technology

    1987-04-05

    (The scanned text of this record is garbled beyond reliable reconstruction. Legible fragments include a figure caption, "Figure 2. P and S-wave velocity structure for Pahute Mesa", and a closing remark that the source parameters determined through waveform inversion for the Pahute Mesa events studied are summarized, with the details of the waveforms quite well modeled both in the inversion and in the forward modeling.)

  9. On Using Intensity Interferometry for Feature Identification and Imaging of Remote Objects

    NASA Technical Reports Server (NTRS)

    Erkmen, Baris I.; Strekalov, Dmitry V.; Yu, Nan

    2013-01-01

    We derive an approximation to the intensity covariance function of two scanning pinhole detectors, facing a distant source (e.g., a star) being occluded partially by an absorptive object (e.g., a planet). We focus on using this technique to identify or image an object that is in the line-of-sight between a well-characterized source and the detectors. We derive the observed perturbation to the intensity covariance map due to the object, showing that under some reasonable approximations it is proportional to the real part of the Fourier transform of the source's photon-flux density times the Fourier transform of the object's intensity absorption. We highlight the key parameters impacting its visibility and discuss the requirements for estimating object-related parameters, e.g., its size, velocity or shape. We consider an application of this result to determining the orbit inclination of an exoplanet orbiting a distant star. Finally, motivated by the intrinsically weak nature of the signature, we study its signal-to-noise ratio and determine the impact of system parameters.

  10. Quantifying errors without random sampling.

    PubMed

    Phillips, Carl V; LaPole, Luwanna M

    2003-06-12

    All quantifications of mortality, morbidity, and other health measures involve numerous sources of error. The routine quantification of random sampling error makes it easy to forget that other sources of error can and should be quantified. When a quantification does not involve sampling, error is almost never quantified and results are often reported in ways that dramatically overstate their precision. We argue that the precision implicit in typical reporting is problematic and sketch methods for quantifying the various sources of error, building up from simple examples that can be solved analytically to more complex cases. There are straightforward ways to partially quantify the uncertainty surrounding a parameter that is not characterized by random sampling, such as limiting reported significant figures. We present simple methods for doing such quantifications, and for incorporating them into calculations. More complicated methods become necessary when multiple sources of uncertainty must be combined. We demonstrate that Monte Carlo simulation, using available software, can estimate the uncertainty resulting from complicated calculations with many sources of uncertainty. We apply the method to the current estimate of the annual incidence of foodborne illness in the United States. Quantifying uncertainty from systematic errors is practical. Reporting this uncertainty would more honestly represent study results, help show the probability that estimated values fall within some critical range, and facilitate better targeting of further research.

  11. SiGN-SSM: open source parallel software for estimating gene networks with state space models.

    PubMed

    Tamada, Yoshinori; Yamaguchi, Rui; Imoto, Seiya; Hirose, Osamu; Yoshida, Ryo; Nagasaki, Masao; Miyano, Satoru

    2011-04-15

    SiGN-SSM is an open-source gene network estimation software package able to run in parallel on PCs and massively parallel supercomputers. The software estimates a state space model (SSM), a statistical dynamic model suitable for analyzing short and/or replicated time-series gene expression profiles. SiGN-SSM implements a novel parameter constraint that is effective in stabilizing the estimated models. Also, by using a supercomputer, it is able to determine the gene network structure by a statistical permutation test in a practical time. SiGN-SSM is applicable not only to analyzing temporal regulatory dependencies between genes, but also to extracting differentially regulated genes from time-series expression profiles. SiGN-SSM is distributed under the GNU Affero General Public License (GNU AGPL) version 3 and can be downloaded at http://sign.hgc.jp/signssm/. Pre-compiled binaries for some architectures are available in addition to the source code. The pre-installed binaries are also available on the Human Genome Center supercomputer system. The online manual and the supplementary information for SiGN-SSM are available on our web site. tamada@ims.u-tokyo.ac.jp.

  12. A hierarchical modeling approach to estimate regional acute health effects of particulate matter sources

    PubMed Central

    Krall, J. R.; Hackstadt, A. J.; Peng, R. D.

    2017-01-01

    Exposure to particulate matter (PM) air pollution has been associated with a range of adverse health outcomes, including cardiovascular disease (CVD) hospitalizations and other clinical parameters. Determining which sources of PM, such as traffic or industry, are most associated with adverse health outcomes could help guide future recommendations aimed at reducing harmful pollution exposure for susceptible individuals. Information obtained from multisite studies, which is generally more precise than information from a single location, is critical to understanding how PM impacts health and to informing local strategies for reducing individual-level PM exposure. However, few methods exist to perform multisite studies of PM sources, which are not generally directly observed, and adverse health outcomes. We developed SHARE, a hierarchical modeling approach that facilitates reproducible, multisite epidemiologic studies of PM sources. SHARE is a two-stage approach that first summarizes information about PM sources across multiple sites. Then, this information is used to determine how community-level (i.e. county- or city-level) health effects of PM sources should be pooled to estimate regional-level health effects. SHARE is a type of population value decomposition that aims to separate out regional-level features from site-level data. Unlike previous approaches for multisite epidemiologic studies of PM sources, the SHARE approach allows the specific PM sources identified to vary by site. Using data from 2000–2010 for 63 northeastern US counties, we estimated regional-level health effects associated with short-term exposure to major types of PM sources. We found PM from secondary sulfate, traffic, and metals sources was most associated with CVD hospitalizations. PMID:28098412

  13. A Robust Sound Source Localization Approach for Microphone Array with Model Errors

    NASA Astrophysics Data System (ADS)

    Xiao, Hua; Shao, Huai-Zong; Peng, Qi-Cong

    In this paper, a robust sound source localization approach is proposed. The approach retains good performance even when model errors exist. Compared with previous work in this field, the contributions of this paper are as follows. First, an improved broad-band and near-field array model is proposed. It takes array gain and phase perturbations into account and is based on the actual positions of the elements. It can be used with arbitrary planar geometry arrays. Second, a subspace model-error estimation algorithm and a Weighted 2-Dimension Multiple Signal Classification (W2D-MUSIC) algorithm are proposed. The subspace model-error estimation algorithm estimates the unknown parameters of the array model, i.e., gain, phase perturbations, and positions of the elements, with high accuracy, and its performance improves with increasing SNR or number of snapshots. The W2D-MUSIC algorithm, based on the improved array model, is implemented to locate sound sources. These two algorithms together compose the robust sound source localization approach. More accurate steering vectors can then be provided for further processing such as adaptive beamforming. Numerical examples confirm the effectiveness of the proposed approach.
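
    For reference, the MUSIC principle underlying W2D-MUSIC can be sketched in its simplest narrowband, far-field, uniform-linear-array form (the paper's algorithm is broadband, near-field and weighted, so this shows only the core subspace idea):

        import numpy as np

        def music_spectrum(X, n_sources, d_over_lambda=0.5, angles=np.linspace(-90, 90, 361)):
            M = X.shape[0]                              # number of sensors
            R = X @ X.conj().T / X.shape[1]             # sample covariance
            eigval, eigvec = np.linalg.eigh(R)          # eigenvalues ascending
            En = eigvec[:, : M - n_sources]             # noise subspace
            p = np.empty(angles.size)
            for i, th in enumerate(np.deg2rad(angles)):
                a = np.exp(-2j * np.pi * d_over_lambda * np.arange(M) * np.sin(th))
                p[i] = 1.0 / np.real(a.conj() @ En @ En.conj().T @ a)  # pseudospectrum
            return angles, p

        # Two sources at -20 and +30 degrees, 8 sensors, 200 snapshots.
        rng = np.random.default_rng(3)
        M, N = 8, 200
        steer = lambda t: np.exp(-2j * np.pi * 0.5 * np.arange(M) * np.sin(np.deg2rad(t)))
        A = np.stack([steer(-20), steer(30)], axis=1)
        S = (rng.normal(size=(2, N)) + 1j * rng.normal(size=(2, N))) / np.sqrt(2)
        X = A @ S + 0.1 * (rng.normal(size=(M, N)) + 1j * rng.normal(size=(M, N)))
        ang, p = music_spectrum(X, n_sources=2)
        peaks = [i for i in range(1, p.size - 1) if p[i] > p[i - 1] and p[i] > p[i + 1]]
        print(ang[sorted(peaks, key=lambda i: p[i])[-2:]])   # close to -20 and 30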

  14. An adaptive regularization parameter choice strategy for multispectral bioluminescence tomography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Feng Jinchao; Qin Chenghu; Jia Kebin

    2011-11-15

    Purpose: Bioluminescence tomography (BLT) provides an effective tool for monitoring physiological and pathological activities in vivo. However, the measured data in bioluminescence imaging are corrupted by noise. Therefore, regularization methods are commonly used to find a regularized solution. Nevertheless, for the quality of the reconstructed bioluminescent source obtained by regularization methods, the choice of the regularization parameters is crucial. To date, the selection of regularization parameters remains challenging. With regard to the above problems, the authors propose a BLT reconstruction algorithm with an adaptive parameter choice rule. Methods: The proposed reconstruction algorithm uses a diffusion equation to model bioluminescent photon transport. The diffusion equation is solved with a finite element method. Computed tomography (CT) images provide anatomical information regarding the geometry of the small animal and its internal organs. To reduce the ill-posedness of BLT, spectral information and the optimal permissible source region are employed. The relationship between the unknown source distribution and the multiview, multispectral boundary measurements is then established based on the finite element method and the optimal permissible source region. Since the measured data are noisy, the BLT reconstruction is formulated as an l2 data fidelity term plus a general regularization term. For choosing the regularization parameters, an efficient model function approach is proposed which does not require knowledge of the noise level; it requires only the computation of the residual and the regularized solution norm. With this knowledge, the model function is constructed to approximate the objective function, and the regularization parameter is updated iteratively. Results: First, the micro-CT based mouse phantom was used for simulation verification. Simulation experiments were used to illustrate why multispectral data were used rather than monochromatic data. Furthermore, the study conducted using an adaptive regularization parameter demonstrated our ability to accurately localize the bioluminescent source. With the adaptively estimated regularization parameter, the reconstructed center position of the source was (20.37, 31.05, 12.95) mm, and the distance to the real source was 0.63 mm. The results of the dual-source experiments further showed that our algorithm could localize the bioluminescent sources accurately. The authors then presented experimental evidence that the proposed algorithm exhibits superior computational efficiency over the heuristic method. The effectiveness of the new algorithm was also confirmed by comparison with the L-curve method. Furthermore, various initial guesses of the regularization parameter were used to illustrate the convergence of our algorithm. Finally, an in vivo mouse experiment further illustrates the effectiveness of the proposed algorithm. Conclusions: Using numerical, physical phantom and in vivo examples, we demonstrated that the bioluminescent sources can be reconstructed accurately with automatically chosen regularization parameters. The proposed algorithm exhibits superior performance to both the heuristic regularization parameter choice method and the L-curve method in terms of computational speed and localization error.

  15. Preparation of AgInS2 nanoparticles by a facile microwave heating technique; study of effective parameters, optical and photovoltaic characteristics

    NASA Astrophysics Data System (ADS)

    Tadjarodi, Azadeh; Cheshmekhavar, Amir Hossein; Imani, Mina

    2012-12-01

    In this work, AgInS2 (AIS) semiconductor nanoparticles were synthesized by an efficient and facile microwave heating technique using several sulfur sources and solvents at different reaction times. SEM images showed the particle morphology of all products obtained under the various reaction conditions. A particle size of 70 nm was obtained using thioacetamide (TAA) as the sulfur source and ethylene glycol (EG) as the solvent at a reaction time of 5 min. It was found that changing these parameters altered the particle size of the resulting products. The average particle size was estimated using a microstructure measurement program and Minitab statistical software. An optical band gap energy of 1.96 eV for the synthesized AIS nanoparticles was determined by diffuse reflectance spectroscopy (DRS). An AgInS2/CdS/CuInSe2 heterojunction solar cell was constructed, and the photovoltaic parameters, i.e., the open-circuit voltage (Voc), short-circuit current (Jsc) and fill factor (FF), were estimated from the photocurrent-voltage (I-V) curve. The calculated fill factor of 30% and energy conversion efficiency of 1.58% reveal the suitability of AIS nanoparticles for use in solar cell devices.

  16. Novel scheme for rapid parallel parameter estimation of gravitational waves from compact binary coalescences

    NASA Astrophysics Data System (ADS)

    Pankow, C.; Brady, P.; Ochsner, E.; O'Shaughnessy, R.

    2015-07-01

    We introduce a highly parallelizable architecture for estimating the parameters of compact binary coalescences using gravitational-wave data and waveform models. Using a spherical harmonic mode decomposition, the waveform is expressed as a sum over modes that depend on the intrinsic parameters (e.g., masses), with coefficients that depend on the observer-dependent extrinsic parameters (e.g., distance, sky position). The data are then prefiltered against those modes, at fixed intrinsic parameters, enabling efficient evaluation of the likelihood for generic source positions and orientations, independent of waveform length or generation time. We efficiently parallelize our intrinsic-space calculation by integrating over all extrinsic parameters using a Monte Carlo integration strategy. Since the waveform generation and prefiltering happen only once, the cost of integration dominates the procedure. We also operate hierarchically, using information from existing gravitational-wave searches to identify the regions of parameter space to emphasize in our sampling. As proof of concept and verification of the result, we have implemented this algorithm using standard time-domain waveforms, processing each event in less than one hour on recent computing hardware. For most events we evaluate the marginalized likelihood (evidence) with statistical errors of ≲5%, and even smaller in many cases. With a bounded runtime independent of the waveform model's starting frequency, a nearly unchanged strategy could estimate neutron star (NS)-NS parameters in the 2018 advanced LIGO era. Our algorithm is usable with any noise curve and any existing time-domain model at any mass, including some waveforms which are computationally costly to evolve.

  17. W-phase estimation of first-order rupture distribution for megathrust earthquakes

    NASA Astrophysics Data System (ADS)

    Benavente, Roberto; Cummins, Phil; Dettmer, Jan

    2014-05-01

    Estimating the rupture pattern of large earthquakes during the first hour after the origin time can be crucial for rapid impact assessment and tsunami warning. However, the estimation of coseismic slip distribution models generally involves complex methodologies that are difficult to implement rapidly. Further, while model parameter uncertainties can be crucial for meaningful estimation, they are often ignored. In this work we develop a finite-fault inversion for megathrust earthquakes which rapidly generates good first-order estimates, with uncertainties, of the spatial slip distribution. The algorithm uses W-phase waveforms and a linear, automated regularization approach to invert for rupture models of some recent megathrust earthquakes. The W phase is a long-period (100-1000 s) wave which arrives together with the P wave. Because it arrives early, has small amplitude and has a long-period character, the W phase is regularly used to estimate point source moment tensors by the NEIC and PTWC, among others, within an hour of earthquake occurrence. We use W-phase waveforms processed in a manner similar to that used for such point-source solutions. The inversion makes use of 3-component W-phase records retrieved from the Global Seismic Network. The inverse problem is formulated by a multiple-time-window method, resulting in a linear, over-parametrized problem. The over-parametrization is addressed by Tikhonov regularization, and regularization parameters are chosen according to the discrepancy principle by grid search. Noise in the data is addressed by estimating the data covariance matrix from data residuals. The matrix is obtained by starting with an a priori covariance matrix and then iteratively updating it based on the residual errors of consecutive inversions. A covariance matrix for the parameters is then computed using a Bayesian approach. The application of this approach to recent megathrust earthquakes produces models which capture the most significant features of their slip distributions. Moreover, reliable solutions are generally obtained with data in a 30-minute window following the origin time, suggesting that a real-time system could deliver solutions in less than one hour.
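
    The regularization-parameter selection can be sketched generically: scan a grid of Tikhonov parameters from strong to weak smoothing and keep the first one whose residual norm drops to the noise level. G, d and the noise estimate below are synthetic placeholders for the W-phase design matrix, data and data-covariance information.

        import numpy as np

        def tikhonov(G, d, lam):
            n = G.shape[1]
            return np.linalg.solve(G.T @ G + lam * np.eye(n), G.T @ d)

        def discrepancy_pick(G, d, noise_norm, lams):
            for lam in sorted(lams, reverse=True):      # strongest smoothing first
                m = tikhonov(G, d, lam)
                if np.linalg.norm(d - G @ m) <= noise_norm:
                    return lam, m                       # first model fitting to noise
            return min(lams), tikhonov(G, d, min(lams))

        rng = np.random.default_rng(7)
        G = rng.normal(size=(200, 50))
        m_true = np.zeros(50)
        m_true[10:20] = 1.0
        noise = rng.normal(scale=0.5, size=200)
        d = G @ m_true + noise
        lam, m = discrepancy_pick(G, d, np.linalg.norm(noise), np.logspace(-3, 3, 25))
        print(lam)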

  18. Using Geographic Information Systems and Spatial Analysis Methods to Assess Household Water Access and Sanitation Coverage in the SHINE Trial.

    PubMed

    Ntozini, Robert; Marks, Sara J; Mangwadu, Goldberg; Mbuya, Mduduzi N N; Gerema, Grace; Mutasa, Batsirai; Julian, Timothy R; Schwab, Kellogg J; Humphrey, Jean H; Zungu, Lindiwe I

    2015-12-15

    Access to water and sanitation are important determinants of behavioral responses to hygiene and sanitation interventions. We estimated cluster-specific water access and sanitation coverage to inform a constrained randomization technique in the SHINE trial. Technicians and engineers inspected all public-access water sources to ascertain seasonality, function, and geospatial coordinates. Households and water sources were mapped using open-source geospatial software. The distance from each household to the nearest perennial, functional, protected water source was calculated, and, for each cluster, the median distance and the proportions of households <500 m and >1500 m from such a source were computed. Cluster-specific sanitation coverage was ascertained using a random sample of 13 households per cluster. These parameters were included as covariates in randomization to optimize balance in water and sanitation access across treatment arms at the start of the trial. The observed high variability between clusters in both parameters suggests that constraining on these factors was needed to reduce risk of bias. © The Author 2015. Published by Oxford University Press for the Infectious Diseases Society of America.
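
    The per-household distance step can be sketched with a plain haversine nearest-neighbour search; the coordinates below are hypothetical, and a production workflow would use the projected GIS layers described above.

    ```python
    import numpy as np

    def haversine_m(lat1, lon1, lat2, lon2):
        """Great-circle distance in metres between coordinate arrays."""
        R = 6_371_000.0
        p1, p2 = np.radians(lat1), np.radians(lat2)
        dl = np.radians(lon2 - lon1)
        a = np.sin((p2 - p1) / 2) ** 2 + np.cos(p1) * np.cos(p2) * np.sin(dl / 2) ** 2
        return 2 * R * np.arcsin(np.sqrt(a))

    # Hypothetical coordinates: households and protected perennial sources.
    hh = np.array([[-17.80, 31.05], [-17.82, 31.10], [-17.85, 31.00]])
    src = np.array([[-17.81, 31.06], [-17.90, 31.20]])

    # Distance from each household to its nearest qualifying source.
    dists = np.min(haversine_m(hh[:, None, 0], hh[:, None, 1],
                               src[None, :, 0], src[None, :, 1]), axis=1)
    print("median distance (m):", np.median(dists))
    print("share within 500 m:", np.mean(dists < 500))
    print("share beyond 1500 m:", np.mean(dists > 1500))
    ```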

  19. Broadband spectral fitting of blazars using XSPEC

    NASA Astrophysics Data System (ADS)

    Sahayanathan, Sunder; Sinha, Atreyee; Misra, Ranjeev

    2018-03-01

    The broadband spectral energy distribution (SED) of blazars is generally interpreted as radiation arising from synchrotron and inverse Compton mechanisms. Traditionally, the underlying source parameters responsible for these emission processes, like particle energy density, magnetic field, etc., are obtained through simple visual reproduction of the observed fluxes. However, this procedure is incapable of providing confidence ranges for the estimated parameters. In this work, we propose an efficient algorithm to perform a statistical fit of the observed broadband spectrum of blazars using different emission models. Moreover, we use the observable quantities as the fit parameters, rather than the direct source parameters that govern the resultant SED. This significantly improves the convergence time and eliminates the uncertainty regarding initial guess parameters. This approach also has the added advantage of identifying degenerate parameters, which can be removed by including more observable information and/or additional constraints. A computer code based on this algorithm is implemented as a user-defined routine in the standard X-ray spectral fitting package, XSPEC. Further, we demonstrate the efficacy of the algorithm by fitting the well-sampled SED of the blazar 3C 279 during its gamma-ray flare in 2014.

  20. Blind estimation of reverberation time

    NASA Astrophysics Data System (ADS)

    Ratnam, Rama; Jones, Douglas L.; Wheeler, Bruce C.; O'Brien, William D.; Lansing, Charissa R.; Feng, Albert S.

    2003-11-01

    The reverberation time (RT) is an important parameter for characterizing the quality of an auditory space. Sounds in reverberant environments are subject to coloration. This affects speech intelligibility and sound localization. Many state-of-the-art audio signal processing algorithms, for example in hearing aids and telephony, are expected to have the ability to characterize the listening environment and turn on an appropriate processing strategy accordingly. Thus, a method for characterization of room RT based on passively received microphone signals represents an important enabling technology. Current RT estimators, such as Schroeder's method, depend on a controlled sound source, and thus cannot produce an online, blind RT estimate. Here, a method for estimating RT without prior knowledge of sound sources or room geometry is presented. The diffusive tail of reverberation was modeled as an exponentially damped Gaussian white noise process. The time constant of the decay, which provided a measure of the RT, was estimated using a maximum-likelihood procedure. The estimates were obtained continuously, and an order-statistics filter was used to extract the most likely RT from the accumulated estimates. The procedure was illustrated for connected speech. Results obtained for simulated and real room data are in good agreement with the real RT values.
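
    A minimal sketch of the decay-model fit, assuming a single free-decay segment with known unit noise power: synthesize exponentially damped Gaussian noise, grid-maximize the likelihood over the decay time constant, and convert it to RT60 via the 60 dB energy-decay relation RT60 = 3 ln(10) τ.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    fs = 8000
    t = np.arange(0, 1.0, 1 / fs)
    tau_true = 0.065                       # decay time constant (s), assumed
    y = rng.standard_normal(t.size) * np.exp(-t / tau_true)

    def neg_log_lik(tau, sigma2=1.0):
        """Gaussian likelihood with exponentially decaying variance profile."""
        var = sigma2 * np.exp(-2 * t / tau)
        return 0.5 * np.sum(np.log(2 * np.pi * var) + y**2 / var)

    taus = np.linspace(0.01, 0.3, 300)
    tau_ml = taus[np.argmin([neg_log_lik(tau) for tau in taus])]
    rt60 = 3 * np.log(10) * tau_ml         # 60 dB energy decay: RT60 ≈ 6.91*tau
    print(f"tau_ML = {tau_ml*1e3:.1f} ms, RT60 ≈ {rt60:.2f} s")
    ```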

  1. Online estimation of room reverberation time

    NASA Astrophysics Data System (ADS)

    Ratnam, Rama; Jones, Douglas L.; Wheeler, Bruce C.; Feng, Albert S.

    2003-04-01

    The reverberation time (RT) is an important parameter for characterizing the quality of an auditory space. Sounds in reverberant environments are subject to coloration. This affects speech intelligibility and sound localization. State-of-the-art signal processing algorithms for hearing aids are expected to have the ability to evaluate the characteristics of the listening environment and turn on an appropriate processing strategy accordingly. Thus, a method for the characterization of room RT based on passively received microphone signals represents an important enabling technology. Current RT estimators, such as Schroeder's method or regression, depend on a controlled sound source, and thus cannot produce an online, blind RT estimate. Here, we describe a method for estimating RT without prior knowledge of sound sources or room geometry. The diffusive tail of reverberation was modeled as an exponentially damped Gaussian white noise process. The time constant of the decay, which provided a measure of the RT, was estimated using a maximum-likelihood procedure. The estimates were obtained continuously, and an order-statistics filter was used to extract the most likely RT from the accumulated estimates. The procedure was illustrated for connected speech. Results obtained for simulated and real room data are in good agreement with the real RT values.

  2. The critical role of uncertainty in projections of hydrological extremes

    NASA Astrophysics Data System (ADS)

    Meresa, Hadush K.; Romanowicz, Renata J.

    2017-08-01

    This paper aims to quantify the uncertainty in projections of future hydrological extremes in the Biala Tarnowska River at Koszyce gauging station, south Poland. The approach followed is based on several climate projections obtained from the EURO-CORDEX initiative, raw and bias-corrected realizations of catchment precipitation, and flow simulations derived using multiple hydrological model parameter sets. The projections cover the 21st century. Three sources of uncertainty are considered: one related to climate projection ensemble spread, the second related to the uncertainty in hydrological model parameters, and the third related to the error in fitting theoretical distribution models to annual extreme flow series. The uncertainty of projected extreme indices related to hydrological model parameters was conditioned on flow observations from the reference period using the generalized likelihood uncertainty estimation (GLUE) approach, with separate criteria for high- and low-flow extremes. Extreme (low and high) flow quantiles were estimated using the generalized extreme value (GEV) distribution at different return periods and were based on two different lengths of the flow time series. A sensitivity analysis based on the analysis of variance (ANOVA) shows that the uncertainty introduced by the hydrological model parameters can be larger than the climate model variability and the distribution fit uncertainty for the low-flow extremes, whilst for the high-flow extremes higher uncertainty is observed from climate models than from hydrological parameter and distribution fit uncertainties. This implies that ignoring any one of the three uncertainty sources may introduce substantial risk into planning and management of water resources and adaptation to future hydrological extremes.
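
    The distribution-fit step can be sketched with scipy's GEV routines; the synthetic annual-maximum series and the return periods below are illustrative assumptions, not data from the study.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(3)
    # Hypothetical annual-maximum flow series (m^3/s), standing in for the
    # simulated extremes from the hydrological model ensemble.
    annual_max = stats.genextreme.rvs(c=-0.1, loc=100, scale=25,
                                      size=60, random_state=rng)

    # Fit the GEV and read off return levels at selected return periods.
    c, loc, scale = stats.genextreme.fit(annual_max)
    for T in (10, 50, 100):                       # return periods in years
        q = stats.genextreme.ppf(1 - 1 / T, c, loc=loc, scale=scale)
        print(f"{T:>3}-year return level: {q:.1f} m^3/s")
    ```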

  3. Assessment of SMOS Soil Moisture Retrieval Parameters Using Tau-Omega Algorithms for Soil Moisture Deficit Estimation

    NASA Technical Reports Server (NTRS)

    Srivastava, Prashant K.; Han, Dawei; Rico-Ramirez, Miguel A.; O'Neill, Peggy; Islam, Tanvir; Gupta, Manika

    2014-01-01

    Soil Moisture and Ocean Salinity (SMOS) is the latest mission providing a stream of coarse-resolution soil moisture data for land applications. However, the efficient retrieval of soil moisture for hydrological applications depends on optimally choosing the soil and vegetation parameters. The first stage of this work involves the evaluation of SMOS Level 2 products; then several approaches for soil moisture retrieval from SMOS brightness temperature are tested for estimating the Soil Moisture Deficit (SMD). The most widely applied algorithm, the single-channel algorithm (SCA), based on the tau-omega model, is used in this study for the soil moisture retrieval. In tau-omega, the soil moisture is retrieved using the Horizontal (H) polarisation, following the Hallikainen dielectric model, roughness parameters, Fresnel's equations, and the estimated Vegetation Optical Depth (tau). The roughness parameters are empirically calibrated using numerical optimization techniques. Further, to explore improvements in the retrieval models, modifications have been incorporated in the algorithms with respect to the sources of the parameters; these include effective temperatures derived from the European Centre for Medium-Range Weather Forecasts (ECMWF), downscaled using the Weather Research and Forecasting (WRF)-NOAH Land Surface Model, and Moderate Resolution Imaging Spectroradiometer (MODIS) land surface temperature (LST), while the vegetation optical depth (tau) is derived from the MODIS Leaf Area Index (LAI). All the evaluations are performed against SMD, which is estimated using the Probability Distributed Model following a careful calibration and validation integrated with sensitivity and uncertainty analysis. The performance obtained after all these changes indicates that SCA-H using WRF-NOAH LSM downscaled ECMWF LST produces an improved performance for SMD estimation at the catchment scale.
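
    The zeroth-order tau-omega forward model itself is compact. The sketch below computes a single-polarisation brightness temperature under the common assumption of equal soil and canopy effective temperatures; all parameter values are illustrative, not SMOS-calibrated.

    ```python
    import numpy as np

    def tau_omega_tb(ts, e_soil, tau, omega, theta_deg):
        """Tau-omega brightness temperature (K) for one polarisation.

        ts      : effective surface temperature (K)
        e_soil  : rough-soil emissivity (from Fresnel + roughness model)
        tau     : vegetation optical depth
        omega   : single-scattering albedo
        """
        gamma = np.exp(-tau / np.cos(np.radians(theta_deg)))  # canopy transmissivity
        r = 1.0 - e_soil                                      # soil reflectivity
        # soil emission attenuated by the canopy, plus direct canopy emission,
        # plus canopy emission reflected by the soil and re-attenuated
        return ts * (e_soil * gamma
                     + (1 - omega) * (1 - gamma) * (1 + r * gamma))

    # Illustrative values at a SMOS-like incidence angle of 42.5 degrees:
    print(tau_omega_tb(ts=295.0, e_soil=0.62, tau=0.12, omega=0.06, theta_deg=42.5))
    ```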

  4. Final STS-11 (41-B) best estimate trajectory products: Development and results from the first Cape landing

    NASA Technical Reports Server (NTRS)

    Kelly, G. M.; Mcconnell, J. G.; Findlay, J. T.; Heck, M. L.; Henry, M. W.

    1984-01-01

    The STS-11 (41-B) postflight data processing has been completed and the results published. The final reconstructed entry trajectory is presented, and the various atmospheric sources available for this flight are discussed. Aerodynamic Best Estimate of Trajectory (BET) generation and plots from this file are presented. A definition of the major maneuvers effected is given. Physical constants, including spacecraft mass properties; final residuals from the reconstruction process; trajectory parameter listings; and an archival section are included.

  5. Modal parameter identification based on combining transmissibility functions and blind source separation techniques

    NASA Astrophysics Data System (ADS)

    Araújo, Iván Gómez; Sánchez, Jesús Antonio García; Andersen, Palle

    2018-05-01

    Transmissibility-based operational modal analysis is a recent and alternative approach used to identify the modal parameters of structures under operational conditions. This approach is advantageous compared with traditional operational modal analysis because it does not make any assumptions about the excitation spectrum (i.e., white noise with a flat spectrum). However, common methodologies do not include a procedure to extract closely spaced modes with low signal-to-noise ratios. This issue is relevant when considering that engineering structures generally have closely spaced modes and that their measured responses present high levels of noise. Therefore, to overcome these problems, a new combined method for modal parameter identification is proposed in this work. The proposed method combines blind source separation (BSS) techniques and transmissibility-based methods. Here, BSS techniques were used to recover source signals, and transmissibility-based methods were applied to estimate modal information from the recovered source signals. To achieve this combination, a new method to define a transmissibility function was proposed. The suggested transmissibility function is based on the relationship between the power spectral density (PSD) of mixed signals and the PSD of signals from a single source. The numerical responses of a truss structure with high levels of added noise and very closely spaced modes were processed using the proposed combined method to evaluate its ability to identify modal parameters in these conditions. Colored and white noise excitations were used for the numerical example. The proposed combined method was also used to evaluate the modal parameters of an experimental test on a structure containing closely spaced modes. The results showed that the proposed combined method is capable of identifying very closely spaced modes in the presence of noise and, thus, may be potentially applied to improve the identification of damping ratios.
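
    A classical spectral transmissibility (not the paper's new single-source PSD definition) can be sketched with standard cross- and auto-spectral estimators; the two synthetic response channels and the 12 Hz mode below are assumptions for illustration.

    ```python
    import numpy as np
    from scipy import signal

    rng = np.random.default_rng(4)
    fs = 1024
    t = np.arange(0, 60, 1 / fs)
    # Two hypothetical response channels of a structure driven by unknown
    # colored noise: a lightly damped 12 Hz mode seen at both sensors.
    drive = signal.lfilter([1.0], [1.0, -0.9], rng.standard_normal(t.size))
    sos = signal.butter(2, [11, 13], btype="bandpass", fs=fs, output="sos")
    mode = signal.sosfilt(sos, drive)
    x_i = mode + 0.3 * rng.standard_normal(t.size)   # reference DOF
    x_j = 0.6 * mode + 0.3 * rng.standard_normal(t.size)

    # Transmissibility between DOFs from cross- and auto-spectral densities:
    # T_ji(f) = S_ji(f) / S_ii(f), the classical quantity motivating
    # transmissibility-based operational modal analysis.
    f, S_ii = signal.welch(x_i, fs=fs, nperseg=4096)
    _, S_ji = signal.csd(x_j, x_i, fs=fs, nperseg=4096)
    T = S_ji / S_ii
    peak = f[np.argmax(np.abs(T) * (f > 5))]
    print(f"|T| peaks near {peak:.1f} Hz")
    ```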

  6. An Open-Source Bayesian Atmospheric Radiative Transfer (BART) Code, with Application to WASP-12b

    NASA Astrophysics Data System (ADS)

    Harrington, Joseph; Blecic, Jasmina; Cubillos, Patricio; Rojo, Patricio; Loredo, Thomas J.; Bowman, M. Oliver; Foster, Andrew S. D.; Stemm, Madison M.; Lust, Nate B.

    2015-01-01

    Atmospheric retrievals for solar-system planets typically fit, either with a minimizer or by eye, a synthetic spectrum to high-resolution (Δλ/λ ~ 1000-100,000) data with S/N > 100 per point. In contrast, exoplanet data often have S/N ~ 10 per point, and may have just a few points representing bandpasses larger than 1 μm. To derive atmospheric constraints and robust parameter uncertainty estimates from such data requires a Bayesian approach. To date there are few investigators with the relevant codes, none of which are publicly available. We are therefore pleased to announce the open-source Bayesian Atmospheric Radiative Transfer (BART) code. BART uses a Bayesian phase-space explorer to drive a radiative-transfer model through the parameter phase space, producing the most robust estimates available for the thermal profile and chemical abundances in the atmosphere. We present an overview of the code and an initial application to Spitzer eclipse data for WASP-12b. We invite the community to use and improve BART via the open-source development site GitHub.com. This work was supported by NASA Planetary Atmospheres grant NNX12AI69G and NASA Astrophysics Data Analysis Program grant NNX13AF38G. JB holds a NASA Earth and Space Science Fellowship.
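
    The Bayesian phase-space exploration can be illustrated, far more simply than BART's actual sampler, with a random-walk Metropolis chain over one parameter of a made-up forward model standing in for the radiative-transfer code:

    ```python
    import numpy as np

    rng = np.random.default_rng(5)

    def forward(T):
        """Made-up forward model: band depths scaling with temperature."""
        return np.array([0.4, 0.7, 1.1]) * (T / 1500.0)

    obs = forward(1800.0) + 0.05 * rng.standard_normal(3)
    sigma = 0.05

    def log_post(T):
        if not (500.0 < T < 3500.0):                 # flat prior bounds
            return -np.inf
        return -0.5 * np.sum(((obs - forward(T)) / sigma) ** 2)

    # Random-walk Metropolis sampler over the single parameter T.
    chain, T = [], 1500.0
    lp = log_post(T)
    for _ in range(20000):
        T_new = T + 50.0 * rng.standard_normal()
        lp_new = log_post(T_new)
        if np.log(rng.uniform()) < lp_new - lp:
            T, lp = T_new, lp_new
        chain.append(T)
    post = np.array(chain[5000:])                    # discard burn-in
    print(f"T = {post.mean():.0f} ± {post.std():.0f} K")
    ```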

  7. An Open-Source Bayesian Atmospheric Radiative Transfer (BART) Code, and Application to WASP-12b

    NASA Astrophysics Data System (ADS)

    Harrington, Joseph; Blecic, Jasmina; Cubillos, Patricio; Rojo, Patricio M.; Loredo, Thomas J.; Bowman, Matthew O.; Foster, Andrew S.; Stemm, Madison M.; Lust, Nate B.

    2014-11-01

    Atmospheric retrievals for solar-system planets typically fit, either with a minimizer or by eye, a synthetic spectrum to high-resolution (Δλ/λ ~ 1000-100,000) data with S/N > 100 per point. In contrast, exoplanet data often have S/N ~ 10 per point, and may have just a few points representing bandpasses larger than 1 μm. To derive atmospheric constraints and robust parameter uncertainty estimates from such data requires a Bayesian approach. To date there are few investigators with the relevant codes, none of which are publicly available. We are therefore pleased to announce the open-source Bayesian Atmospheric Radiative Transfer (BART) code. BART uses a Bayesian phase-space explorer to drive a radiative-transfer model through the parameter phase space, producing the most robust estimates available for the thermal profile and chemical abundances in the atmosphere. We present an overview of the code and an initial application to Spitzer eclipse data for WASP-12b. We invite the community to use and improve BART via the open-source development site GitHub.com. This work was supported by NASA Planetary Atmospheres grant NNX12AI69G and NASA Astrophysics Data Analysis Program grant NNX13AF38G. JB holds a NASA Earth and Space Science Fellowship.

  8. Analytical Models of the Transport of Deep-Well Injectate at the North District Wastewater Treatment Plant, Miami-Dade County, Florida, U.S.A

    NASA Astrophysics Data System (ADS)

    King, J. N.; Walsh, V.; Cunningham, K. J.; Evans, F. S.; Langevin, C. D.; Dausman, A.

    2009-12-01

    The Miami-Dade Water and Sewer Department (MDWASD) injects buoyant effluent from the North District Wastewater Treatment Plant (NDWWTP) through four Class I injection wells into the Boulder Zone---a saline (35 parts per thousand) and transmissive (10^5 to 10^6 square meters per day) hydrogeologic unit located approximately 1000 meters below land surface. Miami-Dade County is located in southeast Florida, U.S.A. Portions of the Floridan and Biscayne aquifers are located above the Boulder Zone. The Floridan and Biscayne aquifers---underground sources of drinking water---are protected by U.S. Federal Laws and Regulations, Florida Statutes, and Miami-Dade County ordinances. In 1998, MDWASD began to observe effluent constituents within the Floridan aquifer. Continuous-source and impulse-source analytical models for advective and diffusive transport of effluent are used in the present work to test contaminant flow-path hypotheses, suggest transport mechanisms, and estimate dispersivity. MDWASD collected data in the Floridan aquifer between 1996 and 2007. A parameter estimation code is used to optimize analytical model parameters by fitting model data to collected data. These simple models will be used to develop conceptual and numerical models of effluent transport at the NDWWTP, and in the vicinity of the NDWWTP.
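
    For the continuous-source case, the classical Ogata-Banks solution of the 1-D advection-dispersion equation is a typical analytical building block; the velocity, dispersion coefficient, and distances below are illustrative, not site-calibrated values.

    ```python
    import numpy as np
    from scipy.special import erfc

    def ogata_banks(x, t, v, D, c0=1.0):
        """1-D advection-dispersion, continuous source at x=0 (Ogata-Banks).

        x : distance (m), t : time (d), v : velocity (m/d),
        D : dispersion coefficient (m^2/d), c0 : source concentration.
        """
        a = (x - v * t) / (2.0 * np.sqrt(D * t))
        b = (x + v * t) / (2.0 * np.sqrt(D * t))
        return 0.5 * c0 * (erfc(a) + np.exp(v * x / D) * erfc(b))

    # Relative concentration at three observation distances after one year:
    x = np.array([50.0, 100.0, 200.0])
    print(ogata_banks(x, t=365.0, v=0.5, D=5.0))
    ```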

  9. Parameter estimation in Probabilistic Seismic Hazard Analysis: current problems and some solutions

    NASA Astrophysics Data System (ADS)

    Vermeulen, Petrus

    2017-04-01

    A typical Probabilistic Seismic Hazard Analysis (PSHA) comprises identification of seismic source zones, determination of hazard parameters for these zones, selection of an appropriate ground motion prediction equation (GMPE), and integration over probabilities according to the Cornell-McGuire procedure. Determination of hazard parameters often does not receive the attention it deserves, and, therefore, problems therein are often overlooked. Here, many of these problems are identified, and some of them addressed. The parameters that need to be identified are those associated with the frequency-magnitude law, those associated with the earthquake recurrence law in time, and the parameters controlling the GMPE. This study is concerned with the frequency-magnitude law and the temporal distribution of earthquakes, and not with GMPEs. The Gutenberg-Richter frequency-magnitude law is usually adopted for the frequency-magnitude law, and a Poisson process for earthquake recurrence in time. Accordingly, the parameters that need to be determined are the slope parameter of the Gutenberg-Richter frequency-magnitude law, i.e. the b-value; the maximum magnitude to which the Gutenberg-Richter law applies, mmax; and the mean recurrence frequency, λ, of earthquakes. If, instead of the Cornell-McGuire procedure, the "Parametric-Historic procedure" is used, these parameters do not have to be known before the PSHA computations; they are estimated directly during the PSHA computation. The resulting relation for the frequency of ground motion parameters has a functional form analogous to the frequency-magnitude law, described by a parameter γ (analogous to the b-value of the Gutenberg-Richter law) and the maximum possible ground motion amax (analogous to mmax). Originally, the approach could be applied only to simple GMPEs; recently, however, the method was extended to incorporate more complex forms of GMPEs. With regard to the parameter mmax, there are numerous methods of estimation, none of which is accepted as the standard one, and much controversy surrounds this parameter. In practice, when estimating the above-mentioned parameters from a seismic catalogue, the magnitude, mmin, above which the catalogue is complete becomes important. Thus, the parameter mmin is also considered a parameter to be estimated in practice. Several methods are discussed in the literature, and no specific method is preferred. Methods usually aim at identifying the point where a frequency-magnitude plot starts to deviate from linearity due to data loss. Parameter estimation is clearly a rich field which deserves much attention and, possibly, standardization of methods. These methods should be sound and efficient, and a query into which methods are to be used - and, for that matter, which ones are not to be used - is in order.
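
    For the b-value, the Aki maximum-likelihood estimator is the standard starting point. A sketch on a synthetic Gutenberg-Richter catalogue (continuous magnitudes assumed; binned catalogues need the Utsu half-bin correction on mmin):

    ```python
    import numpy as np

    rng = np.random.default_rng(6)
    # Synthetic catalogue: Gutenberg-Richter magnitudes above completeness
    # mmin follow an exponential distribution with scale log10(e)/b.
    b_true, m_min = 1.0, 2.5
    mags = m_min + rng.exponential(scale=np.log10(np.e) / b_true, size=2000)

    # Aki (1965) maximum-likelihood estimator and its standard error b/sqrt(N).
    b_hat = np.log10(np.e) / (mags.mean() - m_min)
    b_err = b_hat / np.sqrt(mags.size)
    print(f"b = {b_hat:.2f} ± {b_err:.2f}")
    ```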

  10. Nighttime image dehazing using local atmospheric selection rule and weighted entropy for visible-light systems

    NASA Astrophysics Data System (ADS)

    Park, Dubok; Han, David K.; Ko, Hanseok

    2017-05-01

    Optical imaging systems are often degraded by scattering due to atmospheric particles, such as haze, fog, and mist. Imaging under nighttime haze conditions may suffer especially from the glows near active light sources as well as scattering. We present a methodology for nighttime image dehazing based on an optical imaging model which accounts for varying light sources and their glow. First, glow effects are decomposed using relative smoothness. Atmospheric light is then estimated by assessing global and local atmospheric light using a local atmospheric selection rule. The transmission of light is then estimated by maximizing an objective function designed on the basis of weighted entropy. Finally, haze is removed using two estimated parameters, namely, atmospheric light and transmission. The visual and quantitative comparison of the experimental results with the results of existing state-of-the-art methods demonstrates the significance of the proposed approach.
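
    The final removal step inverts the standard haze image-formation model I = J·t + A·(1 − t) once A and t are estimated. A minimal sketch, with a synthetic image and placeholder estimates standing in for the quantities produced by the selection rule and the entropy-based optimization:

    ```python
    import numpy as np

    rng = np.random.default_rng(12)
    I = rng.uniform(0.2, 0.9, size=(4, 4, 3))      # hazy image, values in [0, 1]
    A = np.array([0.8, 0.8, 0.85])                 # estimated atmospheric light
    t = np.clip(rng.uniform(0.3, 1.0, size=(4, 4, 1)), 0.1, 1.0)  # transmission

    # Scene radiance recovery; clipping t away from zero avoids blow-up.
    J = (I - A) / t + A
    print(np.clip(J, 0.0, 1.0).shape)
    ```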

  11. A resolution measure for three-dimensional microscopy

    PubMed Central

    Chao, Jerry; Ram, Sripad; Abraham, Anish V.; Ward, E. Sally; Ober, Raimund J.

    2009-01-01

    A three-dimensional (3D) resolution measure for the conventional optical microscope is introduced which overcomes the drawbacks of the classical 3D (axial) resolution limit. Formulated within the context of a parameter estimation problem and based on the Cramer-Rao lower bound, this 3D resolution measure indicates the accuracy with which a given distance between two objects in 3D space can be determined from the acquired image. It predicts that, given enough photons from the objects of interest, arbitrarily small distances of separation can be estimated with prespecified accuracy. Using simulated images of point source pairs, we show that the maximum likelihood estimator is capable of attaining the accuracy predicted by the resolution measure. We also demonstrate how different factors, such as extraneous noise sources and the spatial orientation of the imaged object pair, can affect the accuracy with which a given distance of separation can be determined. PMID:20161040
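
    The photon-count scaling behind such bounds is easy to illustrate in one dimension: for a Gaussian PSF of width s and N detected photons, the Fisher information for the source position is N/s², so the best achievable accuracy improves as s/√N. The numbers below are illustrative; the paper's measure is the full 3D, two-object generalization of this idea.

    ```python
    import numpy as np

    s = 100.0                       # PSF width (nm), assumed
    for N in (100, 1_000, 10_000):
        crlb = s / np.sqrt(N)       # Cramer-Rao lower bound on localization std
        print(f"N = {N:>6} photons -> accuracy bound ≈ {crlb:.1f} nm")
    ```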

  12. Parameter and input data uncertainty estimation for the assessment of water resources in two sub-basins of the Limpopo River Basin

    NASA Astrophysics Data System (ADS)

    Oosthuizen, Nadia; Hughes, Denis A.; Kapangaziwiri, Evison; Mwenge Kahinda, Jean-Marc; Mvandaba, Vuyelwa

    2018-05-01

    The demand for water resources is rapidly growing, placing more strain on access to water and its management. In order to appropriately manage water resources, there is a need to accurately quantify available water resources. Unfortunately, the data required for such assessment are frequently far from sufficient in terms of availability and quality, especially in southern Africa. In this study, the uncertainty related to the estimation of water resources of two sub-basins of the Limpopo River Basin - the Mogalakwena in South Africa and the Shashe shared between Botswana and Zimbabwe - is assessed. Input data (and model parameters) are significant sources of uncertainty that should be quantified. In southern Africa, water use data are among the most unreliable sources of model input data because available databases generally consist of only licensed information and actual use is generally unknown. The study assesses how these uncertainties impact the estimation of surface water resources of the sub-basins. Data on farm reservoirs and irrigated areas from various sources were collected and used to run the model. Many farm dams and large irrigation areas are located in the upper parts of the Mogalakwena sub-basin. Results indicate that water use uncertainty is small. Nevertheless, the medium to low flows are clearly impacted. The simulated mean monthly flows at the outlet of the Mogalakwena sub-basin were between 22.62 and 24.68 Mm3 per month when incorporating only the uncertainty related to the main physical runoff-generating parameters. The range of total predictive uncertainty of the model increased to between 22.15 and 24.99 Mm3 when water use data such as small farm dams, large reservoirs, and irrigation were included. For the Shashe sub-basin, incorporating only uncertainty related to the main runoff parameters resulted in mean monthly flows between 11.66 and 14.54 Mm3. The range of predictive uncertainty changed to between 11.66 and 17.72 Mm3 after the uncertainty in water use information was added.

  13. Inverse modelling-based reconstruction of the Chernobyl source term available for long-range transport

    NASA Astrophysics Data System (ADS)

    Davoine, X.; Bocquet, M.

    2007-03-01

    The reconstruction of the Chernobyl accident source term has previously been carried out using core inventories, but also back-and-forth comparisons between model simulations and measurements of activity concentration or deposited activity. The approach presented in this paper is based on inverse modelling techniques. It relies both on the activity concentration measurements and on the adjoint of a chemistry-transport model. The location of the release is assumed to be known, and one is looking for a source term available for long-range transport that depends both on time and altitude. The method relies on the maximum entropy on the mean principle and exploits source positivity. The inversion results are mainly sensitive to two tuning parameters, a mass scale and the scale of the prior errors in the inversion. To overcome this difficulty, we resort to the statistical L-curve method to estimate balanced values for these two parameters. Once this is done, many of the retrieved features of the source are robust within a reasonable range of parameter values. Our results favour the acknowledged three-step scenario, with a strong initial release (26 to 27 April), followed by a weak emission period of four days (28 April-1 May) and again a release, longer but less intense than the initial one (2 May-6 May). The retrieved quantities of iodine-131, caesium-134 and caesium-137 that have been released are in good agreement with the latest reported estimations. Yet, a stronger apportionment of the total released activity is ascribed to the first period and less to the third one. Finer chronological details are obtained, such as a sequence of eruptive episodes in the first two days, likely related to the modulation of the boundary layer diurnal cycle. In addition, the first two-day release surges are found to have effectively reached an altitude up to the top of the domain (5000 m).
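
    The L-curve idea can be sketched in the classical Tikhonov setting (not the maximum-entropy inversion used here): trace the log residual norm against the log solution norm over a grid of regularization weights and take the point of maximum curvature as the balanced choice.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    # Toy linear inverse problem; matrix, model, and noise are assumptions.
    G = rng.standard_normal((60, 100))
    m_true = np.concatenate([np.zeros(40), np.ones(20), np.zeros(40)])
    d = G @ m_true + 0.1 * rng.standard_normal(60)

    lams = np.logspace(-6, 2, 60)
    rho, eta = [], []
    for lam in lams:
        m = np.linalg.solve(G.T @ G + lam * np.eye(100), G.T @ d)
        rho.append(np.log(np.linalg.norm(d - G @ m)))   # log residual norm
        eta.append(np.log(np.linalg.norm(m)))           # log solution norm
    rho, eta = np.array(rho), np.array(eta)

    # Corner of the L-curve: point of maximum curvature of (rho, eta).
    d1r, d1e = np.gradient(rho), np.gradient(eta)
    d2r, d2e = np.gradient(d1r), np.gradient(d1e)
    kappa = (d1r * d2e - d2r * d1e) / (d1r**2 + d1e**2) ** 1.5
    i = 2 + np.argmax(np.abs(kappa[2:-2]))   # skip np.gradient edge effects
    print(f"L-curve corner at lambda = {lams[i]:.2e}")
    ```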

  14. Engineering applications of strong ground motion simulation

    NASA Astrophysics Data System (ADS)

    Somerville, Paul

    1993-02-01

    The formulation, validation and application of a procedure for simulating strong ground motions for use in engineering practice are described. The procedure uses empirical source functions (derived from near-source strong motion recordings of small earthquakes) to provide a realistic representation of effects such as source radiation that are difficult to model at high frequencies due to their partly stochastic behavior. Wave propagation effects are modeled using simplified Green's functions that are designed to transfer empirical source functions from their recording sites to those required for use in simulations at a specific site. The procedure has been validated against strong motion recordings of both crustal and subduction earthquakes. For the validation process we choose earthquakes whose source models (including a spatially heterogeneous distribution of the slip of the fault) are independently known and which have abundant strong motion recordings. A quantitative measurement of the fit between the simulated and recorded motion in this validation process is used to estimate the modeling and random uncertainty associated with the simulation procedure. This modeling and random uncertainty is one part of the overall uncertainty in estimates of ground motions of future earthquakes at a specific site derived using the simulation procedure. The other contribution to uncertainty is that due to uncertainty in the source parameters of future earthquakes that affect the site, which is estimated from a suite of simulations generated by varying the source parameters over their ranges of uncertainty. In this paper, we describe the validation of the simulation procedure for crustal earthquakes against strong motion recordings of the 1989 Loma Prieta, California, earthquake, and for subduction earthquakes against the 1985 Michoacán, Mexico, and Valparaiso, Chile, earthquakes. We then show examples of the application of the simulation procedure to the estimation of the design response spectra for crustal earthquakes at a power plant site in California and for subduction earthquakes in the Seattle-Portland region. We also demonstrate the use of simulation methods for modeling the attenuation of strong ground motion, and show evidence of the effect of critical reflections from the lower crust in causing the observed flattening of the attenuation of strong ground motion from the 1988 Saguenay, Quebec, and 1989 Loma Prieta earthquakes.
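
    The core operation - transferring an empirical source function to a target site through a simplified Green's function - is, at its simplest, a convolution. A schematic with invented placeholder signals:

    ```python
    import numpy as np

    dt = 0.01
    t = np.arange(0, 10, dt)
    # Synthetic wavelet standing in for a small-earthquake empirical
    # source function, and a two-arrival simplified Green's function.
    source_fn = np.exp(-((t - 1.0) / 0.2) ** 2) * np.sin(2 * np.pi * 3 * t)
    greens_fn = np.zeros_like(t)
    for delay, amp in ((2.0, 1.0), (3.5, 0.4)):     # direct + later arrival
        greens_fn[int(delay / dt)] = amp

    # Simulated motion at the target site as the convolution of the two.
    simulated = np.convolve(source_fn, greens_fn)[: t.size] * dt
    print(f"peak motion at t ≈ {t[np.argmax(np.abs(simulated))]:.2f} s")
    ```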

  15. Source parameter estimates of echolocation clicks from wild pygmy killer whales (Feresa attenuata) (L)

    NASA Astrophysics Data System (ADS)

    Madsen, P. T.; Kerr, I.; Payne, R.

    2004-10-01

    Pods of the little-known pygmy killer whale (Feresa attenuata) in the northern Indian Ocean were recorded with a vertical hydrophone array connected to a digital recorder sampling at 320 kHz. Recorded clicks were directional, short (25 μs) transients with estimated source levels between 197 and 223 dB re 1 μPa (pp). Spectra of clicks recorded close to or on the acoustic axis were bimodal, with peak frequencies between 45 and 117 kHz and centroid frequencies between 70 and 85 kHz. The clicks share characteristics of echolocation clicks from similar-sized, whistling delphinids, and have properties suited for the detection and classification of prey targeted by this odontocete.

  16. Sourcing for Parameter Estimation and Study of Logistic Differential Equation

    ERIC Educational Resources Information Center

    Winkel, Brian J.

    2012-01-01

    This article offers modelling opportunities in which the phenomena of the spread of disease, perception of changing mass, growth of technology, and dissemination of information can be described by one differential equation--the logistic differential equation. It presents two simulation activities for students to generate real data, as well as…
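
    Where the logistic equation is the model, a short numerical experiment is easy to set up. The sketch below integrates dP/dt = rP(1 − P/K) and checks it against the closed-form solution; r, K, and P0 are illustrative values.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    # Logistic growth with illustrative rate, carrying capacity, and start.
    r, K, P0 = 0.8, 1000.0, 10.0
    sol = solve_ivp(lambda t, P: r * P * (1 - P / K), (0, 15), [P0],
                    t_eval=np.linspace(0, 15, 6))
    # Closed form for comparison: P(t) = K / (1 + (K/P0 - 1) e^{-rt})
    exact = K / (1 + (K / P0 - 1) * np.exp(-r * sol.t))
    print(np.c_[sol.t, sol.y[0], exact])
    ```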

  17. Assimilating Leaf Area Index Estimates from Remote Sensing into the Simulations of a Cropping Systems Model

    USDA-ARS?s Scientific Manuscript database

    Spatial extrapolation of cropping systems models for regional crop growth and water use assessment and farm-level precision management has been limited by the vast model input requirements and the model sensitivity to parameter uncertainty. Remote sensing has been proposed as a viable source of spat...

  18. Games between stakeholders and the payment for ecological services: evidence from the Wuxijiang River reservoir area in China

    PubMed Central

    2018-01-01

    A gambling or “game” phenomenon can be observed in the complex relationship between sources and receptors of ecological compensation among multiple stakeholders. This paper investigates the problem of gambling to determine payment amounts, and details a method to estimate the ecological compensation amount related to water resources in the Wuxijiang River reservoir area in China. Public statistics and first-hand data obtained from a field investigation were used as data sources. Estimation of the source and receptor amount of ecological compensation relevant to the water resource being investigated was achieved using the contingent valuation method (CVM). The ecological compensation object and its benefit and gambling for the Wuxijiang River water source area are also analyzed in this paper. According to the results of a CVM survey, the ecological compensation standard for the Wuxijiang River was determined by the CVM, and the amount of compensation was estimated. Fifteen blocks downstream of the Wuxijiang River and 12 blocks in the water source area were used as samples to administer a survey that estimated the willingness to pay (WTP) and the willingness to accept (WTA) the ecological compensation of the Wuxijiang River, using both nonparametric and parametric estimation. Finally, the theoretical value of the ecological compensation amount was estimated. Without taking other factors into account, the WTP of residents in the Wuxi River water source area was 297.48 yuan per year, while the WTA was 3864.48 yuan per year. The theoretical standard of ecological compensation is 2294.39–2993.81 yuan per year. With other factors included in the parametric estimation, the WTP of residents in the Wuxi River water source area was 528.72 yuan per year, while the WTA was 1514.04 yuan per year. The theoretical standard of ecological compensation is 4076.25–5434.99 yuan per year. The main factors influencing the WTP for ecological compensation in the Wuxi River basin are annual income and age. The main factors affecting WTA are gender and attention to the environment, age, marital status, local birth, and location in the main village. PMID:29568707

  19. Games between stakeholders and the payment for ecological services: evidence from the Wuxijiang River reservoir area in China.

    PubMed

    Shu, Lin

    2018-01-01

    A gambling or "game" phenomenon can be observed in the complex relationship between sources and receptors of ecological compensation among multiple stakeholders. This paper investigates the problem of gambling to determine payment amounts, and details a method to estimate the ecological compensation amount related to water resources in the Wuxijiang River reservoir area in China. Public statistics and first-hand data obtained from a field investigation were used as data sources. Estimation of the source and receptor amount of ecological compensation relevant to the water resource being investigated was achieved using the contingent valuation method (CVM). The ecological compensation object and its benefit and gambling for the Wuxijiang River water source area are also analyzed in this paper. According to the results of a CVM survey, the ecological compensation standard for the Wuxijiang River was determined by the CVM, and the amount of compensation was estimated. Fifteen blocks downstream of the Wuxijiang River and 12 blocks in the water source area were used as samples to administer a survey that estimated the willingness to pay (WTP) and the willingness to accept (WTA) the ecological compensation of the Wuxijiang River, using both nonparametric and parametric estimation. Finally, the theoretical value of the ecological compensation amount was estimated. Without taking other factors into account, the WTP of residents in the Wuxi River water source area was 297.48 yuan per year, while the WTA was 3864.48 yuan per year. The theoretical standard of ecological compensation is 2294.39-2993.81 yuan per year. With other factors included in the parametric estimation, the WTP of residents in the Wuxi River water source area was 528.72 yuan per year, while the WTA was 1514.04 yuan per year. The theoretical standard of ecological compensation is 4076.25-5434.99 yuan per year. The main factors influencing the WTP for ecological compensation in the Wuxi River basin are annual income and age. The main factors affecting WTA are gender and attention to the environment, age, marital status, local birth, and location in the main village.

  20. Mathematical model for Trametes versicolor growth in submerged cultivation.

    PubMed

    Tisma, Marina; Sudar, Martina; Vasić-Racki, Durda; Zelić, Bruno

    2010-08-01

    Trametes versicolor is a white-rot fungus known as a producer of extracellular enzymes such as laccase, manganese peroxidase, and lignin peroxidase. The production of these enzymes requires detailed knowledge of the growth characteristics and physiology of the fungus. Submerged cultivations of T. versicolor on glucose, fructose, and sucrose as sole carbon sources were performed in shake flasks. Sucrose hydrolysis catalyzed by whole cells of T. versicolor was treated as a one-step enzymatic reaction described by Michaelis-Menten kinetics. Kinetic parameters of the invertase-catalyzed sucrose hydrolysis were estimated (K(m) = 7.99 g dm(-3) and V(m) = 0.304 h(-1)). The Monod model was used to describe the kinetics of T. versicolor growth on glucose and fructose as sole carbon sources. Growth-associated model parameters were estimated from experimental results obtained in independent experiments (mu(G)(max) = 0.14 h(-1), K(G)(S) = 8.06 g dm(-3), mu(F)(max) = 0.37 h(-1), and K(F)(S) = 54.8 g dm(-3)). The developed mathematical model is in good agreement with the experimental results.
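
    Using the growth parameters reported above for glucose, a Monod growth simulation is straightforward; the yield coefficient and initial conditions below are illustrative assumptions, not values from the paper.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    # Monod growth on glucose with the reported parameters:
    # mu_max = 0.14 1/h, K_S = 8.06 g/dm^3. Yield Y_xs and the initial
    # biomass/substrate are assumed for illustration.
    mu_max, K_S, Y_xs = 0.14, 8.06, 0.5

    def rhs(t, y):
        X, S = y                                  # biomass, substrate (g/dm^3)
        mu = mu_max * S / (K_S + S)               # Monod specific growth rate
        return [mu * X, -mu * X / Y_xs]

    sol = solve_ivp(rhs, (0.0, 72.0), [0.1, 10.0], t_eval=np.linspace(0, 72, 7))
    for t, X, S in zip(sol.t, *sol.y):
        print(f"t={t:5.1f} h  X={X:5.2f}  S={S:5.2f}")
    ```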

  1. Software Toolbox Development for Rapid Earthquake Source Optimisation Combining InSAR Data and Seismic Waveforms

    NASA Astrophysics Data System (ADS)

    Isken, Marius P.; Sudhaus, Henriette; Heimann, Sebastian; Steinberg, Andreas; Bathke, Hannes M.

    2017-04-01

    We present a modular open-source software framework (pyrocko, kite, grond; http://pyrocko.org) for rapid InSAR data post-processing and modelling of tectonic and volcanic displacement fields derived from satellite data. Our aim is to ease and streamline the joint optimisation of earthquake observations from InSAR and GPS data together with seismological waveforms for an improved estimation of rupture parameters. Through this approach we can provide finite models of earthquake ruptures and therefore contribute to a timely and better understanding of earthquake kinematics. The new kite module enables fast processing of unwrapped InSAR scenes for source modelling: the spatial sub-sampling and data error/noise estimation for the interferogram are evaluated automatically and interactively. The rupture's near-field surface displacement data are then combined with seismic far-field waveforms and jointly modelled using the pyrocko.gf framework, which allows for fast forward modelling based on pre-calculated elastodynamic and elastostatic Green's functions. Lastly, the grond module supplies a bootstrap-based probabilistic (Monte Carlo) joint optimisation to estimate the parameters and uncertainties of a finite-source earthquake rupture model. We describe the developed and applied methods as an effort to establish a semi-automatic processing and modelling chain. The framework is applied to Sentinel-1 data from the 2016 Central Italy earthquake sequence, where we present the earthquake mechanism and rupture model from which we derive regions of increased Coulomb stress. The open-source software framework is developed at GFZ Potsdam and at the University of Kiel, Germany; it is written in the Python and C programming languages. The toolbox architecture is modular and independent, and can be utilized flexibly for a variety of geophysical problems. This work is conducted within the BridGeS project (http://www.bridges.uni-kiel.de) funded by the German Research Foundation DFG through an Emmy-Noether grant.
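
    The bootstrap-based uncertainty idea can be sketched, vastly simplified and without the pyrocko/grond API, on a linear toy problem: re-weight the observation misfits with random bootstrap weights, re-optimize per realization, and read the parameter uncertainty off the spread of solutions.

    ```python
    import numpy as np

    rng = np.random.default_rng(13)
    # Toy linear "rupture model" fit; matrix and noise level are assumptions.
    G = rng.standard_normal((40, 3))
    m_true = np.array([1.0, -2.0, 0.5])
    d = G @ m_true + 0.1 * rng.standard_normal(40)

    solutions = []
    for _ in range(200):
        w = rng.multinomial(40, np.ones(40) / 40)        # bootstrap weights
        sw = np.sqrt(w)
        # Weighted least squares: minimize sum_i w_i * (d - G m)_i^2.
        m, *_ = np.linalg.lstsq(sw[:, None] * G, sw * d, rcond=None)
        solutions.append(m)
    solutions = np.array(solutions)
    print("mean:", solutions.mean(axis=0).round(2))
    print("std :", solutions.std(axis=0).round(3))
    ```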

  2. Truncated RAP-MUSIC (TRAP-MUSIC) for MEG and EEG source localization.

    PubMed

    Mäkelä, Niko; Stenroos, Matti; Sarvas, Jukka; Ilmoniemi, Risto J

    2018-02-15

    Electrically active brain regions can be located by applying MUltiple SIgnal Classification (MUSIC) to magneto- or electroencephalographic (MEG; EEG) data. We introduce a new MUSIC method, called truncated recursively-applied-and-projected MUSIC (TRAP-MUSIC). It corrects a hidden deficiency of the conventional RAP-MUSIC algorithm, which prevents accurate estimation of the true number of brain-signal sources. The correction is done by applying a sequential dimension reduction to the signal-subspace projection. We show that TRAP-MUSIC significantly improves the performance of MUSIC-type localization; in particular, it successfully and robustly locates active brain regions and estimates their number. We compare TRAP-MUSIC and RAP-MUSIC in simulations with varying key parameters, e.g., signal-to-noise ratio, correlation between source time courses, and the initial estimate for the dimension of the signal space. In addition, we validate TRAP-MUSIC with measured MEG data. We suggest that with the proposed TRAP-MUSIC method, MUSIC-type localization could become more reliable and suitable for various online and offline MEG and EEG applications. Copyright © 2017 Elsevier Inc. All rights reserved.
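
    The subspace scan at the heart of any MUSIC variant can be illustrated in a generic array-processing toy (not the MEG/EEG lead-field setting): the pseudospectrum peaks where candidate topographies align with the signal subspace. The mixing model and noise level below are assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(14)
    n_ch, n_t = 8, 2000
    angles = np.linspace(0, np.pi, 181)
    steer = lambda a: np.cos(np.arange(n_ch) * a)       # toy topography model

    A = np.column_stack([steer(0.8), steer(2.1)])       # two true topographies
    S = rng.standard_normal((2, n_t))
    X = A @ S + 0.5 * rng.standard_normal((n_ch, n_t))

    C = X @ X.T / n_t                                   # sample covariance
    eigvals, eigvecs = np.linalg.eigh(C)
    En = eigvecs[:, :-2]                                # noise subspace (rank-2 signal)

    # MUSIC pseudospectrum: inverse projection onto the noise subspace.
    P = np.array([1.0 / (v @ En @ En.T @ v)
                  for v in (steer(a) / np.linalg.norm(steer(a)) for a in angles)])
    loc = np.where((P[1:-1] > P[:-2]) & (P[1:-1] > P[2:]))[0] + 1
    top = loc[np.argsort(P[loc])[-2:]]
    print("estimated source angles:", np.sort(angles[top]).round(2))
    ```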

  3. Investigating scintillometer source areas

    NASA Astrophysics Data System (ADS)

    Perelet, A. O.; Ward, H. C.; Pardyjak, E.

    2017-12-01

    Scintillometry is an indirect ground-based method for measuring line-averaged surface heat and moisture fluxes on length scales of 0.5-10 km. These length scales are relevant to urban and other complex areas where setting up traditional instrumentation like eddy covariance is logistically difficult. In order to take full advantage of scintillometry, a better understanding of the flux source area is needed. The source area for a scintillometer is typically calculated as a convolution of point sources along the path. A weighting function is then applied along the path to compensate for a total signal contribution that is biased towards the center of the beam path and decreases near the beam ends. While this method of calculating the source area provides an estimate of the contribution of the total flux along the beam, there are still questions regarding the physical meaning of the weighted source area. These questions are addressed using data from an idealized experiment near the Salt Lake City International Airport in northern Utah, U.S.A. The site is a flat agricultural area consisting of two different land uses. This simple heterogeneity in the land use facilitates hypothesis testing related to source areas. Measurements were made with a two-wavelength scintillometer system spanning 740 m, along with three standard open-path infrared gas analyzer-based eddy-covariance stations along the beam path. This configuration allows for direct observations of fluxes along the beam and comparisons to the scintillometer average. The scintillometer system employed measures the refractive index structure parameter of air for two wavelengths of electromagnetic radiation, 880 μm and 1.86 cm, to simultaneously estimate path-averaged heat and moisture fluxes, respectively. Meteorological structure parameters (CT2, Cq2, and CTq) as well as surface fluxes are compared for various amounts of source area overlap between eddy covariance and scintillometry. Additionally, surface properties from LANDSAT 7 & 8 are used to help understand source area composition for different times throughout the experiment.

  4. Traceable measurements of the electrical parameters of solid-state lighting products

    NASA Astrophysics Data System (ADS)

    Zhao, D.; Rietveld, G.; Braun, J.-P.; Overney, F.; Lippert, T.; Christensen, A.

    2016-12-01

    In order to perform traceable measurements of the electrical parameters of solid-state lighting (SSL) products, it is necessary to define the measurement procedures in a technically adequate way and to identify the relevant uncertainty sources. The presently published written standard for SSL products specifies test conditions but lacks an explanation of how adequate these test conditions are. More specifically, both an identification of uncertainty sources and a quantitative uncertainty analysis are absent. This paper fills the related gap in the present written standard. New uncertainty sources with respect to conventional lighting sources are determined and their effects quantified. It is shown that for power measurements, the main uncertainty sources are temperature deviation, power supply voltage distortion, and instability of the SSL product. For current RMS measurements, the influences of bandwidth, the shunt resistor, power supply source impedance, and AC frequency flatness are significant as well. The measurement uncertainty depends not only on the test equipment but is also a function of the characteristics of the device under test (DUT), for example its current harmonics spectrum and input impedance. Therefore, an online calculation tool is provided to help non-electrical experts. Following our procedures, unrealistic uncertainty estimates, unnecessary procedures, and expensive equipment can be avoided.

  5. Dynamic Parameters of the 2015 Nepal Gorkha Mw7.8 Earthquake Constrained by Multi-observations

    NASA Astrophysics Data System (ADS)

    Weng, H.; Yang, H.

    2017-12-01

    Dynamic rupture models can provide detailed insights into rupture physics and help assess future seismic risk. Many studies have attempted to constrain the slip-weakening distance, an important parameter controlling the frictional behavior of rock, for several earthquakes based on dynamic models, kinematic models, and direct estimations from near-field ground motion. However, large uncertainties in the values of the slip-weakening distance remain, mostly because of the intrinsic trade-offs between the slip-weakening distance and fault strength. Here we use a spontaneous dynamic rupture model to constrain the frictional parameters of the 25 April 2015 Mw7.8 Nepal earthquake, combined with multiple seismic observations such as high-rate cGPS data, strong motion data, and kinematic source models. With numerous tests we find that the trade-off patterns of final slip, rupture speed, static GPS ground displacements, and dynamic ground waveforms are quite different. Combining all the seismic constraints, we obtain a robust solution, without a substantial trade-off, for the average slip-weakening distance of 0.6 m, in contrast to a previous kinematic estimate of 5 m. To the best of our knowledge, this is the first time the slip-weakening distance on a seismogenic fault has been robustly determined from seismic observations. The well-constrained frictional parameters may be used in future dynamic models to assess seismic hazard, for example by estimating the peak ground acceleration (PGA). A similar approach could also be applied to other great earthquakes, enabling broad estimation of dynamic parameters in a global perspective that can better reveal the intrinsic physics of earthquakes.
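
    The friction law whose key length scale is constrained here is, in its simplest linear form, easy to write down. In the sketch below, D_c = 0.6 m follows the abstract, while the static and dynamic stress levels are illustrative assumptions.

    ```python
    import numpy as np

    def linear_slip_weakening(slip, tau_s, tau_d, d_c):
        """Shear strength vs. slip: drops linearly from static strength
        tau_s to dynamic strength tau_d over the slip-weakening distance d_c."""
        return tau_d + (tau_s - tau_d) * np.maximum(0.0, 1.0 - slip / d_c)

    slip = np.linspace(0.0, 1.5, 7)          # slip in metres
    print(linear_slip_weakening(slip, tau_s=10.0, tau_d=4.0, d_c=0.6))  # MPa
    ```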

  6. Comparison of actual and seismologically inferred stress drops in dynamic models of microseismicity

    NASA Astrophysics Data System (ADS)

    Lin, Y. Y.; Lapusta, N.

    2017-12-01

    Estimating source parameters for small earthquakes is commonly based on either the Brune or the Madariaga source model. These models assume circular rupture that starts from the center of a fault and spreads axisymmetrically with a constant rupture speed. The resulting stress drops are moment-independent, with large scatter. However, more complex source behaviors are commonly discovered by finite-fault inversions for both large and small earthquakes, including directivity, heterogeneous slip, and non-circular shapes. Recent studies (Noda, Lapusta, and Kanamori, GJI, 2013; Kaneko and Shearer, GJI, 2014; JGR, 2015) have shown that slip heterogeneity and directivity can result in large discrepancies between the actual and estimated stress drops. We explore the relation between the actual and seismologically estimated stress drops for several types of numerically produced microearthquakes. For example, an asperity-type circular fault patch with increasing normal stress towards the middle of the patch, surrounded by a creeping region, is a potentially common microseismicity source. In such models, a number of events rupture the portion of the patch near its circumference, producing ring-like ruptures, before a patch-spanning event occurs. We calculate the far-field synthetic waveforms for our simulated sources and estimate their spectral properties. The distribution of corner frequencies over the focal sphere is markedly different for the ring-like sources compared to the Madariaga model. Furthermore, most waveforms for the ring-like sources are better fitted by a high-frequency fall-off rate different from the commonly assumed value of 2 (from the so-called omega-squared model), with the average value over the focal sphere being 1.5. The application of Brune- or Madariaga-type analysis to these sources results in stress drop estimates that differ from the actual stress drops by a factor of up to 125 in the models we considered. We will report on our current studies of other types of seismic sources, such as repeating earthquakes and foreshock-like events, and whether the potentially realistic and common sources different from the standard Brune and Madariaga models can be identified from their focal spectral signatures and studied using a more tailored seismological analysis.
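
    The standard estimate in question maps a seismic moment and corner frequency to a stress drop; a sketch with a hypothetical Mw 2.0 event shows how strongly the model constant k alone shifts the result between the Brune and Madariaga assumptions.

    ```python
    import numpy as np

    def circular_stress_drop(M0, fc, beta=3500.0, k=0.3724):
        """Stress drop (Pa) from moment M0 (N·m) and corner frequency fc (Hz)
        for a circular source: r = k*beta/fc, delta_sigma = 7*M0/(16*r^3).
        k = 0.3724 for Brune (S waves); k = 0.21 for Madariaga (S waves)."""
        r = k * beta / fc                     # source radius (m)
        return 7.0 * M0 / (16.0 * r**3)

    # Hypothetical microearthquake: Mw 2.0 -> M0 = 10^(1.5*Mw + 9.1) N·m.
    M0 = 10 ** (1.5 * 2.0 + 9.1)
    for k, name in ((0.3724, "Brune"), (0.21, "Madariaga")):
        print(f"{name}: {circular_stress_drop(M0, fc=20.0, k=k)/1e6:.2f} MPa")
    ```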

  7. Parameter Estimation for Gravitational-wave Bursts with the BayesWave Pipeline

    NASA Technical Reports Server (NTRS)

    Becsy, Bence; Raffai, Peter; Cornish, Neil; Essick, Reed; Kanner, Jonah; Katsavounidis, Erik; Littenberg, Tyson B.; Millhouse, Margaret; Vitale, Salvatore

    2017-01-01

    We provide a comprehensive multi-aspect study of the performance of a pipeline used by the LIGO-Virgo Collaboration for estimating parameters of gravitational-wave bursts. We add simulated signals with four different morphologies (sine-Gaussians (SGs), Gaussians, white-noise bursts, and binary black hole signals) to simulated noise samples representing noise of the two Advanced LIGO detectors during their first observing run. We recover them with the BayesWave (BW) pipeline to study its accuracy in sky localization, waveform reconstruction, and estimation of model-independent waveform parameters. BW localizes sources with a level of accuracy comparable for all four morphologies, with the median separation of actual and estimated sky locations ranging from 25.1° to 30.3°. This is a reasonable accuracy in the two-detector case, and is comparable to accuracies of other localization methods studied previously. As BW reconstructs generic transient signals with SG wavelets, it is unsurprising that BW performs best in reconstructing SG and Gaussian waveforms. The BW accuracy in waveform reconstruction increases steeply with the network signal-to-noise ratio (S/N_net), reaching an 85% and a 95% match between the reconstructed and actual waveforms below S/N_net ≈ 20 and S/N_net ≈ 50, respectively, for all morphologies. The BW accuracy in estimating central moments of waveforms is limited by statistical errors in the frequency domain, and is also affected by systematic errors in the time domain, as BW cannot reconstruct low-amplitude parts of signals that are overwhelmed by noise. The figures of merit we introduce can be used in future characterizations of parameter estimation pipelines.

  8. Improving Wind Predictions in the Marine Atmospheric Boundary Layer Through Parameter Estimation in a Single Column Model

    DOE PAGES

    Lee, Jared A.; Hacker, Joshua P.; Monache, Luca Delle; ...

    2016-08-03

    A current barrier to greater deployment of offshore wind turbines is the poor quality of numerical weather prediction model wind and turbulence forecasts over the open ocean. The bulk of development for atmospheric boundary layer (ABL) parameterization schemes has focused on land, partly due to a scarcity of observations over the ocean. The 100-m FINO1 tower in the North Sea is one of the few sources worldwide of atmospheric profile observations from the sea surface to turbine hub height. These observations are crucial to developing a better understanding and modeling of physical processes in the marine ABL. In this paper we use the WRF single-column model (SCM), coupled with an ensemble Kalman filter from the Data Assimilation Research Testbed (DART), to create 100-member ensembles at the FINO1 location. The goal of this study is to determine the extent to which model parameter estimation can improve offshore wind forecasts. Combining two datasets that provide lateral forcing for the SCM and two methods for determining z0, the time-varying sea-surface roughness length, we conduct four WRF-SCM/DART experiments over the October-December 2006 period. The two methods for determining z0 are the default Fairall-adjusted Charnock formulation in WRF, and using parameter estimation techniques to estimate z0 in DART. Using DART to estimate z0 is found to reduce 1-h forecast errors of wind speed over the Charnock-Fairall z0 ensembles by 4%-22%. However, parameter estimation of z0 does not simultaneously reduce turbulent flux forecast errors, indicating limitations of this approach and the need for new marine ABL parameterizations.
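
    The parameter-estimation idea - augmenting the state with the roughness length and letting the ensemble covariance map wind innovations onto it - can be sketched with a toy log-law forecast model. Everything below (friction velocity, observation, prior spread) is an assumption; DART adds localization, inflation, and full filter machinery on top of this.

    ```python
    import numpy as np

    rng = np.random.default_rng(8)
    n_ens = 100
    # Prior ensemble of ln(z0); updating the log keeps z0 positive.
    log_z0 = np.log(2e-4) + 0.5 * rng.standard_normal(n_ens)

    def forecast_wind(log_z0):
        """Toy neutral log-law 10 m wind; friction velocity is assumed."""
        u_star, kappa = 0.3, 0.4
        return (u_star / kappa) * (np.log(10.0) - log_z0)

    wind = forecast_wind(log_z0) + 0.2 * rng.standard_normal(n_ens)
    obs, obs_var = 8.2, 0.25                  # hypothetical tower observation

    # Ensemble Kalman gain maps the wind innovation onto the parameter.
    gain = np.cov(log_z0, wind)[0, 1] / (np.var(wind, ddof=1) + obs_var)
    obs_pert = obs + np.sqrt(obs_var) * rng.standard_normal(n_ens)
    log_z0_post = log_z0 + gain * (obs_pert - wind)
    print(f"z0: prior {np.exp(log_z0).mean():.2e} m -> "
          f"posterior {np.exp(log_z0_post).mean():.2e} m")
    ```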

  9. syris: a flexible and efficient framework for X-ray imaging experiments simulation.

    PubMed

    Faragó, Tomáš; Mikulík, Petr; Ershov, Alexey; Vogelgesang, Matthias; Hänschke, Daniel; Baumbach, Tilo

    2017-11-01

    An open-source framework for conducting a broad range of virtual X-ray imaging experiments, syris, is presented. The simulated wavefield created by a source propagates through an arbitrary number of objects until it reaches a detector. The objects in the light path and the source are time-dependent, which enables simulations of dynamic experiments, e.g. four-dimensional time-resolved tomography and laminography. The high-level interface of syris is written in Python and its modularity makes the framework very flexible. The computationally demanding parts behind this interface are implemented in OpenCL, which enables fast calculations on modern graphics processing units. The combination of flexibility and speed opens new possibilities for studying novel imaging methods and systematic search of optimal combinations of measurement conditions and data processing parameters. This can help to increase the success rates and efficiency of valuable synchrotron beam time. To demonstrate the capabilities of the framework, various experiments have been simulated and compared with real data. To show the use case of measurement and data processing parameter optimization based on simulation, a virtual counterpart of a high-speed radiography experiment was created and the simulated data were used to select a suitable motion estimation algorithm; one of its parameters was optimized in order to achieve the best motion estimation accuracy when applied on the real data. syris was also used to simulate tomographic data sets under various imaging conditions which impact the tomographic reconstruction accuracy, and it is shown how the accuracy may guide the selection of imaging conditions for particular use cases.

  10. D-Optimal Experimental Design for Contaminant Source Identification

    NASA Astrophysics Data System (ADS)

    Sai Baba, A. K.; Alexanderian, A.

    2016-12-01

    Contaminant source identification seeks to estimate the release history of a conservative solute given point concentration measurements at some time after the release. This can be expressed mathematically as an inverse problem, with a linear observation operator or parameter-to-observation map, which we tackle using a Bayesian approach. Acquisition of experimental data can be laborious and expensive. The goal is to control the experimental parameters - in our case, the sparsity of the sensors - to maximize the information gain subject to physical or budget constraints. This is known as optimal experimental design (OED). D-optimal experimental design seeks to maximize the expected information gain, and has long been considered the gold standard in the statistics community. Our goal is to develop scalable methods for D-optimal experimental designs involving large-scale PDE-constrained problems with high-dimensional parameter fields. A major challenge for OED is that a nonlinear optimization algorithm for the D-optimality criterion requires repeated evaluation of an objective function and gradient involving the determinants of large, dense matrices - this cost can be prohibitively expensive for applications of interest. We propose novel randomized matrix techniques that bring down the computational costs of the objective function and gradient evaluations by several orders of magnitude compared to the naive approach. The effect of randomized estimators on the accuracy and the convergence of the optimization solver will be discussed. The features and benefits of our new approach will be demonstrated on a challenging model problem from contaminant source identification involving the inference of the initial condition from spatio-temporal observations in a time-dependent advection-diffusion problem.
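
    A dense-matrix greedy variant of D-optimal sensor selection makes the determinant cost concrete: each candidate evaluation is a log-det gain, which is exactly the kind of quantity the proposed randomized estimators approximate at scale. The problem sizes below are toy assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(9)
    # Linear Bayesian inverse problem d = A m + noise; rows of A are
    # candidate sensors, H accumulates the posterior precision.
    n_param, n_candidates, budget = 50, 200, 10
    A = rng.standard_normal((n_candidates, n_param))
    H = np.eye(n_param)                       # prior precision

    chosen = []
    for _ in range(budget):
        best_gain, best_i = -np.inf, None
        for i in range(n_candidates):
            if i in chosen:
                continue
            a = A[i]
            # Matrix determinant lemma:
            # log det(H + a a^T) - log det(H) = log(1 + a^T H^{-1} a)
            gain = np.log1p(a @ np.linalg.solve(H, a))
            if gain > best_gain:
                best_gain, best_i = gain, i
        chosen.append(best_i)
        H += np.outer(A[best_i], A[best_i])
    print("selected sensors:", sorted(chosen))
    ```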

  11. Frequency-domain elastic full waveform inversion using encoded simultaneous sources

    NASA Astrophysics Data System (ADS)

    Jeong, W.; Son, W.; Pyun, S.; Min, D.

    2011-12-01

    Numerous studies have endeavored to develop robust full waveform inversion and migration algorithms. These processes require enormous computational cost because of the number of sources in a survey. To reduce this cost, Romero (2000) proposed the phase encoding technique for prestack migration, and Krebs et al. (2009) proposed the encoded simultaneous-source inversion technique in the time domain. Ben-Hadj-Ali et al. (2011) demonstrated the robustness of frequency-domain full waveform inversion with simultaneous sources for noisy data under varying source assemblies. Although several studies on simultaneous-source inversion estimated P-wave velocity based on the acoustic wave equation, seismic migration and waveform inversion based on the elastic wave equations are required to obtain more reliable subsurface information. In this study, we propose a 2-D frequency-domain elastic full waveform inversion technique using phase encoding methods. In our algorithm, the random phase encoding method is employed to calculate the gradients of the elastic parameters, the source signature estimate, and the diagonal entries of the approximate Hessian matrix. The crosstalk in the estimated source signature and in the diagonal entries of the approximate Hessian matrix is suppressed over the iterations, as it is for the gradients. Our 2-D frequency-domain elastic waveform inversion algorithm is built on the back-propagation technique and the conjugate-gradient method, and the source signature is estimated using the full Newton method. We compare simultaneous-source inversion with conventional waveform inversion for synthetic data sets of the Marmousi-2 model. The inverted results obtained with simultaneous sources are comparable to those obtained with individual sources, and the source signature is successfully estimated by the simultaneous-source technique. Comparing the inverted results obtained with the pseudo-Hessian matrix to previous results obtained with the approximate Hessian matrix, the latter are better for the deeper parts of the model. This work was financially supported by the Brain Korea 21 project of Energy System Engineering, by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education, Science and Technology (2010-0006155), and by the Energy Efficiency & Resources program of the Korea Institute of Energy Technology Evaluation and Planning (KETEP) grant funded by the Korea government Ministry of Knowledge Economy (No. 2010T100200133).
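
    Random phase encoding combines all individual sources into a single "supershot" so that one simulation replaces many. A minimal sketch of the encoding step, assuming frequency-domain per-source wavefields are available as complex arrays (shapes and names here are illustrative, not the authors' code):

        import numpy as np

        def encode_sources(wavefields, rng=None):
            # wavefields: complex array (n_sources, nz, nx) of per-source
            # solutions at one frequency.  Each source s is multiplied by a
            # random phase exp(i*phi_s), phi_s ~ U(0, 2*pi), before summation.
            rng = np.random.default_rng(rng)
            phases = np.exp(1j * rng.uniform(0.0, 2.0 * np.pi,
                                             wavefields.shape[0]))
            return phases, np.tensordot(phases, wavefields, axes=1)

        u = np.ones((8, 64, 64), dtype=complex)    # toy per-source wavefields
        codes, supershot = encode_sources(u, rng=0)

    The observed data must be encoded with the same phase vector, so that crosstalk terms between different sources carry random phases and average out as the phases are redrawn over the iterations.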

  12. Flow Glottogram Characteristics and Perceived Degree of Phonatory Pressedness.

    PubMed

    Millgård, Moa; Fors, Tobias; Sundberg, Johan

    2016-05-01

    Phonatory pressedness is a clinically relevant aspect of voice, which generally is analyzed by auditory perception. The present investigation aimed at identifying voice source and formant characteristics related to experts' ratings of phonatory pressedness. We conducted an experimental study of the relations between visual analog scale ratings of phonatory pressedness and voice source parameters in healthy voices. Audio, electroglottogram, and subglottal pressure, estimated from oral pressure during /p/ occlusion, were recorded from five female and six male subjects, each of whom deliberately varied phonation type between neutral, flow, and pressed in the syllable /pae/, produced at three loudness levels and three pitches. Speech-language pathologists rated, along a visual analog scale, the degree of perceived phonatory pressedness in these samples. The samples were analyzed by means of inverse filtering with regard to closed quotient, dominance of the voice source fundamental, normalized amplitude quotient, and peak-to-peak flow amplitude, as well as formant frequencies and the alpha ratio of spectrum energy above and below 1000 Hz. Comparison with the rating data showed that the ratings were closely related to the voice source parameters: approximately 70% of the variance of the ratings could be explained by them. A multiple linear regression analysis suggested that perceived phonatory pressedness is related most closely to subglottal pressure, closed quotient, and the two lowest formants.
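
    The multiple-regression step can be reproduced with ordinary least squares. The sketch below uses synthetic stand-ins for the measured predictors (the study's data are not available here); standardizing the columns makes the fitted weights directly comparable across parameters.

        import numpy as np

        rng = np.random.default_rng(0)
        n = 99                                  # toy sample count, not the study's
        psub = rng.uniform(5, 25, n)            # subglottal pressure (stand-in)
        cq = rng.uniform(0.3, 0.7, n)           # closed quotient (stand-in)
        f1 = rng.uniform(400, 900, n)           # first formant (stand-in)
        f2 = rng.uniform(1200, 2000, n)         # second formant (stand-in)
        ratings = 0.6 * psub + 40 * cq + 0.01 * f1 + rng.normal(0, 3, n)

        X = np.column_stack([psub, cq, f1, f2])
        X = (X - X.mean(0)) / X.std(0)          # standardize predictors
        A = np.column_stack([np.ones(n), X])    # design matrix with intercept
        coef, *_ = np.linalg.lstsq(A, ratings, rcond=None)
        r2 = 1 - np.sum((ratings - A @ coef)**2) / \
                 np.sum((ratings - ratings.mean())**2)
        print(coef, r2)                         # weights and variance explained

    The r2 value plays the role of the "approximately 70% of the variance explained" figure reported in the abstract.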

  13. Source parameters and moment tensor of the ML 4.6 earthquake of November 19, 2011, southwest Sharm El-Sheikh, Egypt

    NASA Astrophysics Data System (ADS)

    Mohamed, Gad-Elkareem Abdrabou; Omar, Khaled

    2014-06-01

    The southern part of the Gulf of Suez is one of the most seismically active areas in Egypt. On Saturday, November 19, 2011 at 07:12:15 (GMT), an earthquake of ML 4.6 occurred southwest of Sharm El-Sheikh, Egypt. The quake was felt at Sharm El-Sheikh city, and no casualties were reported. The instrumental epicenter is located at 27.69°N and 34.06°E. The seismic moment is 1.47 × 10^22 dyne cm, corresponding to a moment magnitude Mw 4.1. Assuming a Brune source model, the source radius is 101.36 m, with an average dislocation of 0.015 cm and a stress drop of 0.06 MPa. The source mechanism from a fault plane solution shows a normal fault; the preferred fault plane has strike 358°, dip 34°, and rake -60°. The moment tensor was computed with the ISOLA code. Twenty-seven small and micro-earthquakes (1.5 ⩽ ML ⩽ 4.2) were also recorded by the Egyptian National Seismological Network (ENSN) from the same region. We estimate the source parameters of these earthquakes using displacement spectra. The obtained source parameters include seismic moments of 2.77 × 10^16 to 1.47 × 10^22 dyne cm, stress drops of 0.0005-0.0617 MPa, and relative displacements of 0.0001-0.0152 cm.
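
    Under the Brune model, the source radius follows from the spectral corner frequency, and the stress drop and average dislocation then follow from the moment and radius. A minimal sketch of these standard relations (the corner frequency, shear-wave speed, and rigidity below are assumed for illustration, not taken from the study):

        import numpy as np

        def brune_parameters(M0, fc, beta=3500.0, mu=3.0e10):
            # M0 [N m], fc [Hz]; beta = shear-wave speed [m/s], mu = rigidity [Pa]
            # (typical crustal values assumed as defaults).
            r = 2.34 * beta / (2.0 * np.pi * fc)      # Brune source radius [m]
            stress_drop = 7.0 * M0 / (16.0 * r**3)    # static stress drop [Pa]
            slip = M0 / (mu * np.pi * r**2)           # average dislocation [m]
            return r, stress_drop, slip

        # Mainshock moment from the study (1.47e22 dyne cm = 1.47e15 N m);
        # the corner frequency here is purely illustrative.
        r, dsig, d = brune_parameters(1.47e15, fc=1.5)
        print(f"r = {r:.0f} m, stress drop = {dsig/1e6:.2f} MPa, "
              f"slip = {d*100:.1f} cm")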

  14. As above, so below? Towards understanding inverse models in BCI

    NASA Astrophysics Data System (ADS)

    Lindgren, Jussi T.

    2018-02-01

    Objective. In brain-computer interfaces (BCI), measurements of the user's brain activity are classified into commands for the computer. With EEG-based BCIs, the origins of the classified phenomena are often considered to be spatially localized in the cortical volume and mixed in the EEG. We investigate whether more accurate BCIs can be obtained by reconstructing the source activities in the volume. Approach. We contrast physiology-driven source reconstruction with data-driven representations obtained by statistical machine learning. We explain these approaches in a common linear dictionary framework and review the different ways to obtain the dictionary parameters. We consider the effect of source reconstruction on some major difficulties in BCI classification, namely information loss, feature selection, and nonstationarity of the EEG. Main results. Our analysis suggests that the approaches differ mainly in their parameter estimation. Physiological source reconstruction may thus be expected to improve BCI accuracy if machine learning is not used, or where machine learning produces suboptimal parameters. We argue that the considered difficulties of surface EEG classification can remain in the reconstructed volume and that data-driven techniques are still necessary. Finally, we provide some suggestions for comparing the approaches. Significance. The present work illustrates the relationships between source reconstruction and machine learning-based approaches to EEG data representation. The provided analysis and discussion should help in understanding, applying, comparing, and improving such techniques in the future.
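
    In the linear dictionary view, a measured EEG sample x is modeled as x = A s + noise, where the columns of A are spatial patterns and s the source activities; physiology-driven reconstruction takes A from a head model, while machine learning estimates it from data. A minimal sketch of a regularized minimum-norm estimate, with a random stand-in for the lead field (all sizes and values illustrative):

        import numpy as np

        rng = np.random.default_rng(1)
        n_elec, n_src = 32, 500                   # electrodes, sources (toy sizes)
        A = rng.standard_normal((n_elec, n_src))  # stand-in lead field/dictionary
        s_true = np.zeros(n_src)
        s_true[[40, 321]] = [1.0, -0.7]           # two active sources
        x = A @ s_true + 0.05 * rng.standard_normal(n_elec)   # "measured" EEG

        # Minimum-norm estimate: s_hat = A^T (A A^T + lam I)^{-1} x
        lam = 0.1 * np.trace(A @ A.T) / n_elec    # regularization, data-scaled
        s_hat = A.T @ np.linalg.solve(A @ A.T + lam * np.eye(n_elec), x)

    The same algebra applies whether A comes from a physical head model or a learned decomposition, which is the sense in which the two approaches differ mainly in parameter estimation.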

  15. An improved Rosetta pedotransfer function and evaluation in earth system models

    NASA Astrophysics Data System (ADS)

    Zhang, Y.; Schaap, M. G.

    2017-12-01

    Soil hydraulic parameters are often difficult and expensive to measure, making pedotransfer functions (PTFs) an attractive alternative for predicting them. Rosetta (Schaap et al., 2001; denoted here as Rosetta1) is a widely used set of PTFs based on artificial neural network (ANN) analysis coupled with bootstrap re-sampling, which allows the estimation of van Genuchten water retention parameters (van Genuchten, 1980; abbreviated here as VG) and saturated hydraulic conductivity (Ks), together with their uncertainties. We present improved hierarchical pedotransfer functions (Rosetta3) that unify the VG water retention and Ks submodels into one, thus allowing the estimation of univariate and bivariate probability distributions of the estimated parameters. Results show that the estimation bias of moisture content is reduced significantly. Rosetta1 and Rosetta3 were implemented in the Python programming language, and the source code is available online. Based on different soil water retention equations, diverse PTFs are used in different disciplines of earth system modeling. PTFs based on Campbell [1974] or Clapp and Hornberger [1978] are frequently used in land surface models and general circulation models, whereas van Genuchten [1980]-based PTFs are more widely used in hydrology and soil science. We use an independent global-scale soil database to evaluate the performance of these diverse PTFs. The PTFs are evaluated against different soil and environmental characteristics, such as soil texture, soil organic carbon, soil pH, precipitation, and soil temperature. This analysis provides more quantitative estimation-error information for PTF predictions across the different disciplines of earth system modeling.
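
    The VG retention curve whose parameters Rosetta predicts has a simple closed form, sketched below; the parameter values in the example are illustrative of a loam-like soil, not Rosetta output.

        import numpy as np

        def vg_theta(h, theta_r, theta_s, alpha, n):
            # van Genuchten (1980): volumetric water content at pressure head h
            # [cm, negative under suction]; alpha [1/cm], n > 1, m = 1 - 1/n.
            m = 1.0 - 1.0 / n
            Se = (1.0 + (alpha * np.abs(h)) ** n) ** (-m)   # effective saturation
            return theta_r + (theta_s - theta_r) * Se

        h = -np.logspace(0, 4, 50)                 # 1 to 10^4 cm suction
        theta = vg_theta(h, theta_r=0.061, theta_s=0.399, alpha=0.011, n=1.47)

    A hierarchical PTF predicts (theta_r, theta_s, alpha, n, Ks) jointly from whatever inputs are available (texture class, sand/silt/clay fractions, bulk density), which is what allows bivariate distributions of the estimated parameters.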

  16. Online estimation of the wavefront outer scale profile from adaptive optics telemetry

    NASA Astrophysics Data System (ADS)

    Guesalaga, A.; Neichel, B.; Correia, C. M.; Butterley, T.; Osborn, J.; Masciadri, E.; Fusco, T.; Sauvage, J.-F.

    2017-02-01

    We describe an online method to estimate the wavefront outer scale profile, L0(h), for very large and future extremely large telescopes. The stratified information on this parameter impacts the estimation of the main turbulence parameters (turbulence strength, Cn2(h); Fried's parameter, r0; isoplanatic angle, θ0; and coherence time, τ0) and determines the performance of wide-field adaptive optics (AO) systems. The technique estimates L0(h) using data from the AO loop available at the facility instruments, by constructing the cross-correlation functions of the slopes between two or more wavefront sensors, which are then fitted to a linear combination of simulated theoretical layers with different altitudes and outer scale values. We analyse some limitations of the estimation process: (i) its insensitivity to large values of L0(h), as the telescope becomes blind to outer scales larger than its diameter; (ii) the maximum number of observable layers, given the limited number of independent inputs that the cross-correlation functions provide; and (iii) the minimum length of data required for satisfactory convergence of the turbulence parameters without breaking the assumption of statistical stationarity of the turbulence. The method is applied to the Gemini South multiconjugate AO system, which comprises five wavefront sensors and two deformable mirrors. Statistics of L0(h) at Cerro Pachón from data acquired during three years of campaigns show an interesting resemblance to other independent results in the literature. A final analysis suggests that the impact of error sources will be substantially reduced in instruments of the next generation of giant telescopes.
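
    The fitting step amounts to expressing the measured cross-correlation as a nonnegative combination of precomputed per-layer responses. A minimal sketch with random stand-ins for the layer basis (in the real method each basis column would come from simulating one trial altitude and outer-scale value):

        import numpy as np
        from scipy.optimize import nnls

        rng = np.random.default_rng(2)
        n_samples, n_basis = 400, 12
        # B[:, j]: flattened theoretical slope cross-correlation for layer j
        # (random stand-in here); c: measured cross-correlation from the AO loop.
        B = rng.standard_normal((n_samples, n_basis))
        w_true = np.zeros(n_basis)
        w_true[[2, 7]] = [1.5, 0.6]                # two active layers
        c = B @ w_true + 0.05 * rng.standard_normal(n_samples)

        w, residual = nnls(B, c)   # nonnegative layer weights (strengths)

    The limited number of independent basis vectors that fit the data well is one way to see limitation (ii), the maximum number of observable layers.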

  17. Adaptive MCMC in Bayesian phylogenetics: an application to analyzing partitioned data in BEAST.

    PubMed

    Baele, Guy; Lemey, Philippe; Rambaut, Andrew; Suchard, Marc A

    2017-06-15

    Advances in sequencing technology continue to deliver increasingly large molecular sequence datasets that are often heavily partitioned in order to accurately model the underlying evolutionary processes. In phylogenetic analyses, partitioning strategies involve estimating conditionally independent models of molecular evolution for different genes and different positions within those genes. This requires a large number of evolutionary parameters to be estimated, leading to an increased computational burden for such analyses. The past two decades have also seen the rise of multi-core processors, in both the central processing unit (CPU) and graphics processing unit (GPU) markets, enabling massively parallel computations that are not yet fully exploited by many software packages for multi-partition analyses. Here we propose a Markov chain Monte Carlo (MCMC) approach that uses an adaptive multivariate transition kernel to estimate in parallel a large number of parameters, split across partitioned data, by exploiting multi-core processing. Across several real-world examples, we demonstrate that our approach enables the estimation of these partition-specific parameters more efficiently than standard approaches that typically use a mixture of univariate transition kernels. In one case, when estimating the relative rate parameter of the non-coding partition in a heterochronous dataset, MCMC integration efficiency improves by more than 14-fold. Our implementation is part of the BEAST code base, a widely used open-source software package for Bayesian phylogenetic inference.
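
    An adaptive multivariate kernel tunes a Gaussian proposal to the empirical covariance of the chain so that correlated parameters are proposed jointly. A minimal single-core sketch of the classic adaptive Metropolis scheme of Haario et al. (2001); the BEAST implementation differs in details such as incremental covariance updates and parallel likelihood evaluation:

        import numpy as np

        def adaptive_metropolis(logpost, x0, n_iter=5000, adapt_start=500,
                                rng=None):
            rng = np.random.default_rng(rng)
            d = len(x0)
            chain = np.empty((n_iter, d))
            chain[0], lp = x0, logpost(x0)
            cov = 0.1 * np.eye(d)                    # initial proposal covariance
            for t in range(1, n_iter):
                if t > adapt_start:                  # adapt from chain history
                    cov = 2.38**2 / d * np.cov(chain[:t].T) + 1e-8 * np.eye(d)
                prop = rng.multivariate_normal(chain[t - 1], cov)
                lp_prop = logpost(prop)
                if np.log(rng.uniform()) < lp_prop - lp:   # accept/reject
                    chain[t], lp = prop, lp_prop
                else:
                    chain[t] = chain[t - 1]
            return chain

        # Toy target: a correlated 3-D Gaussian standing in for per-partition
        # rate parameters.
        Sigma = np.array([[1.0, 0.8, 0.0], [0.8, 1.0, 0.3], [0.0, 0.3, 1.0]])
        P = np.linalg.inv(Sigma)
        samples = adaptive_metropolis(lambda x: -0.5 * x @ P @ x, np.zeros(3))

    Recomputing the covariance from the full history, as done here for brevity, would be replaced by an O(d^2) incremental update in any production sampler.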

  18. Uncertainty for calculating transport on Titan: A probabilistic description of bimolecular diffusion parameters

    NASA Astrophysics Data System (ADS)

    Plessis, S.; McDougall, D.; Mandt, K.; Greathouse, T.; Luspay-Kuti, A.

    2015-11-01

    Bimolecular diffusion coefficients are important parameters used by atmospheric models to calculate altitude profiles of minor constituents in an atmosphere. Unfortunately, laboratory measurements of these coefficients were never conducted at temperature conditions relevant to the atmosphere of Titan. Here we conduct a detailed uncertainty analysis of the bimolecular diffusion coefficient parameters as applied to Titan's upper atmosphere, to better understand the impact of the uncertainty in this parameter on models. Because the relevant temperature and pressure conditions are much lower than the laboratory conditions under which the bimolecular diffusion parameters were measured, we apply a Bayesian framework - a problem-agnostic approach - to determine parameter estimates and associated uncertainties. We solve the Bayesian calibration problem using the open-source QUESO library, which also propagates the uncertainties in the calibrated parameters to the temperature and pressure conditions observed in Titan's upper atmosphere. Our results show that, after propagating uncertainty through the Massman model, the uncertainty in molecular diffusion is highly correlated with temperature, and we observe no noticeable correlation with pressure. We propagate the calibrated molecular diffusion estimate and its associated uncertainty to obtain an estimate, with uncertainty due to bimolecular diffusion, of the methane molar fraction as a function of altitude. The results show that the uncertainty in methane abundance due to molecular diffusion is in general small compared to that from eddy diffusion and the chemical kinetics description. However, methane abundance is most sensitive to uncertainty in molecular diffusion above 1200 km, where the errors are nontrivial and could have important implications for scientific research based on diffusion models in this altitude range.
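
    Propagating the calibrated parameter uncertainty is a Monte Carlo exercise: draw parameter samples from the posterior and evaluate the diffusion law at Titan conditions. The sketch below assumes a Massman-type power-law form and a toy Gaussian posterior; in the study, the actual posterior comes from the QUESO calibration.

        import numpy as np

        rng = np.random.default_rng(3)

        def diffusion(T, p, D0, s, T0=273.15, p0=1.01325e5):
            # Power-law form used by Massman-type models: D scales inversely
            # with pressure and as a power s of temperature.
            return D0 * (p0 / p) * (T / T0) ** s

        # Stand-in for a calibrated joint posterior of (D0, s); toy Gaussians.
        n_samp = 10000
        D0 = rng.normal(2.0e-5, 1.5e-6, n_samp)    # m^2/s at reference state
        s = rng.normal(1.75, 0.08, n_samp)

        # Propagate to conditions representative of Titan's upper atmosphere
        # (illustrative values, not the study's profiles).
        T_titan, p_titan = 150.0, 1.0e-2           # K, Pa
        D = diffusion(T_titan, p_titan, D0, s)
        print(np.mean(D), np.percentile(D, [2.5, 97.5]))

    Because the extrapolated temperature enters through the exponent s, the spread in D grows with the distance from laboratory conditions, which is why the correlation with temperature dominates.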

  19. A Simple Model of Global Aerosol Indirect Effects

    NASA Technical Reports Server (NTRS)

    Ghan, Steven J.; Smith, Steven J.; Wang, Minghuai; Zhang, Kai; Pringle, Kirsty; Carslaw, Kenneth; Pierce, Jeffrey; Bauer, Susanne; Adams, Peter

    2013-01-01

    Most estimates of the global mean indirect effect of anthropogenic aerosol on the Earth's energy balance are from simulations by global models of the aerosol lifecycle coupled with global models of clouds and the hydrologic cycle. Extremely simple models have been developed for integrated assessment models, but these lack the flexibility to distinguish between primary and secondary sources of aerosol. Here a simple but more physically based model expresses the aerosol indirect effect (AIE) using analytic representations of cloud and aerosol distributions and processes. Although the simple model is able to produce estimates of AIEs that are comparable to those from some global aerosol models using the same global mean aerosol properties, the estimates by the simple model are sensitive to the preindustrial cloud condensation nuclei concentration, the preindustrial accumulation mode radius, the width of the accumulation mode, the size of primary particles, cloud thickness, primary and secondary anthropogenic emissions, the fraction of the secondary anthropogenic emissions that accumulates on the coarse mode, the fraction of the secondary mass that forms new particles, and the sensitivity of liquid water path to droplet number concentration. Estimates of present-day AIEs as strong as -5 W/sq m and as weak as -0.3 W/sq m are obtained for plausible sets of parameter values. Estimates are surprisingly linear in emissions. The estimates depend on parameter values in ways that are consistent with results from detailed global aerosol-climate simulation models, which adds to our understanding of the dependence of AIE uncertainty on uncertainty in parameter values.
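
    The first (Twomey) component of such a simple model has a closed form: the cloud albedo perturbation is dA = A(1-A)/3 · dln(Nd), scaled by insolation and cloud fraction to give a forcing. A toy sketch under that assumption (all values illustrative; the actual model includes many more aerosol and cloud processes):

        import numpy as np

        def twomey_aie(N_pi, N_pd, albedo=0.5, cloud_frac=0.3, S0=340.0):
            # Albedo susceptibility dA = A(1-A)/3 * dln(Nd), converted to a
            # global-mean shortwave forcing [W/m^2]; S0 is mean TOA insolation.
            dA = albedo * (1.0 - albedo) / 3.0 * np.log(N_pd / N_pi)
            return -S0 * cloud_frac * dA

        # Preindustrial vs present-day droplet number (illustrative values):
        print(twomey_aie(N_pi=100.0, N_pd=130.0))   # about -2.2 W/m^2 here

    Written this way, the strong dependence on the preindustrial droplet (and hence cloud condensation nuclei) concentration is explicit: it sits inside the logarithm's denominator.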

  20. Autocorrelated residuals in inverse modelling of soil hydrological processes: a reason for concern or something that can safely be ignored?

    NASA Astrophysics Data System (ADS)

    Scharnagl, Benedikt; Durner, Wolfgang

    2013-04-01

    Models are inherently imperfect because they simplify processes that are themselves imperfectly known and understood. Moreover, the input variables and parameters needed to run a model are typically subject to various sources of error. As a consequence of these imperfections, model predictions will always deviate from corresponding observations. In most applications in soil hydrology, these deviations are clearly not random but rather show a systematic structure. From a statistical point of view, this systematic mismatch is a reason for concern because it violates one of the basic assumptions made in inverse parameter estimation: the assumption of independence of the residuals. But what are the consequences of simply ignoring the autocorrelation in the residuals, as is current practice in soil hydrology? Are the parameter estimates still valid even though the statistical foundation they are based on has partially collapsed? Theory and practical experience from other fields of science have shown that violation of the independence assumption results in overconfident uncertainty bounds and that in some cases it may lead to significantly different optimal parameter values. In our contribution, we present three soil hydrological case studies in which the effect of autocorrelated residuals on the estimated parameters was investigated in detail. We explicitly accounted for autocorrelated residuals using a formal likelihood function that incorporates an autoregressive model. The inverse problem was posed in a Bayesian framework, and the posterior probability density function of the parameters was estimated using Markov chain Monte Carlo simulation. In contrast to many other studies in related fields of science, and quite surprisingly, we found that the first-order autoregressive model, often abbreviated as AR(1), did not work well in the soil hydrological setting. We show that a second-order autoregressive, or AR(2), model performs much better in these applications, leading to parameter and uncertainty estimates that satisfy all the underlying statistical assumptions. For theoretical reasons, these estimates are deemed more reliable than estimates based on neglecting the autocorrelation in the residuals. Consistent with theory and with results reported in the literature, our results showed that parameter uncertainty bounds were substantially wider when autocorrelation in the residuals was explicitly accounted for, and the optimal parameter values were also slightly different in this case. We argue that the autoregressive model presented here should be used as a matter of routine in inverse modeling of soil hydrological processes.
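
    A formal likelihood with an AR(2) error model can be written down directly: the innovations obtained by whitening the residuals are treated as iid Gaussian. A minimal sketch, conditional on the first two residuals (in the Bayesian setup phi1, phi2, and sigma would be sampled by MCMC together with the soil hydraulic parameters):

        import numpy as np

        def ar2_loglik(res, phi1, phi2, sigma):
            # Innovations eta_t = e_t - phi1*e_{t-1} - phi2*e_{t-2} are assumed
            # iid N(0, sigma^2); likelihood is conditional on res[0], res[1].
            eta = res[2:] - phi1 * res[1:-1] - phi2 * res[:-2]
            n = eta.size
            return (-0.5 * n * np.log(2 * np.pi * sigma**2)
                    - 0.5 * np.sum(eta**2) / sigma**2)

        # Toy usage: simulate AR(2) residuals and evaluate their likelihood.
        rng = np.random.default_rng(4)
        e = np.zeros(200)
        for t in range(2, 200):
            e[t] = 0.8 * e[t - 1] - 0.2 * e[t - 2] + rng.normal(0, 0.1)
        print(ar2_loglik(e, 0.8, -0.2, 0.1))

    Setting phi1 = phi2 = 0 recovers the usual iid Gaussian likelihood, which makes explicit what is being ignored when autocorrelation is neglected.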
