-
Dipole excitation of surface plasmon on a conducting sheet: Finite element approximation and validation
NASA Astrophysics Data System (ADS)
Maier, Matthias; Margetis, Dionisios; Luskin, Mitchell
2017-06-01
We formulate and validate a finite element approach to the propagation of a slowly decaying electromagnetic wave, called surface plasmon-polariton, excited along a conducting sheet, e.g., a single-layer graphene sheet, by an electric Hertzian dipole. By using a suitably rescaled form of time-harmonic Maxwell's equations, we derive a variational formulation that enables a direct numerical treatment of the associated class of boundary value problems by appropriate curl-conforming finite elements. The conducting sheet is modeled as an idealized hypersurface with an effective electric conductivity. The requisite weak discontinuity for the tangential magnetic field across the hypersurface can be incorporated naturally into the variational formulation. We carry out numerical simulations for an infinite sheet with constant isotropic conductivity embedded in two spatial dimensions, and validate our numerics against the closed-form exact solution obtained by the Fourier transform in the tangential coordinate. Numerical aspects of our treatment, such as an absorbing perfectly matched layer, local refinement, and a posteriori error control, are discussed.
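The confinement and slow decay of the surface plasmon-polariton can be illustrated with the textbook TM dispersion relation for a conductive sheet between two identical half-spaces. This is a hedged side calculation, not the finite element method of the paper; the Drude-type graphene conductivity, Fermi level, relaxation time, and excitation frequency below are assumed values.

```python
import numpy as np
from scipy.constants import c, e, epsilon_0, hbar

def graphene_drude_sigma(omega, e_fermi_ev=0.4, tau=5e-13):
    """Intraband (Drude-like) sheet conductivity of graphene [S], e^{-i omega t} convention."""
    e_f = e_fermi_ev * e
    return 1j * e**2 * e_f / (np.pi * hbar**2 * (omega + 1j / tau))

def spp_wavenumber(omega, sigma):
    """TM surface plasmon-polariton wavenumber q on a conducting sheet in free space:
    q = k0 * sqrt(1 - (2 / (eta0 * sigma))**2), sheet between identical half-spaces."""
    k0 = omega / c
    eta0 = 1.0 / (epsilon_0 * c)          # free-space impedance, ~376.7 ohm
    return k0 * np.sqrt(1.0 - (2.0 / (eta0 * sigma)) ** 2 + 0j)

omega = 2.0 * np.pi * 10e12               # 10 THz excitation (illustrative)
k0 = omega / c
q = spp_wavenumber(omega, graphene_drude_sigma(omega))
print(f"Re(q)/k0 = {q.real / k0:.1f}  (confinement relative to free space)")
print(f"SPP wavelength   = {2 * np.pi / q.real * 1e6:.1f} um")
print(f"1/e decay length = {1.0 / q.imag * 1e6:.1f} um along the sheet")
```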
-
On vital aid: the why, what and how of validation
PubMed Central
Kleywegt, Gerard J.
2009-01-01
Limitations to the data and subjectivity in the structure-determination process may cause errors in macromolecular crystal structures. Appropriate validation techniques may be used to reveal problems in structures, ideally before they are analysed, published or deposited. Additionally, such techniques may be used a posteriori to assess the (relative) merits of a model by potential users. Weak validation methods and statistics assess how well a model reproduces the information that was used in its construction (i.e. experimental data and prior knowledge). Strong methods and statistics, on the other hand, test how well a model predicts data or information that were not used in the structure-determination process. These may be data that were excluded from the process on purpose, general knowledge about macromolecular structure, information about the biological role and biochemical activity of the molecule under study or its mutants or complexes and predictions that are based on the model and that can be tested experimentally. PMID:19171968
-
Decorrelation of the true and estimated classifier errors in high-dimensional settings.
PubMed
Hanczar, Blaise; Hua, Jianping; Dougherty, Edward R
2007-01-01
The aim of many microarray experiments is to build discriminatory diagnosis and prognosis models. Given the huge number of features and the small number of examples, model validity, which refers to the precision of error estimation, is a critical issue. Previous studies have addressed this issue via the deviation distribution (estimated error minus true error), in particular, the deterioration of cross-validation precision in high-dimensional settings where feature selection is used to mitigate the peaking phenomenon (overfitting). Because classifier design is based upon random samples, both the true and estimated errors are sample-dependent random variables, and one would expect a loss of precision if the estimated and true errors are not well correlated, so that natural questions arise as to the degree of correlation and the manner in which lack of correlation impacts error estimation. We demonstrate the effect of correlation on error precision via a decomposition of the variance of the deviation distribution, observe that the correlation is often severely decreased in high-dimensional settings, and show that the effect of high dimensionality on error estimation tends to result more from its decorrelating effects than from its impact on the variance of the estimated error. We consider the correlation between the true and estimated errors under different experimental conditions using both synthetic and real data, several feature-selection methods, different classification rules, and three commonly used error estimators (leave-one-out cross-validation, k-fold cross-validation, and .632 bootstrap). Moreover, three scenarios are considered: (1) feature selection, (2) known feature set, and (3) all features. Only the first is of practical interest; however, the other two are needed for comparison purposes. We observe that the true and estimated errors tend to be much more correlated in the case of a known feature set than with either feature selection or using all features, with the better correlation between the latter two showing no general trend, but differing for different models.
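The decorrelation phenomenon can be reproduced qualitatively with a small synthetic experiment. The sketch below is illustrative only and is not the authors' code: it assumes a Gaussian two-class data model, an LDA classifier, and univariate feature selection carried out inside leave-one-out cross-validation via a scikit-learn pipeline.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
n_features, n_informative, n_train, n_test = 500, 10, 40, 4000
shift = np.zeros(n_features)
shift[:n_informative] = 0.8                      # class-mean separation on informative features

def sample(n):
    y = rng.integers(0, 2, size=n)
    X = rng.normal(size=(n, n_features)) + np.outer(y, shift)
    return X, y

true_err, cv_err = [], []
for rep in range(50):                            # repeated random training samples
    X, y = sample(n_train)
    pipe = make_pipeline(SelectKBest(f_classif, k=20), LinearDiscriminantAnalysis())
    # LOOCV error estimate; feature selection is redone inside every fold
    acc = cross_val_score(pipe, X, y, cv=LeaveOneOut()).mean()
    cv_err.append(1.0 - acc)
    pipe.fit(X, y)                               # "true" error from a large hold-out set
    Xt, yt = sample(n_test)
    true_err.append(np.mean(pipe.predict(Xt) != yt))

r = np.corrcoef(true_err, cv_err)[0, 1]
print(f"corr(true error, LOOCV error) over {len(true_err)} samples: {r:.3f}")
```

Increasing the dimension or shrinking the class separation typically drives the printed correlation toward zero, which is the decorrelation effect discussed above.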
-
Multiple-hit parameter estimation in monolithic detectors.
PubMed
Hunter, William C J; Barrett, Harrison H; Lewellen, Tom K; Miyaoka, Robert S
2013-02-01
We examine a maximum-a-posteriori method for estimating the primary interaction position of gamma rays with multiple interaction sites (hits) in a monolithic detector. In assessing the performance of a multiple-hit estimator over that of a conventional one-hit estimator, we consider a few different detector and readout configurations of a 50-mm-wide square cerium-doped lutetium oxyorthosilicate block. For this study, we use simulated data from SCOUT, a Monte-Carlo tool for photon tracking and modeling scintillation-camera output. With this tool, we determine estimate bias and variance for a multiple-hit estimator and compare these with similar metrics for a one-hit maximum-likelihood estimator, which assumes full energy deposition in one hit. We also examine the effect of event filtering on these metrics; for this purpose, we use a likelihood threshold to reject signals that are not likely to have been produced under the assumed likelihood model. Depending on detector design, we observe a 1%-12% improvement of intrinsic resolution for a 1-or-2-hit estimator as compared with a 1-hit estimator. We also observe improved differentiation of photopeak events using a 1-or-2-hit estimator as compared with the 1-hit estimator; more than 6% of photopeak events that were rejected by likelihood filtering for the 1-hit estimator were accurately identified as photopeak events and positioned without loss of resolution by a 1-or-2-hit estimator; for PET, this equates to at least a 12% improvement in coincidence-detection efficiency with likelihood filtering applied.
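A toy one-dimensional analogue of the 1-hit versus 1-or-2-hit comparison is sketched below. The Gaussian light-response and readout-noise model, the grid search, and the crude likelihood cut are assumptions made for illustration; maximum likelihood is used in place of the full MAP estimator, and nothing here reproduces the SCOUT simulations.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(1)
sensors = np.linspace(-25, 25, 8)          # mm, toy 1-D sensor row
w, sigma, E0 = 8.0, 0.05, 1.0              # LRF width, readout noise, total energy

def lrf(x):                                 # mean sensor response to unit energy at x
    return np.exp(-(sensors - x) ** 2 / (2 * w ** 2))

def neg_loglik(m, mean):                    # Gaussian readout-noise likelihood
    return np.sum((m - mean) ** 2) / (2 * sigma ** 2)

# simulate one event with two interaction sites sharing the full energy
x_true, frac = (-6.0, 9.0), 0.7
m = E0 * (frac * lrf(x_true[0]) + (1 - frac) * lrf(x_true[1])) \
    + sigma * rng.normal(size=sensors.size)

grid = np.linspace(-25, 25, 101)
# 1-hit estimator: assumes full energy deposition at a single position
nll1 = [neg_loglik(m, E0 * lrf(x)) for x in grid]
x1_hat = grid[int(np.argmin(nll1))]

# 1-or-2-hit estimator: coarse grid over two positions and the energy split
best = (np.inf, None)
for xa, xb, f in product(grid[::2], grid[::2], np.linspace(0.1, 0.9, 9)):
    nll = neg_loglik(m, E0 * (f * lrf(xa) + (1 - f) * lrf(xb)))
    if nll < best[0]:
        best = (nll, xa if f >= 0.5 else xb)   # primary site = larger energy share

print(f"true primary site {x_true[0]:+.1f} mm | 1-hit {x1_hat:+.1f} mm | 1-or-2-hit {best[1]:+.1f} mm")
print(f"likelihood filter keeps event: {best[0] < 3 * sensors.size}")   # crude chi-square-like cut
```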
-
Error estimates for ice discharge calculated using the flux gate approach
NASA Astrophysics Data System (ADS)
Navarro, F. J.; Sánchez Gámez, P.
2017-12-01
Ice discharge to the ocean is usually estimated using the flux gate approach, in which ice flux is calculated through predefined flux gates close to the marine glacier front. However, published results usually lack a proper error estimate. In the flux calculation, both errors in cross-sectional area and errors in velocity are relevant. While for estimating the errors in velocity there are well-established procedures, the calculation of the error in the cross-sectional area requires the availability of ground penetrating radar (GPR) profiles transverse to the ice-flow direction. In this contribution, we use Operation IceBridge GPR profiles collected in Ellesmere and Devon Islands, Nunavut, Canada, to compare the cross-sectional areas estimated using various approaches with the cross-sections estimated from GPR ice-thickness data. These error estimates are combined with those for ice velocities calculated from Sentinel-1 SAR data, to obtain the error in ice discharge. Our preliminary results suggest, regarding area, that the parabolic cross-section approaches perform better than the quartic ones, which tend to overestimate the cross-sectional area for flight lines close to the central flowline. Furthermore, the results show that regional ice-discharge estimates made using parabolic approaches provide reasonable results, but estimates for individual glaciers can have large errors, up to 20% in cross-sectional area.
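As a rough illustration of the flux-gate bookkeeping, the sketch below combines a parabolic cross-section with independent-error propagation. All numbers are invented for illustration and are not taken from the study.

```python
import numpy as np

# Toy flux-gate calculation for one glacier gate (illustrative values only).
width    = 4000.0              # gate width across the glacier [m]
h_center = 250.0               # centreline ice thickness from GPR [m]
v_mean   = 80.0 / 3.15576e7    # gate-averaged velocity [m/s] (80 m/yr)

# Parabolic cross-section approximation: A = 2/3 * width * centreline thickness
area = 2.0 / 3.0 * width * h_center

# Assumed relative (1-sigma) errors; the area term is the kind of number the
# GPR comparison in the study is meant to constrain.
rel_err_area = 0.20            # up to ~20 % for individual glaciers
rel_err_vel  = 0.05            # assumed velocity error from SAR offset tracking

discharge = area * v_mean                        # volumetric flux [m^3/s]
rel_err_q = np.hypot(rel_err_area, rel_err_vel)  # independent-error propagation
print(f"Q = {discharge:.2f} m^3/s  +/- {100 * rel_err_q:.0f} %")
```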
-
Investigation of error sources in regional inverse estimates of greenhouse gas emissions in Canada
NASA Astrophysics Data System (ADS)
Chan, E.; Chan, D.; Ishizawa, M.; Vogel, F.; Brioude, J.; Delcloo, A.; Wu, Y.; Jin, B.
2015-08-01
Inversion models can use atmospheric concentration measurements to estimate surface fluxes. This study is an evaluation of the errors in a regional flux inversion model for different provinces of Canada, Alberta (AB), Saskatchewan (SK) and Ontario (ON). Using CarbonTracker model results as the target, the synthetic data experiment analyses examined the impacts of the errors from the Bayesian optimisation method, prior flux distribution and the atmospheric transport model, as well as their interactions. The scaling factors for different sub-regions were estimated by the Markov chain Monte Carlo (MCMC) simulation and cost function minimization (CFM) methods. The CFM method results are sensitive to the relative size of the assumed model-observation mismatch and prior flux error variances. Experiment results show that the estimation error increases with the number of sub-regions using the CFM method. For the region definitions that lead to realistic flux estimates, the numbers of sub-regions for the western region of AB/SK combined and the eastern region of ON are 11 and 4 respectively. The corresponding annual flux estimation errors for the western and eastern regions using the MCMC (CFM) method are -7 and -3 % (0 and 8 %) respectively, when there is only prior flux error. The estimation errors increase to 36 and 94 % (40 and 232 %) resulting from transport model error alone. When prior and transport model errors co-exist in the inversions, the estimation errors become 5 and 85 % (29 and 201 %). This result indicates that the estimation errors are dominated by the transport model error; errors from different sources can in fact cancel each other and propagate to the flux estimates non-linearly. In addition, it is possible for the posterior flux estimates to differ more from the target fluxes than the prior does, and the posterior uncertainty estimates can be so unrealistically small that they do not cover the target. The systematic evaluation of the different components of the inversion model can help in the understanding of the posterior estimates and percentage errors. Stable and realistic sub-regional and monthly flux estimates for the western region of AB/SK can be obtained, but not for the eastern region of ON. This indicates that a real observation-based inversion of the annual provincial emissions is likely to work for the western region, whereas improvements to the current inversion setup are needed before a real inversion is performed for the eastern region.
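The cost-function-minimisation step can be illustrated on a toy linear problem. The sketch below is a generic Bayesian synthesis inversion for regional scaling factors under assumed Gaussian error covariances; it is not the CarbonTracker-based setup, and all dimensions and variances are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy CFM inversion: observations = footprints x (scaling factor * prior flux) + noise
n_obs, n_reg = 200, 4
G = np.abs(rng.normal(1.0, 0.5, size=(n_obs, n_reg)))   # sensitivity of obs to each sub-region
lam_true = np.array([0.8, 1.3, 1.0, 0.6])                # "target" scaling factors
obs = G @ lam_true + rng.normal(0.0, 0.5, size=n_obs)    # synthetic concentrations

R = 0.5 ** 2 * np.eye(n_obs)      # assumed model-observation mismatch covariance
B = 0.3 ** 2 * np.eye(n_reg)      # assumed prior scaling-factor error covariance
lam_prior = np.ones(n_reg)

# Minimiser of J(l) = (y - G l)' R^-1 (y - G l) + (l - l_p)' B^-1 (l - l_p)
A = G.T @ np.linalg.solve(R, G) + np.linalg.inv(B)
b = G.T @ np.linalg.solve(R, obs) + np.linalg.solve(B, lam_prior)
lam_post = np.linalg.solve(A, b)
post_cov = np.linalg.inv(A)       # Gaussian posterior covariance of the scaling factors

print("posterior scaling factors:", np.round(lam_post, 2))
print("estimation error [%]:     ", np.round(100 * (lam_post - lam_true) / lam_true, 1))
```

In the CFM setting the relative sizes of R and B control how strongly the estimate is pulled toward the prior, which is why the abstract notes the sensitivity to the assumed variances.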
-
An object correlation and maneuver detection approach for space surveillance
NASA Astrophysics Data System (ADS)
Huang, Jian; Hu, Wei-Dong; Xin, Qin; Du, Xiao-Yong
2012-10-01
Object correlation and maneuver detection are persistent problems in space surveillance and maintenance of a space object catalog. We integrate these two problems into one interrelated problem, and consider them simultaneously under a scenario where space objects only perform a single in-track orbital maneuver during the time intervals between observations. We mathematically formulate this integrated scenario as a maximum a posteriori (MAP) estimation problem and propose a novel approach to solve it. More precisely, the corresponding posterior probability of an orbital maneuver and a joint association event can be approximated by the Joint Probabilistic Data Association (JPDA) algorithm. Subsequently, the maneuvering parameters are estimated by solving a constrained non-linear least-squares problem iteratively, based on the second-order cone programming (SOCP) algorithm. The desired solution is derived according to the MAP criterion. The performance and advantages of the proposed approach have been shown by both theoretical analysis and simulation results. We hope that our work will stimulate future work on space surveillance and maintenance of a space object catalog.
-
Comment on "Inference with minimal Gibbs free energy in information field theory".
PubMed
Iatsenko, D; Stefanovska, A; McClintock, P V E
2012-03-01
Enßlin and Weig [Phys. Rev. E 82, 051112 (2010)] have introduced a "minimum Gibbs free energy" (MGFE) approach for estimation of the mean signal and signal uncertainty in Bayesian inference problems: it aims to combine the maximum a posteriori (MAP) and maximum entropy (ME) principles. We point out, however, that there are some important questions to be clarified before the new approach can be considered fully justified, and therefore able to be used with confidence. In particular, after obtaining a Gaussian approximation to the posterior in terms of the MGFE at some temperature T, this approximation should always be raised to the power of T to yield a reliable estimate. In addition, we show explicitly that MGFE indeed incorporates the MAP principle, as well as the MDI (minimum discrimination information) approach, but not the well-known ME principle of Jaynes [E.T. Jaynes, Phys. Rev. 106, 620 (1957)]. We also illuminate some related issues and resolve apparent discrepancies. Finally, we investigate the performance of MGFE estimation for different values of T, and we discuss the advantages and shortcomings of the approach.
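To make the operation discussed in the Comment concrete, the elementary identity below (in notation chosen for this note, not taken from either paper) shows what raising a Gaussian approximation to the power of T does to its moments:

```latex
% Raising a Gaussian to the power T rescales its covariance, leaving the mean unchanged.
\mathcal{G}(s \mid m, D) \;\propto\; \exp\!\Big[-\tfrac{1}{2}\,(s-m)^{\dagger} D^{-1} (s-m)\Big]
\quad\Longrightarrow\quad
\big[\mathcal{G}(s \mid m, D)\big]^{T} \;\propto\; \mathcal{G}\!\big(s \mid m,\ D/T\big).
```

That is, the mean estimate is unaffected by the operation, while the covariance (uncertainty) estimate is rescaled by 1/T; the claim about when this yields a reliable estimate is the authors'.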
-
Multi-object segmentation using coupled nonparametric shape and relative pose priors
NASA Astrophysics Data System (ADS)
Uzunbas, Mustafa Gökhan; Soldea, Octavian; Çetin, Müjdat; Ünal, Gözde; Erçil, Aytül; Unay, Devrim; Ekin, Ahmet; Firat, Zeynep
2009-02-01
We present a new method for multi-object segmentation in a maximum a posteriori estimation framework. Our method is motivated by the observation that neighboring or coupling objects in images generate configurations and co-dependencies which could potentially aid in segmentation if properly exploited. Our approach employs coupled shape and inter-shape pose priors that are computed using training images in a nonparametric multi-variate kernel density estimation framework. The coupled shape prior is obtained by estimating the joint shape distribution of multiple objects and the inter-shape pose priors are modeled via standard moments. Based on such statistical models, we formulate an optimization problem for segmentation, which we solve by an algorithm based on active contours. Our technique provides significant improvements in the segmentation of weakly contrasted objects in a number of applications. In particular for medical image analysis, we use our method to extract brain Basal Ganglia structures, which are members of a complex multi-object system posing a challenging segmentation problem. We also apply our technique to the problem of handwritten character segmentation. Finally, we use our method to segment cars in urban scenes.
-
Coupled land surface-subsurface hydrogeophysical inverse modeling to estimate soil organic carbon content and explore associated hydrological and thermal dynamics in the Arctic tundra
NASA Astrophysics Data System (ADS)
Phuong Tran, Anh; Dafflon, Baptiste; Hubbard, Susan S.
2017-09-01
Quantitative characterization of soil organic carbon (OC) content is essential due to its significant impacts on surface-subsurface hydrological-thermal processes and microbial decomposition of OC, which both in turn are important for predicting carbon-climate feedbacks. While such quantification is particularly important in the vulnerable organic-rich Arctic region, it is challenging to achieve due to the general limitations of conventional core sampling and analysis methods, and to the extremely dynamic nature of hydrological-thermal processes associated with annual freeze-thaw events. In this study, we develop and test an inversion scheme that can flexibly use single or multiple datasets - including soil liquid water content, temperature and electrical resistivity tomography (ERT) data - to estimate the vertical distribution of OC content. Our approach relies on the fact that OC content strongly influences soil hydrological-thermal parameters and, therefore, indirectly controls the spatiotemporal dynamics of soil liquid water content, temperature and their correlated electrical resistivity. We employ the Community Land Model to simulate nonisothermal surface-subsurface hydrological dynamics from the bedrock to the top of the canopy, with consideration of land surface processes (e.g., solar radiation balance, evapotranspiration, snow accumulation and melting) and ice-liquid water phase transitions. For inversion, we combine a deterministic and an adaptive Markov chain Monte Carlo (MCMC) optimization algorithm to estimate a posteriori distributions of the desired model parameters. For hydrological-thermal-to-geophysical variable transformation, the simulated subsurface temperature, liquid water content and ice content are explicitly linked to soil electrical resistivity via petrophysical and geophysical models. We validate the developed scheme using different numerical experiments and evaluate the influence of measurement errors and the benefit of joint inversion on the estimation of OC and other parameters. We also quantify the propagation of uncertainty from the estimated parameters to prediction of hydrological-thermal responses. We find that, compared to inversion of a single dataset (temperature, liquid water content or apparent resistivity), joint inversion of these datasets significantly reduces parameter uncertainty. We find that the joint inversion approach is able to estimate OC and sand content within the shallow active layer (top 0.3 m of soil) with high reliability. Due to the small variations of temperature and moisture within the shallow permafrost (here at about 0.6 m depth), the approach is unable to estimate OC with confidence. However, if the soil porosity is functionally related to the OC and mineral content, which is often observed in organic-rich Arctic soil, the uncertainty of the OC estimate at this depth decreases markedly. Our study documents the value of the new surface-subsurface, deterministic-stochastic inversion approach, as well as the benefit of including multiple types of data to estimate OC and associated hydrological-thermal dynamics.
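As a hedged illustration of the petrophysical link used in the hydrological-thermal-to-geophysical transformation, the sketch below applies an Archie-type transform with a simple linear temperature correction of the pore-water resistivity. The functional form, exponents, and fluid resistivity are generic assumptions, not the relations used in the study.

```python
import numpy as np

def bulk_resistivity(theta_liq, porosity, temp_c, rho_w25=20.0, m=1.8, n=2.0):
    """Archie-type transform: bulk resistivity [ohm m] from liquid water content.

    rho_w25 : pore-water resistivity at 25 C [ohm m] (site-specific assumption)
    m, n    : cementation and saturation exponents (assumed values)
    Ice is treated as part of the non-conducting matrix, so only the liquid
    saturation S_w = theta_liq / porosity contributes to conduction.
    """
    # ~2.5 %/C fluid-resistivity correction; valid only for moderate temperatures
    rho_w = rho_w25 / (1.0 + 0.025 * (temp_c - 25.0))
    s_w = np.clip(theta_liq / porosity, 1e-3, 1.0)
    return rho_w * porosity ** (-m) * s_w ** (-n)

# Freeze-up strongly raises resistivity as liquid water turns to ice
print(bulk_resistivity(theta_liq=0.35, porosity=0.55, temp_c=4.0))    # thawed
print(bulk_resistivity(theta_liq=0.05, porosity=0.55, temp_c=-2.0))   # mostly frozen
```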
-
Maximum a posteriori Bayesian estimation of mycophenolic Acid area under the concentration-time curve: is this clinically useful for dosage prediction yet?
PubMed
Staatz, Christine E; Tett, Susan E
2011-12-01
This review seeks to summarize the available data about Bayesian estimation of area under the plasma concentration-time curve (AUC) and dosage prediction for mycophenolic acid (MPA) and evaluate whether sufficient evidence is available for routine use of Bayesian dosage prediction in clinical practice. A literature search identified 14 studies that assessed the predictive performance of maximum a posteriori Bayesian estimation of MPA AUC and one report that retrospectively evaluated how closely dosage recommendations based on Bayesian forecasting achieved targeted MPA exposure. Studies to date have mostly been undertaken in renal transplant recipients, with limited investigation in patients treated with MPA for autoimmune disease or haematopoietic stem cell transplantation. All of these studies have involved use of the mycophenolate mofetil (MMF) formulation of MPA, rather than the enteric-coated mycophenolate sodium (EC-MPS) formulation. Bias associated with estimation of MPA AUC using Bayesian forecasting was generally less than 10%. However, some difficulties with imprecision were evident, with values ranging from 4% to 34% (based on estimation involving two or more concentration measurements). Evaluation of whether MPA dosing decisions based on Bayesian forecasting (by the free website service https://pharmaco.chu-limoges.fr) achieved target drug exposure has only been undertaken once. When MMF dosage recommendations were applied by clinicians, a higher proportion (72-80%) of subsequent estimated MPA AUC values were within the 30-60 mg · h/L target range, compared with when dosage recommendations were not followed (only 39-57% within target range). Such findings provide evidence that Bayesian dosage prediction is clinically useful for achieving target MPA AUC. This study, however, was retrospective and focussed only on adult renal transplant recipients. Furthermore, in this study, Bayesian-generated AUC estimations and dosage predictions were not compared with a later full measured AUC but rather with a further AUC estimate based on a second Bayesian analysis. This study also provided some evidence that a useful monitoring schedule for MPA AUC following adult renal transplant would be every 2 weeks during the first month post-transplant, every 1-3 months between months 1 and 12, and each year thereafter. It will be interesting to see further validations in different patient groups using the free website service. In summary, the predictive performance of Bayesian estimation of MPA, comparing estimated with measured AUC values, has been reported in several studies. However, the next step of predicting dosages based on these Bayesian-estimated AUCs, and prospectively determining how closely these predicted dosages give drug exposure matching targeted AUCs, remains largely unaddressed. Further prospective studies are required, particularly in non-renal transplant patients and with the EC-MPS formulation. Other important questions remain to be answered, such as: do Bayesian forecasting methods devised to date use the best population pharmacokinetic models or most accurate algorithms; are the methods simple to use for routine clinical practice; do the algorithms actually improve dosage estimations beyond empirical recommendations in all groups that receive MPA therapy; and, importantly, do the dosage predictions, when followed, improve patient health outcomes?
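The bias and imprecision figures quoted above are typically reported as prediction-error statistics in the spirit of Sheiner and Beal. A minimal sketch is given below, assuming bias is the mean percentage error and imprecision the root-mean-squared percentage error on paired estimated and measured AUC values; the numbers are invented for illustration.

```python
import numpy as np

def prediction_performance(auc_est, auc_meas):
    """Bias and imprecision of Bayesian AUC estimates vs. measured AUCs (both in %).

    Percentage prediction errors: bias = mean percentage error (MPE),
    imprecision = root-mean-squared percentage error (RMSE%).
    Inputs are paired AUC values, e.g. in mg*h/L.
    """
    est, meas = np.asarray(auc_est, float), np.asarray(auc_meas, float)
    pe = 100.0 * (est - meas) / meas
    return pe.mean(), np.sqrt(np.mean(pe ** 2))

# Hypothetical paired values, for illustration only
est  = [42.0, 55.3, 31.8, 60.1, 47.5]
meas = [45.0, 52.0, 35.0, 58.0, 50.0]
bias, imprecision = prediction_performance(est, meas)
print(f"bias = {bias:+.1f} %, imprecision = {imprecision:.1f} %")
```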
-
Precision Geodesy via Radio Interferometry.
PubMed
Hinteregger, H F; Shapiro, I I; Robertson, D S; Knight, C A; Ergas, R A; Whitney, A R; Rogers, A E; Moran, J M; Clark, T A; Burke, B F
1972-10-27
Very-long-baseline interferometry experiments, involving observations of extragalactic radio sources, were performed in 1969 to determine the vector separations between antenna sites in Massachusetts and West Virginia. The 845.130-kilometer baseline was estimated from two separate experiments. The results agreed with each other to within 2 meters in all three components and with a special geodetic survey to within 2 meters in length; the differences in baseline direction as determined by the survey and by interferometry corresponded to discrepancies of about 5 meters. The experiments also yielded positions for nine extragalactic radio sources, most to within 1 arc second, and allowed the hydrogen maser clocks at the two sites to be synchronized a posteriori with an uncertainty of only a few nanoseconds.
-
A procedure for testing the quality of LANDSAT atmospheric correction algorithms
NASA Technical Reports Server (NTRS)
Dias, L. A. V. (Principal Investigator); Vijaykumar, N. L.; Neto, G. C.
1982-01-01
There are two basic methods for testing the quality of an algorithm to minimize atmospheric effects on LANDSAT imagery: (1) test the results a posteriori, using ground truth or control points; (2) use a method based on image data plus estimation of additional ground and/or atmospheric parameters. A procedure based on the second method is described. In order to select the parameters, initially the image contrast is examined for a series of parameter combinations. The contrast improves for better corrections. In addition, the correlation coefficient between two subimages, taken at different times, of the same scene is used for parameter selection. The regions to be correlated should not have changed considerably in time. A few examples using this proposed procedure are presented.
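A minimal sketch of the selection loop described above follows. The atmospheric correction itself is a user-supplied placeholder (the trivial stand-in below exists only to make the sketch executable), and the rule for combining contrast and correlation is one possible choice, not necessarily the one used by the authors.

```python
import numpy as np

def select_parameters(img_t1, img_t2, candidates, correct):
    """Rank candidate atmospheric-parameter sets by contrast and inter-date correlation.

    img_t1, img_t2 : co-registered sub-images of the same, essentially unchanged
                     area acquired at two different times.
    candidates     : iterable of parameter sets to try.
    correct        : function correct(image, params) implementing the correction
                     algorithm under test (placeholder in this sketch).
    """
    scores = []
    for p in candidates:
        c1, c2 = correct(img_t1, p), correct(img_t2, p)
        contrast = 0.5 * (c1.std() / c1.mean() + c2.std() / c2.mean())
        rho = np.corrcoef(c1.ravel(), c2.ravel())[0, 1]
        scores.append((p, contrast, rho))
    # sort by correlation first, contrast second (one possible combination rule)
    return sorted(scores, key=lambda s: (s[2], s[1]), reverse=True)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    scene = rng.integers(30, 120, size=(64, 64)).astype(float)
    dark_object = lambda img, h: np.clip(img - h, 0.0, None)   # stand-in correction only
    ranking = select_parameters(scene + 12.0, scene + 25.0, [0.0, 10.0, 20.0, 30.0], dark_object)
    # with this trivial stand-in, contrast alone drives the ranking; a real
    # correction algorithm is needed for the two criteria to be informative
    print("ranked haze candidates:", [p for p, _, _ in ranking])
```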
-
On a full Bayesian inference for force reconstruction problems
NASA Astrophysics Data System (ADS)
Aucejo, M.; De Smet, O.
2018-05-01
In a previous paper, the authors introduced a flexible methodology for reconstructing mechanical sources in the frequency domain from prior local information on both their nature and location over a linear and time invariant structure. The proposed approach was derived from Bayesian statistics, because of its ability in mathematically accounting for experimenter's prior knowledge. However, since only the Maximum a Posteriori estimate was computed, the posterior uncertainty about the regularized solution given the measured vibration field, the mechanical model and the regularization parameter was not assessed. To answer this legitimate question, this paper fully exploits the Bayesian framework to provide, from a Markov Chain Monte Carlo algorithm, credible intervals and other statistical measures (mean, median, mode) for all the parameters of the force reconstruction problem.
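Once posterior samples are available from the Markov chain Monte Carlo algorithm, the statistical measures mentioned above are straightforward to extract. A minimal sketch for a single force degree of freedom is shown below; the synthetic lognormal draws merely stand in for a real chain.

```python
import numpy as np

def posterior_summary(samples, level=0.95):
    """Point estimates and a central credible interval from MCMC samples of one force dof.

    samples : 1-D array of posterior draws (e.g. force amplitude at one frequency).
    Returns mean, median, a histogram-based mode, and the (lower, upper) interval bounds.
    """
    samples = np.asarray(samples)
    lo, hi = np.percentile(samples, [50 * (1 - level), 100 - 50 * (1 - level)])
    counts, edges = np.histogram(samples, bins=100)
    i = int(np.argmax(counts))
    mode = 0.5 * (edges[i] + edges[i + 1])
    return samples.mean(), np.median(samples), mode, (lo, hi)

# Illustration with synthetic "posterior" draws
rng = np.random.default_rng(0)
draws = rng.lognormal(mean=0.0, sigma=0.3, size=20_000)
mean, median, mode, ci = posterior_summary(draws)
print(f"mean={mean:.3f}  median={median:.3f}  mode={mode:.3f}  "
      f"95% CI=({ci[0]:.3f}, {ci[1]:.3f})")
```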
-
Simultaneous treatment of unspecified heteroskedastic model error distribution and mismeasured covariates for restricted moment models.
PubMed
Garcia, Tanya P; Ma, Yanyuan
2017-10-01
We develop consistent and efficient estimation of parameters in general regression models with mismeasured covariates. We assume the model error and covariate distributions are unspecified, and the measurement error distribution is a general parametric distribution with unknown variance-covariance. We construct root-n consistent, asymptotically normal and locally efficient estimators using the semiparametric efficient score. We do not estimate any unknown distribution or model error heteroskedasticity. Instead, we form the estimator under possibly incorrect working distribution models for the model error, error-prone covariate, or both. Empirical results demonstrate robustness to different incorrect working models in homoscedastic and heteroskedastic models with error-prone covariates.
-
Confidence level estimation in multi-target classification problems
NASA Astrophysics Data System (ADS)
Chang, Shi; Isaacs, Jason; Fu, Bo; Shin, Jaejeong; Zhu, Pingping; Ferrari, Silvia
2018-04-01
This paper presents an approach for estimating the confidence level in automatic multi-target classification performed by an imaging sensor on an unmanned vehicle. An automatic target recognition algorithm comprised of a deep convolutional neural network in series with a support vector machine classifier detects and classifies targets based on the image matrix. The joint posterior probability mass function of target class, features, and classification estimates is learned from labeled data, and recursively updated as additional images become available. Based on the learned joint probability mass function, the approach presented in this paper predicts the expected confidence level of future target classifications, prior to obtaining new images. The proposed approach is tested with a set of simulated sonar image data. The numerical results show that the estimated confidence level provides a close approximation to the actual confidence level value determined a posteriori, i.e. after the new image is obtained by the on-board sensor. Therefore, the expected confidence level function presented in this paper can be used to adaptively plan the path of the unmanned vehicle so as to optimize the expected confidence levels and ensure that all targets are classified with satisfactory confidence after the path is executed.
-
Estimating the periodic components of a biomedical signal through inverse problem modelling and Bayesian inference with sparsity enforcing prior
NASA Astrophysics Data System (ADS)
Dumitru, Mircea; Djafari, Ali-Mohammad
2015-01-01
Recent developments in chronobiology require an analysis of the variation of the periodic components of signals expressing biological rhythms. A precise estimation of the periodic components vector is required. The classical approaches, based on FFT methods, are inefficient considering the particularities of the data (short length). In this paper we propose a new method using the sparsity prior information (a reduced number of non-zero components). The considered law is the Student-t distribution, viewed as a marginal distribution of an Infinite Gaussian Scale Mixture (IGSM) defined via a hidden variable representing the inverse variances and modelled as a Gamma Distribution. The hyperparameters are modelled using the conjugate priors, i.e. using Inverse Gamma Distributions. The expression of the joint posterior law of the unknown periodic components vector, hidden variables and hyperparameters is obtained, and then the unknowns are estimated via Joint Maximum A Posteriori (JMAP) and Posterior Mean (PM) estimation. For the PM estimator, the expression of the posterior law is approximated by a separable one, via the Bayesian Variational Approximation (BVA), using the Kullback-Leibler (KL) divergence. Finally we show the results on synthetic data in cancer treatment applications.
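The Student-t prior used here rests on the standard infinite Gaussian scale mixture identity, written below in notation chosen for this note (f_j a periodic-component coefficient, tau_j the hidden inverse-variance variable):

```latex
% Student-t as an infinite Gaussian scale mixture over the precision tau_j
\mathcal{S}t(f_j \mid \nu)
  \;=\; \int_{0}^{\infty} \mathcal{N}\!\big(f_j \mid 0,\ \tau_j^{-1}\big)\,
        \mathrm{Gamma}\!\Big(\tau_j \;\Big|\; \tfrac{\nu}{2},\ \tfrac{\nu}{2}\Big)\, \mathrm{d}\tau_j .
```

Conditionally on tau_j the prior is Gaussian, which is what makes the JMAP and variational (BVA) updates tractable.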
-
Bayesian Inference for Generalized Linear Models for Spiking Neurons
PubMed Central
Gerwinn, Sebastian; Macke, Jakob H.; Bethge, Matthias
2010-01-01
Generalized Linear Models (GLMs) are commonly used statistical methods for modelling the relationship between neural population activity and presented stimuli. When the dimension of the parameter space is large, strong regularization has to be used in order to fit GLMs to datasets of realistic size without overfitting. By imposing properly chosen priors over parameters, Bayesian inference provides an effective and principled approach for achieving regularization. Here we show how the posterior distribution over model parameters of GLMs can be approximated by a Gaussian using the Expectation Propagation algorithm. In this way, we obtain an estimate of the posterior mean and posterior covariance, allowing us to calculate Bayesian confidence intervals that characterize the uncertainty about the optimal solution. From the posterior we also obtain a different point estimate, namely the posterior mean as opposed to the commonly used maximum a posteriori estimate. We systematically compare the different inference techniques on simulated data as well as on multi-electrode recordings of retinal ganglion cells, and explore the effects of the chosen prior and the performance measure used. We find that good performance can be achieved by choosing a Laplace prior together with the posterior mean estimate. PMID:20577627
-
Water level observations from Unmanned Aerial Vehicles (UAVs) for improving probabilistic estimations of interaction between rivers and groundwater
NASA Astrophysics Data System (ADS)
Bandini, Filippo; Butts, Michael; Vammen Jacobsen, Torsten; Bauer-Gottwein, Peter
2016-04-01
Integrated hydrological models are generally calibrated against observations of river discharge and piezometric head in groundwater aquifers. Integrated hydrological models are rarely calibrated against spatially distributed water level observations measured by either in-situ stations or spaceborne platforms. Indeed, in-situ observations derived from ground-based stations are generally spaced too far apart to capture spatial patterns in the water surface. On the other hand, spaceborne observations have limited spatial resolution. Additionally, satellite observations have a temporal resolution which is not ideal for observing the temporal patterns of the hydrological variables during extreme events. UAVs (Unmanned Aerial Vehicles) offer several advantages: i) high spatial resolution; ii) tracking of the water body better than any satellite technology; iii) timing of the sampling depending only on the operators. In this case study the Mølleåen river (Denmark) and its catchment have been simulated through an integrated hydrological model (MIKE 11-MIKE SHE). This model was initially calibrated against observations of river discharge retrieved by in-situ stations and against piezometric head of the aquifers. Subsequently the hydrological model has been calibrated against dense spatially distributed water level observations, which could potentially be retrieved by UAVs. Error characteristics of synthetic UAV water level observations were taken from a recent proof-of-concept study. Since the technology for ranging water level is under development, UAV synthetic water level observations were extracted from another model of the river with higher spatial resolution (cross sections located every 10 m). This model with high resolution is assumed to be absolute truth for the purpose of this work. The river model with the coarser resolution has been calibrated against the synthetic water level observations through the Differential Evolution Adaptive Metropolis (DREAM) algorithm, an efficient global Markov Chain Monte Carlo (MCMC) sampler in high-dimensional spaces. Calibration against water level has demonstrated a significant improvement of the estimation of the exchange flow between groundwater and the river branch. Groundwater flux and direction are now better simulated. Reliability and sharpness of the probabilistic forecasts are assessed with the sharpness, the interval skill score (ISS) of the 95% confidence interval, and with the root mean square error (RMSE) of the maximum a posteriori probability (MAP). The binary outcome (either gaining or losing stream) of the flow direction is assessed with the Brier score (BS). After water level calibration the sharpness of the estimations is approximately doubled with respect to the model calibrated only against discharge, ISS has improved from 2.4e-7 to 7.8e-8 m^3/s · m, RMSE from 9.2e-8 to 2.4e-8 m^3/s · m, and BS is halved from 0.58 to 0.25.
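The two verification scores used above are standard. A minimal sketch is given below, with the interval score of Gneiting and Raftery standing in for the ISS and invented exchange-flux numbers; it is not the study's evaluation code.

```python
import numpy as np

def interval_skill_score(lower, upper, obs, alpha=0.05):
    """Mean interval score (Gneiting & Raftery) for central (1-alpha) intervals.

    Smaller is better: the score is the interval width plus a penalty of
    (2/alpha) times the distance by which an observation falls outside.
    """
    lower, upper, obs = map(np.asarray, (lower, upper, obs))
    width = upper - lower
    below = (2.0 / alpha) * np.clip(lower - obs, 0.0, None)
    above = (2.0 / alpha) * np.clip(obs - upper, 0.0, None)
    return float(np.mean(width + below + above))

def brier_score(prob_gaining, is_gaining):
    """Brier score for the binary gaining/losing-stream outcome (smaller is better)."""
    p = np.asarray(prob_gaining, dtype=float)
    o = np.asarray(is_gaining, dtype=float)
    return float(np.mean((p - o) ** 2))

# Tiny illustration with made-up exchange-flux values [m^3/s per m of river]
obs   = np.array([ 2.1e-8, -0.5e-8, 3.3e-8])
lower = np.array([ 1.0e-8, -2.0e-8, 1.5e-8])
upper = np.array([ 4.0e-8,  1.0e-8, 5.0e-8])
print(interval_skill_score(lower, upper, obs))
print(brier_score([0.9, 0.3, 0.8], [1, 0, 1]))
```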
-
An Enhanced Non-Coherent Pre-Filter Design for Tracking Error Estimation in GNSS Receivers.
PubMed
Luo, Zhibin; Ding, Jicheng; Zhao, Lin; Wu, Mouyan
2017-11-18
Tracking error estimation is of great importance in global navigation satellite system (GNSS) receivers. Any inaccurate estimation of tracking error will decrease the signal tracking ability of signal tracking loops and the accuracies of position fixing, velocity determination, and timing. Tracking error estimation can be done by a traditional discriminator or a Kalman filter-based pre-filter. The pre-filter can be divided into two categories: coherent and non-coherent. This paper focuses on the performance improvements of the non-coherent pre-filter. Firstly, the signal characteristics of coherent and non-coherent integration - which are the basis of tracking error estimation - are analyzed in detail. After that, the probability distribution of the estimation noise of the four-quadrant arctangent (ATAN2) discriminator is derived according to the mathematical model of coherent integration. Secondly, the statistical property of the observation noise of the non-coherent pre-filter is studied through Monte Carlo simulation to set the observation noise variance matrix correctly. Thirdly, a simple fault detection and exclusion (FDE) structure is introduced to the non-coherent pre-filter design, and thus its effective working range for carrier phase error estimation extends from (-0.25 cycle, 0.25 cycle) to (-0.5 cycle, 0.5 cycle). Finally, the estimation accuracies of the discriminator, the coherent pre-filter, and the enhanced non-coherent pre-filter are evaluated comprehensively through a carefully designed experiment scenario. The pre-filter outperforms the traditional discriminator in estimation accuracy. In a highly dynamic scenario, the enhanced non-coherent pre-filter provides accuracy improvements of 41.6%, 46.4%, and 50.36% for carrier phase error, carrier frequency error, and code phase error estimation, respectively, when compared with the coherent pre-filter. The enhanced non-coherent pre-filter outperforms the coherent pre-filter in code phase error estimation when the carrier-to-noise density ratio is less than 28.8 dB-Hz, in carrier frequency error estimation when the carrier-to-noise density ratio is less than 20 dB-Hz, and in carrier phase error estimation when the carrier-to-noise density ratio is in (15, 23) dB-Hz ∪ (26, 50) dB-Hz.
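A minimal sketch of the two arctangent discriminators is given below. The signal model (fixed amplitude and unit Gaussian noise on the prompt correlator outputs) is a simplification assumed here, not the coherent-integration model derived in the paper; it only illustrates why the four-quadrant form is unambiguous over (-0.5, 0.5) cycle while the two-quadrant form is limited to (-0.25, 0.25) cycle.

```python
import numpy as np

def atan2_discriminator(i_p, q_p):
    """Four-quadrant arctangent (ATAN2) carrier-phase discriminator, output in cycles."""
    return np.arctan2(q_p, i_p) / (2.0 * np.pi)

def atan_discriminator(i_p, q_p):
    """Two-quadrant arctangent discriminator; insensitive to data-bit sign flips,
    but only unambiguous over (-0.25, 0.25) cycle."""
    return np.arctan(q_p / i_p) / (2.0 * np.pi)

rng = np.random.default_rng(0)
amp, noise = 10.0, 1.0                       # assumed strong-signal case, post-integration

for true_err_cyc in (0.10, 0.35):            # inside and outside the ATAN range
    phi = 2.0 * np.pi * true_err_cyc
    i_p = amp * np.cos(phi) + noise * rng.normal()   # prompt in-phase correlator output
    q_p = amp * np.sin(phi) + noise * rng.normal()   # prompt quadrature correlator output
    print(f"true {true_err_cyc:+.2f} cyc -> ATAN2 {atan2_discriminator(i_p, q_p):+.3f} cyc, "
          f"ATAN {atan_discriminator(i_p, q_p):+.3f} cyc")
```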